Search Results

Search found 1649 results on 66 pages for 'tim evans'.

  • Multiple PXE boots on the same subnet for RIS and WDS

    - by Tim
    We are looking to migrate off our existing Server 2003 SP2 machine running RIS (which I know is WDS as of Server 2003 SP2, but to be clear...) with a bunch of legacy RISETUP images, and onto a Server 2008 R2 box. Because of the change in architecture (x86 to x64), and a limitation of the Server 2008 upgrade path that won't allow mixed-mode WDS services to be upgraded, I am forced to look at running Server 2003 for RIS and Server 2008 R2 for WDS (for Windows 7) on the same network. The problem I'm facing is how to deal with both PXE services at the same time. I'd still like the existing RIS server to be available for production use, but start working on WDS for deploying Windows 7. Is there a way to have a sort of PXE "chooser"? Or some other mechanism to select which server the client should download the boot image from? Thanks!

    Read the article

  • Print a series of PDF files

    - by Tim Coker
    Is there a way to print several PDF files at once? I have a bunch of individual files that I want to print (about 42). Printing each one is tedious. Does anyone know a way to print a whole series at once? Maybe something like a PDF reader with a "Print All" function? I ask because this isn't the first time I've run into this problem and have never been able to find a good solution...
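
    If the files are on (or reachable from) a machine running CUPS, one hedged low-tech route is to hand the whole batch to the print spooler from a shell. This is only a sketch, not a recommendation for any particular OS: the queue name is a placeholder, and it assumes the print system accepts PDF directly, which is typical of CUPS but not guaranteed.

        # Queue every PDF in the current folder; "-P myprinter" is a placeholder queue name.
        for f in *.pdf; do
            lpr -P myprinter "$f"
        done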

    Read the article

  • Which e-reader for O'Reilly books?

    - by Tim
    Most of my programming books are from O'Reilly. Several of them need to be updated, and I'm thinking of going for an e-reader this time. O'Reilly sell their e-books as a multi-format bundle (PDF, EPUB and Mobipocket (Kindle)). Has anyone done a comparison of the formats as for suitability on a small e-reader screen? In Europe, the Kindle DX is not (yet?) available, so I'm not sure whether a Kindle or a Sony (or some other reader) is best suited for these books. Disregarding other advantages/disadvantages of the different readers for now, which one in your opinion gives the best experience with O'Reilly books in particular (or programming books in general)?

    Read the article

  • WSUS trying to download all updates again

    - by Tim Alexander
    The server hosting WSUS had a catastrophic failure and we have had to rebuild the system drives. Luckily the DB and content store for WSUS are on a separate drive, so they were unaffected. During the rebuild process we thought it was time to update the server to 2008 R2 (from 2003 R2). Have got the server running and installed the WSUS role, detached the DB from SQL Express 2008 R2 and attached the original. Carried out the wsusutil.exe movecontent command with a -skipcopy switch pointing to the original content store. All looked good until I saw the front page stating it is trying to download files for 6,436 updates at around 344,565 MB! Oops, I thought, something is not right here. The content store I have on disk is only 75 GB, so I am thinking that some vital step has been missed in the restoration process. Either way, is there a way to make WSUS reindex its local content store or something, as I am unsure that downloading 344 gigabytes is a viable way forward! EDIT: Never rains but it pours. Am now getting a CLSID: FX {8b6499ed-0241-e032-6508-da4b1c879d7e} "could not create snap-in" error. Think a reinstall of WSUS is in order.
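
    One hedged thing to try before a full reinstall (a suggestion, not something from the post itself): WSUS 3.x has a reset verb on the same wsusutil.exe used for movecontent. It makes the server re-verify every approved update against the local content store and re-download only files that are actually missing or corrupt, which is effectively the reindex being asked for.

        REM Default WSUS 3.x tools path shown; adjust if WSUS was installed elsewhere.
        cd /d "C:\Program Files\Update Services\Tools"
        wsusutil.exe reset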

    Read the article

  • What Boxee apps do you recommend?

    - by Tim S. Van Haren
    I've recently repurposed an old machine that I had laying around (custom built several years ago with miscellaneous parts) as a simple media center with an Ubuntu 8.10 operating system and Boxee as its front end, and I absolutely love it. I'm particularly intrigued by the custom applications that are possible, and I wanted to start a running list of recommended apps for it. So, my question is this: what Boxee apps do you recommend, and why? One app per answer, please.

    Read the article

  • Export SSL Cert from IIS and import into GlassFish keystore

    - by Tim H
    What I need: I have an existing SSL certificate installed on IIS 6. On the same machine I have GlassFish installed and would like to share the same certificate, since both use the same hostname but different ports: IIS uses 443 and GlassFish uses 8181.
    Why I need it: To reuse the existing SSL cert from IIS on GlassFish. I imagine this is possible; I am able to install an SSL cert into GlassFish's keystore and then import that same cert into IIS. I just want to go the other way: imagine an SSL cert that has been in use on IIS for months, and now I want to enable SSL on GlassFish.
    What I have done:
    - Created a keystore with an alias: server.hostname.com
    - Imported the intermediate CA certs associated with the existing SSL cert
    - Imported the existing SSL cert with the same alias (server.hostname.com), but keytool won't allow this, as it is not associated: keytool error: java.lang.Exception: Public keys in reply and keystore don't match
    Why? Using a different alias causes the cert to not be trusted in the CA chain.
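
    For what it's worth, the usual way around the "Public keys in reply and keystore don't match" error is not to import the certificate on its own at all: the key pair was generated by IIS, so the private key has to come across too. A hedged sketch (file names are placeholders, and it assumes a Java 6 keytool, the first version with -importkeystore): export the cert from IIS together with its private key as a PKCS#12 (.pfx) file, then convert that whole entry into the GlassFish keystore.

        # Convert the IIS-exported .pfx (cert + private key + chain) into a JKS keystore.
        keytool -importkeystore \
            -srckeystore server_hostname_com.pfx -srcstoretype PKCS12 \
            -destkeystore keystore.jks -deststoretype JKS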

    Read the article

  • ImageMagick convert

    - by Tim
    convert is not working on the Linux server I am using:

        $ convert exploss_stumps.jpg exploss_stumps.eps
        convert: missing an image filename `exploss_stumps.eps' @ convert.c/ConvertImageCommand/2838

    Any idea why?
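
    A few hedged first checks, all stock ImageMagick commands rather than anything specific to this server; that error can appear when convert fails to read the input or cannot produce the requested output format, so it treats the last argument as a missing input filename:

        convert -version                     # confirm which ImageMagick build is actually on PATH
        identify exploss_stumps.jpg          # make sure the input JPEG itself is readable
        convert -list format | grep -i eps   # check that an EPS writer is compiled in (look for "rw")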

    Read the article

  • How to backup a NAS drive to a USB drive?

    - by Tim Murphy
    How would you back up 600+ GB of data on a NAS (Network-Attached Storage) drive to a USB external drive? The NAS drive does not contain mission-critical data; nonetheless I wish to make weekly copies of it just in case. The NAS drive is almost exclusively used as an archive dump and is rarely updated. However, the backup strategy used must have a simple restore procedure so I can confidently say the data now on the NAS drive is exactly how it was at the time of backup. I did try xcopy, but it seemed like it would take many, many hours and eventually crashed with insufficient memory. http://www.ctunion.com/node/114 suggests I would need to use xxcopy instead due to folder/file name lengths. My concern with xcopy/xxcopy is the length of time it takes; hoping something else is faster. NAS drive is a D-Link DNS-313 with a 1 TB drive installed, connected to the router via Ethernet cable. USB drive is a Seagate 1 TB. It can be connected to Windows Vista (preferred) or Windows 7 PCs. Both PCs are usually connected wirelessly; however, an Ethernet cable can be used during backup to speed up the process.
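
    One hedged alternative not mentioned in the question: robocopy ships with both Vista and Windows 7, handles long paths far better than xcopy, and on repeat runs only copies what has changed, which suits a rarely updated archive. The share name and drive letter below are placeholders:

        REM Mirror the NAS share to the USB drive. /FFT tolerates the NAS's coarser
        REM timestamps, /R:2 /W:5 stops it stalling for ages on a bad file, /LOG keeps a report.
        robocopy \\DNS-313\Volume_1 F:\nas-backup /MIR /FFT /R:2 /W:5 /LOG:C:\nas-backup.log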

    Read the article

  • Why do I see router and not my real IIS?

    - by Tim Tom
    I am trying to access IIS from the web but am unable to do so. Basically I have a router (which functions as router and modem) that was given by my ISP, and I have another router connected to that one. My ISP's router can be visited through 192.168.0.1, and the router connected to the ISP's router can be visited through 192.168.1.1. Please see my ISP's router (screenshot): as you can see, I have DMZ enabled for my router at 192.168.1.1. Now please see my router at 192.168.1.1 (screenshot): as you can see, I added a virtual server for port 80, where 192.168.1.125 is my private IP. I rebooted both of my modems and tried to visit my IP from http://www.whatsmyip.org/, and after doing so, when I type my live IP I still see my router at 192.168.0.1 instead of my IIS. What am I missing? Note: I have disabled the firewall on both of the routers. Any help would be appreciated.

    Read the article

  • Unique Features of bash compared to zsh

    - by Tim
    I have been a zsh user for quite some time (before that tcsh, and before that csh). I am quite happy with it, but was wondering if there are any compelling features of bash that do not exist in zsh. And conversely, are there zsh features which do not exist in bash? My current feeling is that bash is better:
    - if you are familiar with it already and don't want to learn new syntax;
    - it is going to exist on almost all *nix machines by default, whereas zsh may be an extra install.
    Not trying to start a religious battle here, which is why I'm just looking for features which exist in only one of the shells.

    Read the article

  • Building a new PC, Installing XP, blue screen of death

    - by Tim
    I got a Gigabyte barebones kit and am installing Windows XP (SP1). The initial setup works, then it restarts and goes into the second phase of the setup. Then, when installing components (I think that's what it says), it gets halfway done and comes up with a blue screen saying IRQL_NOT_LESS_OR_EQUAL. BUT! I got past that by installing Windows XP Media Center Edition. Now I am trying to install the drivers for my Asus ATI Radeon 5770 graphics card and I get another blue screen of death that doesn't give much info, something about address x0000005. Would you think there is something wrong with my system, or do you think that if I got Windows 7 it would take care of things? Sorry for probably not giving enough info. Here is what I have:
    - Motherboard: Gigabyte S-series GA-H55M-S2(v)
    - PSU: Ultra 500 watt ATX
    - HDD: SATA (Serial ATA) Seagate Barracuda 7200
    - CPU: Intel i3
    - Memory: 4 GB Crucial
    - Graphics card: Asus ATI Radeon 5770 1 GB DDR5

    Read the article

  • Can I use webex to show app from a remote desktop connection?

    - by tim
    I am working on a demo for a potential client and they want to run a WebEx meeting. Right now we are testing with the following scenario:
    - Machine A is in a lab and is running our app.
    - Machine B is elsewhere (a laptop), and we use remote desktop to connect to machine A and run software on it.
    If we have a WebEx on machine B with other people, can we show the apps from machine A to the WebEx participants?

    Read the article

  • Homegroup should be working, but doesn't

    - by Tim
    I have Win7 installed on both my PC and laptop. When I choose to make a homegroup I can go through the steps of creating it, getting the password, then joining it from the other computer, and it says that it all connects properly. But when I go to the homegroup tab it always says no other computers are connected. If I look in the settings it will say "connected to such-and-such homegroup", but the comps won't show. Also, on my PC, when I tick the boxes in the homegroup settings for which libraries I want to share, then click on save settings, it shuts down the settings window, and when I re-open it the library tick boxes are all unticked again. Yet I have had no problems with the tick boxes staying ticked on the laptop. I have tried cancelling and remaking the homegroup, have tried making it on both computers, and have tried disabling and re-enabling the network connectors, but it still won't work. At my old house we had 3 PCs running Win 7 and 2 of them could homegroup together fine, but mine never could, as it was getting the same problem I am getting now. I feel like I am the only one on the planet with this problem. Can anybody help?

    Read the article

  • How to subscribe to a youtube feed from linux command line?

    - by Tim
    I want to subscribe to a YouTube channel and automatically download new videos to my Linux machine. I know I could do this e.g. with Miro, but I will not watch the videos using Miro, I want to choose the quality, and I would like to run it as a cron job. It should be able to:
    - know which feed entries are new, and not download old entries
    - resume (or at least redownload) failed/incomplete downloads from older sessions
    Are there any complete solutions for this? If not, it would be enough for me (maybe even preferable) to just have a command line RSS reader that remembers which entries have already been seen and writes the new video URLs (e.g. http://www.youtube.com/watch?v=FodYFMaI4vQ&feature=youtube_gdata from http://gdata.youtube.com/feeds/api/users/tedxtalks/uploads) into a file. I could then accomplish the rest using a bash script and youtube-dl. What programs would be usable for this purpose?
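
    As a point of reference, a reasonably recent youtube-dl can already cover most of this on its own when pointed at the channel rather than the RSS feed. A minimal cron-able sketch, assuming youtube-dl is installed; the format codes, directory and archive file name are illustrative:

        #!/usr/bin/env bash
        # --download-archive records the IDs of videos already fetched, so old entries
        # are skipped; --continue resumes partial downloads from earlier runs.
        DIR="$HOME/videos/tedxtalks"
        mkdir -p "$DIR" && cd "$DIR" || exit 1
        youtube-dl --download-archive archive.txt --continue -f 22/18 ytuser:tedxtalks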

    Read the article

  • Seeking to upgrade my bash magic: help decipher the command "bash -s stable"

    - by tim
    OK, so I'm working through a tutorial to get RVM installed on my Mac. The bash command to get RVM via curl is:

        curl -L https://get.rvm.io | bash -s stable

    I understand the first half, the curl command against get.rvm.io, and that the result is piped to the subsequent bash command, but I'm not sure what that command is doing. My questions:
    - -s: I'm always confused about how to refer to these. What type of thing is this: a command line argument? a switch? something else?
    - -s: what is it doing? I have googled for about half an hour, but not being sure how to refer to it makes it difficult.
    - stable: what is this?
    tl;dr: help me decipher the command bash -s stable. To those answering this post: I aspire to one day be as bash literate as you. Until then, opstards such as myself thank you for the help!
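
    For what it's worth, the behaviour is easy to reproduce locally. -s is an option (a switch) telling bash to read its script from standard input, and any words left over after option processing become the script's positional parameters, so inside the piped installer "stable" shows up as $1 (which the RVM installer uses to select the stable branch). A tiny self-contained illustration:

        # -s: read the script from stdin; remaining words become $1, $2, ...
        echo 'echo "first positional parameter: $1"' | bash -s stable
        # prints: first positional parameter: stable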

    Read the article

  • Neighbour table overflow on Linux hosts related to bridging and ipv6

    - by tim
    Note: I already have a workaround for this problem (as described below), so this is only a "want-to-know" question.
    I have a production setup with around 50 hosts, including blades running Xen 4 and EqualLogics providing iSCSI. All Xen dom0s are almost plain Debian 5. The setup includes several bridges on every dom0 to support Xen bridged networking. In total there are between 5 and 12 bridges on each dom0, servicing one VLAN each. None of the hosts has routing enabled. At one point in time we moved one of the machines to new hardware including a RAID controller, and so we installed an upstream 3.0.22/x86_64 kernel with Xen patches. All other machines run the Debian xen-dom0-kernel. Since then we noticed on all hosts in the setup the following errors every ~2 minutes:

        [55888.881994] __ratelimit: 908 callbacks suppressed
        [55888.882221] Neighbour table overflow.
        [55888.882476] Neighbour table overflow.
        [55888.882732] Neighbour table overflow.
        [55888.883050] Neighbour table overflow.
        [55888.883307] Neighbour table overflow.
        [55888.883562] Neighbour table overflow.
        [55888.883859] Neighbour table overflow.
        [55888.884118] Neighbour table overflow.
        [55888.884373] Neighbour table overflow.
        [55888.884666] Neighbour table overflow.

    The ARP table (arp -n) never showed more than around 20 entries on any machine. We tried the obvious tweaks and raised the /proc/sys/net/ipv4/neigh/default/gc_thresh* values, finally to 16384 entries, but with no effect. Not even the interval of ~2 minutes changed, which led me to the conclusion that this is totally unrelated. tcpdump showed no uncommon IPv4 traffic on any interface. The only interesting finding from tcpdump were IPv6 packets bursting in like:

        14:33:13.137668 IP6 fe80::216:3eff:fe1d:9d01 > ff02::1:ff1d:9d01: HBH ICMP6, multicast listener report max resp delay: 0 addr: ff02::1:ff1d:9d01, length 24
        14:33:13.138061 IP6 fe80::216:3eff:fe1d:a8c1 > ff02::1:ff1d:a8c1: HBH ICMP6, multicast listener report max resp delay: 0 addr: ff02::1:ff1d:a8c1, length 24
        14:33:13.138619 IP6 fe80::216:3eff:fe1d:bf81 > ff02::1:ff1d:bf81: HBH ICMP6, multicast listener report max resp delay: 0 addr: ff02::1:ff1d:bf81, length 24
        14:33:13.138974 IP6 fe80::216:3eff:fe1d:eb41 > ff02::1:ff1d:eb41: HBH ICMP6, multicast listener report max resp delay: 0 addr: ff02::1:ff1d:eb41, length 24

    which placed the idea in my mind that the problem may be related to IPv6, since we have no IPv6 services in this setup. The only other hint was the coincidence of the host upgrade with the beginning of the problems. I powered down the host in question and the errors were gone. Then I subsequently took down the bridges on that host, and when I took down (ifconfig down) one particular bridge:

        br-vlan2159 Link encap:Ethernet  HWaddr 00:26:b9:fb:16:2c
                    inet6 addr: fe80::226:b9ff:fefb:162c/64 Scope:Link
                    UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                    RX packets:120 errors:0 dropped:0 overruns:0 frame:0
                    TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
                    collisions:0 txqueuelen:0
                    RX bytes:5286 (5.1 KiB)  TX bytes:726 (726.0 B)

        eth0.2159   Link encap:Ethernet  HWaddr 00:26:b9:fb:16:2c
                    inet6 addr: fe80::226:b9ff:fefb:162c/64 Scope:Link
                    UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                    RX packets:1801 errors:0 dropped:0 overruns:0 frame:0
                    TX packets:20 errors:0 dropped:0 overruns:0 carrier:0
                    collisions:0 txqueuelen:0
                    RX bytes:126228 (123.2 KiB)  TX bytes:1464 (1.4 KiB)

        bridge name     bridge id           STP enabled     interfaces
        ...
        br-vlan2158     8000.0026b9fb162c   no              eth0.2158
        br-vlan2159     8000.0026b9fb162c   no              eth0.2159

    the errors went away again. As you can see, the bridge holds no IPv4 address and its only member is eth0.2159, so no traffic should cross it. Bridge and interface .2159 / .2157 / .2158, which are in all aspects identical apart from the VLAN they are connected to, had no effect when taken down. Now I disabled IPv6 on the entire host via sysctl net.ipv6.conf.all.disable_ipv6 and rebooted. After this, even with bridge br-vlan2159 enabled, no errors occur. Any ideas are welcome.
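
    Since disabling IPv6 is what made the messages stop, the table that was overflowing was evidently the IPv6 neighbour table, and the ipv4 gc_thresh knobs tried above govern a different table. A hedged alternative to switching IPv6 off entirely would be to raise the IPv6 equivalents; the values below are illustrative, not tuned recommendations:

        sysctl -w net.ipv6.neigh.default.gc_thresh1=2048
        sysctl -w net.ipv6.neigh.default.gc_thresh2=4096
        sysctl -w net.ipv6.neigh.default.gc_thresh3=8192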

    Read the article

  • Permission denied install Joomla CiviCRM

    - by Tim
    Dear All, I am trying to install CiviCRM on a Joomla 1.5.17 web server running Ubuntu 9.10. Uploading the package to the tmp directory in /var/www/[site name]/tmp and installing produces these errors:

        Warning: fopen(/var/www/trbcp/administrator/components/com_civicrm/civicrm/templates/CRM/common/civicrm.settings.php.tpl) [function.fopen]: failed to open stream: Permission denied in /var/www/trbcp/libraries/joomla/filesystem/file.php on line 240
        Warning: fopen(/var/www/trbcp/administrator/components/com_civicrm/civicrm/templates/CRM/common/civicrm.settings.php.tpl) [function.fopen]: failed to open stream: Permission denied in /var/www/trbcp/libraries/joomla/filesystem/file.php on line 240
        Warning: include_once(/var/www/trbcp/administrator/components/com_civicrm/civicrm.settings.php) [function.include-once]: failed to open stream: Permission denied in /var/www/trbcp/administrator/components/com_civicrm/configure.php on line 115
        Warning: include_once() [function.include]: Failed opening '/var/www/trbcp/administrator/components/com_civicrm/civicrm.settings.php' for inclusion (include_path='.') in /var/www/trbcp/administrator/components/com_civicrm/configure.php on line 115
        Warning: require_once(DB.php) [function.require-once]: failed to open stream: No such file or directory in /var/www/trbcp/administrator/components/com_civicrm/configure.php on line 140
        Fatal error: require_once() [function.require]: Failed opening required 'DB.php' (include_path='.') in /var/www/trbcp/administrator/components/com_civicrm/configure.php on line 140

    Initially I got a permission denied error and thought that Joomla did not have permissions to all its directories, but looking at Help - System Information, all the necessary directories are writable. I then decided to chmod 777 all the directories and try again, but it still fails. Looking at the directories afterwards, it seems that the new directories being created are not being created 777. By changing them I can get at least one step further before the error appears again. My question is: does anyone know how to get round this? I am thinking that the new directories being created will require sudo privileges to have mv and create actions carried out, hence the permission denied errors. Can this be configured in Joomla? Or is there a way to specify that new directories created in /var/www/[site name] take 777 by default? Any help greatly appreciated! EDIT: P.S. If anyone could give me a clue as to how the insert-code feature works as well, that would be great! Might make this post a bit more readable! EDIT2: Well, I have had a bash at changing the permissions and ownership with

        sudo chown -R www-data:www-data /var/www/trbcp

    I then tried changing the whole /var directory (insecure, I know, but this is a test and dev server for me to find my feet on) to 777 and am still getting permission errors. It seems to be an error opening a stream? Not a PHP guy so not sure what that is, but could it be that the permissions to run the PHP script need to change? Any thoughts greatly appreciated.
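
    As a hedged sketch of the usual ownership/permission reset for a Joomla docroot on Ubuntu (the path is taken from the question; 755/644 are the conventional values, not something mandated by Joomla or CiviCRM): making the web server user own the tree is generally preferable to chmod 777, and it also covers directories the installer creates later.

        sudo chown -R www-data:www-data /var/www/trbcp
        sudo find /var/www/trbcp -type d -exec chmod 755 {} \;
        sudo find /var/www/trbcp -type f -exec chmod 644 {} \;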

    Read the article

  • How to fix massive lag on ZyXEL HomePlug AV powerline adapters?

    - by Tim Abell
    I have 3 ZyXEL HomePlug AV powerline adapters as per the one in the review below. I have two plugged in currently, one into my Be / Thompson wireless router, and one into my desktop PC (box1). Every now and then the link indicator on the adapters (the mains link, not the ethernet link) goes nutty, and performance falls off a cliff (see below).
    http://www.gadgetspeak.com/gadget/article.rhtm/753/479266/ZyXEL_PowerLine_HomePlug_AV_PLA401.html

        64 bytes from box1 (192.168.1.101): icmp_seq=1064 ttl=64 time=996 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1065 ttl=64 time=549 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1066 ttl=64 time=6.15 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1067 ttl=64 time=1400 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1068 ttl=64 time=812 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1069 ttl=64 time=11.1 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1070 ttl=64 time=1185 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1071 ttl=64 time=501 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1072 ttl=64 time=1975 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1073 ttl=64 time=970 ms
        ^C
        --- box1 ping statistics ---
        1074 packets transmitted, 394 received, +487 errors, 63% packet loss, time 1082497ms
        rtt min/avg/max/mdev = 5.945/598.452/3526.454/639.768 ms, pipe 4

    Any idea how to diagnose/fix? I'm on Linux, so installing the windoze software that came with them is not something I'm terribly keen to do.

    Read the article

  • TCP/IP & throughput between FreeNAS (BSD) server & other LAN machines

    - by Tim Dickerson
    I have got a question for someone that knows BSD a bit better than me, in regards to my LAN setup at home/work here outside Chicago. I can't seem to fully optimize my network's (LAN) throughput via my FreeNAS (BSD based) file server. It runs the latest FreeBSD release, which is modified to support several protocols for file transfers and more. Every machine behind my Smoothwall (Linux based) router is on the usual 192.168.0.x subnet and for the most part works just fine. Behind the Smoothwall box, all machines are connected to a gigabit HP unmanaged switch. I host a large WISP here and have an OC-3 connection at home/work, and have no issues with downloading/uploading from/to the 'net'. My problem is with throughput. When I try to transfer large files (really any, for that matter) between any of the machines and the FreeNAS server via FTP, the max throughput I can achieve, say between a Win 7 or a Linux box, is ~65 Mbit/sec. All machines are running Intel Pro 1000 gigabit NICs and all cable is CAT6. Each is set to 'auto negotiation' and each shows 1500 MTU, full duplex at 1 Gb, so I know the hardware is okay. I have not adjusted the MTU on any machine, as I understand it to be pointless unless certain configurations are used (I assume I am not one of those). My settings for the FreeNAS machine are the following:

        # FreeNAS /etc/sysctl.conf - pertinent settings shown
        kern.ipc.maxsockbuf=262144
        kern.ipc.nmbclusters=32768
        kern.ipc.somaxconn=8192
        kern.maxfiles=65536
        kern.maxfilesperproc=32768
        net.inet.tcp.delayed_ack=0
        net.inet.tcp.inflight.enable=0
        net.inet.tcp.path_mtu_discovery=0
        net.inet.tcp.recvbuf_auto=1
        net.inet.tcp.recvbuf_inc=524288
        net.inet.tcp.recvbuf_max=16777216
        net.inet.tcp.recvspace=65536
        net.inet.tcp.rfc1323=1
        net.inet.tcp.sendbuf_inc=16384
        net.inet.tcp.sendbuf_max=16777216
        net.inet.tcp.sendspace=65536
        net.inet.udp.recvspace=65536
        net.local.stream.recvspace=65536
        net.local.stream.sendspace=65536
        net.inet.tcp.hostcache.expire=1

    From what I can tell, that looks to be a somewhat optimized profile for a typical BSD machine acting as a server for a LAN. I might be wrong and just wanted to find out from someone that knows BSD better than I do whether that is indeed OK or whether something is out of tune. Are there other ways I would find better for P2P file transfers? I honestly do not know what I SHOULD be looking for with respect to throughput between the NAS box and another client when xferring files via FTP, but I am told that what I get on average (40-70 MB/sec) is too low for what it could be. I have thought about adding another NIC in the FreeNAS box as well as the Win7 machine and using a crossover cable via a static route, but wanted to check with someone first to see if that might be worth it or not. I don't know if doing that would bypass the HP gigabit switch and allow for a machine-to-machine xfer anyway. The FTP client I use is FileZilla, and I have tried both active and passive modes with no real gain over each other. The NAS box runs ProFTPD.
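
    One hedged way to narrow this down before touching sysctl.conf any further is to measure raw TCP throughput separately from FTP and disk I/O with iperf (it would need to be installed on both ends; the address below is a placeholder for the FreeNAS box):

        iperf -s                           # run on the FreeNAS server
        iperf -c 192.168.0.10 -t 30 -i 5   # run on the Win 7 or Linux client

    If iperf shows something close to wire speed, the bottleneck is more likely FTP/ProFTPD or the disks rather than the TCP tuning.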

    Read the article

  • Finding source of leaking active memory on Mac OS Lion

    - by Tim Kemp
    My Activity Monitor shows 6 GB of active RAM usage (screenshot), yet my Real Memory column shows nothing like that amount (screenshot; there's another screenful below that, all smaller). Backing that up, the output from this command (which sums up the memory usage of every running process):

        ps -axm -o "rss,comm" | awk 'BEGIN { s=0;}; {s=s+$1;}; END { printf("%.2f GB\n", (s/1024.0/1024));}'

    gives 4.09 GB, so it looks to me like 2 GB has leaked. I see much wider ranges sometimes, perhaps 2 or 3 GB from the ps command and as much as 7 or 8 GB of active usage reported by Activity Monitor. I've tried quitting everything and logging my user out and back in again, but the active usage is still far higher than the RAM reported by ps and by each process to Activity Monitor. This 2 GB of active RAM is basically unrecoverable unless I reboot. Is there any way to a) detect what's leaking and b) get it back? Thanks
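
    A hedged cross-check that doesn't rely on Activity Monitor or the per-process sums is to ask the kernel for its own page counts with vm_stat; on Lion-era Macs a page is 4096 bytes, so the active figure converts to gigabytes like this:

        # Convert the kernel's "Pages active" count into GB for comparison.
        vm_stat | awk '/Pages active/ { printf "Active: %.2f GB\n", $3 * 4096 / 1073741824 }'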

    Read the article

  • What to filter when providing very limited open WiFi to a small conference or meeting?

    - by Tim Farley
    Executive Summary The basic question is: if you have a very limited bandwidth WiFi to provide Internet for a small meeting of only a day or two, how do you set the filters on the router to avoid one or two users monopolizing all the available bandwidth? For folks who don't have the time to read the details below, I am NOT looking for any of these answers: Secure the router and only let a few trusted people use it Tell everyone to turn off unused services & generally police themselves Monitor the traffic with a sniffer and add filters as needed I am aware of all of that. None are appropriate for reasons that will become clear. ALSO NOTE: There is already a question concerning providing adequate WiFi at large (500 attendees) conferences here. This question concerns SMALL meetings of less than 200 people, typically with less than half that using the WiFi. Something that can be handled with a single home or small office router. Background I've used a 3G/4G router device to provide WiFi to small meetings in the past with some success. By small I mean single-room conferences or meetings on the order of a barcamp or Skepticamp or user group meeting. These meetings sometimes have technical attendees there, but not exclusively. Usually less than half to a third of the attendees will actually use the WiFi. Maximum meeting size I'm talking about is 100 to 200 people. I typically use a Cradlepoint MBR-1000 but many other devices exist, especially all-in-one units supplied by 3G and/or 4G vendors like Verizon, Sprint and Clear. These devices take a 3G or 4G internet connection and fan it out to multiple users using WiFi. One key aspect of providing net access this way is the limited bandwidth available over 3G/4G. Even with something like the Cradlepoint which can load-balance multiple radios, you are only going to achieve a few megabits of download speed and maybe a megabit or so of upload speed. That's a best case scenario. Often it is considerably slower. The goal in most of these meeting situations is to allow folks access to services like email, web, social media, chat services and so on. This is so they can live-blog or live-tweet the proceedings, or simply chat online or otherwise stay in touch (with both attendees and non-attendees) while the meeting proceeds. I would like to limit the services provided by the router to just those services that meet those needs. Problems In particular I have noticed a couple of scenarios where particular users end up abusing most of the bandwidth on the router, to the detriment of everyone. These boil into two areas: Intentional use. Folks looking at YouTube videos, downloading podcasts to their iPod, and otherwise using the bandwidth for things that really aren't appropriate in a meeting room where you should be paying attention to the speaker and/or interacting.At one meeting that we were live-streaming (over a separate, dedicated connection) via UStream, I noticed several folks in the room that had the UStream page up so they could interact with the meeting chat - apparently oblivious that they were wasting bandwidth streaming back video of something that was taking place right in front of them. Unintentional use. There are a variety of software utilities that will make extensive use of bandwidth in the background, that folks often have installed on their laptops and smartphones, perhaps without realizing.Examples: Peer to peer downloading programs such as Bittorrent that run in the background Automatic software update services. 
These are legion, as every major software vendor has their own, so one can easily have Microsoft, Apple, Mozilla, Adobe, Google and others all trying to download updates in the background. Security software that downloads new signatures such as anti-virus, anti-malware, etc. Backup software and other software that "syncs" in the background to cloud services. For some numbers on how much network bandwidth gets sucked up by these non-web, non-email type services, check out this recent Wired article. Apparently web, email and chat all together are less than one quarter of the Internet traffic now. If the numbers in that article are correct, by filtering out all the other stuff I should be able to increase the usefulness of the WiFi four-fold. Now, in some situations I've been able to control access using security on the router to limit it to a very small group of people (typically the organizers of the meeting). But that's not always appropriate. At an upcoming meeting I would like to run the WiFi without security and let anyone use it, because it happens at the meeting location the 4G coverage in my town is particularly excellent. In a recent test I got 10 Megabits down at the meeting site. The "tell people to police themselves" solution mentioned at top is not appropriate because of (a) a largely non-technical audience and (b) the unintentional nature of much of the usage as described above. The "run a sniffer and filter as needed" solution is not useful because these meetings typically only last a couple of days, often only one day, and have a very small volunteer staff. I don't have a person to dedicate to network monitoring, and by the time we got the rules tweaked completely the meeting will be over. What I've Got First thing, I figured I would use OpenDNS's domain filtering rules to filter out whole classes of sites. A number of video and peer-to-peer sites can be wiped out using this. (Yes, I am aware that filtering via DNS technically leaves the services accessible - remember, these are largely non-technical users attending a 2 day meeting. It's enough). I figured I would start with these selections in OpenDNS's UI: I figure I will probably also block DNS (port 53) to anything other than the router itself, so that folks can't bypass my DNS configuration. A savvy user could get around this, because I'm not going to put a lot of elaborate filters on the firewall, but I don't care too much. Because these meetings don't last very long, its probably not going to be worth the trouble. This should cover the bulk of the non-web traffic, i.e. peer-to-peer and video if that Wired article is correct. Please advise if you think there are severe limitations to the OpenDNS approach. What I Need Note that OpenDNS focuses on things that are "objectionable" in some context or another. Video, music, radio and peer-to-peer all get covered. I still need to cover a number of perfectly reasonable things that we just want to block because they aren't needed in a meeting. Most of these are utilities that upload or download legit things in the background. 
    Specifically, I'd like to know port numbers or DNS names to filter in order to effectively disable the following services:
    - Microsoft automatic updates
    - Apple automatic updates
    - Adobe automatic updates
    - Google automatic updates
    - Other major software update services
    - Major virus/malware/security signature updates
    - Major background backup services
    - Other services that run in the background and can eat lots of bandwidth
    I also would like any other suggestions you might have that would be applicable. Sorry to be so verbose, but I find it helps to be very, very clear on questions of this nature, and I already have half a solution with the OpenDNS thing.
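
    On the DNS side, a hedged sketch of what this can look like if the filtering box runs dnsmasq (the Cradlepoint itself may not expose this; the hostnames listed are commonly used update hosts, not an exhaustive or guaranteed list):

        # /etc/dnsmasq.conf fragment: blackhole well-known update services at the DNS layer.
        address=/windowsupdate.com/0.0.0.0
        address=/update.microsoft.com/0.0.0.0
        address=/swscan.apple.com/0.0.0.0
        address=/swcdn.apple.com/0.0.0.0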

    Read the article

  • Epson Artisan 800 on Ubuntu/Linux

    - by Tim Lytle
    Update for Ubuntu 10.04: Printing should work 'out-of-box', scanning still needs the newer sane backend. Looking for a known good way to setup an Epson Artisan 800 on Ubuntu specifically or any linux box in general. It is a printer/scanner with ethernet/wifi/usb. I'd like to use it as a network printer/scanner being able to do both from my Windows and Ubuntu machines; however, if it needs to be physically connected to a computer (preferably the Ubuntu machine) that is doable (again, then sharing print/scan functions to the network). Basically, I'm looking for someone who has used this printer/scanner (or similar) in a multi-platform environment to share how the set it up and how well it worked. Updated: A little more information, like most printers (I expect) the documentation for the printer basically says, "don't use plug-n-play, run our setup CD from your Windows/Mac system", to do anything (set it up for network use even). I guess that's to make it easy for anyone else to setup, but when you're looking to use it with an unsupported (by Epson's documentation) OS, you're just stuck on your own. What I was hoping for was someone who could say, "Forget the bundled software, do [this] to set it up on wifi manually, install [this] to connect to the scanner from [os], printing works with [this] driver - at least that's how I set it up." I'll will (and have so far) use the information here, and post my own setup when I'm done, if there's no one else out there with that experience.

    Read the article

  • Performance associated with storing millions of files on NTFS

    - by Tim Brigham
    Does anyone have a method/formula, etc. that I could use - hopefully based on both current and projected numbers of files - to project the 'right' length of the split and the number of nested folders? Please note that although similar, it isn't quite the same as Storing a million images in the filesystem. I'm looking for a way to help make the theories outlined there more generic.
    Assumptions: I have 'some' initial number of files. This number would be arbitrary but large, say 500k to 10m+. I have considered the underlying physical hardware disk IO requirements that would be necessary to support such an endeavor.
    Put another way: as time progresses this store will grow. I want to have the best balance of current performance and performance as my needs increase, say if I double or triple my storage. I need to be able to address both current needs and projected future growth, and to plan ahead without sacrificing too much current performance.
    What I've come up with: I'm already thinking about using a hash split every so many characters to split things out across multiple directories while keeping the trees even, very similar to what is outlined in the comments in the question above. It also avoids duplicate files, which would be critical over time. I'm sure the initial folder structure would be different based on what I've outlined, and depending on the initial scale. As far as I can figure there isn't a one-size-fits-all solution here, and it would be horrendously time-intensive to work something out experimentally.
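
    As a hedged back-of-envelope rather than a real formula: if the hash is split two hex characters per level (256 subdirectories per level) and the aim is to keep each leaf folder to roughly 1,000 files, the number of nesting levels needed for a given file count can be estimated like this (both the per-leaf target and the projected count are placeholders to adjust):

        files=10000000          # projected number of files
        per_leaf=1000           # comfortable files-per-folder target
        awk -v n="$files" -v t="$per_leaf" 'BEGIN {
            levels = 0; leaves = 1;
            while (n / leaves > t) { leaves *= 256; levels++ }
            printf "%d level(s) of 2-hex-char folders, %d leaf directories\n", levels, leaves
        }'

    For 10 million files that comes out at 2 levels (65,536 leaf directories, roughly 150 files each), and the same arithmetic can be re-run against projected growth.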

    Read the article
