Search Results

Search found 1721 results on 69 pages for 'tim jackson'.


  • Export SSL Cert from IIS and import into GlassFish keystore

    - by Tim H
    What I need: I have an existing SSL certificate installed on IIS 6. On the same machine, I have GlassFish installed and would like to share the same certificate, since they both share the same hostname and use different ports: IIS uses 443 and GlassFish uses 8181.
    Why I need it: To reuse the existing SSL cert from IIS in GlassFish. I imagine that this is possible. I am able to install an SSL cert into GlassFish's keystore and then import the same exact cert into IIS; I just want to go the other way - imagine having an SSL cert on IIS being used for months, and now I want to enable SSL on GlassFish.
    What I have done:
    - Created a keystore with an alias: server.hostname.com
    - Imported the intermediate CA certs associated with the existing SSL cert
    - Imported the existing SSL cert with the same alias: server.hostname.com, but keytool won't allow this, as it is not associated:
      keytool error: java.lang.Exception: Public keys in reply and keystore don't match
    Why? Using a different alias causes the cert to not be trusted in the CA chain.
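
    For reference, the route I am considering (a rough sketch; file names, aliases and passwords below are placeholders, and it assumes a JDK 6+ keytool):

        # 1) On the IIS box, export the site certificate *with its private key* to a PKCS#12 file
        #    (Certificates MMC snap-in -> All Tasks -> Export -> include private key -> site.pfx)
        # 2) List the alias the PFX uses, then copy that entry into the GlassFish keystore:
        keytool -list -v -keystore site.pfx -storetype PKCS12
        keytool -importkeystore \
                -srckeystore site.pfx -srcstoretype PKCS12 -srcalias <alias-from-listing> \
                -destkeystore keystore.jks -deststoretype JKS -destalias server.hostname.com \
                -srcstorepass <pfx-password> -deststorepass changeit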

    Read the article

  • ImageMagick convert

    - by Tim
    convert is not working on the Linux server I am using:

        $ convert exploss_stumps.jpg exploss_stumps.eps
        convert: missing an image filename `exploss_stumps.eps' @ convert.c/ConvertImageCommand/2838

    Any idea why?
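
    A few checks that might narrow it down (just a sketch of things I would rule out first):

        which -a convert            # is another program named 'convert' earlier in $PATH?
        convert -version            # does this print an ImageMagick banner?
        convert -list delegate | grep -i 'eps\|ps'   # EPS output is handed off to Ghostscript
        gs --version                # is Ghostscript installed at all?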

    Read the article

  • How to backup a NAS drive to a USB drive?

    - by Tim Murphy
    How would you back up 600+ GB of data on a NAS (Network-Attached Storage) drive to a USB external drive? The NAS drive does not contain mission-critical data; nonetheless I wish to make weekly copies of it just in case. The NAS drive is almost exclusively used as an archive dump and is rarely updated. However, the backup strategy used must have a simple restore procedure so I can confidently say the data now on the NAS drive is exactly how it was at the time of backup. I did try xcopy, but it seemed like it would take many hours and eventually crashed with insufficient memory. http://www.ctunion.com/node/114 suggests I would need to use xxcopy instead due to folder/file name lengths. My concern with xcopy/xxcopy is the length of time it takes; I am hoping something else is faster. The NAS drive is a D-Link DNS-313 with a 1 TB drive installed, connected to the router via Ethernet cable. The USB drive is a Seagate 1 TB and can be connected to Windows Vista (preferred) or Windows 7 PCs. Both PCs are usually connected wirelessly; however, an Ethernet cable can be used during the backup to speed up the process.
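
    One option I am weighing (a sketch only; the share name, drive letter and log path are placeholders): robocopy ships with Vista and Windows 7 and copes with long paths and interrupted runs better than xcopy.

        robocopy \\DNS-313\Volume_1 E:\nas-backup /MIR /FFT /R:2 /W:5 /LOG:C:\nas-backup.log
        rem /MIR mirrors the tree (also deleting files removed from the NAS),
        rem /FFT tolerates the 2-second timestamp granularity of the NAS's Linux filesystem,
        rem /R and /W keep a single flaky file from stalling the run for hours.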

    Read the article

  • Why do I see router and not my real IIS?

    - by Tim Tom
    I am trying to access IIS over the web but am unable to do so. Basically I have a router (which functions as router and modem) that is provided by my ISP, and I have another router connected to the ISP's router. My ISP's router can be visited through 192.168.0.1 and the router that I connected to the ISP's router can be visited through 192.168.1.1. Please see my ISP's router: as you can see, I have DMZ enabled for my router at 192.168.1.1. Now please see my router at 192.168.1.1: as you can see, I added a virtual server for port 80, where 192.168.1.125 is my private IP. I rebooted both devices and tried to look up my IP from http://www.whatsmyip.org/, and after doing so, when I type my live IP I still see my router at 192.168.0.1 instead of my IIS. What am I missing? Note: I have disabled the firewall on both of the routers. Any help would be appreciated.
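
    One check I plan to try (a sketch; the address below is a placeholder for my real public IP): test from a machine outside the LAN, since testing from inside can be fooled by routers that do not support NAT loopback, and see which device actually answers on port 80.

        curl -I http://203.0.113.10/
        # a "Server: Microsoft-IIS/6.0" header means the forwarding chain works;
        # a router login page means the request is stopping at one of the two routers.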

    Read the article

  • Unique Features of bash compared to zsh

    - by Tim
    I have been a zsh user for quite some time (before that tcsh, and before that csh). I am quite happy with it, but was wondering if there are any compelling features of bash that do not exist in zsh. And conversely, are there zsh features which do not exist in bash? My current feeling is that bash is better if you are familiar with it already and don't want to learn new syntax, and because it is going to exist on almost all *nix machines by default, whereas zsh may be an extra install. Not trying to start a religious battle here, which is why I'm just looking for features which exist in only one of the shells.

    Read the article

  • Building a new PC, Installing XP, blue screen of death

    - by Tim
    I got a Gigabyte barebones kit and am installing Windows XP (SP1). The initial setup works, then it restarts and goes into the second phase of the setup. Then, when installing components (I think that's what it says), it gets halfway done and comes up with a blue screen saying IRQL_NOT_LESS_OR_EQUAL. BUT! I got past that by installing Windows XP Media Center Edition. Now I am trying to install the drivers for my Asus ATI Radeon 5770 graphics card and I get another blue screen of death that doesn't give much info, something about address x0000005. Do you think there is something wrong with my system, or do you think that if I got Windows 7 it would take care of things? Sorry for probably not giving enough info. Here is what I have:
    - Motherboard - Gigabyte S-series GA-H55M-S2(v)
    - PSU - Ultra 500 watt ATX
    - HDD - Seagate Barracuda 7200, Serial ATA
    - CPU - Intel i3
    - Memory - 4 GB Crucial
    - Graphics card - Asus ATI Radeon 5770 1 GB DDR5

    Read the article

  • Can I use webex to show app from a remote desktop connection?

    - by tim
    I am working on a demo for a potential client and they want to run a WebEx meeting. Right now we are testing with the following scenario:
    - Machine A is in a lab and is running our app.
    - Machine B is elsewhere (a laptop), and we use Remote Desktop to connect to machine A and run software on it.
    If we have a WebEx on machine B with other people, can we show the apps from machine A to the WebEx participants?

    Read the article

  • Homegroup should be working, but doesn't

    - by Tim
    I have Win7 installed on both my PC and laptop. When I choose to make a homegroup, I can go through the steps of creating it, getting the password, then joining it from the other computer, and it says that it all connects properly. But when I go to the homegroup tab it always says no other computers are connected. If I look in the settings it will say "connected to suchandsuch homegroup" but the computers won't show. Also, on my PC, when I tick the boxes in the homegroup settings for which libraries I want to share, then click on save settings, it shuts down the settings window, and when I re-open it the library tick boxes are all unticked again. Yet I have had no problems with the tick boxes staying ticked on the laptop. I have tried cancelling and remaking the homegroup, have tried making it on both computers, and have tried disabling and re-enabling the network connectors, but it still won't work. At my old house we had 3 PCs running Win 7 and 2 of them could homegroup together fine, but mine never could, as it was getting the same problem I am getting now. I feel like I am the only one on the planet with this problem. Can anybody help?

    Read the article

  • How to subscribe to a youtube feed from linux command line?

    - by Tim
    I want to subscribe to a YouTube channel and automatically download new videos to my Linux machine. I know I could do this e.g. with Miro, but I will not watch the videos using Miro, I want to choose the quality, and I would like to run it as a cronjob. It should be able to:
    - know which feed entries are new and not download old entries
    - resume (or at least redownload) failed/incomplete downloads from older sessions
    Are there any complete solutions for this? If not, it would be enough for me (maybe even preferable) to just have a command-line RSS reader that remembers which entries have already been seen and writes the new video URLs (e.g. http://www.youtube.com/watch?v=FodYFMaI4vQ&feature=youtube_gdata from http://gdata.youtube.com/feeds/api/users/tedxtalks/uploads) into a file. I could then accomplish the rest using a bash script and youtube-dl, as in the sketch below. What would be programs usable for this purpose?
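
    A minimal sketch of the bash-script half (the state file name is made up, and the grep only catches plain watch URLs in the feed):

        #!/bin/sh
        FEED='http://gdata.youtube.com/feeds/api/users/tedxtalks/uploads'
        SEEN="$HOME/.youtube-seen"        # list of URLs already downloaded
        touch "$SEEN"
        curl -s "$FEED" \
          | grep -o 'http://www\.youtube\.com/watch?v=[A-Za-z0-9_-]*' \
          | sort -u \
          | while read url; do
              grep -qxF "$url" "$SEEN" && continue     # skip entries we have already seen
              youtube-dl "$url" && echo "$url" >> "$SEEN"   # add a --format option to pin quality
            done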

    Read the article

  • Neighbour table overflow on Linux hosts related to bridging and ipv6

    - by tim
    Note: I already have a workaround for this problem (as described below), so this is only a "want-to-know" question. I have a production setup with around 50 hosts, including blades running Xen 4 and EqualLogics providing iSCSI. All Xen dom0s are almost plain Debian 5. The setup includes several bridges on every dom0 to support Xen bridged networking. In total there are between 5 and 12 bridges on each dom0, servicing one VLAN each. None of the hosts has routing enabled. At one point in time we moved one of the machines to new hardware including a RAID controller, and so we installed an upstream 3.0.22/x86_64 kernel with Xen patches. All other machines run the Debian xen-dom0-kernel. Since then we have noticed the following errors on all hosts in the setup, every ~2 minutes:

        [55888.881994] __ratelimit: 908 callbacks suppressed
        [55888.882221] Neighbour table overflow.
        [55888.882476] Neighbour table overflow.
        [55888.882732] Neighbour table overflow.
        [55888.883050] Neighbour table overflow.
        [55888.883307] Neighbour table overflow.
        [55888.883562] Neighbour table overflow.
        [55888.883859] Neighbour table overflow.
        [55888.884118] Neighbour table overflow.
        [55888.884373] Neighbour table overflow.
        [55888.884666] Neighbour table overflow.

    The ARP table (arp -n) never showed more than around 20 entries on any machine. We tried the obvious tweaks and raised the /proc/sys/net/ipv4/neigh/default/gc_thresh* values, finally to 16384 entries, but with no effect. Not even the interval of ~2 minutes changed, which led me to the conclusion that this is totally unrelated. tcpdump showed no uncommon IPv4 traffic on any interface. The only interesting finding from tcpdump was IPv6 packets bursting in like:

        14:33:13.137668 IP6 fe80::216:3eff:fe1d:9d01 > ff02::1:ff1d:9d01: HBH ICMP6, multicast listener report max resp delay: 0 addr: ff02::1:ff1d:9d01, length 24
        14:33:13.138061 IP6 fe80::216:3eff:fe1d:a8c1 > ff02::1:ff1d:a8c1: HBH ICMP6, multicast listener report max resp delay: 0 addr: ff02::1:ff1d:a8c1, length 24
        14:33:13.138619 IP6 fe80::216:3eff:fe1d:bf81 > ff02::1:ff1d:bf81: HBH ICMP6, multicast listener report max resp delay: 0 addr: ff02::1:ff1d:bf81, length 24
        14:33:13.138974 IP6 fe80::216:3eff:fe1d:eb41 > ff02::1:ff1d:eb41: HBH ICMP6, multicast listener report max resp delay: 0 addr: ff02::1:ff1d:eb41, length 24

    which placed the idea in my mind that the problem may be related to IPv6, since we have no IPv6 services in this setup. The only other hint was the coincidence of the host upgrade with the beginning of the problems. I powered down the host in question and the errors were gone. Then I subsequently took down the bridges on the host, and when I took down (ifconfig down) one particular bridge:

        br-vlan2159 Link encap:Ethernet  HWaddr 00:26:b9:fb:16:2c
                    inet6 addr: fe80::226:b9ff:fefb:162c/64 Scope:Link
                    UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                    RX packets:120 errors:0 dropped:0 overruns:0 frame:0
                    TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
                    collisions:0 txqueuelen:0
                    RX bytes:5286 (5.1 KiB)  TX bytes:726 (726.0 B)

        eth0.2159   Link encap:Ethernet  HWaddr 00:26:b9:fb:16:2c
                    inet6 addr: fe80::226:b9ff:fefb:162c/64 Scope:Link
                    UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                    RX packets:1801 errors:0 dropped:0 overruns:0 frame:0
                    TX packets:20 errors:0 dropped:0 overruns:0 carrier:0
                    collisions:0 txqueuelen:0
                    RX bytes:126228 (123.2 KiB)  TX bytes:1464 (1.4 KiB)

        bridge name     bridge id           STP enabled     interfaces
        ...
        br-vlan2158     8000.0026b9fb162c   no              eth0.2158
        br-vlan2159     8000.0026b9fb162c   no              eth0.2159

    the errors went away again. As you can see, the bridge holds no IPv4 address and its only member is eth0.2159, so no traffic should cross it. Bridge and interface .2159 / .2157 / .2158, which are in all respects identical apart from the VLAN they are connected to, had no effect when taken down. Now I have disabled IPv6 on the entire host via sysctl net.ipv6.conf.all.disable_ipv6 and rebooted. After this, even with bridge br-vlan2159 enabled, no errors occur. Any ideas are welcome.
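
    An alternative to the disable_ipv6 workaround that I have not tried yet (a sketch; the values are guesses): the thresholds we raised were only the IPv4 ones, but the table that overflows may well be the IPv6 neighbour cache, which has its own set of sysctls.

        sysctl -w net.ipv6.neigh.default.gc_thresh1=1024
        sysctl -w net.ipv6.neigh.default.gc_thresh2=2048
        sysctl -w net.ipv6.neigh.default.gc_thresh3=4096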

    Read the article

  • seeking to upgrade my bash magic. help decipher this command: bash -s stable

    - by tim
    OK, so I'm working through a tutorial to get RVM installed on my Mac. The command to get RVM via curl is:

        curl -L https://get.rvm.io | bash -s stable

    I understand the first half's curl command fetching from rvm.io, and that the result is piped to the subsequent bash command, but I'm not sure what that command is doing. My questions:
    - -s : I'm always confused about how to refer to these. What type of thing is this: a command-line argument? A switch? Something else?
    - -s : what is it doing? I have googled for about half an hour, but not knowing how to refer to it makes it difficult.
    - stable : what is this?
    tl;dr: help me decipher the command bash -s stable. To those answering this post, I aspire to one day be as bash literate as you. Until then, opstards such as myself thank you for the help!
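
    My current guess, based on a small experiment (so treat it as a sketch, not an authoritative answer):

        # -s tells bash to read the script from standard input, and anything after it
        # ("stable") becomes a positional parameter ($1) that the piped-in script can inspect
        echo 'echo "read from stdin, $# arg(s), first: $1"' | bash -s stable
        # prints: read from stdin, 1 arg(s), first: stable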

    Read the article

  • Permission denied install Joomla CiviCRM

    - by Tim
    Dear all, I am trying to install CiviCRM on a Joomla 1.5.17 web server running Ubuntu 9.10. Uploading the package to the tmp directory in /var/www/[site name]/tmp and installing produces this error:

        Warning: fopen(/var/www/trbcp/administrator/components/com_civicrm/civicrm/templates/CRM/common/civicrm.settings.php.tpl) [function.fopen]: failed to open stream: Permission denied in /var/www/trbcp/libraries/joomla/filesystem/file.php on line 240
        Warning: fopen(/var/www/trbcp/administrator/components/com_civicrm/civicrm/templates/CRM/common/civicrm.settings.php.tpl) [function.fopen]: failed to open stream: Permission denied in /var/www/trbcp/libraries/joomla/filesystem/file.php on line 240
        Warning: include_once(/var/www/trbcp/administrator/components/com_civicrm/civicrm.settings.php) [function.include-once]: failed to open stream: Permission denied in /var/www/trbcp/administrator/components/com_civicrm/configure.php on line 115
        Warning: include_once() [function.include]: Failed opening '/var/www/trbcp/administrator/components/com_civicrm/civicrm.settings.php' for inclusion (include_path='.') in /var/www/trbcp/administrator/components/com_civicrm/configure.php on line 115
        Warning: require_once(DB.php) [function.require-once]: failed to open stream: No such file or directory in /var/www/trbcp/administrator/components/com_civicrm/configure.php on line 140
        Fatal error: require_once() [function.require]: Failed opening required 'DB.php' (include_path='.') in /var/www/trbcp/administrator/components/com_civicrm/configure.php on line 140

    Initially I got a permission denied error and thought that Joomla did not have permissions to all its directories, but looking at Help - System Information, all the necessary directories are writable. I then decided to chmod 777 all the directories and try again, but it still fails. Looking at the directories afterwards, it seems that the new directories being created are not being created 777. By changing them I can get at least one step further before the error appears again. My question is: does anyone know how to get round this? I am thinking that the new directories being created will require sudo privileges for mv and create actions to be carried out, hence the permission denied errors. Can this be configured in Joomla? Or is there a way to specify that new directories created in /var/www/[site name] take 777 by default? Any help greatly appreciated!
    EDIT: P.S. if anyone could give me a clue as to how the insert code feature works as well, that would be great! Might make this post a bit more readable!
    EDIT2: Well, I have had a bash at changing the permissions and ownership:

        sudo chown -R www-data:www-data /var/www/trbcp

    I then tried changing the whole /var directory (insecure I know, but this is a test and dev server for me to find my feet on) to 777 and am still getting permission errors. It seems to be an error opening a stream? Not a PHP guy, so not sure what that is, but could it be that permissions to run PHP scripts need to change? Any thoughts greatly appreciated.
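
    For the record, the permission reset I am planning to try next (just a sketch; it assumes Apache runs as www-data and the site lives in /var/www/trbcp):

        sudo chown -R www-data:www-data /var/www/trbcp
        sudo find /var/www/trbcp -type d -exec chmod 755 {} \;   # directories: rwxr-xr-x
        sudo find /var/www/trbcp -type f -exec chmod 644 {} \;   # files: rw-r--r--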

    Read the article

  • How to fix massive lag on ZyXEL HomePlug AV powerline adapters?

    - by Tim Abell
    I have 3 ZyXEL HomePlug AV powerline adapters as per the one in the review below. I have two plugged in currently, one into my Be / Thompson wireless router and one into my desktop PC (box1). Every now and then the link indicator on the adapters (the mains link, not the Ethernet link) goes nutty, and performance falls off a cliff (see below).

        http://www.gadgetspeak.com/gadget/article.rhtm/753/479266/ZyXEL_PowerLine_HomePlug_AV_PLA401.html

        64 bytes from box1 (192.168.1.101): icmp_seq=1064 ttl=64 time=996 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1065 ttl=64 time=549 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1066 ttl=64 time=6.15 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1067 ttl=64 time=1400 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1068 ttl=64 time=812 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1069 ttl=64 time=11.1 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1070 ttl=64 time=1185 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1071 ttl=64 time=501 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1072 ttl=64 time=1975 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1073 ttl=64 time=970 ms
        ^C
        --- box1 ping statistics ---
        1074 packets transmitted, 394 received, +487 errors, 63% packet loss, time 1082497ms
        rtt min/avg/max/mdev = 5.945/598.452/3526.454/639.768 ms, pipe 4

    Any idea how to diagnose/fix? I'm on Linux, so installing the windoze software that came with them is not something I'm terribly keen to do.

    Read the article

  • TCP/IP & throughput between FreeNAS (BSD) server & other LAN machines

    - by Tim Dickerson
    I have a question for someone who knows BSD a bit better than I do, regarding my LAN setup at home/work here outside Chicago. I can't seem to fully optimize my network's (LAN) throughput via my FreeNAS (BSD-based) file server. It runs the latest FreeBSD release, which is modified to support several protocols for file transfers and more. Every machine that is behind my Smoothwall (Linux-based) router is on the usual 192.168.0.x subnet and for the most part works just fine. Behind the Smoothwall box, all machines are connected to a GB HP unmanaged switch. I host a large WISP here and have an OC-3 connection here at home/work, and have no issues with downloading/uploading from/to the 'net'. My problem is with throughput. When I try to transfer large files...really any for that matter...between any of the machines to/from the FreeNAS server via FTP, the max throughput I can achieve, say between a Win 7 or a Linux box, is ~65 Mbit/sec. All machines are running Intel Pro 1000 GB NICs and all cable is CAT6. Each is set to 'auto negotiation' and each shows 1500 MTU Full Duplex @ 1 GB, so I know the hardware is okay. I have not adjusted the MTU on any machine, as I understand it to be pointless unless certain configurations are used (I assume I am not one of those). My settings for the FreeNAS machine are the following:

        # FreeNAS /etc/sysctl.conf - pertinent settings shown
        kern.ipc.maxsockbuf=262144
        kern.ipc.nmbclusters=32768
        kern.ipc.somaxconn=8192
        kern.maxfiles=65536
        kern.maxfilesperproc=32768
        net.inet.tcp.delayed_ack=0
        net.inet.tcp.inflight.enable=0
        net.inet.tcp.path_mtu_discovery=0
        net.inet.tcp.recvbuf_auto=1
        net.inet.tcp.recvbuf_inc=524288
        net.inet.tcp.recvbuf_max=16777216
        net.inet.tcp.recvspace=65536
        net.inet.tcp.rfc1323=1
        net.inet.tcp.sendbuf_inc=16384
        net.inet.tcp.sendbuf_max=16777216
        net.inet.tcp.sendspace=65536
        net.inet.udp.recvspace=65536
        net.local.stream.recvspace=65536
        net.local.stream.sendspace=65536
        net.inet.tcp.hostcache.expire=1

    From what I can tell, that looks to be a somewhat optimized profile for a typical BSD machine acting as a server for a LAN. I might be wrong and just wanted to find out from someone who knows BSD better than I do whether that is indeed okay, whether something is out of tune, or what. Are there other ways I would find better for P2P file transfers? I honestly do not know what I SHOULD be looking for with respect to throughput between the NAS box and another client when transferring files via FTP, but I am told that what I get on average (40-70 MB/sec) is too low for what it could be. I have thought about adding another NIC in the FreeNAS box as well as in the Win7 machine and using a crossover cable via a static route, but wanted to check with someone first to see if that might be worth it or not. I don't know if doing that would bypass the HP GB switch and allow for machine-to-machine transfers anyway. The FTP client I use is FileZilla, and I have tried both active and passive modes with no real gain over each other. The NAS box runs ProFTPD.
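
    One test I would like to run to separate raw TCP throughput from FTP/disk overhead (a sketch; iperf is available in FreeBSD ports and for the Windows/Linux clients, and the hostname is a placeholder):

        # on the FreeNAS box:
        iperf -s
        # on the Win 7 or Linux client:
        iperf -c freenas.local -t 30 -i 5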

    Read the article

  • Finding source of leaking active memory on Mac OS Lion

    - by Tim Kemp
    My Activity Monitor shows 6 GB of active RAM usage, yet my Real Memory column shows nothing like that amount. (There's another screenful below that, all smaller.) Backing that up, the output from this command (which sums up the memory usage of every running process):

        ps -axm -o "rss,comm" | awk 'BEGIN { s=0;}; {s=s+$1;}; END { printf("%.2f GB\n", (s/1024.0/1024));}'

    gives 4.09 GB, so it looks to me like 2 GB has leaked. I see much wider ranges sometimes, perhaps 2 or 3 GB from the ps command and as much as 7 or 8 GB of Active usage reported by Activity Monitor. I've tried quitting everything and logging my user out and back in again, but the Active usage is still far higher than the RAM reported by ps and by each process to Activity Monitor. This 2 GB of active RAM is basically unrecoverable unless I reboot. Is there any way to a) detect what's leaking and b) get it back? Thanks
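
    Another view of the same numbers that might help pin down the gap (both of these ship with OS X; this is just where I would look next, not a diagnosis):

        vm_stat                  # per-page counts: active, inactive, wired, free, speculative
        top -l 1 | head -n 10    # one-shot top sample; the PhysMem line sums the same pools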

    Read the article

  • What to filter when providing very limited open WiFi to a small conference or meeting?

    - by Tim Farley
    Executive Summary
    The basic question is: if you have a very limited bandwidth WiFi to provide Internet for a small meeting of only a day or two, how do you set the filters on the router to avoid one or two users monopolizing all the available bandwidth? For folks who don't have the time to read the details below, I am NOT looking for any of these answers:
    - Secure the router and only let a few trusted people use it
    - Tell everyone to turn off unused services & generally police themselves
    - Monitor the traffic with a sniffer and add filters as needed
    I am aware of all of that. None are appropriate for reasons that will become clear. ALSO NOTE: There is already a question concerning providing adequate WiFi at large (500 attendees) conferences here. This question concerns SMALL meetings of less than 200 people, typically with less than half that using the WiFi - something that can be handled with a single home or small office router.

    Background
    I've used a 3G/4G router device to provide WiFi to small meetings in the past with some success. By small I mean single-room conferences or meetings on the order of a barcamp or Skepticamp or user group meeting. These meetings sometimes have technical attendees there, but not exclusively. Usually less than half to a third of the attendees will actually use the WiFi. The maximum meeting size I'm talking about is 100 to 200 people. I typically use a Cradlepoint MBR-1000, but many other devices exist, especially all-in-one units supplied by 3G and/or 4G vendors like Verizon, Sprint and Clear. These devices take a 3G or 4G Internet connection and fan it out to multiple users using WiFi. One key aspect of providing net access this way is the limited bandwidth available over 3G/4G. Even with something like the Cradlepoint, which can load-balance multiple radios, you are only going to achieve a few megabits of download speed and maybe a megabit or so of upload speed. That's a best-case scenario; often it is considerably slower. The goal in most of these meeting situations is to allow folks access to services like email, web, social media, chat services and so on, so they can live-blog or live-tweet the proceedings, or simply chat online or otherwise stay in touch (with both attendees and non-attendees) while the meeting proceeds. I would like to limit the services provided by the router to just those services that meet those needs.

    Problems
    In particular I have noticed a couple of scenarios where particular users end up abusing most of the bandwidth on the router, to the detriment of everyone. These boil down into two areas:
    - Intentional use. Folks looking at YouTube videos, downloading podcasts to their iPod, and otherwise using the bandwidth for things that really aren't appropriate in a meeting room where you should be paying attention to the speaker and/or interacting. At one meeting that we were live-streaming (over a separate, dedicated connection) via UStream, I noticed several folks in the room had the UStream page up so they could interact with the meeting chat - apparently oblivious that they were wasting bandwidth streaming back video of something that was taking place right in front of them.
    - Unintentional use. There are a variety of software utilities that will make extensive use of bandwidth in the background, that folks often have installed on their laptops and smartphones, perhaps without realizing. Examples:
      - Peer-to-peer downloading programs such as BitTorrent that run in the background
      - Automatic software update services. These are legion, as every major software vendor has their own, so one can easily have Microsoft, Apple, Mozilla, Adobe, Google and others all trying to download updates in the background.
      - Security software that downloads new signatures, such as anti-virus, anti-malware, etc.
      - Backup software and other software that "syncs" in the background to cloud services.
    For some numbers on how much network bandwidth gets sucked up by these non-web, non-email type services, check out this recent Wired article. Apparently web, email and chat all together are less than one quarter of Internet traffic now. If the numbers in that article are correct, by filtering out all the other stuff I should be able to increase the usefulness of the WiFi four-fold. Now, in some situations I've been able to control access using security on the router to limit it to a very small group of people (typically the organizers of the meeting), but that's not always appropriate. At an upcoming meeting I would like to run the WiFi without security and let anyone use it, because the 4G coverage at the meeting location in my town is particularly excellent - in a recent test I got 10 megabits down at the meeting site. The "tell people to police themselves" solution mentioned at top is not appropriate because of (a) a largely non-technical audience and (b) the unintentional nature of much of the usage as described above. The "run a sniffer and filter as needed" solution is not useful because these meetings typically only last a couple of days, often only one day, and have a very small volunteer staff. I don't have a person to dedicate to network monitoring, and by the time we got the rules tweaked completely the meeting would be over.

    What I've Got
    First thing, I figured I would use OpenDNS's domain filtering rules to filter out whole classes of sites. A number of video and peer-to-peer sites can be wiped out using this. (Yes, I am aware that filtering via DNS technically leaves the services accessible - remember, these are largely non-technical users attending a 2-day meeting. It's enough.) I figured I would start with these selections in OpenDNS's UI. I figure I will probably also block DNS (port 53) to anything other than the router itself, so that folks can't bypass my DNS configuration. A savvy user could get around this, because I'm not going to put a lot of elaborate filters on the firewall, but I don't care too much. Because these meetings don't last very long, it's probably not going to be worth the trouble. This should cover the bulk of the non-web traffic, i.e. peer-to-peer and video, if that Wired article is correct. Please advise if you think there are severe limitations to the OpenDNS approach.

    What I Need
    Note that OpenDNS focuses on things that are "objectionable" in some context or another. Video, music, radio and peer-to-peer all get covered. I still need to cover a number of perfectly reasonable things that we just want to block because they aren't needed in a meeting. Most of these are utilities that upload or download legit things in the background. Specifically, I'd like to know port numbers or DNS names to filter in order to effectively disable the following services:
    - Microsoft automatic updates
    - Apple automatic updates
    - Adobe automatic updates
    - Google automatic updates
    - Other major software update services
    - Major virus/malware/security signature updates
    - Major background backup services
    - Other services that run in the background and can eat lots of bandwidth
    I also would like any other suggestions you might have that would be applicable. Sorry to be so verbose, but I find it helps to be very, very clear on questions of this nature, and I already have half a solution with the OpenDNS thing.
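
    To make the kind of answer I'm after concrete, here is the shape of what I'd deploy (a sketch only, in dnsmasq syntax, assuming the router or a small Linux box in front of it can run dnsmasq; the hostnames are examples I would still want verified, not a vetted list):

        # null-route known update hosts at the DNS level
        address=/windowsupdate.com/0.0.0.0
        address=/update.microsoft.com/0.0.0.0
        address=/swscan.apple.com/0.0.0.0
        address=/armmf.adobe.com/0.0.0.0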

    Read the article

  • Epson Artisan 800 on Ubuntu/Linux

    - by Tim Lytle
    Update for Ubuntu 10.04: printing should work out of the box; scanning still needs the newer SANE backend. I am looking for a known good way to set up an Epson Artisan 800 on Ubuntu specifically, or on any Linux box in general. It is a printer/scanner with Ethernet/WiFi/USB. I'd like to use it as a network printer/scanner, being able to do both from my Windows and Ubuntu machines; however, if it needs to be physically connected to a computer (preferably the Ubuntu machine) that is doable (again, then sharing print/scan functions to the network). Basically, I'm looking for someone who has used this printer/scanner (or similar) in a multi-platform environment to share how they set it up and how well it worked. Updated: A little more information. Like most printers (I expect), the documentation for the printer basically says, "don't use plug-and-play, run our setup CD from your Windows/Mac system", to do anything (even to set it up for network use). I guess that's to make it easy for anyone else to set up, but when you're looking to use it with an unsupported (by Epson's documentation) OS, you're just stuck on your own. What I was hoping for was someone who could say, "Forget the bundled software, do [this] to set it up on WiFi manually, install [this] to connect to the scanner from [os], printing works with [this] driver - at least that's how I set it up." I will (and have so far) use the information here, and post my own setup when I'm done, if there's no one else out there with that experience.

    Read the article

  • Performance associated with storing millions of files on NTFS

    - by Tim Brigham
    Does anyone have a method / formula, etc. that I could use - hopefully based on both current and projected numbers of files - to project the 'right' length of the split and the number of nested folders? Please note that although similar, it isn't quite the same as Storing a million images in the filesystem. I'm looking for a way to help make the theories outlined there more generic.
    Assumptions: I have 'some' initial number of files. This number would be arbitrary but large - say 500k to 10m+. I have considered the underlying physical hardware disk IO requirements that would be necessary to support such an endeavor.
    Put another way: as time progresses this store will grow, and I want the best balance of current performance and performance as my needs increase - say I double or triple my storage. I need to be able to address both current needs and projected future growth. I need to both plan ahead and not sacrifice too much current performance.
    What I've come up with: I'm already thinking about using a hash split every so many characters to split things out across multiple directories, keeping the trees even, very similar to what is outlined in the comments on the question above (see the sketch below). It also avoids duplicate files, which would be critical over time. I'm sure that the initial folder structure would be different based on what I've outlined, and depending on the initial scale. As far as I can figure, there isn't a one-size-fits-all solution here, and it would be horrendously time-intensive to work something out experimentally.
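
    A back-of-the-envelope version of the formula I'm after, plus a sketch of the hash split (Linux tools used just for illustration; the 2+2 split and the paths are assumptions, not a recommendation):

        # two levels of 2 hex characters each = 16^2 * 16^2 = 65,536 leaf folders;
        # 10M files / 65,536 folders ~= 150 files per folder
        f="some-file.jpg"
        h=$(printf '%s' "$f" | md5sum | cut -c1-4)
        mkdir -p "/store/${h:0:2}/${h:2:2}"
        mv "$f" "/store/${h:0:2}/${h:2:2}/"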

    Read the article

  • What is the quickest reliable way to backup a NAS drive to a USB drive?

    - by Tim Murphy
    How would you back up 600+ GB of data on a NAS (Network-Attached Storage) drive to a USB external drive? The NAS drive does not contain mission-critical data; nonetheless I wish to make weekly copies of it just in case. The NAS drive is almost exclusively used as an archive dump and is rarely updated. However, the backup strategy used must have a simple restore procedure so I can confidently say the data now on the NAS drive is exactly how it was at the time of backup. I did try xcopy, but it seemed like it would take many hours and eventually crashed with insufficient memory. http://www.ctunion.com/node/114 suggests I would need to use xxcopy instead due to folder/file name lengths. My concern with xcopy/xxcopy is the length of time it takes; I am hoping something else is faster. The NAS drive is a D-Link DNS-313 with a 1 TB drive installed, connected to the router via Ethernet cable. The USB drive is a Seagate 1 TB and can be connected to Windows Vista (preferred) or Windows 7 PCs. Both PCs are usually connected wirelessly; however, an Ethernet cable can be used during the backup to speed up the process.

    Read the article

  • How can I disable Control-W in Windows XP?

    - by Tim Visher
    I'm an Emacs/Mac user trapped on Windows at work, and I often type Control-W from muscle memory to delete a word and thus by mistake kill the entire window, including everything that I was working on. This is a particularly egregious problem for Console2, as I run GNU Screen and am often doing many things at once. Is there any way to completely disable Control-W or remap it to something that is far harder to type? Thanks!
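
    One approach I am looking at (a sketch using AutoHotkey, a free keyboard-remapping tool; I have not verified Console2's window class, so the scoping line is an assumption to check with AutoHotkey's Window Spy):

        ; swallow Ctrl-W entirely while Console2 is the active window
        #IfWinActive ahk_class Console2Window   ; class name is a guess - check with Window Spy
        ^w::Return
        #IfWinActive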

    Read the article

  • Why is my filesystem being mounted read-only in linux?

    - by Tim
    I am trying to set up a small Linux system based on Gentoo on a VirtualBox machine, as a step towards deploying the same system onto a low-spec single-board computer. For some reason, my filesystem is being mounted read-only. In my /etc/fstab, I have:

        /dev/sda1    /         ext3    defaults   0 0
        none         /proc     proc    defaults   0 0
        none         /sys      sysfs   defaults   0 0
        none         /dev/shm  tmpfs   defaults   0 0

    However, once booted, /proc/mounts shows:

        rootfs / rootfs rw 0 0
        /dev/root / ext3 ro,relatime,errors=continue,barrier=0,data=writeback 0 0
        proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
        sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
        udev /dev tmpfs rw,nosuid,relatime,size=10240k,mode=755 0 0
        devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620 0 0
        none /dev/shm tmpfs rw,relatime 0 0
        usbfs /proc/bus/usb usbfs rw,nosuid,noexec,relatime,devgid=85,devmode=664 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,nosuid,nodev,noexec,relatime 0 0

    (The above may contain errors: there's no practical way to copy and paste.) The partition at /dev/sda1 is clearly being mounted OK, since I can read all the data, but it's not being mounted as described in fstab. How might I go about diagnosing / resolving this?
    Edit: I can remount with mount -o remount,rw / and it works as expected, except that /proc/mounts reports /dev/root mounted at / rather than /dev/sda1 as I'd expect. If I try to remount with mount -a I get:

        mount: none already mounted or /sys busy
        mount: according to mtab, sysfs is already mounted on /sys

    Edit 2: I resolved the problem with mount -a (the same error was occurring during startup, it turned out) by changing the sysfs and proc lines to:

        proc   /proc  proc   [...]
        sysfs  /sys   sysfs  [...]

    Now mount -a doesn't complain, but it doesn't result in a read-write root partition. mount -o remount / does cause the root partition to be remounted, however.
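
    A couple of things I plan to check next (a sketch; the device name follows the fstab above):

        cat /proc/cmdline      # is the bootloader passing 'ro' (and no 'rw') for the root fs?
        dmesg | grep -i ext3   # any journal/abort messages that would force read-only?
        tune2fs -l /dev/sda1 | grep -i 'state\|behav'   # filesystem state / error behaviour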

    Read the article

  • How to move my data from my old MacBook Pro to my new one?

    - by Tim Büthe
    I just purchased a new MacBook Pro and already have a 2008 model. I wonder how I should move all my data over to the new one. My first idea was to use my Time Machine backup and restore from it, which seems to be a good idea and should work just fine according to this link: http://blog.duncandavidson.com/2008/01/restoring-from-time-machine.html. But since my current MacBook has older software on it, like iLife '08 instead of iLife '09, I would have to upgrade this afterwards. Is this correct, or does Time Machine do some magic to exclude well-known software? And is it possible to reinstall or upgrade iLife with the included installation DVDs? My second idea is to just swap the hard drives instead of using the Time Machine backup. If it is not too complicated to remove the HDD, this should be the fastest way. This also has the benefit that the 2008 MacBook then contains a brand-new installation and I don't have to remove all my stuff or reinstall Mac OS before I give it away. My question on that second idea would be: does Snow Leopard handle this correctly? Do I just reboot with the new hardware and everything works fine? So, in a nutshell: what would you do, restore from backup or swap drives? And what about the newer software?

    Read the article

  • Windows 8 with LiveID login authenticates as Guest to remote SQl Server

    - by Tim Long
    I have a network where several users are using Office Accounting 2009 in multi-user client/server mode. OA is built on SQL Server. One PC acts as the 'server' and has the SQL Server instance; the others have only the application installed and no SQL instance, and all of the apps connect remotely to the SQL instance on the 'server'. I'm using the term 'server' loosely here: it is just a normal workstation that happens to be designated as the server and runs the SQL instance. There is no NT domain; all user accounts are local accounts. The way that OA works in multi-user mode is that each user is required to have a local account with the same username and password on both the client and 'server' PCs. This has been working well. Now along comes Windows 8. I use my 'Microsoft Account' aka LiveID to log into Windows 8. Office Accounting runs fine and attempts to connect to the database, but fails: 'you do not have permission to perform this operation'. In the SQL logs, I get this error:

        2012-10-28 17:54:01.32 Logon    Error: 18456, Severity: 14, State: 11.
        2012-10-28 17:54:01.32 Logon    Login failed for user 'SERVER\Guest'. Reason: Token-based server access validation failed with an infrastructure

    SERVER is the hostname of the server. So it seems to be authenticating as 'Guest'?? To verify this, I enabled the Guest account on the 'server' PC and then added Guest as an allowed user within Office Accounting (this simply creates the user in SQL and gives it an appropriate database role). Sure enough, my Windows 8 PC was then able to connect to the database when using Office Accounting. Clearly, having users authenticate as 'Guest' stinks from a security and auditing standpoint. So what I need are some ideas for how to work around this. I've tried switching the Windows 8 PC to a 'local account' and that works too, but requires giving up significant functionality on the Windows 8 PC. What I really need is a way to force the Windows 8 PC to use a specific set of credentials when connecting to the remote SQL instance. Office Accounting takes the logged-in username, which is my LiveID and doesn't correspond to any Windows user name. Anyone solved this issue?
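
    One workaround I am experimenting with (a sketch; the account name and install path are placeholders, and I have not confirmed it plays nicely with Office Accounting): launch the application so that only its network connections use a matching local account on the server.

        runas /netonly /user:SERVER\oauser "C:\Program Files\Microsoft Office Accounting\OfficeAccounting.exe"
        rem /netonly keeps the LiveID session for everything local, but presents
        rem SERVER\oauser's credentials whenever the process talks to remote machines.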

    Read the article

  • Split MPEG video from command line?

    - by Tim
    I have a homemade DVD that I'm effectively trying to insert chapters into and rearrange - the original author burned it as one long chapter, and I'd like to rip it into smaller pieces and re-encode it into a new DVD. I ripped the DVD with the following command:

        mplayer dvd:// -dvd-device /dev/sr2 -dumpstream -dumpfile raw.vob

    I'm running Gentoo Linux with mplayer version 1.0-rc2_p20090731 (the latest available in Portage). I have a list of times that the chapters are supposed to span (for example 30:11-33:25), so my first thought was to rip the entire DVD and use mpgtx to cut out certain pieces of the file. My issue is that running mpgtx -i on the file reports quite a few timestamp jumps:

        Time stamps jumped from 59.753789 to 0.001622 at position 1d29800
        Time stamps jumped from 204963823030450.343750 to 31.165900 at position 2d4f800
        Time stamps jumped from 60.077878 to 0.001622 at position 43cc000
        Time stamps jumped from 60.024233 to 0.001622 at position 65c5000
        Time stamps jumped from 204963823068631.718750 to 52.549244 at position 7fd1000

    I've tried to fix the indexes using:

        mencoder raw.vob -oac copy -ovc copy -forceidx -o fixed.vob -of mpeg

    but mpgtx will still report timestamp issues. My immediate question: is there a way to take the ripped movie I have and correct its timestamps so I can cut it with mpgtx? If I can get that one issue out of the way, building the rest of the DVD will be smooth sailing. If it's not possible to fix the timestamps on this file: is there a better way to rip small chunks of the DVD into separate files for recompilation later? I'd very much like this to be done on Linux, and it'd be even better if I could script it somehow (feed in a list of start and end positions, or start times and durations, and get out a series of ripped files). If need be, I also have a Mac OS X machine available, but no Windows. Edit: I wound up finding another solution involving HandBrake and ffmpeg (with help from this question), but the question stands. Edit again: turns out my other solution didn't quite work - the audio desynchronized by about five seconds in about half of my cut mpgs - so I'm back to square one. Anyone?
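
    For the scripted route, here is the kind of thing I had in mind (a sketch only; the times are the 30:11-33:25 example from above, the duration is just end minus start, and the output name is arbitrary):

        # cut one chapter out of the ripped stream without re-encoding
        # (older ffmpeg syntax: -acodec/-vcodec copy rather than -c copy)
        ffmpeg -i raw.vob -ss 00:30:11 -t 00:03:14 -acodec copy -vcodec copy chapter05.vob
        # note: the timestamp jumps above may still confuse stream copy, in which case
        # re-encoding (dropping the two copy options) is the fallback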

    Read the article
