Search Results

Search found 18454 results on 739 pages for 'oracle thoughts'.


  • Cisco SA520 to Adtran 1234 no DHCP transfer

    - by Grico
    I am trying to set up a Cisco SA520 to run DHCP on my network. I have a vendor-provided switch, the Adtran 1234, which provides DHCP for our phone system on VLAN 200. I do not have access to the Adtran, but the vendor gave me an IP on port 1 for the WAN and said port 2 is where the "trust" side should go. I set up a mini lab where Adtran port 1 went to the SA520 WAN port, and SA520 trust port 1 went to my laptop. Everything worked fine: I could ping and get internet using the DHCP scope I put on the SA520. I then unplugged my computer from SA520 trust 1 and plugged the SA520's trust side into Adtran port 2. I plugged my computer into Adtran port 23, and I don't get DHCP or even a link light. If I restart my machine, I get a brief link light, and then it dies once the machine boots. I have tried several ports on the Adtran and none seem to work; different cables as well. However, when I plug a phone into the Adtran, the phone boots immediately and shows link. Thoughts?

    Read the article

  • Apache ProxyPassReverse and https

    - by joshuaball
    I would like to map all traffic on ports 80 and 443 from foo.com to an internal server, 192.168.1.101. I have a VirtualHost (Apache 2.2 on Ubuntu) set up as follows:

        <VirtualHost *:80>
            ServerName foo.com
            ServerAlias *.foo.com
            ProxyRequests Off
            ProxyPreserveHost On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / http://192.168.1.101/
            ProxyPassReverse / http://192.168.1.101/
        </VirtualHost>

    And that works great for HTTP traffic. However, I can't seem to do the same thing for HTTPS. I have tried: changing VirtualHost *:80 to *, but that doesn't work (I need it HTTP-to-HTTP and HTTPS-to-HTTPS); and creating a new VirtualHost entry for *:443 that proxies to http://192.168.1.101/, but that fails as well (browser timeouts). I did some searching, here and elsewhere, and the closest question I could find was this, but it didn't quite answer it. Also, just out of curiosity, I tried mapping all ports to HTTPS (by changing the two ProxyPass lines from http to https and removing the :80 from the VirtualHost), and that didn't work either. How would you do that as well? Any thoughts? Thanks in advance.
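
    A minimal sketch of the missing HTTPS side, assuming SSL is terminated on the proxy itself (the certificate paths are placeholders) and that mod_ssl and mod_proxy are enabled. A bare *:443 VirtualHost times out because nothing decrypts the TLS handshake; SSLEngine fixes that, and SSLProxyEngine is needed only if the backend also speaks HTTPS:

        <VirtualHost *:443>
            ServerName foo.com
            ServerAlias *.foo.com
            SSLEngine On
            SSLCertificateFile /etc/ssl/certs/foo.com.crt
            SSLCertificateKeyFile /etc/ssl/private/foo.com.key
            ProxyRequests Off
            ProxyPreserveHost On
            SSLProxyEngine On
            ProxyPass / https://192.168.1.101/
            ProxyPassReverse / https://192.168.1.101/
        </VirtualHost>

    If the backend only listens on plain HTTP, the two Proxy lines can point at http://192.168.1.101/ instead and SSLProxyEngine can be dropped.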

    Read the article

  • Get details / solve issue with a kernel panic?

    - by Joseph
    I have a Lenovo T430 running Linux Mint 13 (MATE):

        joseph:~$ uname -a
        Linux joseph-T430-LM 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    I installed Mint immediately after getting the laptop about two weeks ago, and have noticed that about once a day the computer will completely freeze up: I can't use Ctrl+Alt+Backspace to restart X, I can't use Ctrl+Alt+F1 to get a text-only terminal, I can't move the mouse or type, and if any music was playing it just gets stuck in about a one-second loop. There is a Windows partition, but I haven't had any issues in Windows. I couldn't find a common thread between the freezes; they were seemingly random (sometimes right after I clicked the mouse, sometimes not; sometimes with Pandora/Flash in use, sometimes not; etc.). I assume they're kernel panics since the machine completely locks up, but the laptop doesn't have a caps lock or scroll lock LED. It is on a dock and I do have a USB keyboard, but the scroll lock/caps lock lights do not flash when it happens (I'm not sure if this indicates it's not a kernel panic, or if a kernel panic just wouldn't illuminate the LEDs on a USB keyboard attached to a laptop dock). This was annoying but not terrible. However, I've found a way to reproduce it: I have a particular CSV file that, when I open it in LibreOffice Calc and scroll around, triggers the same complete lock-up. I really need to use this file, so I'd like to fix the issue, but at the least it's given me a test case to work with. So, having a case where I can cause this issue, what can I do to better find out what's going on? I've looked in /var/log/syslog but haven't found anything seemingly useful. Any thoughts?
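
    One way to confirm whether these are real kernel panics, given that a USB keyboard's lock LEDs may not flash: stream kernel messages to a second machine before triggering the freeze with the CSV file. A minimal sketch, assuming the second machine is at 192.168.1.50 and the laptop's wired interface is eth0 (both are placeholders):

        # on the freezing laptop: send kernel messages over UDP via netconsole
        sudo modprobe netconsole netconsole=@/eth0,6666@192.168.1.50/
        # on the second machine: listen for them
        # (some netcat variants want: nc -u -l -p 6666)
        nc -u -l 6666

    If a panic trace arrives on the listener just as the machine locks up, it really is a panic, and the trace should name the offending driver or subsystem.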

    Read the article

  • DisableCrossAccountCopy not working on some Outlook installs, working on others, both going against Exchange

    - by MikeBaz
    As part of a mail migration project from one Exchange organization to another, we need to be able to prevent users from moving/copying messages between their accounts in each organization. (Yes, users will think this is evil; no, it's not my decision; yes, users will hate us.) Luckily, we thought, Outlook 2010 provides the DisableCrossAccountCopy registry value/policy (cf. http://technet.microsoft.com/en-us/library/ff800883.aspx). (Because you can't do multiple Exchange organizations in a single profile before Outlook 2010, this only matters on Outlook 2010. Yes, I'm ignoring for the sake of this question copy/move to/from the filesystem.) In our test lab, in a test forest with a test Exchange organization, with a second Exchange account added to the profile in either of the "real" Exchange organizations, with the value set to "*", everything works as expected. On a workstation in one of the production domains, however, the setting does not seem to work. We have tried it under HKCU, HKLM, HKCU\Software\Policies, and HKLM\Software\Policies. It simply seems to be ignored. The value was set in the OCT on a test machine, but the OCT (and the ADM/ADMX file) have the wrong type for the value. We have located the value in the registry and removed it everywhere it is found, we think, and put it back in HKCU, but it still isn't taking. At the moment, a clean Outlook install is not an option - even if it was, we at this point would need to know what to do to fix the pushed copy (I didn't push the copy out to thousands of machines, I've just been asked to help clean up the current mess). Thoughts?
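
    For reference, this value only works as a multi-string, and the OCT/ADM template writing the wrong type is a plausible culprit here. One hedged check (the Office 14.0 path is the usual one for Outlook 2010, assumed rather than confirmed) is to delete every stray copy and recreate it explicitly as REG_MULTI_SZ from an elevated prompt on the broken workstation:

        reg add "HKCU\Software\Microsoft\Office\14.0\Outlook" /v DisableCrossAccountCopy /t REG_MULTI_SZ /d "*" /f

    If a REG_SZ copy of the value still lingers anywhere (HKLM, the Policies hives), it may shadow this one, so removing those first is worth a try.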

    Read the article

  • Managing Apache to Compensate for WebDAV's Security Masking

    - by Tohuw
    When a user creates a file via WebDAV, the default behavior is that the file is owned by the user and group running the Apache process, with a umask of 022. Unfortunately, this makes it impossible for unprivileged users to write to the files by other means without being a member of the group Apache runs under (which strikes me as a particularly bad idea). My current solution is to set umask 000 in Apache's envvars and remove all world permissions from the WebDAV parent directory. So, if the WebDAV share is /home/foo/www, then /home/foo/www is owned by www-data:foo with permissions of 770. This keeps other unprivileged users out, more or less, but it's hokey at best and a security disaster in waiting at worst. From my research and poking around at mod_dav and Apache, I cannot find a reasonable solution short of a cron job flipping all the permissions back (I'd rather not have the extra load and complexity on the server). SuExec won't work either, because WebDAV operations are not going to execute as a different user. Any thoughts on this? Thank you.
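
    One alternative that avoids both the umask hack and a cron job, sketched under the assumption that the filesystem is mounted with ACL support: give the owning user an access ACL plus a default ACL on the share, so files Apache creates stay writable for that user regardless of umask:

        # grant foo rwx on everything now, and (via the default ACL) on
        # anything created inside the tree later
        setfacl -R -m u:foo:rwX -m d:u:foo:rwX /home/foo/www

    mod_dav still creates files as www-data, but new files inherit the default ACL, so foo can edit them by other means without world permissions or group membership games.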

    Read the article

  • Apache doesn't immediately notice a change in the document root

    - by Tom
    We use Capistrano for website deployments, and our Apache document root is a symlink to a particular code release. The deployment procedure switches the symlink from the old release to the new release as the final step of the deployment. We are migrating our webservers from real servers running RHEL 5.6 to Amazon EC2 virtual machines running Ubuntu 11.10, and the new servers suffer from a problem where Apache doesn't immediately notice the change to its document root when the symlink is switched. It can take a second or so (and I think I've even seen it take a couple of minutes). It's as if Apache has cached the physical path of the symlink for some time. Does anyone know some Apache settings I could look at to get it to notice changes to its served files quicker? Thoughts so far:

    - I read that the disks on virtual machines are much slower (since they are network-attached storage). Perhaps the filesystem cache somehow works differently too? If so, is there anything that can be done?
    - The website runs PHP code. Perhaps there are some PHP config differences between RHEL and Ubuntu? I checked realpath_cache_ttl, but both servers have it commented out:

        ; Duration of time, in seconds for which to cache realpath information for a given
        ; file or directory. For systems with rarely changing files, consider increasing this
        ; value.
        ; http://www.php.net/manual/en/ini.core.php#ini.realpath-cache-ttl
        ;realpath_cache_ttl = 120

    - We do use the APC opcode cache, but I don't think it's the issue, based on experimentation: the PHP code is in a different file path for each deployment, and we ensure stat=1.
    - Here is a similar question that is very interesting: 294107 - but it doesn't provide an answer for me.
    - One solution would be to reload Apache every time we modify the document root symlink (see the sketch below). I'll do this if we can't find another solution.
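
    A deployment-side sketch of that reload idea, with placeholder release paths, which also removes the one non-atomic step in the swap itself:

        # build the new link under a temp name, then rename it over the old one;
        # mv -T treats the destination as a plain file, so the symlink is
        # replaced in a single atomic rename
        ln -sfn /var/www/releases/20120101 /var/www/current.tmp
        mv -Tf /var/www/current.tmp /var/www/current
        # a graceful reload finishes in-flight requests and drops cached paths
        sudo /usr/sbin/apache2ctl graceful

    Plain ln -sfn onto an existing link is a delete-then-create, so there is a brief window where the docroot doesn't exist; the rename closes that window and may account for some of the lag if anything stats the path mid-swap.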

    Read the article

  • how to run multiple shell scripts in parallel

    - by tom smith
    I've got a few test scripts, each of which runs a test PHP app, and each script runs forever. So cat.sh, dog.sh, and foo.sh each run a PHP script in a loop, sleeping after each run, so each runs forever. I'm trying to figure out how to run the scripts in parallel and, at the same time, see the output of the PHP apps in the stdout/terminal window. I thought simply doing something like this in a shell script would be sufficient:

        foo.sh >&2
        dog.sh >&2
        cat.sh >&2

    but it's not working: foo.sh runs foo.php once, and it runs correctly; dog.sh runs dog.php in a never-ending loop, as expected; but cat.sh, which runs cat.php in a never-ending loop, never runs! It appears that the shell script never gets to cat.sh. If I run cat.sh by itself in a separate window/terminal, it runs as expected. Thoughts/comments?
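
    As written, each script runs in the foreground, so the wrapper never reaches the next line once it hits a script that loops forever. A minimal sketch of the intended wrapper backgrounds each one with & (their stdout/stderr still land in the terminal) and then waits:

        #!/bin/sh
        # launch all three concurrently; & puts each in the background
        ./foo.sh &
        ./dog.sh &
        ./cat.sh &
        # keep the wrapper alive until all of them exit (i.e. forever, here)
        wait

    This also matches the observed behavior: foo.sh apparently exits after one pass, dog.sh then loops forever in the foreground, and cat.sh is never reached.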

    Read the article

  • netconfig won't change DNS on opensuse 12.2

    - by Krystian
    I'm trying to update my DNS servers after an OpenVPN connection, but netconfig won't do it. Here's how I'm trying to do it (manually, for now):

        /sbin/netconfig modify -v -i tap0 -s openvpn <<-EOF
        INTERFACE='tap0'
        DNSSERVERS='10.10.0.1'
        EOF

    And here's the verbose output:

        debug: lockfile created (/var/run/netconfig.pid) for PID 5530
        debug: lockfile created
        debug: write new STATE file /var/run/netconfig//tap0/netconfig0
        debug: Module order: dns-resolver dns-bind dns-dnsmasq nis ntp-runtime
        debug: dns-resolver module called
        debug: Static Fallback
        debug: Use NetworkManager policy merged settings
        debug: exec get_dns_settings: /var/run/netconfig/NetworkManager.netconfig
        debug: get_dns_settings: service 'NetworkManager' => rank '1'
        debug: get_dns_settings: DNS_SEARCHLIST_1='mydomain.com'
        debug: get_dns_settings: DNS_SERVERS_1='192.168.0.1'
        debug: exit get_dns_settings: /var/run/netconfig/NetworkManager.netconfig
        debug: write_resolv_conf: ' mydomain.com ' ' 192.168.0.1 '
        debug: No changes for /etc/resolv.conf
        debug: dns-bind Module called
        debug: dns-dnsmasq Module called
        debug: nis Module called
        debug: Static Fallback
        debug: Use NetworkManager policy merged settings
        debug: exec get_nis_settings: /var/run/netconfig/NetworkManager.netconfig
        debug: exit get_nis_settings: /var/run/netconfig/NetworkManager.netconfig
        debug: set_nisdomainname: eth0 24
        debug: set_nisdomainname: => yes
        debug: set_nisdomainname: old[]=, new[24]=
        debug: format_yp_conf called with :
        debug: Using static fallback
        debug: format_static[0] called
        debug: No changes for /etc/yp.conf
        debug: nis domainname '' is up to date
        debug: ntp-runtime Module called
        debug: Static Fallback
        debug: Use NetworkManager policy merged settings
        debug: exec get_ntp_settings: /var/run/netconfig/NetworkManager.netconfig
        debug: get_ntp_settings: NTP_SERVER_LIST=''
        debug: exit get_ntp_settings: /var/run/netconfig/NetworkManager.netconfig

    I've been trying to find something relevant on the web but have failed, and I have no other clue how to make progress on this issue. Any thoughts?
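
    A hedged guess from the verbose output: the merge policy is "NetworkManager", so netconfig only honors NetworkManager's settings and quietly drops the openvpn service's entry. Checking (and, if needed, widening) the policy in /etc/sysconfig/network/config would confirm this; the example value below is an assumption to illustrate the idea, not a known-good setting:

        grep NETCONFIG_DNS_POLICY /etc/sysconfig/network/config
        # e.g. widen it from "auto"/"NetworkManager" to something that also
        # ranks the VPN interface, such as:
        #   NETCONFIG_DNS_POLICY="STATIC_FALLBACK NetworkManager tap0"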

    Read the article

  • System fans connected to a Gigabyte Z77-D3H motherboard do not increase in speed

    - by Andrew
    The motherboard (Gigabyte Z77-D3H) controls my 3-pin CPU fan just fine. My system fans are a 3-pin fan (plugged into SYS_FAN1) and a 4-pin fan (plugged into SYS_FAN3). All three of the system fan headers are 4-pin, but the user manual states that SYS_FAN1 is really a 3-pin header (that it can control the speed of a 3-pin fan) and the fourth pin is just a reserve. All my fans have a max speed of 2000 RPM. Normally, all the fans run around 1000 RPM when I'm not doing anything intensive, which proves the motherboard can set the speed. However, when I run Folding@home and my temperatures increase (to around 70C), only the CPU fan increases to around 2000 RPM; the system fans stay around 1000 RPM. Through the BIOS I am able to disable system fan control, and the system fans then run at max RPM (meaning the motherboard was doing something). I've updated the BIOS to the latest version and tried out SpeedFan, but neither helped my situation. What I'd like is for the system fans to ramp up their RPMs as needed. Any thoughts? Tl;dr: my system (case) fans, but not my CPU fan, are always stuck around 1000 of 2000 RPM no matter the temperature.

    Read the article

  • Centralized Windows/Mac Patch Management that is easy to use

    - by BiggsTRC
    I'm looking for advice on which patch management solutions you would recommend based on your experience, and which ones you would not. We have a mixed network of Windows and Mac clients. Our central servers are all Windows servers, although I have considered putting in a Mac server to better handle our Mac clients. The issue we currently face is that we need to maintain the patches on all of our third-party applications. Right now we use WSUS, which handles patching of Windows and some Microsoft products, but that is about it. I need something to cover the other applications, specifically things like Adobe products (Reader, Flash, Dreamweaver, etc.). Our network isn't that big (maybe 200 clients), and I don't have a person to dedicate just to patching and maintaining a patch management solution, so very large and complicated solutions like System Center are most likely out. I have recently been looking at Dell's KACE K1000 solution (http://www.kace.com/products/systems-management-appliance/). It seems simple, it provides a lot of tools in one package that I would like or need, and I like that it is self-contained in an appliance and designed for situations like mine. However, I'm not sure if it is the best solution. I've also looked at Shavlik's NetChk solution (http://www.shavlik.com/netchk-protect.aspx), but I don't need an anti-virus product; that said, it looks like they might have a very good patch database. My question is this: what are your thoughts on these two products? Are there better products out there? Are there issues I'm not considering? I want something that is very good at patching a broad range of products, that is simple to use, that takes a minimal amount of management (like WSUS), and that (hopefully) works with both Mac and Windows.

    Read the article

  • Cannot Delete Item "Could Not Find This Item" issue

    - by aronchick
    A friend sent along a file (a .rar) he wanted me to check out for him before he installed it. I downloaded it and unrared it with no problems, but it was full of .exe files instead of the intended contents (fonts), so I advised him to delete it immediately and not use it. I then proceeded to do the same, but the folder simply will not delete. Oddly, the files deleted fine, and I never ran anything, but this is what I'm seeing:

        Could not find this item
        This is no longer located in C:\Users\This_User\Desktop. Verify the
        item's location and try again.

    I've tried the following things with no help:

    - Using "Unlocker" to unlock and delete
    - Using move-on-reboot and rebooting
    - Using PendMoves (from Sysinternals) and rebooting
    - Elevating a cmd line, doing a dir /x to get the short name of the folder, and then del 'shortna~1'
    - Moving the folder to a new folder and then trying to delete the parent folder

    I'm on Windows 7 RTM, a very fresh install. Any thoughts? Update: just to confirm, I've run HijackThis and half a dozen other malware detectors, and everything came back clean (no extra processes, no other obvious badness). Rebooting in safe mode didn't help either.
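
    One trick that often clears these ghost entries, sketched with a hypothetical folder name since the real one isn't shown: the \\?\ prefix makes Windows skip the Win32 path normalization that chokes on malformed names, letting an elevated prompt remove the folder directly:

        rd /s /q "\\?\C:\Users\This_User\Desktop\BadFolder"

    If the name ends in a trailing space or dot (a common cause of this error), the dir /x short name already gathered should also work as the argument to rd.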

    Read the article

  • Completely automated DVD insert-rip-compress-eject workflow

    - by Kevin L.
    (Partially inspired by this question.) Background: I have a PC hidden away behind an HD LCD in a custom-built entertainment center. The only visible part of the PC is an external DVD drive, mounted above the Wii. The PC happens to have Windows XP on it; Hackintoshing and Linux might be possible, but I've had issues with drivers for the sound card before. Let's just assume that OS X and Linux are a no-go unless they provide a truly awesome and simple solution for this particular problem. Goal: I would like a completely automated workflow for ripping DVDs, something like this:

    1. Push the eject button on the DVD drive, insert the DVD.
    2. PC recognizes that this is a video DVD (as opposed to data).
    3. PC rips the DVD to the hard drive.
    4. PC finishes ripping and ejects the DVD tray.
    5. PC compresses the DVD image into some format that an Xbox 360 can read.
    6. PC copies the finished compressed video file to a particular folder, so that it can be read into a WMP11 library and seamlessly played by the Xbox 360.
    7. PC cleans up all temporary files. Done.

    The impetus for having this completely automated is that I'll never need to switch the TV to the PC's input and fiddle with the wireless keyboard; that's just needless user intervention. The UI doesn't have to be pretty, nor do I care about speed, and I can probably bridge several of the gaps with some creative Perl use. But it seems likely that many (or all) of the parts should already exist. Any thoughts?
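
    The rip-and-compress middle of the chain (steps 3 and 5) can be collapsed into one scriptable command; the insert-detection and eject ends are the harder part on XP. A rough sketch, assuming HandBrakeCLI.exe is installed and D: is the DVD drive (the output name is illustrative):

        rem rip and compress the main title straight to an Xbox-readable MP4
        HandBrakeCLI -i D: -o "C:\Videos\Incoming\ripped.mp4" --main-feature --preset Normal

    Insert detection could hang off the AutoPlay event, and a small helper would still be needed for the eject step.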

    Read the article

  • Booting VM Directly from iSCSI (or FC) in Hyper-V R2

    - by tony roth
    I have a server that SAN-boots that I want to P2V. I have many options (Disk2vhd, SCVMM, etc.), but I was thinking about cloning the LUN (FlexClone, NetApp) and presenting it to my Hyper-V R2 server. Within the Hyper-V manager, I'd do a "create new disk" and have it copy the cloned LUN to a VHD file, then do the bcdedit/bootsect stuff to it. Should work, right? I'm also curious whether anybody is booting VHDs that sit on bootable LUNs; I've booted native VHDs just fine and was just curious about running them off a bootable LUN. I think this has quite a few advantages, like instant P2V. Any thoughts? Hmm, dang, as I was typing this I realized that I should not use the Hyper-V manager's new-disk copy routine; I should just Disk2vhd the mounted LUN. That has the advantage of being a lot faster! (Though I've since discovered that Disk2vhd may be flaky; it crashed the first time I ran it.) Thanks.
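
    For what it's worth, Disk2vhd can be driven from the command line, so the "instant P2V" idea is scriptable once the cloned LUN is mounted. A hedged one-liner, assuming the clone shows up as F: and \\hv1\vhds is a reachable share (both placeholders):

        disk2vhd F: \\hv1\vhds\sanboot.vhd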

    Read the article

  • Change XRDP keyboard layout to en-gb Ubuntu 12.04

    - by Earl Sven
    Does anybody know how to change the keyboard layout to en-GB in an XRDP session on Ubuntu 12.04? I am using mstsc.exe to connect to an XRDP server hosting an Xvnc session, but I cannot work out how to apply the UK keyboard layout. A bit of googling has yielded these instructions, which let me change the keymap; however, using the keymap file I downloaded from here, I lose the ability to use the arrow keys, Home/End, etc. Comparing that file with the standard one, there are substantially more differences than I would expect given the similarity between the layouts. I only have RDP access to the box, so I don't seem to be able to actually generate a new layout per the instructions above; maybe that's a local-console thing? Also, I can't change either the RDP client or the RDP server, as they are my only access to the system; I don't have local console access. I do have root privileges on the OS, however. Any thoughts? Edit: I have found http://xrdp.sourceforge.net/documents/keymap/newkeymap.html, the documentation on the XRDP SourceForge page describing the keymap file format. It indicates that the values in the keymap files are Unicode (0x64, etc.), but the files already on my system seem to use a different format (0:0, 65307:27, etc.); does anybody know what the difference is?
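
    Since generating a keymap needs an X session rather than a local console, one hedged route is to run xrdp's own generator from inside the existing Xvnc session after switching that session to the GB layout (0809 is the UK layout code xrdp looks for in keymap filenames):

        # inside the RDP/Xvnc session
        setxkbmap gb
        sudo xrdp-genkeymap /etc/xrdp/km-0809.ini
        # restart xrdp so the new map is read
        sudo service xrdp restart

    A map generated this way should stay consistent with the arrow/Home/End keys, unlike the downloaded file.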

    Read the article

  • Make router forward HTTP and HTTPS traffic to external App

    - by cOsticla
    I use a Linksys WRT54GL router with DD-WRT v24-sp2 (10/10/09) std (SVN revision 13064), which I am trying to make forward all HTTP and HTTPS traffic to an external app called Fiddler (used as a proxy) on port 8888. After a lot of digging on this site, the dd-wrt forum, dd-wrt.com, and the web, I ended up with the following piece of code that works (thanks to the guys from dd-wrt support for this info), but only for forwarding HTTP traffic (port 80):

        #!/bin/sh
        PROXY_IP=1234567890
        PROXY_PORT=8888
        LAN_IP=`nvram get lan_ipaddr`
        LAN_NET=$LAN_IP/`nvram get lan_netmask`
        iptables -t nat -A PREROUTING -i br0 -s $LAN_NET -d $LAN_NET -p tcp --dport 80 -j ACCEPT
        iptables -t nat -A PREROUTING -i br0 -s ! $PROXY_IP -p tcp --dport 80 -j DNAT --to $PROXY_IP:$PROXY_PORT
        iptables -t nat -I POSTROUTING -o br0 -s $LAN_NET -d $PROXY_IP -p tcp -j SNAT --to $LAN_IP
        iptables -I FORWARD -i br0 -o br0 -s $LAN_NET -d $PROXY_IP -p tcp --dport $PROXY_PORT -j ACCEPT

    I edited the code above and came up with the following, but it still forwards only HTTP, not HTTPS:

        #!/bin/sh
        PROXY_IP=1234567890
        PROXY_PORT=8888
        LAN_IP=`nvram get lan_ipaddr`
        LAN_NET=$LAN_IP/`nvram get lan_netmask`
        iptables -t nat -A PREROUTING -i br0 -s $LAN_NET -d $LAN_NET -p tcp -m multiport --dports 80,443 -j ACCEPT
        iptables -t nat -A PREROUTING -i br0 -s ! $PROXY_IP -p tcp -m multiport --dports 80,443 -j DNAT --to $PROXY_IP:$PROXY_PORT
        iptables -t nat -I POSTROUTING -o br0 -s $LAN_NET -d $PROXY_IP -p tcp -j SNAT --to $LAN_IP
        iptables -I FORWARD -i br0 -o br0 -s $LAN_NET -d $PROXY_IP -p tcp --dport $PROXY_PORT -j ACCEPT

    I am not sure whether it is even possible to forward HTTPS traffic this way using just a router, so I'd appreciate it if somebody would share their thoughts and/or examples regarding this subject. Thanks!

    Read the article

  • Domino nchronos.exe multiple instances causing server to die, and Sametime problems

    - by Kevin
    I've had this problem for a few months now. I thought it started when I installed the Traveler software on the server to add ActiveSync support, but I removed that and the problem persists. Basically, new instances of nchronos.exe keep spawning (and not ending), so over a period of a few days the server eventually gets drowned in nchronos.exe processes, stops responding, and I need to kill Domino. My process count was last up at about 330; after I killed and restarted Domino, it went to 160. I'm running Domino 8.5.1 with Fix Pack 2. I don't know if it's relevant, but my Domino server was also acting as a Sametime server, and at around the same time that nchronos started playing up, Sametime also stopped working. None of my users can connect to Sametime, and the Domino log keeps telling me that stpolicy.exe has terminated. I've googled that and tried a few things, but nothing seems to make Sametime work again. Any thoughts?? Cheers, Kevin

    Read the article

  • Linux Port 80 to redirect to a Windows box

    - by Richard Staehler
    I have two servers here at work. One is a Windows Server 2008 R2 box (for safety's sake, let's use 192.168.1.100) and the other is a Fedora 14 box (192.168.1.101). Currently, when you hit our subdomain, x.test.com, our router sends the traffic to the Fedora box, and since Apache is installed and listening on port 80, it displays the Fedora Apache test page. Obviously I don't otherwise use port 80 on this machine; however, I do run Nagios on it, and it's always nice to be able to access that from anywhere in the world, so when I want to access it I just type x.test.com/nagios. Now here comes the dilemma: on the Windows R2 box, we recently installed a program that requires us to set up a web server using IIS 7. Because of this application, I'm going to be creating a new subdomain called y.test.com, but since we only have one WAN/router, it will still get pointed at our Fedora box. That application wants to use port 80 as well (or whatever port I damn well wish to assign it). So my question is: since our router points to the Fedora 14 box (.101), and I want to keep Nagios accessible from anywhere in the world, how do I tell Apache (httpd) to redirect port 80 traffic to the other server (.100)? If that's not possible, what are my other options? I have rinetd installed on Fedora and even tried the option "192.168.1.101 80 192.168.1.100 80", and it didn't seem to work, "because port 80 was already bound". Thoughts? And thanks!
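
    Since Apache is already answering port 80 on the Fedora box, one clean sketch is a name-based virtual host that reverse-proxies only y.test.com to the Windows box, leaving x.test.com/nagios untouched (assumes mod_proxy and mod_proxy_http are enabled; an explicit vhost for x.test.com may also be wanted so it remains the default):

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName y.test.com
            ProxyPreserveHost On
            ProxyPass / http://192.168.1.100/
            ProxyPassReverse / http://192.168.1.100/
        </VirtualHost>

    Requests for x.test.com keep hitting the existing configuration, so Nagios stays reachable; rinetd can't do this because it forwards by port rather than by Host header, which is also why it collided with Apache's bind on 80.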

    Read the article

  • Gittornado with Nginx fails to push and pull

    - by Josh Buell
    I'm making a simple website to host git repositories, much like GitHub. I'm using Gittornado to handle git Smart HTTP requests, and it works perfectly locally; I can clone, push, pull, etc. But when I put it behind Nginx, git commands stop working, giving no errors except: "fatal: The remote end hung up unexpectedly". I know that it's Nginx causing the trouble, because if I open the port that Tornado is running on and run my git commands through that (i.e., "git pull http://mysite.com:8000/myrepository master" instead of "git pull http://mysite.com/myrepository master"), everything works as expected. The Nginx access and error logs don't seem to say anything interesting, so I'm reasonably sure it has something to do with the way Nginx is compressing or chunking the requests/responses, causing git to think there's been an unexpected hangup, but I'm not sure what to do to fix it, since this is my first time with Nginx. My Nginx configuration file is basically a clone of the one found here; I've tried commenting out various likely-seeming options to see if they were causing the problem, but none of them fixed it, so I assume there's some default behavior I need to suppress, I'm just not sure which. Any thoughts on how to fix this? Since it works when not going through Nginx, I'm considering just redirecting git requests to the Tornado port itself, but that feels like a hack rather than a clean solution.
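
    A hedged starting point for the Nginx side, since git's smart HTTP is sensitive to body-size caps, gzip, and response buffering. These directives would go in the location block that proxies to Tornado (the port and block shown are the usual pattern, not taken from the actual config):

        location / {
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Host $host;
            client_max_body_size 0;   # don't cap push payloads
            gzip off;                 # git negotiates its own compression
            proxy_buffering off;      # stream pack data through untouched
        }

    If one of these is the culprit, it is most often client_max_body_size: its default (1m) rejects larger pushes, which git reports as exactly this "remote end hung up" message.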

    Read the article

  • SharePoint solution package not deploying to all front-ends

    - by Alex
    I have a WSP that contains a web part. It's being built using WSPBuilder. Most of the time, the WSP deploys perfectly. However, in two of our test environments (and sadly, in production, too) the WSP doesn't deploy properly to all the web front ends. The assemblies make it into the GAC, and the .webpart files get provisioned. The problem is that a tool part that the web part relies on for configuration simply fails to appear. I've determined that every time this has happened, it has been isolated to a single web front end. I've been able to resolve the issue by doing an stsadm -o deploysolution to re-deploy the solution, and in one instance it was resolved by the end user deactivating/reactivating the feature. Unfortunately, though, this has made it impossible to determine if the control isn't being deployed properly, or if it's some other issue. Any thoughts on this? Could it be a problem with the WSP, or is it likely to be environmental?
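
    Until the root cause surfaces, the manual fix can at least be scripted as a stopgap; a sketch with a placeholder solution name:

        rem re-deploy the package, then flush the admin service job queue
        stsadm -o deploysolution -name MyWebPart.wsp -immediate -allowgacdeployment
        stsadm -o execadmsvcjobs

    Comparing the timer job history across the front ends right after a failed deployment may also show whether one server's deployment job is stalling, which would point at the environment rather than the WSP.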

    Read the article

  • File sharing for small, distributed, non-technical, non-profit organization?

    - by mnmldave
    Problem: I've started volunteering for a small non-profit with fewer than five non-technical Windows users who need to share 20-30 GB of files (Office documents, images, PDFs, etc.) amongst themselves online. Background: the users are accustomed to a Windows network share on a machine that backed up their data locally. An on-site "disaster" has forced them to work from their homes for a while and to re-evaluate their file sharing needs (the office was in an old building with obvious electrical issues, etc.). Time from volunteers with IT experience seems hard to come by, and demonstrably minimizing energy consumption is a nice-to-have. I'm currently considering Jungle Disk (a Desktop account shared amongst the handful of employees, since their TOS and my inquiries to their help desk seem to indicate this is permissible). It appears easy to use, inexpensive, and secure; it has backup functionality and can scale to accommodate more data when needed. I've not used it myself, though (I've only used Dropbox for personal use), and systems isn't my area of expertise, so I'm worried I might be jumping on a bandwagon. That said, any suggestions, thoughts, or similar experiences would be really appreciated.

    Read the article

  • OS X 10.6 Apply ipfw rules at startup

    - by Michael Irey
    I have a couple of firewall rules I would like to apply at startup. I have followed the instructions from http://images.apple.com/support/security/guides/docs/SnowLeopard_Security_Config_v10.6.pdf, page 192. However, the rules do not get applied at startup. I am running 10.6.8, non-Server edition. I can, however, run the following, which applies the rules correctly:

        sudo ipfw /etc/ipfw.conf

    which results in:

        00100 fwd 127.0.0.1,8080 tcp from any to any dst-port 80 in
        00200 fwd 127.0.0.1,8443 tcp from any to any dst-port 443 in
        65535 allow ip from any to any

    Here is my /etc/ipfw.conf:

        # To get real 80 and 443 while loading vagrant vbox
        add fwd localhost,8080 tcp from any to any 80 in
        add fwd localhost,8443 tcp from any to any 443 in

    Here is my /Library/LaunchDaemons/ipfw.plist:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>ipfw</string>
            <key>Program</key>
            <string>/sbin/ipfw</string>
            <key>ProgramArguments</key>
            <array>
                <string>/sbin/ipfw</string>
                <string>/etc/ipfw.conf</string>
            </array>
            <key>RunAtLoad</key>
            <true />
        </dict>
        </plist>

    The permissions of all the files seem appropriate:

        -rw-rw-r--  1 root  wheel  151 Oct 11 14:11 /etc/ipfw.conf
        -rw-rw-r--  1 root  wheel  438 Oct 11 14:09 /Library/LaunchDaemons/ipfw.plist

    Any thoughts or ideas on what could be wrong would be very helpful!
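
    One hedged workaround if launchd is simply firing the job before the network stack is ready: point the daemon at a tiny wrapper that waits briefly before loading the rules (the delay value is a guess; adjust it or replace it with a real readiness check):

        #!/bin/sh
        # /usr/local/sbin/ipfw-load.sh -- hypothetical path, referenced from
        # the plist's ProgramArguments in place of /sbin/ipfw
        sleep 10
        /sbin/ipfw -q /etc/ipfw.conf

    Checking `sudo launchctl list | grep ipfw` after boot would also show whether the job ran at all (and its last exit status), distinguishing "never ran" from "ran too early".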

    Read the article

  • IIS running but not serving content

    - by Kyle
    I have an internal dev server running Windows 2k8 R2 with the Web and FTP Server roles set up, which won't serve any content at all. Trying to connect from another host via telnet yields 'connect failed':

        c:\>telnet devserver 80
        Connecting To devserver...Could not open connection to the host, on port 80: Connect failed

    Using netstat -an | find "80" on the dev server returns no connections on port 80 (a few on 1801, etc.), and TCPView confirms this, listing no open connections on port 80. The following services related to the Web role are running: World Wide Web Publishing Service, Application Host Helper Service, Microsoft FTP Service (FTP connections to port 21 are granted), and Windows Process Activation Service. The default website bindings are:

        Type              Host Name  Port  IP Address  Binding Information
        http                         80    *
        net.tcp                                        808:*
        net.pipe                                       *
        net.msmq                                       localhost
        msmq.formatname                                localhost

    When setting up a new application under the default site, the test function passes both connection and authorisation only if the 'connect as' user is a local admin; otherwise the test errors with 'invalid application path'. At no point is the W3SVC service PID bound to port 80 (it is running, and bound to 21 for FTP). There is no W3SVC log directory at c:\inetpub\logs\LogFiles\ (only FTPSVC2), and no HTTPERR directory at c:\windows\system32\ or c:\windows\system32\logfiles\. There do not appear to be any related errors in the event logs. I'd really appreciate any thoughts on a good place to dig into what's (not) going on here!
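
    Given that nothing is listening at all, two hedged next steps that are stock on 2008 R2: check whether HTTP.SYS itself has any active URL registrations, and whether a URL ACL reservation is interfering with port 80:

        netsh http show servicestate
        netsh http show urlacl

    An empty service state while W3SVC reports as running would point at the W3SVC/WAS pairing rather than the site bindings, in which case a binding-level fix is unlikely to help.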

    Read the article

  • Windows AD DNS: Event ID 5504

    - by Chris_K
    Two of my AD controllers (both running the DNS service) appear to be having a similar issue. Both are logging lots of DNS events that look like this:

        Event Type:     Information
        Event Source:   DNS
        Event Category: None
        Event ID:       5504
        Date:           5/24/2010
        Time:           11:51:38 AM
        User:           N/A
        Computer:       ALPHA
        Description:
        The DNS server encountered an invalid domain name in a packet from
        76.74.137.6. The packet will be rejected. The event data contains the
        DNS packet.

    The same event arrives at the same time with a packet from 76.74.137.7 as well. I know this is "Information", not an error, but since it is new and different it bothers me (yes, I fear unexplained change!). Both machines are running Windows 2003 R2 SP2. The DNS servers are not exposed to the internet, and both are configured to use OpenDNS for forwarders. For both servers, this started about a week ago. Any thoughts on: 1) should I be concerned? 2) how can I stop/fix this? To keep it interesting, I have a third AD/DNS box, same domain, different Active Directory site, same forwarders, yet it doesn't have this issue.

    Read the article

  • Windows 8 & Hyper-V Can't Bridge Wifi Connection

    - by xinunix
    I have an odd issue that I can't quite figure out. I am running Windows 8 Enterprise on a Dell 6420 laptop with a Broadcom 802.11n wireless adapter, connected to a home router (Netgear WNDR3700) that is connected to the internet; it is a very simple home network setup. I am trying to stand up a few VMs in Hyper-V and want the VMs to be able to access the internet over my wireless connection. I have found numerous examples of how to set this up using both external and internal virtual switches, but have yet to get it to work on my machine. I have narrowed the issue down to the fact that my host machine always loses its internet connection when I bridge the wifi connection: both when it is bridged automatically by Windows as I set up an external virtual switch bound to the wifi adapter, and when I do it manually by creating an internal virtual switch, right-clicking it and my wifi network, and selecting "Bridge Connections". In both cases, after the bridge is established my host machine can no longer connect to the internet. I am not sure where to start troubleshooting this. After the bridge is set up, ipconfig shows all network devices on the machine as "Media Disconnected", yet I know the wireless adapter is still associated with the router because it shows the connection as active and at full strength. The only thing I can think of is that this machine also has the Cisco VPN client installed, which installs a Cisco virtual network adapter. Is it possible that the Cisco virtual adapter is causing issues when I try to bridge? I saw that some people had a similar issue with a VirtualBox virtual adapter when trying to share via Hyper-V. Any thoughts or suggestions on how to troubleshoot?

    Read the article

  • Mac Mail sent messages not in sent folder

    - by Elizabeth Akers
    Hi. My powerbook G4 has been lobbying for a furlough recently, and its latest prank is to not show sent messages in my sent mailbox. I have always had a copy of every sent message, and now, for some unknown reason, I can only see sent messages from May 7 and earlier. When I did a test, and sent myself an email, it shows up fine in my inbox, but there's nothing in the sent folder. Weird. It also has the little arrow next to messages that I replied to in the inbox. I guess I'm going to have to cc myself on everything until I get this fixed. Also, it's got another new trick, which I'm assuming is unrelated, but I mention it here just in case it is: the battery says it's "good" and has a cycle count of 0, and a full charge capacity of 2139, but for some reason it's all of a sudden not charging when it's plugged in. It works fine when connected to the wall, but there is no battery life at all. Zip. Any thoughts? Is it just time for a new 'book? This one just got back from Apple, having its harddrive replaced for the fourth time (for free, but seriously, this thing is a lemon). Thanks so much, Elizabeth

    Read the article
