Search Results

Search found 14828 results on 594 pages for 'settings py'.

  • IIS SMTP unable to relay for domain on local network.

    - by MartinHN
    Hi, I have a network with the following servers:
    - EXCH01 - Exchange server for @domain.com e-mail.
    - TEST-REP01 - Local reporting server with IIS SMTP installed and configured.
    On TEST-REP01, a Windows service reads reports from a SQL Server database when they are ready for delivery. The reports are sent one at a time through the local SMTP server. It works perfectly unless the recipient e-mail address is [email protected] - the domain that the EXCH01 server manages. Then I get the following error: "Mailbox unavailable. The server response was: 5.7.1 Unable to relay for [email protected]". What can I do to troubleshoot this further? I can't seem to find useful information in the SMTP log:
      2011-01-05 03:01:38 192.168.8.168 TEST-REP01 SMTPSVC1 TEST-REP01 192.168.8.168 0 EHLO - +TEST-REP01 250 0 210 15 0 SMTP - - - -
      2011-01-05 03:01:38 192.168.8.168 TEST-REP01 SMTPSVC1 TEST-REP01 192.168.8.168 0 MAIL - +FROM:<[email protected]> 250 0 44 31 0 SMTP - - - -
      2011-01-05 03:01:38 192.168.8.168 TEST-REP01 SMTPSVC1 TEST-REP01 192.168.8.168 0 RCPT - +TO:<[email protected]> 550 0 50 28 0 SMTP - - - -
      2011-01-05 03:01:38 192.168.8.168 TEST-REP01 SMTPSVC1 TEST-REP01 192.168.8.168 0 QUIT - TEST-REP01 240 0 50 28 0 SMTP - - - -
      2011-01-05 03:01:38 192.168.8.168 TEST-REP01 SMTPSVC1 TEST-REP01 192.168.8.168 0 EHLO - +TEST-REP01 250 0 210 15 0 SMTP - - - -
      2011-01-05 03:01:38 192.168.8.168 TEST-REP01 SMTPSVC1 TEST-REP01 192.168.8.168 0 MAIL - +FROM:<[email protected]> 250 0 44 31 0 SMTP - - - -
      2011-01-05 03:01:38 192.168.8.168 TEST-REP01 SMTPSVC1 TEST-REP01 192.168.8.168 0 RCPT - +TO:<[email protected]> 550 0 50 28 0 SMTP - - - -
      2011-01-05 03:01:38 192.168.8.168 TEST-REP01 SMTPSVC1 TEST-REP01 192.168.8.168 0 QUIT - TEST-REP01 240 0 50 28 0 SMTP - - - -
    Relay access on the local IIS SMTP server (TEST-REP01) is set to allow 127.0.0.1, TEST-REP01 and other servers such as TEST-WEB01. The DNS domain of the local network is not domain.com but companyname-domain.local.
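    A quick way to reproduce the 550 outside the reporting service is a short Python sketch against the local SMTP service. This is only a diagnostic sketch: it assumes it runs on TEST-REP01 itself with the SMTP service listening on port 25, and the redacted [email protected] addresses are placeholders for the real sender and recipient.
      import smtplib

      # Minimal relay test against the IIS SMTP instance (assumed to listen on localhost:25).
      server = smtplib.SMTP("localhost", 25, timeout=10)
      server.set_debuglevel(1)  # prints the full SMTP dialogue, including any 550 response
      try:
          server.sendmail("[email protected]",        # sender (placeholder)
                          ["[email protected]"],      # recipient on the Exchange-managed domain (placeholder)
                          "Subject: relay test\r\n\r\nTest body\r\n")
          print("Relay accepted")
      except smtplib.SMTPRecipientsRefused as err:
          print("Relay refused:", err.recipients)    # expect the 5.7.1 'Unable to relay' here
      finally:
          server.quit()
    Comparing the result for an external recipient with the result for a @domain.com recipient makes it clear whether the refusal comes from the local relay restrictions or from further along the delivery path.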

  • How do I stop panning on a monitor that supports a specific resolution?

    - by IronicMuffin
    Hi all, I've been battling this for a few days now. Any and all help is appreciated. I have a Planar monitor with a native resolution of 1280x1024. At one point I had used PowerStrip to override "something" and set the resolution to 1600x1200, and it worked great. I then installed new Intel graphics drivers for my 86895g (or whatever model) video card, which screwed up whatever settings I had. If I set it to 1600x1200 this time, it would set the resolution correctly but give me a 1280x1024 viewport, and the screen would pan when the mouse got to the edges. Absolutely not useful. OK, so I was limited to 1280x1024. Whatever. Now... enter a new video card with two video ports. I now have two monitors and the latest nVidia drivers. I decided to try to get dual 1600x1200 going... and ended up screwing up the original monitor so much that it's now at 1280x1024, with a 1024x768 viewport and panning! Absolutely not usable. So what I need, and can't seem to find on any forums, is help doing one or more of the following:
    - Clearing all monitor/EDID info out of the Windows registry without corrupting the registry.
    - Actually correctly overriding the EDID values and getting my sweet resolution back.
    - Some other way of getting back to at least dual 1280x1024 with NO panning.
    Note: My device manager shows 4 monitors for some reason, and my registry shows entries for all sorts of monitors that have been hooked up to the machine over the years, which makes it difficult to debug. Experience with PowerStrip would be helpful. I've been mucking with Phoenix EDID Designer and MonInfo as well, but I'm stumbling around in the dark with these.
    Windows XP SP2
    nVidia GeForce 6200
    nVidia drivers: v258.96
    Monitor: Planar PL 1910M
    Thanks!

  • Time Drift on VM servers, need a reliable solution

    - by zeroasterisk
    We have some Windows Server 2008 VMware instances on multiple physical servers (hosted), and an application which requires the time to be synced across the server instances. Obviously, VMware has problems with this and we have never really gotten it working any better; we have set up the servers to poll for an NTP update every minute, which mitigates the problem (in a fairly crude way). Except that every once in a while the update fails (because there's already too much drift), and then Windows never does an NTP update afterwards, which eventually allows the servers to drift far enough apart that our application breaks, and we notice. We are thinking about changing hosts to Xen servers on approximately the same setup, and I anticipate similar problems.
    - Can anyone tell me if Xen has the same time-drift issues for guests that VMware does?
    - Can anyone tell me what the best Windows Server settings are for syncing with an external NTP server to keep things in sync?
    - How frequently do you recommend syncing? (assuming every minute)
    - Do you recommend running our own NTP server, even if it has to be on a virtual instance? (assuming not)
    - Is there any way to tell Windows to sync with the NTP server no matter what the time difference is?
    - Any other suggestions for keeping Windows servers' time in sync?
    I have become familiar with http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1318 and it has helped, but it has not been totally effective (see above). Thanks much!
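    To put a number on "how far apart have the guests drifted", a small stdlib-only Python sketch can query an NTP server and report the local clock offset. The pool.ntp.org host is a placeholder for whatever NTP source you actually sync against, and the calculation ignores network delay, so treat it as a rough check rather than a precise measurement.
      import socket
      import struct
      import time

      NTP_DELTA = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

      def clock_offset(server="pool.ntp.org"):
          """Rough local-clock offset in seconds against one NTP server (ignores network delay)."""
          packet = b"\x1b" + 47 * b"\0"          # NTPv3 client request
          with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
              sock.settimeout(5)
              t_sent = time.time()
              sock.sendto(packet, (server, 123))
              data, _ = sock.recvfrom(512)
              t_received = time.time()
          seconds, fraction = struct.unpack("!II", data[40:48])   # transmit timestamp field
          server_time = seconds - NTP_DELTA + fraction / 2**32
          return server_time - (t_sent + t_received) / 2

      if __name__ == "__main__":
          print("offset: %.3f s" % clock_offset())
    Running it on each guest at roughly the same moment shows how far the instances have actually drifted from the reference clock and from each other.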

  • port forwarding with socks over proxy

    - by Oz123
    I am trying to browse a wiki that runs on a server inside one domain from another domain. The wiki is accessible only on the LAN, but I need to browse it from another LAN, to which I connect with an SSH tunnel... Here is my setup and the steps I have taken so far.
    ~/.ssh/config on wikihost:
      Host gateway
        User kisteuser
        Port 443
        Hostname gateway.companydomain.com
        ProxyCommand /home/myuser/bin/ssh-https-tunnel %h %p
        # ssh-https-tunnel: http://ttcplinux.sourceforge.net/tools/stunnel
        Protocol 2
        IdentityFile ~/.ssh/key_dsa
        LocalForward 11069 localhost:11069
      Host server1
        User kisteuser
        Hostname localhost
        Port 11069
        LocalForward 8022 server1:22
        LocalForward 17001 server1:7100
        LocalForward 8080 www-proxy:3128
        RemoteForward 11069 localhost:22
    From wikihost, as myuser@wikihost:
      ssh -XC -t gateway.companydomain.com ssh -L11069:localhost:22 server1
    On another terminal:
      ssh gateway.companydomain.com
    Now, on my companydomain I would like to start Firefox and browse the wiki on wikihost. I did:
      [email protected] ~ $ ssh gateway
      Have a lot of fun...
      kisteuser@gateway ~ $ ssh -D 8383 localhost
      user@localhost's password:
      user@wikiserver:~>
    My .ssh/config on that side looks like this:
      host server1
        localforward 11069 localhost:11069
      host localhost
        user myuser
        port 11069
      host wikiserver
        forwardagent yes
        user myuser
        port 11069
        hostname localhost
    Now, I started Firefox on the server called gateway and edited the proxy settings to use SOCKSv5, specifying that the proxy should be gateway and use port 8383...
      kisteuser@gateway ~ $ LANG=C firefox -P --no-remote
    And now I get the following error popping up in the terminal of wikiserver:
      myuser@wikiserver:~> channel 3: open failed: connect failed: Connection refused
      channel 3: open failed: connect failed: Connection refused
      channel 3: open failed: connect failed: Connection refused
    Confused? Me too... Please help me understand how to properly build the tunnels and browse the wiki over the SOCKS protocol.
    Update: I managed to browse the wiki on wikiserver with the following changes:
      host wikiserver
        forwardagent yes
        user myuser
        port 11069
        hostname localhost
        localforward 8339 localhost:8443
    Now when I ssh to gateway, I launch Firefox, go to localhost:8339 and hit the start page of the wiki, which is served on port 8443. Now I ask myself: is SOCKS really needed? Can someone elaborate on that?
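    One way to check whether the -D 8383 dynamic forward is actually answering as a SOCKS server, before fiddling with the Firefox settings, is to speak the first few bytes of the SOCKS5 handshake to it. This is just a sketch and assumes it runs on the gateway host where the dynamic forward was opened; the host and port are the ones used above.
      import socket

      # Send a SOCKS5 greeting (version 5, one auth method: "no authentication")
      # to the ssh -D listener and see whether it answers like a SOCKS server.
      HOST, PORT = "localhost", 8383   # the port given to ssh -D above

      with socket.create_connection((HOST, PORT), timeout=5) as sock:
          sock.sendall(b"\x05\x01\x00")
          reply = sock.recv(2)

      if reply == b"\x05\x00":
          print("SOCKS5 listener is up and accepts 'no auth' - point Firefox at it")
      else:
          print("Unexpected reply %r - the dynamic forward is probably not working" % reply)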

  • Truncated content with Apache on Vagrant VM

    - by Nev Stokes
    I'm using Vagrant to run a CentOS VM in order to try and achieve local development parity with our live servers. I've symlinked /var/www/html with the /vagrant shared directory and am forwarding port 80 for viewing at http://localhost:4567. I'm developing using SublimeText 2 on OS X Mountain Lion. Once I figured that iptables was tripping me up, all was well and good. Until I noticed something strange. I have a sample HTML page consisting of several paragraphs of lorem copy. I can view this fine in a browser on OS X. But when I make an edit, for example removing a paragraph, and refresh the content is truncated with the paragraph I deleted still visible. When I cat the files on the server I can see the changes I made but these aren't even reflected when I curl localhost. I strongly suspect that it's a problem with my Apache settings — with which I didn't really tinker — as the issue doesn't arise when I stop Apache and run sudo python -m SimpleHTTPServer 80 in the directory to view pages instead. What gives?
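    To confirm it really is stale or truncated output from Apache rather than a browser cache, a small Python sketch can compare the bytes on disk with what the VM serves; the file name is a hypothetical test page and the port is the forwarded one from this setup, so adjust both as needed. (A commonly suggested culprit for stale content served from VirtualBox shared folders is Apache's sendfile/mmap support, but the script below only demonstrates the mismatch, it does not fix anything.)
      import os
      import urllib.request

      # Run this from the Vagrant project directory on the OS X host.
      doc = "index.html"                          # hypothetical test page in the synced folder
      disk_path = os.path.join(os.getcwd(), doc)  # the copy on disk
      url = "http://localhost:4567/" + doc        # what the VM's Apache serves through the forwarded port

      disk_size = os.path.getsize(disk_path)
      with urllib.request.urlopen(url) as response:
          served = response.read()

      print("on disk: %d bytes, served: %d bytes" % (disk_size, len(served)))
      if disk_size != len(served):
          print("Apache is returning a stale/truncated copy of the file")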

  • bootmgr image is corrupt

    - by bacord
    I just replaced my HDD with an SSD. I did a brand-new install on the SSD and received the following error after booting: "BOOTMGR image is corrupt. The system cannot boot." A little background on the configuration: the system used to dual-boot XP and Windows 7. After replacing the original startup drive with the SSD, I changed a setting in the BIOS to AHCI (I have tested changing it back, but this did not help). When I look at the stats for the drive in the BIOS, it claims that the SSD is in a RAID configuration, despite the settings not being configured that way.
    Related system information:
    - Intel 320 Series 80 GB SATA II SSD
    - JetWay JPA78VM3-H-LF AM2+/AM2 AMD 780V HDMI Micro ATX AMD motherboard
    - AMD Athlon 64 X2
    - 7 GB RAM
    I have performed 2 fresh installs to no avail. I also followed this guide and performed options 1 and 2, and I have run bootrec /fixmbr and /fixboot. So... any suggestions?

  • Unattended Kickstart Install

    - by Eric
    I've looked around quite a bit and have seen similar setups and questions, but none seem to work for me. I'm using the following command to create a custom ISO:
      /usr/bin/livecd-creator --config=/usr/share/livecd-tools/test.ks --fslabel=TestAppliance --cache=/var/cache/live
    This works great and creates the ISO with all of the packages and configs I want on it. My issue is that I want the install to be unattended. However, every time I boot the CD, it asks for all of the info such as keyboard, time zone, root password, etc. These are the basic settings I have in my kickstart script prior to the packages section:
      cdrom
      install
      autopart
      autostep
      xconfig --startxonboot
      rootpw testpassword
      lang en_US.UTF-8
      keyboard us
      timezone --utc America/New_York
      auth --useshadow --enablemd5
      selinux --disabled
      services --enabled=iptables,rsyslog,sshd,ntpd,NetworkManager,network --disabled=sendmail,cups,firstboot,ip6tables
      clearpart --all
    After looking around, I was told that I need to modify my isolinux.cfg file to use either "ks=http://X.X.X.X/location/to/test.ks" or "ks=cdrom:/test.ks". I've tried both methods and it still forces me to go through the install process. When I tail the Apache logs on the server, I see that the ISO never even tries to fetch the file. Below is the exact syntax I'm trying in my isolinux.cfg file:
      label http
        menu label HTTP
        kernel vmlinuz0
        append initrd=initrd0.img ks=http://192.168.56.101/files/test.ks ksdevice=eth0
      label localks
        menu label LocalKS
        kernel vmlinuz0
        append initrd=initrd0.img ks=cdrom:/test.ks
      label install0
        menu label Install
        kernel vmlinuz0
        append initrd=initrd0.img root=live:CDLABEL=PerimeterAppliance rootfstype=auto ro liveimg liveinst noswap rd_NO_LUKS rd_NO_MD rd_NO_DM
        menu default
      EOF_boot_menu
    The first two give me a "dracut: fatal: no or empty root=" error until I give them a root= option, and then the kickstart is skipped completely. The last one is my default option that works fine, but it requires a lot of user input. Any help would be greatly appreciated.

  • Samsung laptop randomly shuts down

    - by Dmatig
    I've rewritten this question because it turned into an indecipherable mess. I have a Samsung R560 laptop that is overheating and consistently shutting itself down under load. Thank you quickcel for recommending Speedfan to monitor my temps. Here they are (load / idle). (Ignore "Temp1" and "Temp2"; whatever sensors they are, they're always random readings, so I'm pretty sure they're broken.) The load temperature is after just 5 minutes of playing Fallout 3; another 5 minutes and it (the GPU, a 9600M GS) consistently breaches the mid-90s and then shuts down, so it's hard to get a good picture of it. I'm looking for some solution or way to decrease these temperatures, because they seem far too high even at idle.
    I've tried:
    - Opening up the case and clearing out all dust with compressed air.
    - Updating the drivers for my graphics card.
    - Purchasing and using a notebook cooler.
    I don't want to:
    - Undervolt / underclock (defeats the point of having a more expensive card).
    - Use lower power / performance settings (again, I might as well have bought something cheaper).
    Is there anything else I can try (software or inexpensive hardware) that can help me fix this? Has anybody had a Samsung laptop and knows whether this can be sorted under my warranty, and the turnaround time for sending it off (UK)? It has always run hotter than it should, but now at 6 months old it is getting hot enough to power off.

  • PC only boots from Linux-based media and won't boot from DOS-based media

    - by Xolstice
    I have a problem where the PC only seems to boot from a floppy disk or CD if it was created as Linux-based bootable media. If it was created as DOS-based bootable media, the system just freezes at the start of the boot process. I originally asked this under question 139515 for CD booting only, and based on the given answers I was under the impression the problem was with the CD-ROM drive; however, I have since installed a newly purchased CD-ROM drive and the same freezing occurs. This made me try the DOS bootable floppy disk approach, and I was quite surprised that it exhibited the same freezing problem. I then tried a Linux bootable floppy and everything booted from it without any issues. As I mentioned in my original question, the PC was booting just fine from the DOS-based bootable CD, and then it suddenly decided to pull this freezing stunt. I can't remember if I changed anything in the BIOS settings that may have caused the problem, but I am wondering if that could be the case - it is currently using the Award Modular BIOS v4.60PGMA. Can anyone help?

  • Troubleshooting wireless client-bridged networks between two DD-WRT routers?

    - by KronoS
    I recently purchased a Buffalo N600 wireless router, which came with DD-WRT pre-installed. I want to take my old wireless router, a Linksys WRT54GL, also with DD-WRT pre-installed, and use it as a wireless bridge for my HTPC and Blu-ray player in the other room. In other words, I'm trying to connect the two wired networks via the wireless link between the routers. I followed exactly the instructions from DD-WRT's manual for 'Client Bridged'; however, I'm still not able to connect the two routers correctly when encryption is enabled (WPA2-Personal Mixed), although I am able to connect them when there is NO encryption. I've checked, double checked, and triple checked that EVERYTHING is the same on BOTH routers:
    Routers 1 & 2
    - Encryption: WPA2-Personal Mixed
    - Wireless Mode: G-Only
    - Wireless Channel: 6
    - Subnet Mask: 255.255.255.0
    - Subnet: 192.168.1.0/254
    - SSID: Krono$
    Primary Router #1 (Buffalo N600)
    - IP Address: 192.168.1.1
    - Firewall: Enabled w/ defaults
    - DHCP: Enabled as DHCP server
    Secondary Router #2 (Linksys WRT54GL)
    - IP Address: 192.168.1.2
    - Firewall: Disabled as per DD-WRT instructions
    I'm looking for any configurations that I may have missed, or settings that may need to change in order for this to work.

  • Strange Internet Connection issues

    - by Nodren
    I'm attempting to troubleshoot problems with a laptop (HP 8510w) while it's connected to a server of mine via Remote Desktop. I double-checked all the Remote Desktop settings on the Windows Server 2003 machine to verify that Remote Desktop isn't what's causing the disconnect issues, and other people using different computers/laptops can all connect to Remote Desktop correctly with no issues. These problems happen specifically when the laptop is connected via wifi (several different wifi sources, so it's not an ISP issue) as well as when it's connected via a Verizon data card. However, there's no network downtime when the laptop is resting in the docking station and plugged into the same network as the Remote Desktop server. These problems have also only occurred since a recent hard drive failure, after which a new hard drive was purchased and the laptop got a fresh install of Windows XP Professional. There's no special software used on this machine, just Office 2003. So my question is: what could cause two types of internet access to fail while other types do not? If it is in fact related to the Windows 2003 server, why is this particular laptop getting disconnected when others, all connected at the same time, are not?

  • PHP script not automatically updating when moved to another server

    - by user32007
    A friend built a ranking system on his site and I am trying to host it on mine via WordPress and GoDaddy. It updates for him, but when I load it onto my site it works for 6 hours, and as soon as the reload is supposed to occur it errors out and I get a 500 timeout error. His page is at: jeremynoeljohnson .com/yakezieclub My page is currently at http://sweatingthebigstuff.com/yakezieclub but when you add ?reload=1 it will give the error. Any idea why this might be happening? Any settings that I might need to change? Here is the top of the index.php file. I'm not sure which part of it is messing up; I literally uploaded the same code as him. Here's the reload part:
      $cachefile = "rankings.html";
      $daycachefile = "rankings_history.xml";
      $cachetime = (60 * 60) * 6; // every 6 hours, the cache refreshes
      $daycachetime = (60 * 60) * 24; // every 24 hours, the history will be written to
      // - or whenever the page is requested after 24 hours has passed
      $writenewdata = false;
      if (!empty($_GET['reload'])) {
          if ($_GET['reload'] == 1) {
              $cachetime = 1;
          }
      }
      if (!empty($_GET['reloadhistory'])) {
          if ($_GET['reloadhistory'] == 1) {
              $daycachetime = 1;
              $cachetime = 1;
          }
      }
      if (file_exists($daycachefile) && (time() - $daycachetime < filemtime($daycachefile))) {
          // Do nothing
      } else {
          $writenewdata = true;
          $cachetime = 1;
      }
      // Serve from the cache if it is younger than $cachetime
      if (file_exists($cachefile) && (time() - $cachetime < filemtime($cachefile))) {
          include($cachefile);
          echo "<!-- Cached ".date('jS F Y H:i', filemtime($cachefile))." -->";
          exit;
      }
      ob_start(); // start the output buffer
      ?>

  • Bypassing SQUID on freebsd with PF

    - by epema
    I have PF + squid31 on FreeBSD 9.0, and I want to have some hosts (aka goodguys) bypass the proxy so that torrents are not logged. Also, I am not sure about transparent mode: it means that I don't have to configure proxy settings on the client side, right? I have tried adding a redirect exception:
      no rdr on $int_if inet proto {tcp,udp} from 192.168.1.233/32 to any
    However, no luck :( Here is a quick look at my conf files.
    SQUID /usr/local/etc/squid/squid.conf:
      http_port 192.168.1.1:8080 transparent
    RC /etc/rc.conf:
      gateway_enable="YES"
      pf_enable="YES"
      pf_rules="/usr/local/etc/pf.conf"
      pflog_enable="YES"
      squid_enable="YES"
    I have squid31 installed from ports with SQUID_PF "Enable transparent proxying with PF" on.
    PF /usr/local/etc/pf.conf:
      int_if="re0"
      ext_if="bge0"
      localnet="{ 192.168.1.0/24 }"
      table <goodguys> const { "192.168.1.219", "192.168.1.233" }
      set block-policy drop
      set skip on lo0
      scrub in all fragment reassemble
      scrub out all random-id max-mss 1440
      block in on $ext_if
      pass out on $ext_if keep state
      block in on $int_if
      pass in on $int_if inet proto tcp from $int_if:network to $int_if port 8080 keep state
      pass in on $int_if inet proto udp from $int_if:network to $int_if port 21 keep state
      pass in on $int_if inet proto udp from $int_if:network to $int_if port 22 keep state
      pass in on $int_if inet proto udp from $int_if:network to $int_if port 53 keep state
      pass in on $int_if inet proto tcp from $int_if:network to any port { smtp, pop3 } keep state
      pass in on $int_if inet proto icmp from $int_if:network to $int_if keep state
      pass out on $int_if keep state
    What lines should I add to the conf files? I am assuming that the problem is in the firewall (pf).
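    A quick way to tell, from one of the goodguys hosts, whether web traffic is still being intercepted is to fetch an external page and look for the headers Squid normally adds (Via / X-Cache). This is only a rough Python sketch: it assumes plain HTTP, no proxy configured on the client, and a Squid install that hasn't been told to strip those headers.
      import urllib.request

      # Fetch an external HTTP URL directly and look for headers that Squid
      # adds when it intercepts the request.
      url = "http://example.com/"   # any plain-HTTP site works as a probe

      with urllib.request.urlopen(url, timeout=10) as resp:
          via = resp.headers.get("Via")
          xcache = resp.headers.get("X-Cache")

      if via or xcache:
          print("Request went through the proxy:", via or xcache)
      else:
          print("No proxy headers seen - this host appears to bypass Squid")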

  • Affordable combined Ruby/Rails/Redmine + Subversion hosting?

    - by Pekka
    I'm a self-employed web developer and, after nine years of hard work, I'm looking to become a bit more "vagrant" starting next year: do some much-needed traveling and work off and on a bit, making use of one of the greatest advantages of a programming job - the ability to work from virtually everywhere. For that, I am looking for a reliable hosting company I can entrust my code to, in the form of a number of Subversion repositories, plus an installation of the Redmine project management tool. As my financial situation may vary while traveling, I am looking for something I can pay up front for a year or two and that is obviously not too pricey. I don't care where the company is located, as long as it's trustworthy and solid, meaning it's not likely to go out of business next month. Does anybody have good recommendations? Preferably from your own, personal, good experience. I have looked at CVSDude / Codesion, and while they are certainly great, they don't offer Redmine of course, and seem to be aimed mainly at bigger organizations. What I would need:
    - 2-5 gigs of space minimum, freely distributable between SVN and Redmine attachments
    - Unlimited number of Subversion projects
    - Access control (team members / checkout-only accounts / etc.); I don't mind configuring the svn settings on a per-file basis myself
    - The possibility to map a custom domain to the package that is hosted elsewhere
    - Frequent backups and access to those backups through FTP or other means
    I have been running my own virtual server for this until now, but I don't want the hassle, especially on the security side, while I may not always have the internet connection to fix problems that come up.

  • Hosting a server for websites, ftp and random use at home?

    - by Zolomon
    I'm wondering what the best option is if I want to move all my hosted websites (from a hosting company) to a server at my own home. Basically, my needs are:
    - Host websites using PHP/ASP.NET (haven't really decided yet - both would be preferred!)
    - FTP, so I can create accounts for my family members to access the server for file handling
    - SSH
    - SSL for secure connections (this is something you have to buy/apply for per domain; I'm not sure if there are any server-side settings that have to be made)
    - Video streaming
    - Remote desktop
    - Hosting home-brew applications that can run as services
    - Either MySQL/SQLite/SQL for relational database storage
    What should I think about before I buy a server? What hardware will I need, and what will limit my server? I basically want to learn networking better, as I'm a software and web developer but haven't had the resources to acquire any serious toys until now. At the time of writing, most of my websites get 60 visits/day, so I don't expect them to be very demanding. Is there something I haven't thought of that I should have? What OS would you suggest I run? FreeBSD vs Windows Server vs ?

  • pfsense, active directory, local domain

    - by Dalton Conley
    First things first: I have no idea what I'm doing. Certainly not afraid to admit that... but here is my network setup. I have 2 servers, one of which is a domain controller. Both are running Windows Server 2008 and have replicated directories. Each server is at a different location and has its own firewall for the network at that location. Both firewalls are running pfSense. Recently a firewall went down and my coworker reinstalled pfSense, and everything seems set up correctly - again, I have no idea what I'm doing, so I'm not sure. I have records from when the previous IT person set up this network, and the firewall settings are the same, but those records could be extremely old. Now, I have a domain name for my network; we'll call it "mydomain.net". I used to be able to access this domain name and it would bring up the servers' replicated drives (i.e. \\mydomain.net). Now I cannot. I can, however, access the servers' individual host names on my network (i.e. \\server1, \\server2). We didn't change anything on the servers, which is what makes me think it's something to do with the firewall. I know this is probably a very general question and I don't have a lot of detail to add, but could anyone give me some insight into what could be causing this, or some debugging techniques I can apply? I'm a programmer, not a network administrator.
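    Since \\server1 resolves but \\mydomain.net does not, the first thing worth checking from an affected workstation is plain DNS resolution of the AD domain name (domain-based \\mydomain.net paths depend on the domain name resolving to the domain controllers). A minimal sketch; the names below are the placeholders used in the question, and it assumes the workstation uses the DNS server handed out by pfSense:
      import socket

      # Does this workstation's DNS resolve the AD domain and the individual servers?
      for name in ("mydomain.net", "server1", "server2"):
          try:
              print(name, "->", socket.gethostbyname(name))
          except socket.gaierror as err:
              print(name, "-> resolution failed:", err)
    If the bare domain name fails while the host names succeed, the DNS forwarding/DNS server settings on the reinstalled pfSense box are the first place to look.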

  • Disabling the charms bar

    - by Kelly D
    How do I disable the charms bar? I am surprised there is no easy way to disable the feature. Here's what I've done so far:
    1.) Disabled the right-edge swipe gesture in my touchpad settings. Part of the problem is solved, as this was probably the most common way the charms bar would pop up. But there are still many other ways it can appear.
    2.) Used regedit and added the key "EdgeUI" with "DisableCharmsHint" set to 1 in HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\ImmersiveShell\ This stopped it from popping up whenever I moved the cursor to the top right or bottom right of the screen. I mean, who ever needs to move his/her mouse there? (sarcasm)
    But there's ANOTHER WAY it pops up: when I move the cursor to the top right of the screen and then move it downwards, OR when I move the cursor to the bottom right of the screen and then move it upwards! How do I disable this method of invoking the charms bar?
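    For reference, step 2.) can be scripted. A minimal Python sketch using the registry path from the question; run it as the affected user, and note that it only recreates the same DisableCharmsHint value, so it does not address the remaining corner-swipe behaviour asked about above.
      import winreg

      # Recreate the EdgeUI\DisableCharmsHint tweak described in step 2.
      path = r"Software\Microsoft\Windows\CurrentVersion\ImmersiveShell\EdgeUI"
      with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, path, 0, winreg.KEY_SET_VALUE) as key:
          winreg.SetValueEx(key, "DisableCharmsHint", 0, winreg.REG_DWORD, 1)
      print("DisableCharmsHint set - sign out and back in (or restart explorer.exe) for it to take effect")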

  • Best way to integrate applications to windows 7 install.wim image

    - by cyph3r
    I currently have unmodified .isos of the 32-bit and 64-bit Windows 7 installation disks, and I need to integrate some applications (Office, Adobe Reader, etc.) and Windows updates into them, so that when Windows is installed those applications/updates are already installed and working. Requirements:
    - My output has to be an install.wim image containing the new/improved Windows installation files, because deployment is done via a PXE server and a custom Windows PE environment.
    - The procedure to create the install.wim has to be as automatic as possible. I can't create it manually every time I want to incorporate a new Windows or application update into the image.
    - The image will be installed on 100+ computers, so it needs to be 'generic'.
    I've never done something like this before, but from what I've researched, a possible solution would be:
    1. Create a reference installation (preferably on a VM so I can take snapshots), complete with its applications/updates/settings.
    2. After the setup is complete, take a snapshot of the installation.
    3. Run C:\Windows\System32\sysprep\sysprep.exe /oobe /generalize /shutdown to sysprep the machine.
    4. Boot into a Windows PE environment and capture the .wim image using GImageX.
    5. Deploy the .wim and enjoy the rapid installation times. :D
    Does that sound ok? Would you recommend anything else? Right now the applications are installed after the installation of Windows is complete, so the total installation time is quite long. That's why I need a different approach.

  • How to figure out what VirtualBox did?

    - by AndrejaKo
    I'm trying to boot a custom, made-in-assembly OS on my recent laptop. The OS is intended to be installed from a floppy, and during make it creates a bootable floppy image. Since I don't have a floppy drive, I installed it onto a virtual floppy. After that I used WinToFlash's "create bootable MS-DOS USB drive" option to transfer the floppy image to a USB flash drive. Then I tried to boot my computer from it, but got only a repeating broken string on screen. After all that, I made a virtual hard disk image from the flash drive using this tutorial and tried to boot a virtual machine from it. The first time, I got the same problem as on the real computer. I then used the reset option, and the next time - and every time after that - the OS booted correctly. My question is: how do I figure out what exactly happened to the virtual machine between the first and second boot?
    UPDATE: I just created a new VM with the default settings for Windows XP and it has the same problem that I have on the real computer. I was unable to reproduce the procedure which made the first VM work correctly.

  • Controlling clone access to multiple mercurial repos served via hgwebdir.cgi

    - by chrislawlor
    I'm trying to host multiple hg repositories for my clients, and I need to control access to each repository individually - not just push access, but clone as well. I've got an .htaccess that requires authentication globally:
      AuthUserFile /path/to/hgweb.passwd
      AuthGroupFile /dev/null
      AuthName "Chris Lawlor Client Mercurial Repositories"
      AuthType Basic
      <Limit GET POST PUT>
      Require valid-user
      </Limit>
      <FilesMatch "\.(htaccess|passwd|config|bak)$">
      Order Allow,Deny
      Deny from all
      </FilesMatch>
    Then in each repository I've got a .hg/hgrc file requiring a valid user:
      [web]
      allow_push = <comma separated user list>
    This almost does what I need. The problem is that I need to add ALL my clients to hgweb.passwd, which gives them clone access to ALL of the repositories. The only solution I can think of is to have another .htaccess and .passwd file in EACH repository. I don't really want to do that, though; it seems a little convoluted. I can already specify a list of authorized users for each repository in that repo's hgrc file with the allow_push setting. If only there were an allow_clone setting as well... All the documentation I've found for hgwebdir.cgi is incomplete. I've read:
      http://mercurial.selenic.com/wiki/HgWebDirStepByStep
      http://hgbook.red-bean.com/read/collaborating-with-other-people.html#sec:collab:cgi
      http://hgbook.red-bean.com/read/collaborating-with-other-people.html
    And others. I've yet to find a comprehensive list of hgrc settings. I guess this is as much an Apache question as a Mercurial question. Unless I can find a better approach, I'll be going with a separate .htaccess and .passwd file for each repo. This is a virtual host on WebFaction, if it matters - set up roughly like this: http://docs.webfaction.com/software/mercurial.html

  • Motion - takes snapshot without motion detected

    - by Emmanuel Brunet
    I've installed the standard motion 3.2.12 package on Debian 7.5. I would like to get a snapshot ONLY when motion is detected, but it still saves a picture every second without any activity in front of the camera. I'm using a TENVIS JPT3815W IP camera. Here is my configuration file (motion.conf):
      setup_mode off
      target_dir /media/videos/log/webcam
      netcam_url http://webcam/snapshot.cgi
      netcam_tolerant_check on
      netcam_userpass admin:alpha1237
      # Output frames at 1 fps when no motion is detected and increase to the
      # rate given by webcam_maxrate when motion is detected (default: off)
      webcam_motion off
      output_all off
      # detection settings 1-255 default 32
      noise_level 50
      # Maximum framerate for webcam streams (default: 1)
      webcam_maxrate 25
      pre_capture 0
      framerate 25
      gap 30
      locate on
      mail [email protected]
      text_right "FRONT CAMERA %Y/%m/%d - %T"
      text_double on
      ffmpeg_cap_new on
      ffmpeg_cap_motion on
      ffmpeg_video_codec mpeg4
      output_motion off
      snapshot_interval 0
      # Quality of the jpeg (in percent) images produced (default: 50)
      quality 90
      # Restrict webcam connections to localhost only (default: on)
      webcam_localhost off
      # Limits the number of images per connection (default: 0 = unlimited)
      # Number can be defined by multiplying actual webcam rate by desired number of seconds
      # Actual webcam rate is the smallest of the numbers framerate and webcam_maxrate
      webcam_limit 0
    The issue: when I start motion, images are stored in /media/videos/log/webcam nearly every second. I just want to get images when motion is detected, plus the corresponding video clip. Any idea where the configuration fails?

  • VPN Client solution

    - by realtek
    I have several VPNs that I need to establish on a daily basis, but from multiple workstations. What I would like is to have either a server or a VPN router that can make these connections itself, so that I can then route traffic through that device or server depending on the subnet I am trying to reach. The issue is that I only use VPN clients to connect, so I am basically trying to achieve something like a site-to-site VPN, but built on what is essentially a VPN client connection from my network. The main VPN client I use is the SonicWall Global VPN Client, where I initially use a pre-shared key and it then always prompts me for a username and password (not an RSA key). My question is: is there any type of Linux distro, or even a hardware VPN router, that can do this and connect to a SonicWall device as if it were a client? I have tried pfSense, which is very good, but it fails to connect, probably due to a mismatch of settings. I have tried many others, even DD-WRT on my router, but it does not support whatever protocol SonicWall uses (I thought L2TP/IPsec, but it appears it may not be that). Any advice would be great! The other thing I have thought of, but have not tried yet, is Windows Server Routing and Remote Access, though I have a feeling that won't work either. Thanks

  • PostgreSQL pg_hba.conf with "password" auth wouldn't work with PHP pg_connect?

    - by tftd
    I've recently experimented with the settings in pg_hba.conf. I read the PostgreSQL documentation and I thought that the "password" auth method is what I want. There are many people who have access to the server PostgreSQL is running on, so I don't want the "trust" method. So I changed it. But then PHP stopped working with the database. The message I get is: "Warning: pg_connect(): Unable to connect to PostgreSQL server: FATAL: password authentication failed for user "myuser" in /my/path/to/connection/class.php on line 35". It is kind of strange, because I can connect via phpPgAdmin without any problems, and I can also connect from my home computer with psql - again without any problems. This is my pg_hba.conf:
      # TYPE  DATABASE  USER  CIDR-ADDRESS  METHOD
      # "local" is for Unix domain socket connections only
      local   all       all                 password
      # IPv4 local connections:
      host    all       all   127.0.0.1/32  password
      # IPv6 local connections:
      host    all       all   ::1/128       password
    The connection string I'm using with pg_connect is:
      $connect_string = "host=localhost port=5432 dbname=mydbname user=auser password=apassword";
      $dbConnection = pg_connect($connection_string);
    Does anybody know why this is happening? Did I misconfigure something?
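    Because the working psql/phpPgAdmin sessions may be taking a different connection path (Unix socket vs TCP), and therefore matching a different pg_hba.conf line than PHP's host=localhost, it can help to test the exact same credentials both ways outside PHP. A sketch using psycopg2 - that choice is an assumption; any PostgreSQL client that lets you force TCP vs socket connections works - with the placeholder credentials from the question:
      import psycopg2

      creds = dict(dbname="mydbname", user="auser", password="apassword")  # placeholders from the question

      # 1) TCP connection: matches the "host ... 127.0.0.1/32 password" line (and PHP's host=localhost)
      # 2) Unix-socket connection: matches the "local all all password" line
      for label, extra in (("tcp", dict(host="localhost", port=5432)), ("socket", dict())):
          params = dict(creds, **extra)
          try:
              conn = psycopg2.connect(**params)
              conn.close()
              print(label, "connection: OK")
          except psycopg2.OperationalError as err:
              print(label, "connection failed:", err)
    If one path succeeds and the other fails with the same credentials, the failing pg_hba.conf line (or the password stored for that connection type) is where to look.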

  • Only half of RAM is recognized by BIOS

    - by Rick Crawford
    I have a Gigabyte GA-P35-DS3 mainboard. Some time ago I noticed that Windows only showed 2GB of RAM instead of 4GB; I don't know exactly what caused it. I tried putting in each of the 4 x 1GB RAM modules one by one, and tried every slot one by one, until every stick and slot had been confirmed working. However, when I then added one module at a time, it kept showing 1GB until I put in all 4, at which point it showed only 2GB instead of 4 (in the BIOS and in Windows 7 64-bit). I tried replacing the BIOS battery, since I've read that a low battery can cause this, but it didn't help. I also bought 4GB of new RAM (yes, it's supported, I checked), and it's still the same: it only shows 2GB (or 3GB when I put in 4 of the new and 2 of the old). I also applied the latest BIOS update and used the default BIOS settings, but none of that helped. When my PC boots it shows "RAM modules used 2 and 3" when 4 sticks are in, or "0 and 1" when only 2 are in.

  • IIS6 Sending a 404 for ".exe" files.

    - by Tracker1
    Recently a bunch of files I had set up for download via IIS6's web server stopped working. They are a number of setup files ending in ".exe" and were working prior to a few months ago. I have the file permissions set properly, and I even enabled browsing in IIS to determine that the paths are indeed correct. I'm not sure if it is related, but directories with a period in the name stopped working as well, e.g. "~/download/ApplicationName/0.9/AppName-setup-0.9.123b2.exe". When I rename the directory to, say, 0_9, the browsing works, but the file itself still returns a 404 from IIS. For now, I've set up FileZilla FTP for anonymous access to these files, but I would prefer to continue using IIS. I've considered creating an HTTP handler to serve the .exe files, but would really prefer a configuration solution. I just can't figure out why it isn't working, as all the settings are correct: the directory is set up for read access, "Everyone" has read permissions on the files themselves, and directory browsing (aside from the "0.9" to "0_9" folder rename) shows the files.
