Search Results

Search found 10695 results on 428 pages for 'some none'.

Page 281/428

  • PXE boot and DHCP server configuration Failing Auto Installation

    - by Harihara Vinayakaram
    I have an ISC DHCP server installed on Ubuntu 9.10. I have managed to successfully boot a PXE client, obtain a DHCP address and load the initrd.gz file. But I am facing a vague problem when the Debian installer starts up and tries to obtain a lease of its own. The client sends a DHCP request, and I verified that it comes from the same MAC address, but then there is a DHCP DECLINE (the client declines the address). The server offers every address in the pool, and finally there is a DHCP NAK ("no more free leases"). I tried using the no-ping option, and also option one-client-one-lease, but it does not help. If I set the client to use a fixed-address, the problem is not there and the installation proceeds smoothly. Can you give me any clues on what the DHCP server configuration should be? My dhcpd.conf looks like this:

        ddns-update-style none;
        option domain-name "hadoop-myorg.org";
        option domain-name-servers 192.168.3.5;
        default-lease-time 600;
        max-lease-time 7200;

        group {
            filename "pxelinux.0";
            next-server 192.168.13.184;
            host hadoop1 {
                hardware ethernet 90:e6:ba:d5:53:f8;
            }
        }

        subnet 192.168.13.0 netmask 255.255.255.0 {
            option routers 10.0.0.254;
            pool {
                option domain-name-servers 192.168.3.5;
                max-lease-time 3000;
                range 192.168.13.55 192.168.13.65;
                deny unknown-clients;
            }
        }
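    Since a fixed address makes the problem go away, one pattern worth sketching out is pinning the PXE client's MAC to a fixed-address in dhcpd.conf, so the PXE ROM and the installer's second DHCP exchange both end up with the same lease. This is only a sketch, not a verified fix: the chosen address, the config path and the dhcp3-server service name are assumptions based on a stock Ubuntu 9.10 install.

        # Sketch only: extend the existing "host hadoop1" block in
        # /etc/dhcp3/dhcpd.conf (path assumed) with a fixed address, e.g.
        #
        #   host hadoop1 {
        #       hardware ethernet 90:e6:ba:d5:53:f8;
        #       fixed-address 192.168.13.60;   # hypothetical free address in the subnet
        #   }
        #
        # then restart the daemon and watch the DISCOVER/OFFER/DECLINE exchange.
        sudo /etc/init.d/dhcp3-server restart          # service name assumed
        sudo tail -f /var/log/syslog | grep -i dhcp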


  • Remote Desktop to Server 2008R2 fails from one particular Win7 client

    - by Jesse McGrew
    I have a VPS running Windows Web Server 2008 R2. I'm able to connect using Remote Desktop from my home PC (Windows 7), personal laptop (Windows 7), and work laptop (Windows XP). However, I cannot connect from my work PC (Windows 7). I receive the error "The logon attempt failed" in the RDP client, and the server event log shows "An account failed to log on" with this explanation:

        Subject:
            Security ID:        NULL SID
            Account Name:       -
            Account Domain:     -
            Logon ID:           0x0
        Logon Type:             3
        Account For Which Logon Failed:
            Security ID:        NULL SID
            Account Name:       username
            Account Domain:     hostname
        Failure Information:
            Failure Reason:     Unknown user name or bad password.
            Status:             0xc000006d
            Sub Status:         0xc0000064
        Process Information:
            Caller Process ID:  0x0
            Caller Process Name: -
        Network Information:
            Workstation Name:   JESSE-PC
            Source Network Address: -
            Source Port:        -
        Detailed Authentication Information:
            Logon Process:      NtLmSsp
            Authentication Package: NTLM
            Transited Services: -
            Package Name (NTLM only): -
            Key Length:         0

    I can connect from the offending work PC if I start up Windows XP Mode and use the RDP client inside that. The server is part of a domain but my account is local, so I'm logging in using a username of the form hostname\username. None of the clients are part of a domain. The server uses a self-signed certificate, and connecting from home I get a warning about that, but connecting from work I just get the logon error.


  • How do I connect two computers with a LAN cable?

    - by John
    I have two machines - a desktop running Windows XP and a laptop running Windows 7. I connected them with a LAN cable. On the Windows XP machine I set the IP address to 192.168.0.10, and on the Windows 7 laptop I set it to 192.168.0.20. The laptop can see the Windows XP machine, but the Windows XP machine cannot see the Windows 7 machine. That does NOT concern me, though. I want to move files from my desktop (Windows XP) to the Windows 7 laptop; that's why I'm going through all this. The problem is that when I try to connect from Windows 7 to the Windows XP machine, I get a window asking for a username and password. I don't understand what username/password is needed - I use none on the Windows XP machine. I tried all usernames with no success. Please explain in detail how I can connect to my Windows XP machine.

    EDIT: Maybe this can help: the Windows XP machine is named 'I' and '???????? III' is the name of the laptop. Both computers share one workgroup - WORKGROUP.


  • Re: How can Django/WSGI and PHP share / on Apache?

    - by Bogdan
    In response to: How can Django/WSGI and PHP share / on Apache? Hello, could you please post the complete config file from /sites-available? I am having a problem: it seems the rewrite engine redirects all requests to Django, so static and PHP files are not served and instead I see the Django 404 page. If I get rid of the rewrite rule, then static files and PHP work. Here is my Apache config file from /sites-available:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /home/www/django
            <Directory />
                Options +FollowSymLinks ExecCGI Indexes
                AllowOverride None
                DirectoryIndex index.php
                AddHandler wsgi-script .wsgi
            </Directory>
            RewriteEngine On
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^(.*)$ /mysite.wsgi/$1 [QSA,PT,L]
        </VirtualHost>

    and my .wsgi file:

        import site
        site.addsitedir('/home/user/.virtualenvs/url.com/lib/python2.6/site-packages')

        import os, sys
        path = '/home/www/django'
        if path not in sys.path:
            sys.path.append(path)
        os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
        sys.path.append(path + '/mysite')

        import django.core.handlers.wsgi
        _application = django.core.handlers.wsgi.WSGIHandler()

        import posixpath

        def application(environ, start_response):
            # Wrapper to set SCRIPT_NAME to actual mount point.
            environ['SCRIPT_NAME'] = posixpath.dirname(environ['SCRIPT_NAME'])
            if environ['SCRIPT_NAME'] == '/':
                environ['SCRIPT_NAME'] = ''
            return _application(environ, start_response)

    The document root directory on disk (/home/www/django) contains PHP files, images, and the mysite.wsgi file. Thanks for your help.
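    For what it's worth, one common way out of this, sketched below rather than prescribed: keep the catch-all rewrite to mysite.wsgi but let Apache serve anything that actually exists on disk (PHP pages, images, CSS) before the rule fires. This is an untested sketch, not a working config from this setup - the extra !-d and !\.php$ conditions and the Debian/Ubuntu command names are the assumptions here.

        # Sketch: in the vhost, exclude real files, directories and *.php from the
        # catch-all rewrite so only the remaining URLs reach Django.
        #
        #   RewriteEngine On
        #   RewriteCond %{REQUEST_FILENAME} !-f
        #   RewriteCond %{REQUEST_FILENAME} !-d
        #   RewriteCond %{REQUEST_URI} !\.php$
        #   RewriteRule ^(.*)$ /mysite.wsgi/$1 [QSA,PT,L]
        #
        # Then check and reload:
        sudo apache2ctl configtest
        sudo /etc/init.d/apache2 reload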


  • How can I fix my corrupted RAID1 ext4 partition on a Synology DS212 NAS?

    - by Neil
    I have two identical 3 TB disks that were in a RAID1 array, where one disk crashed. I replaced the failed disk, but not until after the RAID partitions got messed up. I need to figure out how to restore the RAID array and get at my ext4 partition. Here are the properties of the surviving disk:

        # fdisk -l /dev/sda
        fdisk: device has more than 2^32 sectors, can't use all of them
        Disk /dev/sda: 2199.0 GB, 2199023255040 bytes
        255 heads, 63 sectors/track, 267349 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               1      267350  2147483647+  ee  EFI GPT

        # parted /dev/sda print
        Model: ATA ST3000DM001-9YN1 (scsi)
        Disk /dev/sda: 3001GB
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt
        Disk Flags:

        Number  Start   End     Size    File system     Name  Flags
         1      131kB   2550MB  2550MB  ext4                  raid
         2      2550MB  4698MB  2147MB  linux-swap(v1)        raid
         5      4840MB  3001GB  2996GB                        raid

    I replaced the failed drive and cloned the surviving drive to it so I have something to work with. I cloned the drives with dd if=/dev/sdb of=/dev/sda conv=noerror bs=64M, and now /dev/sda and /dev/sdb are identical. Here is the RAID information:

        # cat /proc/mdstat
        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
        md1 : active raid1 sdb2[1]
              2097088 blocks [2/1] [_U]

        md0 : active raid1 sdb1[1]
              2490176 blocks [2/1] [_U]

        unused devices: <none>

    It seems that md2 is missing. Here is what testdisk 6.14-WIP finds:

        Disk /dev/sda - 3000 GB / 2794 GiB - CHS 364801 255 63
        Current partition structure:
             Partition                  Start        End    Size in sectors
         1 P Linux Raid                   256    4980735    4980480 [md0]
         2 P Linux Raid               4980736    9175039    4194304 [md1]
        Invalid RAID superblock
         5 P Linux Raid               9453280 5860519007 5851065728
         5 P Linux Raid               9453280 5860519007 5851065728

        # After a quick search
        Disk /dev/sda - 3000 GB / 2794 GiB - CHS 364801 255 63
             Partition                  Start        End    Size in sectors
         D MS Data                        256    4980607    4980352 [1.41.12-2197]
         D Linux Raid                     256    4980735    4980480 [md0]
         D Linux Swap                 4980736    9174895    4194160
         D Linux Raid                 4980736    9175039    4194304 [md1]
        >P MS Data                    9481056 5858437983 5848956928 [1.41.12-2228]

    And listing the files on the last partition in the list shows all of my files intact. What should I do?
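    Not an answer, but a sketch of the exploratory steps for a case like this, to be run against the clone rather than the one surviving original: look for any remaining md superblock on the data partition, try a degraded assemble, and only ever touch the filesystem read-only. The /dev/sdb5 and /dev/md2 names are assumptions based on the layout above.

        # Exploratory sketch only -- work on the cloned disk, never the last original.
        mdadm --examine /dev/sdb5                  # is there any RAID superblock left on the data partition?
        mdadm --assemble --run /dev/md2 /dev/sdb5  # try to start the data array degraded (device names assumed)
        cat /proc/mdstat                           # did md2 appear?
        fsck.ext4 -n /dev/md2                      # read-only check, makes no changes
        mount -o ro /dev/md2 /mnt                  # mount read-only if the check looks sane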


  • Migrating Windows XP BOOT.INI Settings to Windows 7 Boot-loader

    - by Synetech inc.
    Two months ago my motherboard died, so I bought a used computer that came with Windows 7. I have since installed my old hard-drive, which had Windows XP on it, in this system. What I am trying to do now is to figure out a way to migrate the settings from XP's BOOT.INI into 7's boot-loader. Below is the BOOT.INI I used in XP (I have reduced the strings and updated the disks to point to the new location of the old HD. Oh, and I am not clear on the drive letters: in XP, I could boot the recovery console or MS-DOS from a file in C:\ that contains the boot-sector. I am not sure what drive letter it would be called now—I had to manually change all the drive letters of the old partitions in Windows 7 because it auto-assigned them all wrong/differently).

        [boot loader]
        timeout=10
        default=multi(0)disk(0)rdisk(1)partition(1)\WINDOWS
        [operating systems]
        multi(0)disk(0)rdisk(1)partition(1)\WINDOWS="XP" /fastdetect
        multi(0)disk(0)rdisk(1)partition(1)\WINDOWS="XP (Safe)" /safeboot:network /sos /bootlog /noguiboot
        C:\CMDCONS\BOOTSECT.DAT="Recovery Console" /cmdcons
        C:\BOOTSECT.DOS="MS-DOS 7.10" /win95

    I have looked around, and have only been able to find some bcdedit commands to add XP to the boot-loader, but none that include information on setting safe-mode for it (or changing any of the XP load options for that matter). Not surprisingly I suppose, I have not found anything on adding the XP recovery console or DOS to the Windows 7 boot-loader. (Yes, I tried EasyBCD, but that did not help; it had no options for XP, and the best I managed was to get a choice of booting 7 or normal-mode XP—choosing XP didn't even give the old XP boot menu.) Can anyone please tell me how to export the entries in XP's boot.ini to 7's boot-loader so that on boot, I can choose to load the following:

        Windows 7
        Windows 7 (Safe-mode)
        (Windows 7 (The Win7 counterpart of the Recovery Console))
        Windows XP
        Windows XP (Safe-mode)
        Windows XP (Recovery Console)
        MS-DOS 7.10


  • How to automatically remove Flash history/privacy trail? Or stop Flash from storing it?

    - by Arjan van Bentem
    Many people have heard about third-party cookies, and some browsers even block those by default. Some people may even be using Private Browsing modes. However, only a few seem to realise that Adobe's Flash player also leaves a cross-browser trail on your local hard drive, and allows for sending cookie-like information back to the server, including third-party sites. And because it is a plugin, Flash does not take any of the browser's privacy settings into account. Sorry for the long post, but first some details about why using Flash raises a privacy concern, followed by the results of my tests:

    - The Flash player keeps a cross-browser history of the domain names of the Flash-sites your computer has visited. Unlike your browser's history, this history is not limited to a certain number of days. History is also recorded while using so-called Private Browsing modes. It is stored on your hard drive (though, as described below, without going to Adobe's site you won't know what is stored). I am not sure if any date and time information is kept about each visit, but to see the domain names: right-click on some Flash content, open the settings dialog, and click the Help icon or click the Advanced button within the Privacy tab. This opens a browser to the help pages on Adobe.com, where one can click through to the Website Storage Settings panel. One can clear the existing list, but one cannot stop it from being recorded again.

    - Flash allows for storing data on your local hard drive, using so-called Local Shared Objects (aka "Flash Cookies"). Just like HTTP cookies, this data can be sent back to the server, for tracking purposes. They are cross-browser, have no expiration date, and no user-defined maximum lifetime can be set in the Flash preferences either. These not being HTTP cookies, they are (of course) not blocked by a browser's cookies preferences and are not removed when the normal HTTP cookies are deleted. Adobe has announced that version 10.1 will obey Private Browsing in most popular browsers, but unfortunately there is no word about also removing the data whenever normal cookies are deleted manually. And its implementation might be confusing: "[..] if the browser is in normal browsing mode when the Flash Player instance is created, then that particular instance will forever be in normal browsing mode (private browsing is turned off). Accordingly, toggling private browsing on or off without refreshing the page or closing the private browsing window will not impact Flash Player."

    - Local Shared Objects are not limited to the site you visit, and third-party storage is enabled by default. At the Global Storage Settings panel one can deselect the default Allow third-party Flash content to store data on your computer. Because of the cross-browser and expiration-less nature (and the fact that few people know about it), I feel that the cross-browser third-party Flash Cookies are more dangerous for visitor tracking than third-party normal HTTP cookies. They are even used to restore plain HTTP cookies that the user tried to delete: "All advertisers, websites and networks use cookies for targeted advertising, but cookies are under attack. According to current research they are being erased by 40% of users creating serious problems," says Mookie Tenembaum, founder of United Virtualities. "From simple frequency capping to the more sophisticated behavioral targeting, cookies are an essential part of any online ad campaign. PIE ["Persistent Identification Element"] will give publishers and third-party providers a persistent backup to cookies effectively rendering them unassailable", adds Tenembaum. [..] To justify this tracking mechanism, UV's Tenembaum said, "The user is not proficient enough in technology to know if the cookie is good or bad, or how it works."

    - When selecting None (zero KB) for Specify the amount of disk space that websites that you haven't yet visited can use to store information on your computer, and checking Never ask again, then some sites do not work. However, the same site might work when setting it to None but without selecting Never ask again, and then choosing Deny whenever prompted. Both options would result in zero KB of data being allowed, but the behaviour differs.

    - The plugin also provides a Flash Player cache for Adobe-signed files. I guess these files are not an issue.

    So: how to automatically delete that information?

    On a Mac, one can find a settings.sol file and a folder for each visited Flash-website in: $HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer/sys/ Deleting the settings.sol file and all the folders in sys removes the trail from the settings panels. However, the actual Local Shared Objects are elsewhere (see Wikipedia for locations on other operating systems), in a randomly named subfolder of: $HOME/Library/Preferences/Macromedia/Flash Player/#SharedObjects

    But then: how to remove this automatically? Simply removing the folders and the settings.sol file every now and then (like by using launchd or Windows' Task Scheduler) may interfere with active browsers. Or is it safe to assume that, given the cross-browser nature, the plugin would not care if things are removed while it is active? Only clearing during log-off may not work for those who hibernate all the time. Firefox users can install BetterPrivacy or Objection to delete the Local Shared Objects (for all other browsers as well). I don't know if that also deletes the trail of website domain names.

    Or: how to stop Flash from storing a history trail? Change of plans: I'm currently testing prohibiting Flash from writing to its own sys and #SharedObjects folders. So far, Flash has not tried to restore permissions (though, when deleting the folders, Flash will of course recreate them). I've not encountered any problems but this may take some while to validate, using multiple browsers and sites. I've not yet found a log that reports errors. On a Mac:

        cd "$HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer"
        rm -r sys/*
        chmod u-w sys
        cd "$HOME/Library/Preferences/Macromedia/Flash Player"
        # preserve the randomly named subfolders (only preserving the latest would suffice; see below)
        rm -r \#SharedObjects/*/*
        chmod -R u-w \#SharedObjects

    I guess the above chmods cannot be achieved on an old Windows system (I'm not sure about XP and Vista?). Though maybe on Windows one could replace the folders sys and #SharedObjects with dummy files with the same names? Anyone? Obviously, keeping Flash from storing those Local Shared Objects for all sites may cause problems. Some test results (Flash 10 on Mac OS X):

    - When blocking the sys folder (even when leaving the #SharedObjects folder writable), YouTube won't remember your volume settings while viewing multiple videos. Temporarily allowing write access to the blocked folders while visiting trusted sites (to only create folders for domains you like, maybe including references in settings.sol) solves that. This way, for YouTube, Flash could be allowed to write to sys/#s.ytimg.com and #SharedObjects/s.ytimg.com, while Flash could not create new folders for other domains. One may also need to make settings.sol read-only afterwards, or delete it again.

    - When blocking both the sys and #SharedObjects folders, YouTube and Vimeo work fine (though they might not remember any settings). However, Bits on the Run refuses to even show the video player. This is solved by temporarily unblocking the #SharedObjects folder, to allow Flash to create a subfolder with some random name. Within this folder, it would create yet another folder for the current Flash website (content.bitsontherun.com). Removing that website-specific folder, and blocking both #SharedObjects and the randomly named subfolder, still seems to allow Bits on the Run to operate, even though it still cannot write anything to disk. So: the existence of the randomly named subfolder (even when write protected) is important for some sites.

    When I first found the #SharedObjects folder, it held many subfolders with random names, some created on the very same day. I wonder when Flash decides it wants a new folder, and how it determines (and remembers) that random name.

    For a moment I considered not blocking write access for sys and #SharedObjects, but explicitly creating read-only folders for well-known third-party tracking domains (like based on a list from, for example, AdBlock Plus). That way, any other domain could still create Local Shared Objects. But the list would be long, and the domains from AdBlock Plus are probably all third-party domains anyway, so disabling Allow third-party Flash content to store data on your computer might have the very same result. Any experience anyone?

    (Final notes: if the above links to the settings panels do not work in the future, then use the URL that is known to Flash player as a starting point: www.adobe.com/go/settingsmanager. See also "You Deleted Your Cookies? Think Again" at Wired.com -- which uses Flash cookies itself as well... For the very suspicious using Time Machine: you may want to exclude both folders, for each user, and remove the trace that is already on your backup.)
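    Since the open question above is "how to remove this automatically?", here is a minimal sketch of the scheduled-cleanup variant, to be run from cron or launchd. The paths are the Mac OS X locations quoted above; whether it is safe to wipe these folders while a browser is still running is exactly the point that remains unverified.

        #!/bin/sh
        # flashwipe.sh (hypothetical name) -- wipe the Flash history trail and the
        # Local Shared Objects. Quoting matters because the path contains spaces.
        FLASH="$HOME/Library/Preferences/Macromedia/Flash Player"
        rm -rf "$FLASH/macromedia.com/support/flashplayer/sys/"*
        rm -rf "$FLASH/#SharedObjects/"*/*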


  • Internal disk not correctly recognised by Windows 7

    - by david
    I'm having problems configuring a disk in a brand new, clean Windows 7 install. Here are some system specifics:

        Disk:  Western Digital VelociRaptor WD6000HLHX
        Mobo:  Gigabyte Z77X-UD3H
        BIOS:  SATA mode set to AHCI [not RAID], with the disk connected to SATA0 [6 Gb/s hi-speed SATA]
        OS:    Windows 7 Enterprise SP1 x64

    The disk is recognized by the BIOS and correctly identified [name & size OK]. The disk is also recognized by Windows on a hardware level, but it won't show up in Explorer. Windows reports the device is working correctly. Windows Disk Manager shows the drive, but says it's uninitialized and has no partitions [which is incorrect]. If I try to initialize the drive, Windows throws an error saying that it "cannot find the file specified". [Which file???]

    Before connecting the drive to the new machine, I partitioned and formatted the disk under Windows XP SP2, giving it 2 partitions [MBR, not GPT] and copying over a boatload of data. Obviously none of this appears under Windows 7. Removing the disk from the new machine and replacing it back in the Windows XP machine shows the disk and all data are intact and functional.

    I'd like to have Windows 7 recognize the disk without having to lose the data and start over. Is this possible? If so, how would I do that? I checked this post, but even though the problem seems identical, the information didn't help. Any help appreciated. Thanks!


  • Why isn't MediaWiki loading?

    - by E L
    I recently set up MediaWiki on an Apache server with PostgreSQL. It installed successfully. However, when I try to access the website, I get a blank page. The error log reports the following:

        [error] PHP Fatal error:  require_once(): Failed opening required '/var/www/mediawiki-1.19.2/LocalSettings.php' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/mediawiki-1.19.2/includes/WebStart.php on line 134
        [error] PHP Warning:  require_once(/var/www/mediawiki-1.19.2/LocalSettings.php): failed to open stream: Permission denied in /var/www/mediawiki-1.19.2/includes/WebStart.php on line 134
        [error] PHP Fatal error:  require_once(): Failed opening required '/var/www/mediawiki-1.19.2/LocalSettings.php' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/mediawiki-1.19.2/includes/WebStart.php on line 134

    I've seen other people with similar problems and the solutions have involved using chmod on LocalSettings.php to 644 or in other cases 755. Others have said to use chown to make LocalSettings match the Apache user, which is just 'apache' in my case. None of these solutions have worked for me. Does anyone have other suggestions, or maybe I missed something?
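    A sketch of the checks worth running when Apache still gets "Permission denied" after the usual chmod/chown advice. The SELinux part is an assumption - it only matters if the distribution ships SELinux in enforcing mode (the /usr/share/pear include path hints at a Red Hat style system).

        ls -lZ /var/www/mediawiki-1.19.2/LocalSettings.php    # owner, mode and SELinux context in one view
        ls -ld /var/www/mediawiki-1.19.2                      # the directory needs the execute bit for 'apache' too
        sudo chown apache:apache /var/www/mediawiki-1.19.2/LocalSettings.php
        sudo chmod 640 /var/www/mediawiki-1.19.2/LocalSettings.php
        getenforce                                            # "Enforcing" means SELinux can still block the read
        sudo restorecon -v /var/www/mediawiki-1.19.2/LocalSettings.php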


  • Does any Certificate Authority support both SAN and wildcards?

    - by nicholas a. evans
    My basic quandary is that wildcard certificates don't support subdomains of subdomains, nor do they help with alternate domain names. Basically, if my CN is example.com, I want a Subject Alternative Name field that looks roughly like so:

        DNS:example.com
        DNS:*.example.com
        DNS:*.beta.example.com
        DNS:example.net
        DNS:*.example.net
        DNS:*.beta.example.net

    Using a self-signed cert, I verified that the browsers will work just fine with this. Unfortunately, none of the Certificate Authorities that I looked into (Thawte, GoDaddy, Verisign, Digicert) seemed to support both wildcard certs and Subject Alternative Name (sometimes referred to as "Multiple Domain UCC"). I even called up GoDaddy tech support to confirm. Is there a CA (trusted by 99% of browsers) that supports wildcards for the Subject Alternative Name?

    One little restriction: I'm saddled with Amazon EC2's single Elastic IP per instance limitation. Here are what I see as my backup plans:

    1. Set up three extra EC2 instances, each configured for a different IP address and cert, and nginx reverse proxy from three of them into the app server(s). Drawbacks: introduces latency(?), and even the cheapest EC2 instance isn't that cheap.
    2. Instead of dedicated reverse proxy instances, set up four or more almost identical EC2 app servers, with nginx using the port to determine which cert to deliver, and use haproxy to distribute the traffic amongst themselves. Drawbacks: complicated to configure and manage? I'm not using the cheapest EC2 instance type for my app servers, and if I don't need 4+ app servers for the load, it raises the cost.
    3. Set up an external server (outside of EC2) that doesn't have EC2's Elastic IP address restrictions, set up all of the alternate IP addresses and certificates on that server, and nginx reverse proxy from that server into the EC2 app servers. Drawbacks: extra IP addresses are almost free (still need to pay for the server of course), but don't come with the robust "elasticity" that Amazon's Elastic IPs provide; even more latency than in the first scenario.

    Are these approaches crazy or reasonable? Do you have another one to suggest?
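    For anyone reproducing the self-signed test, here is a rough sketch of generating a CSR that combines the wildcard entries with the extra SANs, so a CA that does offer "multi-domain wildcard" certificates has everything in one request. The file names and the exact SAN list are placeholders.

        # write a minimal OpenSSL request config with the desired SANs
        cat <<'EOF' > san.cnf
        [req]
        distinguished_name = dn
        req_extensions     = v3_req
        prompt             = no
        [dn]
        CN = example.com
        [v3_req]
        subjectAltName = DNS:example.com, DNS:*.example.com, DNS:*.beta.example.com, DNS:example.net, DNS:*.example.net, DNS:*.beta.example.net
        EOF
        openssl req -new -newkey rsa:2048 -nodes -keyout san.key -out san.csr -config san.cnf
        openssl req -in san.csr -noout -text | grep -A1 'Subject Alternative Name'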


  • ZFS - zpool ARC cache plus L2ARC benchmarking

    - by jemmille
    I have been doing lots of I/O testing on a ZFS system I will eventually use to serve virtual machines. I thought I would try adding SSDs for use as cache to see how much faster I can get the read speed. I also have 24GB of RAM in the machine that acts as ARC. vol0 is 6.4TB and the cache disks are 60GB SSDs. The zpool is as follows:

        pool: vol0
        state: ONLINE
        scrub: none requested
        config:

        NAME                         STATE     READ WRITE CKSUM
        vol0                         ONLINE       0     0     0
          c1t8d0                     ONLINE       0     0     0
        cache
          c3t5001517958D80533d0      ONLINE       0     0     0
          c3t5001517959092566d0      ONLINE       0     0     0

    The issue is I'm not seeing any difference with the SSDs installed. I've tried bonnie++ benchmarks and some simple dd commands to write a file then read the file. I have run benchmarks before and after adding the SSDs. I've ensured the file sizes are at least double my RAM so there is no way it can all get cached locally. Am I missing something here? When am I going to see benefits of having all that cache? Am I simply not under these circumstances? Are the benchmark programs not good for testing the effect of cache because of the way (and what) they write and read?
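    One way to see whether the cache devices are being exercised at all is to watch the ARC/L2ARC counters while the benchmark runs, rather than just the end-to-end throughput. This is only a sketch: kstat is the Solaris/illumos way of reading arcstats, and on other platforms arcstat or plain "zpool iostat -v" give roughly the same picture.

        zpool iostat -v vol0 5                                   # per-vdev I/O, including the two cache SSDs
        kstat -n arcstats | egrep 'hits|misses|l2_size|l2_hits|l2_misses'
        # l2_hits staying at zero during the read pass means the benchmark's
        # working set never made it into (or back out of) the L2ARC.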


  • centos postfix send email problem

    - by Catalin
    Hello. I have a big problem with postfix. I can receive mail in webmin and Outlook, but I can't send (I can only send locally, user to user). Dovecot is working just fine. Sendmail is disabled. Please help me.

        # postfix -n
        postfix: invalid option -- n
        postfix: fatal: usage: postfix [-c config_dir] [-Dv] command

        [root@xprivatecams usr]# postconf -n
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        broken_sasl_auth_clients = yes
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        debug_peer_level = 2
        home_mailbox = Maildir/
        html_directory = no
        inet_interfaces = all
        mail_owner = postfix
        mailbox_command =
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        milter_default_action = accept
        milter_protocol = 2
        mydestination = $myhostname, localhost.$mydomain, localhost
        myhostname = xprivatecams.com
        mynetworks = 94.177.41.0/24, 127.0.0.0/8
        newaliases_path = /usr/bin/newaliases.postfix
        non_smtpd_milters = inet:localhost:20207
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
        sample_directory = /usr/share/doc/postfix-2.3.3/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtp_tls_note_starttls_offer = yes
        smtp_use_tls = yes
        smtpd_milters = inet:localhost:20207
        smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_authenticated_header = yes
        smtpd_sasl_local_domain =
        smtpd_sasl_security_options = noanonymous
        smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem
        smtpd_tls_auth_only = no
        smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt
        smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_timeout = 3600s
        smtpd_use_tls = yes
        tls_random_source = dev:/dev/urandom
        unknown_local_recipient_reject_code = 550

    The mail log for an outgoing message:

        Jan 18 00:46:17 xprivatecams postfix/postfix-script: starting the Postfix mail system
        Jan 18 00:46:17 xprivatecams postfix/master[15545]: daemon started -- version 2.3.3, configuration /etc/postfix
        Jan 18 00:48:00 xprivatecams postfix/pickup[15546]: EDE7EA8001B: uid=0 from=<[email protected]>
        Jan 18 00:48:00 xprivatecams postfix/cleanup[15817]: EDE7EA8001B: message-id=<[email protected]>
        Jan 18 00:48:00 xprivatecams opendkim[2776]: EDE7EA8001B: DKIM-Signature header added
        Jan 18 00:48:01 xprivatecams postfix/qmgr[15547]: EDE7EA8001B: from=<[email protected]>, size=615, nrcpt=1 (queue active)
        Jan 18 00:48:31 xprivatecams postfix/smtp[15820]: connect to mail.flabell.com[72.47.224.75]: Connection timed out (port 25)
        Jan 18 00:48:31 xprivatecams postfix/smtp[15820]: EDE7EA8001B: to=<[email protected]>, relay=none, delay=30, delays=0.08/0.03/30/0, dsn=4.4.1, status=deferred (connect to mail.flabell.com[72.47.224.75]: Connection timed out)

    Connecting to the server itself works:

        telnet 94.177.41.70 25
        Trying 94.177.41.70...
        Connected to xprivatecams.com (94.177.41.70).
        Escape character is '^]'.
        220 xprivatecams.com ESMTP Postfix
        ehlo me
        250-xprivatecams.com
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-AUTH LOGIN PLAIN
        250-AUTH=LOGIN PLAIN
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN
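    The log's "status=deferred ... Connection timed out (port 25)" points at outbound port 25 being blocked somewhere between this box and the destination, which would explain why local delivery works but nothing external does. A sketch of how to confirm that and, if so, work around it; the relay hostname is a placeholder for whatever your provider actually offers.

        telnet mail.flabell.com 25          # run from the server: a timeout here means outbound 25 is filtered
        # Workaround sketch: hand outbound mail to the provider's smarthost instead
        postconf -e 'relayhost = [smtp.provider.example]'
        postfix reload
        postqueue -f                        # flush the deferred queue and watch the mail log again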


  • Asus WL-520GU conflicting subnet (and/or IP) with 2Wire DSL

    - by Paula
    I have an Asus wireless router: WL-520GU... and an AT&T 2Wire for my DSL connection. When I try to browse anywhere, I just get an odd message from the Asus router (in the common Asus broken-English, bad formatting, and awful spelling): http://postimage.org/image/upxrjflcj I guess it's trying to say: Your Asus Router and your 2Wire have the same subnet mask. (It doesn't say if that's good, or bad... but it sounds like they must be different.) but... But for the "solution" it looks like it's trying to say: Your Asus Router and your 2Wire have the same IP address. My Asus has the defaults: 192.168.1.1 and 255.255.255.0 My 2Wire has: 192.168.1.66 I'm not seeing where the conflict(s) could be. The Asus firmware is v3.0.0.14 . None of these problems occur with the old v3.0.0.8 firmware. Any ideas on how to fix this? (PLEASE don't say to run a totally different DD/Tomato firmware because it's "better". I need to fix THIS 1 problem, not try to convince my company to switch everything to an entirely different set of problems.)


  • kvm works only when kvm-intel is unloaded

    - by Sathya
    I am new to KVM and I have this strange issue. But before explaining the issue, here is my setup. I am trying to install a VM on my host, which is an Acer 5720 laptop with a T7500 Intel processor. The CPU flags indicate that virtualization is supported. I run Ubuntu 10.04 (Lucid) on it, which comes with KVM.

    Now coming to the issue - I don't get any errors while executing "sudo modprobe kvm-intel", so I presume my processor does indeed support hardware virtualization. I use virt-manager and create a VM on which I install Ubuntu from an *.iso file. When I start the VM it says it is running, with no signs of any trouble, and I can see the domain in "virsh list". But when I try to connect to the VM through VNC, all I get to see is a blank screen (no cursor). There is no response to any key press. I changed the video mode etc. and tried all different combinations, but none work.

    But strangely, if I shut down the VM and virt-manager and then unload the module by doing "sudo modprobe -r kvm-intel", everything works fine, i.e. I can see the screen via VNC, I am able to install the OS and so on. So what does this mean? Is hardware virtualization not supported? How come there is no error anywhere? "dmesg | grep kvm" doesn't report anything. Can someone throw light on what exactly is happening?
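    For completeness, a sketch of the usual sanity checks before concluding anything about kvm-intel; none of this is specific to the Acer, and it only confirms whether the kernel actually has hardware acceleration available.

        egrep -c '(vmx|svm)' /proc/cpuinfo   # non-zero: the CPU exposes hardware virtualization
        dmesg | grep -i -e kvm -e vmx        # "disabled by BIOS" here is the classic silent culprit
        lsmod | grep kvm                     # kvm and kvm_intel should both be listed
        ls -l /dev/kvm                       # the device node qemu/libvirt actually opens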


  • Vista ICS issue

    - by Bill Grey
    I have a strange problem with Internet Connection Sharing on a laptop running Vista Business. This laptop is connected to the internet via the ethernet port, which goes to an ADSL modem. it is automatically assigned the IP address 192.168.1.50, and the modem/gateway is 192.168.1.1 My friends laptop is running Vista Home. Previously, I would create an ad hoc wireless network, enable ICS, and everything would be perfect. My friend would have internet access via this. However, something has now mysteriously broken. If I enable ICS on the wireless connection, it resets my Local Area Connection, assigning it the manual IP address of 192.168.0.1, which means my connection to the internet is destroyed. Both wireless adapters on each network are assigned auto configuration addresses, in the 168. range. They can see each other fine, but my friends laptop cannot access the internet via mine, even after I have restored the Local Area Connection settings. I understand the computer with ICS enabled must have the IP of 192.168.0.1, but previously, before whatever went wrong, my wireless adapter would be 192.168.0.1 and my friends computer would get an IP via DHCP. I have also tried setting static IP address and making a bridge, none of which works. How can I fix this problem, and prevent enabling ICS from touching my Local Area Connection? Both machines have no firewall, have appropriate settings etc...


  • Apache2 Modpython : IOError: Write failed, client closed connection.

    - by llazzaro
    This is the error:

        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] mod_python (pid=9528, interpreter='realpage.com', phase='PythonHandler', handler='django.core.handlers.modpython'): Application error
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] ServerName: 'realpage.dom'
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] DocumentRoot: '/htdocs'
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] URI: '/'
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] Location: '/'
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XX.248.60] Directory: None
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] Filename: '/htdocs'
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] PathInfo: '/'
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] Traceback (most recent call last):
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60]   File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1537, in HandlerDispatch\n    default=default_handler, arg=req, silent=hlist.silent)
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60]   File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1229, in _process_target\n    result = _execute_target(config, req, object, arg)
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60]   File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1128, in _execute_target\n    result = object(arg)
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60]   File "/usr/lib/python2.5/site-packages/django/core/handlers/modpython.py", line 228, in handler\n    return ModPythonHandler()(req)
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60]   File "/usr/lib/python2.5/site-packages/django/core/handlers/modpython.py", line 220, in __call__\n    req.write(chunk)
        [Mon Mar 01 12:19:50 2010] [error] [client XXX.XX.248.60] IOError: Write failed, client closed connection.

    Please! I am sure you need more information in order to find the bug; please tell me what information you need and how to get it. The error is thrown every time!


  • Why datacenter water cooling is not widespread?

    - by MainMa
    From what I read and hear about datacenters, there are not too many server rooms that use water cooling, and none of the largest datacenters use it (correct me if I'm wrong). Also, it's relatively easy to buy ordinary PC components that use water cooling, while water-cooled rack servers are nearly nonexistent. On the other hand, using water can possibly (IMO):

    - Reduce the power consumption of large datacenters, especially if it is possible to create direct-cooled facilities (i.e. the facility is located near a river or the sea).
    - Reduce noise, making it less painful for humans to work in datacenters.
    - Reduce the space needed for the servers: on the server level, I imagine that in both rack and blade servers it's easier to route the water cooling tubes than to waste space to allow air to pass inside; on the datacenter level, while the alleys between servers are still required for maintenance access, the empty space under the floor and at the ceiling level used for the air can be removed.

    So why are water cooling systems not widespread, neither on the datacenter level nor on the rack/blade server level? Is it because:

    - Water cooling is hardly redundant on the server level?
    - The direct cost of a water-cooled facility is too high compared to an ordinary datacenter?
    - It is difficult to maintain such a system (regularly cleaning a water cooling system that uses water from a river is of course much more complicated and expensive than just vacuum-cleaning the fans)?


  • How do I force Windows 7 to recognize my Projector?

    - by user63564
    I have a new Dell XPS with an NVIDIA GeForce GT 445M and an old (several years) Epson PowerLite Cinema 550 projector. Windows 7 refuses to recognize that the projector is connected under normal conditions (I'll get to the strange condition in a moment). Here are some things that I have already tried: Confirm that the projector continues to work well on my old Windows XP laptop. Confirm that the video cable (HDMI to HDMI) is connected Make sure the Dell laptop is plugged in to wall power at all times Reboot both the computer and the projector Click "Detect" under the "Connect to an External Display" Windows dialog (no reaction) Click "Rigorous Display Detection" under NVIDIA Control Panel (dialog: none found) Checked "Force Television Detection on startup" under "My display is not shown..." in NVIDIA Control Panel (no effect) Here's where it gets weird... My projector has three states: off, standby and on. Standby means the power switch on the back is on, but the projector is effectively off (no picture, no access to menu or controls). When I plug in the HDMI cable while the projector is in standby, Windows detects the projector! It lets me switch to Duplicate, Extend, or Project Only mode, and adjusts the resolution appropriately. A new Generic Plug-n-Play monitor shows up in my device manager. A "Seiko EPSON PJ" display shows up in my NVIDIA control panel. Then if I turn my projector on, Windows no longer recognizes the display. This is true whether I turn the projector on while the HDMI cable is plugged in, or if I unplug the HDMI cable while turning on the projector. Anyone have any ideas, because I'm completely stumped...?


  • Unable to stop chrome.exe *32

    - by chipperyman573
    So I was installing roboform today and was unable to stop the process chrome.exe *32... Even when I uninstalled chrome. This is the error I got: I used lockhunter and it said it was located in %appdata%\Local\Google\Chrome. However, it was unable to unlock, delete or rename. When I use explorer to delete or rename that folder, it says it's being used by Chrome. Even after restarting my computer it still does this. I've tried using the built in chrome task manager (Wrench View Background Pages) and I can't seem to find a process there that has the same amount of memory. I have run many, many virus scans, by: Microsoft security essentials AVG (Free version) Malwarebytes (Pro version) Norton 360 (Pro version) McAfee (Pro Version) Avira (Free version) Avast! Antivirus (Free version) None of which returned with any viruses. Chrome info: Google Chrome 23.0.1271.95 (Official Build 169798) OS Windows 7 Professional WebKit 537.11 (@135931) JavaScript V8 3.13.7.5 Flash 11.5.31.2 User Agent Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11


  • Distributed development staff needing a common IP range

    - by bakasan
    I work on a development staff that is geographically distributed, mostly all throughout the state of CA, but several key members also must travel frequently. We rely quite heavily on a 3rd party provider API for a great deal of our subsystems (can't get into who it is or what they do). The 3rd party however is quite stringent on network access and have no notion of a development sandbox. Access is restricted to 2, 3 IP numbers and that's about it. Once we account for our production servers, that leaves us with an IP or two to spare for our dev team--which is still problematic as people's home IP changes, people travel, we have more than 2 devs, etc. Wide IP blocks are not permitted by the 3rd party. Nor will they allow dynamic DNS type services. There is no simple console to swap IPs on the fly either (e.g. if a dev's IP at home changes or they are on the road). As none of us are deep network experts, I'm wondering what our viable options are? Are there such things as 3rd party hosts to VPNs? Generally I think of a VPN as a mechanism to gain access to a home office, but the notion would be a 3rd party VPN that we'd all connect to and we'd register this as an IP origin w/ our 3rd party. We've considered using Amazon EC2 to effectively host a dev environment for each dev and using that to connect. Amazon only gives you so many static IPs however (I believe 5?) so this would only be a stop gap solution until our team size out strips our IP count at Amazon. Those were the only viable thoughts that I had, but again, I'm far from a networking guy. Tried searching for similar threads, but I'm not even sure I know the right vernacular to look around for.


  • Apache Caching and Expires configuration

    - by mcondiff
    I'm looking for the best possible caching/expires configuration for my specific situation. I realize that some sites have advocated turning ETags off:

        Header unset ETag
        FileETag None

    I know that I should use either Expires or Cache-Control. In addition, I know that I should use either Last-Modified or ETags (per the YSlow docs). I inherited a client's server that uses the following in .htaccess:

        <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf|xml|txt|html|htm)$">
            Header set Cache-Control "max-age=172800, public, must-revalidate"
        </FilesMatch>

    With this server I am not going to be able to rely on staff to rename images, CSS and JS in web applications, so I do not want to set the expires far in the future without knowing (with good certainty) that most/all browsers will check to see if content has changed. What I do not want to happen is someone calling me to say the website is broken because they replaced an image and it's not showing up. But I do want to take the most advantage I can of caching and expires, while still maintaining that mostly all browsers will check with the server to see if components have changed. I have access to both the .htaccess and the Apache .conf file, and it is a single server; the content is not deployed on multiple servers. What would be the best .htaccess or .conf configuration for me to achieve my goals for this client's server? Thanks for your help.
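    One possible middle ground, sketched below rather than prescribed: keep a short max-age with must-revalidate (so a stale copy can live at most a couple of days), leave Last-Modified/ETag alone so conditional requests still work, and add mod_expires so the Expires header agrees with Cache-Control. The two-day figure simply mirrors the inherited 172800 seconds and is an assumption, not a recommendation.

        # append to the site's .htaccess (sketch; assumes mod_expires and mod_headers are enabled)
        cat <<'EOF' >> .htaccess
        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType image/png              "access plus 2 days"
            ExpiresByType image/jpeg             "access plus 2 days"
            ExpiresByType image/gif              "access plus 2 days"
            ExpiresByType text/css               "access plus 2 days"
            ExpiresByType application/javascript "access plus 2 days"
        </IfModule>
        EOF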


  • How do I find out the Mac Address Firefox submits for Geolocation?

    - by Jim McKeeth
    I moved, but have the same router and cable box. Now when I visit Google Maps it still shows my old location. I went to the SkyhookWireless update page to update my Mac Address with my new location. The process is you put in your Mac address and it shows you the current location, and then you adjust that location and submit it. When I got the Mac Address from the status page on my router Skyhook reported that location as in Pakistan, which isn't even the correct continent (as my old or new address). I tried every other Mac address I could come up with: The cable router, the Wireless router's other Mac address, my PC's Mac address, etc. and none of them reported any location on the Skyhook page. So I am guessing that there is another Mac address, or some other bit of information that is used by Firefox when it reports to Google Maps my current location. Now that you have the back story, how do I find the Mac address or whatever information it is that Firefox (or other browsers) use to determine my Geolocation? Everything I have read online is rather vague. The next option I am considering is hooking a logging proxy onto Firefox and seeing what data it sends, but I'd rather find an easier method. Related: How do I update the geo location of my house?


  • IP Tables won't save the rule.

    - by ArchUser
    Hello, I'm using Arch Linux and I have an iptables rule set that I know works (from my other server). It's in /etc/iptables/iptables.rules, and it's the only rule set in that directory. I run /etc/rc.d/iptables save, then /etc/rc.d/iptables restart, but when I do "iptables --list", I get ACCEPT policies on INPUT, FORWARD & OUTPUT.

        # Generated by iptables-save v1.4.8 on Sat Jan  8 18:42:50 2011
        *filter
        :INPUT DROP [0:0]
        :FORWARD DROP [0:0]
        :OUTPUT ACCEPT [216:14865]
        :BRUTEGUARD - [0:0]
        :interfaces - [0:0]
        :open - [0:0]
        -A INPUT -p icmp -m icmp --icmp-type 18 -j DROP
        -A INPUT -p icmp -m icmp --icmp-type 17 -j DROP
        -A INPUT -p icmp -m icmp --icmp-type 10 -j DROP
        -A INPUT -p icmp -m icmp --icmp-type 9 -j DROP
        -A INPUT -p icmp -m icmp --icmp-type 5 -j DROP
        -A INPUT -p icmp -j ACCEPT
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -j interfaces
        -A INPUT -j open
        -A INPUT -p tcp -j REJECT --reject-with tcp-reset
        -A INPUT -p udp -j REJECT --reject-with icmp-port-unreachable
        -A INPUT -p tcp -m tcp ! --tcp-flags FIN,SYN,RST,ACK SYN -m state --state NEW -j DROP
        -A INPUT -f -j DROP
        -A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG FIN,SYN,RST,PSH,ACK,URG -j DROP
        -A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -j DROP
        -A INPUT -i eth+ -p icmp -m icmp --icmp-type 8 -j DROP
        -A BRUTEGUARD -m recent --set --name BF --rsource
        -A BRUTEGUARD -m recent --update --seconds 600 --hitcount 20 --name BF --rsource -j LOG --log-prefix "[BRUTEFORCE ATTEMPT] " --log-level 6
        -A BRUTEGUARD -m recent --update --seconds 600 --hitcount 20 --name BF --rsource -j DROP
        -A interfaces -i lo -j ACCEPT
        -A open -p tcp -m tcp --dport 80 -j ACCEPT
        -A open -p tcp -m tcp --dport 10011 -j ACCEPT
        -A open -p udp -m udp --dport 9987 -j ACCEPT
        -A open -p tcp -m tcp --dport 30033 -j ACCEPT
        -A open -p tcp -m tcp --dport 8000 -j ACCEPT
        -A open -p tcp -m tcp --dport 8001 -j ACCEPT
        -A open -s 76.119.125.61 -p tcp -m tcp --dport 21 -j ACCEPT
        -A open -s 76.119.125.61 -p tcp -m tcp --dport 3306 -j ACCEPT
        -A open -p tcp -m tcp --dport 22 -j BRUTEGUARD
        -A open -s 76.119.125.61 -p tcp -m tcp --dport 22 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        COMMIT
        # Completed on Sat Jan  8 18:42:50 2011
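    A sketch of how to check whether the rc script is even reading that file, and of loading the rules by hand; the /etc/conf.d/iptables variable name is the Arch default of that era and is an assumption for this particular install.

        grep -i conf /etc/conf.d/iptables              # IPTABLES_CONF should point at /etc/iptables/iptables.rules
        sudo iptables-restore < /etc/iptables/iptables.rules
        sudo iptables -L -n -v                         # the INPUT/FORWARD policies should now read DROP
        sudo /etc/rc.d/iptables save                   # re-save once the live rules look right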


  • Apache & SVN on Ubuntu - Post-commit hook fails silently, pre-commit hook “Permission Denied”

    - by 113169587962668775787
    I've been struggling for the past couple of days to get post-commit email notifications working on my SVN server (running via HTTP with Apache2 on Ubuntu 9.10). SVN commits work fine, but for some reason the hooks are not being properly executed. Here are the configuration settings: users access the repo via HTTP with the Apache dav_svn module (I created users/passwords via htpasswd in a dav_svn.passwd file). dav_svn.conf:

        <Location /svn/repos>
            DAV svn
            SVNPath /home/svn/repos
            AuthType Basic
            AuthName "Subversion Repository"
            AuthUserFile /etc/apache2/dav_svn.passwd
            Require valid-user
        </Location>

    I created a post-commit hook file that writes a simple message to a file in the repository root. /home/svn/repos/hooks/post-commit:

        #!/bin/sh
        REPOS="$1"
        REV="$2"
        /bin/echo 'worked' > ${REPOS}/postcommit.log

    I set the entire repository to be owned by www-data (the Apache user), and assigned 755 permissions to the post-commit script. When I test the post-commit script using the www-data user in an empty environment, it works:

        sudo -u www-data env - /home/svn/repos/hooks/post-commit /home/svn/repos 7

    But when I commit on a client machine, the commit is successful, but the post-commit script does not seem to be executed. I also tried running a simple script for the pre-commit hook, and I get an error, even with an empty pre-commit script:

        Commit failed (details follow):
        Can't create null stdout for hook '/home/svn/repos/hooks/pre-commit': Permission denied

    I did a few searches on Google for this error and I presume that this is an issue with the Apache user (www-data) not having adequate permissions, specifically to execute /dev/null. I also read that the reason post-commit fails silently is that it doesn't report with stdout. Anyway, I've also tried giving the Apache user (www-data) ownership of the entire repository, and edited the Apache virtualhost to allow operations on the server root, and I'm still getting permission denied. /etc/apache2/sites-available/primarydomain.conf:

        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    Any ideas/suggestions would be greatly appreciated! Thanks
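    A sketch of the checks that usually narrow down "Can't create null stdout ... Permission denied": the hook runs as the Apache user with an empty environment, and the error is literally about opening /dev/null, which occasionally gets replaced by a root-owned regular file. Nothing here is specific to this repository beyond the paths already quoted, and the /dev/null theory is an assumption to verify, not a diagnosis.

        ls -l /dev/null        # should be a world-writable character device: crw-rw-rw- ... 1, 3
        # if it has become a plain file, recreate the device node (assumption: that is the culprit here)
        sudo rm /dev/null && sudo mknod -m 666 /dev/null c 1 3
        sudo chmod +x /home/svn/repos/hooks/pre-commit /home/svn/repos/hooks/post-commit
        sudo -u www-data env - /home/svn/repos/hooks/pre-commit /home/svn/repos dummy-txn; echo "exit=$?"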


  • Wifi Drops Connections with WPA2-PSK

    - by graf_ignotiev
    I run a small computer lab made up of 10 computers of identical hardware and software (Dell Latitudes with Windows 7 x64 Enterprise) and I use a ZyWALL 2WG as a router/firewall. Nine of the computers connect to the router over wifi using WPA2-PSK encryption while the last one is connected by ethernet cable. I'm having a problem where any computer connected to the wi-fi occasionally drops off the network (it cannot be pinged and the client cannot ping the gateway). It only happens on the wifi side and only when the encryption is WPA2-PSK or WPA-PSK. I tried using another router with a different make and model and had no problems. Thinking it could be a software error, I reset the router to factory defaults and installed the newest firmware (V4.04(AQI.8) | 04/09/2010), but still have the problem. The 802.1X log gives the following error User logout because of user disassociation. with this note WPA2-PSK:00242c582ece:logout where 00242c582ece is the mac address of the device. At this point I'm out of things to try and leads to follow. It looks like this user had the same or similar problem, but none of those proposed solutions work for me.

