Search Results

Search found 23084 results on 924 pages for 'jeff main'.

  • New AD-DC in a new Site is refusing cross-site IPv4 connections

    - by sysadmin1138
    We just added a new Server 2008 (SP2) Domain Controller in a new Site, our first such config. It's across a VPN-gateway WAN link (10Mbit). Unfortunately, it is displaying a strange network symptom. Connections to the SMB ports (TCP/139 and TCP/445) are being actively refused... if the connection is coming in over pure IPv4. If the incoming connection comes by way of the 6to4 tunnel, those connections establish and work just fine. It isn't the firewall, since this behavior can be replicated with the firewall turned off. Also, it's actually issuing RST packets to connection attempts, something that only happens with the Windows Firewall if there is a service behind a port and the service itself denies access. I doubt it's some firewall device on the wire, since the server this one replaced was running Samba, and access to it from our main network functioned just fine. I'm thinking it might have something to do with the Subnet lists in AD Sites & Services, but I'm not sure. We haven't put any IPv6 addresses in there, just v4, and it's the v4 connections that are being denied. Unfortunately, I can't figure this out. We need to be able to talk to this DC from the main campus. Is there some kind of site-based SMB-level filtering going on? I can talk to the DCs on campus just fine, but that's over the v6 tunnel. I don't have access to a regular machine on that remote subnet, which limits my ability to test.
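
    One way to narrow down where the RST comes from is to probe the SMB ports from a few different hosts on the main campus and compare the failure mode: an active refusal means something on the path answered with a RST, while a timeout means the SYN was silently dropped. A minimal sketch in Python 3 (the DC address is a placeholder):

        # tcp_probe.py - minimal sketch; 192.0.2.10 stands in for the new DC's IPv4 address
        import socket

        for port in (139, 445):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(5)
            try:
                s.connect(("192.0.2.10", port))
                print("TCP/%d: connection established" % port)
            except ConnectionRefusedError:
                print("TCP/%d: actively refused (RST)" % port)
            except socket.timeout:
                print("TCP/%d: no response (filtered or unreachable)" % port)
            finally:
                s.close()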

  • USB device goes to "Unknown device" randomly

    - by ILovePaperTowels
    We have a kiosk that uses a bio scanner from M2SYS (a USB device). It scans your palm to recognize you. Every so often, maybe 1-3 times a day, the bio scanner will become an unknown device. We are unable to see any patterns or commonalities. When we unplug it and plug it back in, it becomes available again. We have custom software that uses the bio scanner's software to communicate with it. We've added a crap load of logging on everything, but there doesn't seem to be any pattern to when the thing shuts off. We have these devices deployed to multiple locations (100+) and they are all seeing the issue, but we cannot reproduce it here at the main office. I've evaluated the software and I don't see anything. I'm thinking it's a driver or hardware issue (but we can't reproduce the issue here in the main office), or maybe environmental interference of some sort, like from scan guns, automatic doors, microwaves or something else. Any ideas would be welcome. I'm looking for possible causes of the USB device becoming unknown, or ways to figure out what the cause is.
    - No other USB devices have this issue, only the scanner.
    - We've contacted the manufacturer and they blame our software.
    - We're getting help from Microsoft, but they haven't found anything.
    - The OS is embedded XP.
    http://www.m2sys.com/palm-vein-reader.htm
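
    Since the failure can't be reproduced at the main office, one option is to leave a small watcher running at a few affected sites that timestamps every PnP device removal, so the log can be lined up against whatever else happens there (door openers, scan guns, shift changes). A rough sketch using the third-party Python "wmi" package; names and the polling interval are illustrative, not a tested deployment:

        # usb_watch.py - rough sketch; needs the third-party "wmi" package (plus pywin32)
        import datetime
        import wmi

        c = wmi.WMI()
        # Watch for PnP devices disappearing; use "Creation" to log re-plugs too.
        watcher = c.watch_for(
            notification_type="Deletion",
            wmi_class="Win32_PnPEntity",
            delay_secs=2,
        )
        while True:
            device = watcher()  # blocks until some device is removed
            print("%s removed: %s (%s)" % (datetime.datetime.now(), device.Name, device.DeviceID))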

  • How do I find funny pictures?

    - by Hanno Fietz
    No, not lolcats. And I'm not really looking for a specific site, either. I have often wished that I had some funny picture to illustrate a presentation, a website, a post, an email, or something else. Google image search and stock photo services have hardly ever helped me, although that may be because I'm doing something wrong. Jeff Atwood seems to have no problem finding funny pictures for his codinghorror and stackoverflow blogs, as well as for the error messages on the trilogy sites. One of my favourites was this elephant. Other bloggers also seem to be quite good at it. I'm wondering if I simply lack the creativity, or if there are sources or methods I don't know about. I could think of the following ways to get pictures, but I'm not sure whether this is really "how they do it":
    - Keep a collection of pictures that you stumbled upon and liked (takes quite some time to build up to a proper library); when you need a picture, there's one in there. Maybe have pictures on paper, too, like from magazines or ads.
    - When you are looking for a picture, search online (Flickr, Google, stock photos). This has never really worked for me, I don't know why.
    - Produce the pictures yourself, i.e. have a good library of source material or find some online, and apply some creativity and suitable software. I could imagine that this could work well once you have the practice.

  • Help: Setup Outgoing Mail Server Only for Multiple Domains Using Postfix?

    - by user57697
    I want an outgoing mail server ONLY, for multiple domains. I plan to use Postfix, as that seems to be the easiest to set up, being very new to Ubuntu/Linux. The setup I plan to have is as follows:
    - I want to use virtual domains with Postfix, i.e. my multiple websites must be able to send email from their respective domains: [email protected] is sent from my domain1.com website and [email protected] is sent from my domain2.com website.
    - This is an outgoing mail server only, i.e. I don't want any returned (or otherwise) email sent to my Postfix server. Incoming mail is handled by Google Apps/Gmail and is already set up.
    - I already set my SPF record to designate my MX records and the Postfix server IP as valid email servers, i.e. "v=spf1 mx include:mydomain.com -all".
    How can I achieve this? I'm frankly a little confused, so some help would be appreciated. I attempted to follow these guides, but it doesn't seem right (and it isn't clear what all the settings mean):
    - How to configure Postfix virtual domains: http://www.sysdesign.ca/guides/postfix_virtual.html
    - Postfix Installation: *.slicehost.com/2008/7/29/postfix-installation
    - Basic Postfix settings (main.cf): *.slicehost.com/2008/7/31/postfix-basic-settings-in-main-cf
    I can only post one link, but those articles above can be found by replacing * with articles in the hyperlink.
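
    For the send-only side specifically, a minimal main.cf sketch along these lines (illustrative values, not a tested config) stops Postfix from accepting mail from outside or delivering anything locally, while still letting the websites on the box hand outgoing mail to it:

        # /etc/postfix/main.cf - send-only sketch; hostname is illustrative
        myhostname = mail.domain1.com
        inet_interfaces = loopback-only      # only local web apps can submit mail
        mydestination =                      # never accept final delivery for any domain
        local_recipient_maps =
        local_transport = error:local mail delivery is disabled

    The sender domain on each message is simply whatever address the website puts in the headers and envelope, so per-domain virtual mailboxes aren't needed just for sending.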

  • Apache 2.4, Ubuntu 12.04 Forbidden Errors

    - by tubaguy50035
    I just installed Apache 2.4 today, and I'm having some issues getting the vhost configuration to work correctly. Below is the vhost conf:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /hosting/Client/site.com/www
            ServerName site.com
            ServerAlias www.site.com
            <Directory "/hosting/Client/site.com/www">
                Options +Indexes +FollowSymLinks
                Order allow,deny
                Allow from all
            </Directory>
            DirectoryIndex index.html
        </VirtualHost>

    There is an index.html file in /hosting/Client/site.com/www. When I go to the site, I receive a 403 Forbidden error. The www-data group is the group on the www folder, to which I've already given all permissions (r/w/x). I'm really at a loss as to why this is happening. Any thoughts? If I remove the vhost and go straight to the IP address, I get the default "It works!" page, so I know the server itself is working. The error log says "client denied by server configuration". apache2ctl -S dump:

        nick@server:~$ apache2ctl -S
        /usr/sbin/apache2ctl: 87: ulimit: error setting limit (Operation not permitted)
        VirtualHost configuration:
        *:80 is a NameVirtualHost
            default server site.com (/etc/apache2/sites-enabled/site.com.conf:1)
            port 80 namevhost site.com (/etc/apache2/sites-enabled/site.com.conf:1)
                alias www.site.com
            port 80 namevhost site.com (/etc/apache2/sites-enabled/site.com.conf:1)
                alias www.site.com
        ServerRoot: "/etc/apache2"
        Main DocumentRoot: "/var/www"
        Main ErrorLog: "/var/log/apache2/error.log"
        Mutex watchdog-callback: using_defaults
        Mutex default: dir="/var/lock/apache2" mechanism=fcntl
        Mutex mpm-accept: using_defaults
        PidFile: "/var/run/apache2.pid"
        Define: DUMP_VHOSTS
        Define: DUMP_RUN_CFG
        Define: ENALBLE_USR_LIB_CGI_BIN
        User: name="www-data" id=33 not_used
        Group: name="www-data" id=33 not_used

    Output of namei -mo /hosting/Client/site.com/www/index.html:

        f: /hosting/Client/site.com/www/index.html
        drwxr-xr-x root root     /
        drwxr-xr-x root root     hosting
        drwxr-xr-x root root     Client
        drwxr-xr-x nick www-data site.com
        drwxr-xr-x nick www-data www
        -rw-rwxr-x nick www-data index.html
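
    For what it's worth, Apache 2.4 replaced the 2.2-style Order/Allow access control with the Require directive from mod_authz_core, and "client denied by server configuration" is the usual symptom when the old syntax is used without mod_access_compat loaded. A sketch of the 2.4 equivalent of the Directory block above:

        <Directory "/hosting/Client/site.com/www">
            Options +Indexes +FollowSymLinks
            Require all granted
        </Directory>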

  • Best practice for Exchange 2010 HA topology considering 6 x Exchange licenses and TMG 2010

    - by MadBoy
    What would be the best topology considering that:
    - 6 x Exchange 2010 Standard licenses
    - 2 x separate locations that are supposed to provide redundancy in case of link problems
    - 4 x Forefront TMG 2010 with Forefront Security and Forefront Protection/Security
    - multiple locations worldwide using those Exchange servers; most locations will be connected with a VPN tunnel (the ones hosting Exchange for sure)
    I was thinking something like this:
    Location MAIN (about 70-100 people):
    - 2x TMG 2010 in NLB
    - 1x Exchange 2010 CAS/HUB role
    - 2x Exchange 2010 Mailbox role (active + passive)
    Location SUPPORT (about 20 people):
    - 2x TMG 2010 in NLB
    - 1x Exchange 2010 CAS/HUB role
    - 2x Exchange 2010 Mailbox role (active + passive)
    Management wants to make sure that in case of problems in the main location (power failure, link loss, etc.) the second location can support all traffic from around the world, and vice versa. We have 6-7 locations and more coming up (not big ones, more like 10+ people per location). I do know that the CAS/HUB is a single point of failure (and has no NLB), but I simply lack the licenses to add redundancy there. What do you think about this approach? What would be a better approach, in your opinion?

  • Problems with X11GraphicsDevice on Suse 11

    - by Daniel
    Hi, on servers running Suse 11 I'm experiencing hangups in sun.awt.X11GraphicsDevice.getDoubleBufferVisuals(Native Method) when connecting via Citrix (and setting DISPLAY to localhost:11.0). Running exactly the same code in exactly the same environment, except through Exceed (with DISPLAY set to my workstation's IP), it runs like clockwork.
    - The error is not intermittent; it happens every time.
    - Reinstalling the OS does not help.
    - Cannot reproduce it on Suse 10.
    This is what the main thread stack looks like:

        [junit] "main" prio=10 tid=0x0000000040112000 nid=0x6acc runnable [0x00002b9f909ae000]
        [junit]    java.lang.Thread.State: RUNNABLE
        [junit]     at sun.awt.X11GraphicsDevice.getDoubleBufferVisuals(Native Method)
        [junit]     at sun.awt.X11GraphicsDevice.makeDefaultConfiguration(X11GraphicsDevice.java:208)
        [junit]     at sun.awt.X11GraphicsDevice.getDefaultConfiguration(X11GraphicsDevice.java:182)
        [junit]     - locked <0x00002b9fed6b8e70> (a java.lang.Object)
        [junit]     at sun.awt.X11.XToolkit.<clinit>(XToolkit.java:92)
        [junit]     at java.lang.Class.forName0(Native Method)
        [junit]     at java.lang.Class.forName(Class.java:169)
        [junit]     at java.awt.Toolkit$2.run(Toolkit.java:834)
        [junit]     at java.security.AccessController.doPrivileged(Native Method)
        [junit]     at java.awt.Toolkit.getDefaultToolkit(Toolkit.java:826)
        [junit]     - locked <0x00002b9f94b8ada0> (a java.lang.Class for java.awt.Toolkit)
        [junit]     at java.awt.Toolkit.getEventQueue(Toolkit.java:1676)
        [junit]     at java.awt.EventQueue.invokeLater(EventQueue.java:954)
        [junit]     at javax.swing.SwingUtilities.invokeLater(SwingUtilities.java:1264)
        ...

    Has anyone experienced something similar? Could this be a problem in Suse 11's display handling? I'm thankful for any input at this point - I'm fresh out of ideas :)
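
    If the JUnit run doesn't actually need a real display, one workaround worth trying is headless mode, which keeps AWT from initializing the X11 toolkit at all (a sketch; the classpath and test class are placeholders, and it only helps if the tests tolerate headless AWT):

        java -Djava.awt.headless=true -cp build:junit.jar org.junit.runner.JUnitCore com.example.SomeTest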

  • ssh use with netcat to forward connections via bastion host to inside machine

    - by Registered User
    Hi, I have a server in a corporate data centre whose sysadmin is me. There are some virtual machines running on it. The main server is accessible from the internet via SSH. There are some people within the LAN who access the virtual machines, whose IPs on the LAN are:
    - 192.168.1.1
    - 192.168.1.2
    - 192.168.1.3
    - 192.168.1.4
    The main machine, which is a bastion host for the internet, has IP 192.168.1.50, and only I have access to it. I have to give people on the internet access to the internal machines whose IPs I mentioned above. I know a tunnel is a good way, but the people are fairly non-technical and do not want to get into tunnels and similar jargon. So I came across a solution as explained on this link. On the gateway machine, which is 192.168.1.50, in the .ssh/config file I add the following:

        Host securehost.example.com
            ProxyCommand ssh [email protected] nc %h %p

    Now my question is: do I need to create separate accounts on the bastion host (gateway) for those users who can SSH to the inside machines, and make the above entry in each of the users' .ssh/config, or where exactly do I put the .ssh/config on the gateway? Also, "ssh [email protected]", where user1 exists only on the inside machine 192.168.1.1 and not on the gateway - is that the right syntax? Because the internal machines are accessible to the outside world as:
    - site1.example.com
    - site2.example.com
    - site3.example.com
    - site4.example.com
    But SSH is only for example.com, and only one user. So how should I go about .ssh/config?
    1. What is the correct syntax for ProxyCommand in the gateway's .ssh/config: should I use "ProxyCommand ssh [email protected] nc %h %p", or should I use "ProxyCommand ssh [email protected] in nc %h %p"?
    2. Should I create new user accounts on the gateway, or is adding them to AllowedUsers in ssh_config sufficient?
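
    For comparison, the ProxyCommand stanza normally lives in each external user's own ~/.ssh/config on their client machine, not on the gateway; the gateway only needs an account they can SSH into. A sketch with hypothetical names (gwuser is the account on the bastion; user1 is the account on the inside machine):

        # ~/.ssh/config on the external user's machine
        Host site1
            HostName 192.168.1.1
            User user1
            ProxyCommand ssh gwuser@example.com nc %h %p

    With that in place, a plain "ssh site1" hops through the bastion transparently: %h and %p expand to the HostName and port given above, and nc runs on the gateway, where 192.168.1.1 is reachable.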

  • Outlook 2007, how to stop replies from being sent if they meet criteria

    - by CChriss
    I have my college's email account auto-forwarded (they don't offer pop3 or imap) to my main gmail account so I don't have to go to their website to check for mail. That gmail account, and others, use pop3 down to my Outlook 2007. In my Outlook, I have a rule that puts mail sent to my @college address into a folder named Collegemail. Since the college email system lacks pop3 or imap, I have to go to their site to send a reply, so that the reply will be sent from my @college email address. The problem is that I cannot seem to get into the habit of going to my college's website to reply to those emails, so quite often I reply to them while in Outlook, which means my replies are sent "from" my main gmail address. How can I set up Outlook to either 1) disable replying to emails in the Collegemail folder or to emails where my college email address is in the original To: line, or else 2) set it to send those replies from a non-functional dummy account. The dummy account would be unable to send the email so the email would be stuck in the Outbox. (I already have the non-functional dummy account set up as Outlook's "Default" account. When I compose a new mail I have to deliberately pick one of my good accounts to send the new mail with, or else Outlook uses the default dummy account and the email stays in the Outbox with a Send error.) Thanks. Update: Anybody? Please help.

  • Need help with MS Access 07 & Reports

    - by Moe
    Hey there, I'm finding it difficult to get MS Access reporting to do what I'd like. What I'm trying to do is: (a) in my database I store a URL (an HTTP link to an external file) that is a .jpeg, and I'd like to use that URL to show the image on the report sheet. I have tried to use 'Control Source' on the data panel, but with no success. Is there any way I can get dynamic images to show up for each record? Also, I have a couple of relational tables. One defines values, for example: DefinePets('petID','Name of Pet'). The other one links the main table with the 'DefinePets' table, e.g.: connect('petID','mainID','extraField'). I'd like my report to go into the "connect" table where the currently viewed record's value = mainID, then find petID and return Name of Pet. There is a many-to-many link between DefinePets and the main table (therefore connect joins them up). Or is that too much to ask from a simple package like Access? Thanks.
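
    On the lookup question: if the report's record source can be a query rather than a table, the DefinePets/connect hop is a plain two-table join. A sketch in Access SQL, using the table and field names from the examples above (the Forms!... reference is hypothetical):

        SELECT c.mainID, p.[Name of Pet]
        FROM DefinePets AS p
        INNER JOIN connect AS c ON c.petID = p.petID
        WHERE c.mainID = Forms!MainForm!mainID;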

  • How can I parse/ transform text log data before it gets captured in SCOM 2007 R2?

    - by Abs
    I'm pretty much a noob with System Center Operations Manager 2007, and I'm probably missing something pretty basic, but I'm stumped anyway. We're setting up monitoring on some of our servers, and we'd like to capture data from some plain text log files (e.g. DNS debug logs, DHCP logs). It looks to me like I can set up a generic text file monitoring rule and get events captured into the main Ops Manager database, but my understanding is that the whole line of text from the plain text log gets captured as one field. In an ideal world, we'd be able to parse or transform that log file data to make it easier to query later. Is this possible? Is it easy? Do I have to buy expensive 3rd-party software to do it? One more thing: it would be even better if there was a way to stuff this data into the Audit Collection Services (ACS) database instead of the main one, but I'll take what I can get. Any help would be greatly appreciated.

  • What to do when launchpad is down?

    - by Jon
    As I am writing this (Friday, November 8, 2013 at 9:59:18 PM EST) Launchpad is down. Apparently there is a power failure (https://twitter.com/launchpadstatus/status/398980619880775680). I tried running sudo apt-get update on my Ubuntu install. However, I simply get stuck on this:

        Ign http://ppa.launchpad.net precise InRelease
        100% [Waiting for headers]

    Being an Ubuntu newbie, I tried to point my sources.list file to a different source. I backed up the original sources.list and then deleted the entire file to start afresh. I then added the following lines to it:

        deb http://mirror.anl.gov/pub/ubuntu/ precise main
        deb-src http://mirror.anl.gov/pub/ubuntu/ precise main

    I figured that since I have a different mirror, there would be no problem updating. I was wrong. I get stuck at the same place. I have several questions:
    1. Why do I need to hit Launchpad? I do not reference it in my sources.list file at all. Is this something where the mirror redirects me to Launchpad?
    2. Is there a good article out there that I can read on how exactly this whole apt-get update thing works, which will help me understand why it is hitting Launchpad?
    3. Is there any way to get my Ubuntu to update while Launchpad is down? Isn't there any redundancy for the Launchpad servers?
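
    On the first question: PPA entries usually live in separate files under /etc/apt/sources.list.d/ rather than in sources.list itself, which would explain why apt still contacts launchpad.net after sources.list was replaced. A quick check (a sketch):

        grep -rn launchpad /etc/apt/sources.list /etc/apt/sources.list.d/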

  • Make isolinux 4.0.3 chainload itself in VMWare

    - by chainloader
    I have a bootable ISO which boots into isolinux 4.0.3, and I want to make it chainload itself (my actual goal is to chainload isolinux.bin v4.0.1-debian, which should start up the Ubuntu 10.10 Live CD, but for now I just want to make it chainload itself). I can't get isolinux to chainload any isolinux.bin, no matter what version. It either freezes or shows a "checksum error" message. I'm using VMWare to test the ISO. Things I have tried:

        .com32 /boot/isolinux/chain.c32 /boot/isolinux/isolinux-debug.bin

    (chainload self) - this shows:

        Loading the boot file...
        Booting...
        ISOLINUX 4.03 2010-10-22 Copyright (C) 1994-2010 H. Peter Anvin et al
        isolinux: Starting up, DL = 9F
        isolinux: Loaded spec packet OK, drive = 9F
        isolinux: Main image LBA = 53F00100

    ...and the machine freezes. Then I've tried this (chainload GRUB4DOS 0.4.5b):

        chainloader /boot/isolinux/isolinux-debug.bin

    Result:

        Error 13: Invalid or unsupported executable format

    Next try (chainload GRUB4DOS 0.4.5b):

        chainloader --force /boot/isolinux/isolinux-debug.bin
        boot

    Result:

        ISOLINUX 4.03 2010-10-22 Copyright (C) 1994-2010 H. Peter Anvin et al
        isolinux: Starting up, DL = 9F
        isolinux: Loaded spec packet OK, drive = 9F
        isolinux: No boot info table, assuming single session disk...
        isolinux: Spec packet missing LBA information, trying to wing it...
        isolinux: Main image LBA = 00000686
        isolinux: Image checksum error, sorry...
        Boot failed: press a key to retry...

    I have tried other things, but all of them failed miserably. Any suggestions?

  • Toshiba External Hard Drive freezes computer

    - by Ephraim
    I bought a Toshiba Canvio Basics E05A032BAU2XK portable external 320GB 2.5" hard drive. My computer has two OSes on it, Win 7 and Win XP; I need both. The main one I use is XP. When booting the computer into either OS, the computer and hard drive work fine. The same holds true for plugging in the hard drive while running Win 7. However, when running Win XP, if the hard drive gets plugged in, the computer freezes (my main point is that the HD is portable, so it is essential that it does not do this; as I said, I usually run XP). After reading some online forums I was informed that there is a compatibility issue with the newest version of Eset Smart Security (I still don't understand this, because it works fine in Win 7 or when connected at boot...). I disabled the AV and plugged in the HD... voila! The computer did not freeze. However, the disk is not recognized in Explorer or Disk Management. In Device Manager I removed the device and did a scan, and installation of the device failed. It pretty much sounds like a driver issue, but I cannot find any drivers for this HD. In fact, Toshiba claims that there are no downloadable drivers for it and that XP should take care of the drivers itself. What to do? As far as I can tell, all other USB devices work just fine on both OSes. Please help!

  • How have I locked myself out of my Ubuntu VPS?

    - by Sanoj
    I have an Ubuntu Server as a VPS (OpenVZ), and yesterday I installed php-fpm, but I guess something went wrong with the installation, because since then I cannot log in to my server over SSH with PuTTY or using WinSCP. The message I get when connecting is "Network error: Connection timed out". Immediately after the installation I was not able to use emacs either; I had to re-install it with apt-get install emacs. I have tried clearing the firewall and rebooting the server from my web-based "control panel", but it doesn't help. The commands I used for the installation of php-fpm were from "Installing PHP 5.3, Nginx And PHP-fpm On Ubuntu/Debian", and I guess it has something to do with these commands:

        cd /tmp
        wget http://us.archive.ubuntu.com/ubuntu/pool/main/k/krb5/libkrb53_1.6.dfsg.4~beta1-5ubuntu2_i386.deb
        wget http://us.archive.ubuntu.com/ubuntu/pool/main/i/icu/libicu38_3.8-6ubuntu0.2_i386.deb
        sudo dpkg -i *.deb
        sudo echo "deb http://php53.dotdeb.org stable all" >> /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install php5-cli php5-common php5-suhosin
        sudo apt-get install php5-fpm php5-cgi

    The websites hosted on my server work fine. Has anyone had the same experience, or does anyone know how this could happen? I guess I'll have to re-install Ubuntu Server from my "control panel" now, but I would like to avoid this situation in the future. Finally, I have backups of everything, so nothing is lost if I have to re-install the machine.

  • Where to place Nginx IP blacklist config file?

    - by ProfessionalAmateur
    I have an Nginx web server hosting two sites. I created a blockips.conf file to blacklist IP addresses that are constantly probing the server, and included this file in nginx.conf. However, in my access logs for the sites I still see these IP addresses showing up. Do I need to include the blacklist in each site's conf instead of the global conf for Nginx? Here is my nginx.conf:

        user nginx;
        worker_processes 1;

        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            access_log /var/log/nginx/access.log main;

            sendfile on;
            keepalive_timeout 65;

            include /etc/nginx/conf.d/*.conf;

            # Load virtual host configuration files.
            include /etc/nginx/sites-enabled/*;

            # BLOCK SPAMMERS IP ADDRESSES
            include /etc/nginx/conf.d/blockips.conf;
        }

    blockips.conf:

        deny 58.218.199.250;

    access.log still shows this IP address:

        58.218.199.250 - - [27/Sep/2012:06:41:03 -0600] "GET http://59.53.91.9/proxy/judge.php HTTP/1.1" 403 570 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)" "-"

    What am I doing incorrectly?
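
    Incidentally, the 403 in that log line suggests the deny rule is firing: nginx writes blocked requests to the access log as well, so seeing the IP there is not by itself proof that the blacklist is being ignored. If the goal is to drop blacklisted clients without even sending a response, one common pattern is a geo map at http level (a sketch, not a tested config):

        # blockips.conf, still included inside the http { } block
        geo $blocked_ip {
            default        0;
            58.218.199.250 1;
        }

    and then, in each server { } block:

        if ($blocked_ip) {
            return 444;    # nginx-specific code: close the connection, send nothing
        }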

  • DEB: "Provides:" field ignored

    - by Creshal
    I need to replace a package with a custom one, which gets its own name (foo-origpackage). To allow it to be used as a drop-in replacement, I added the "Provides: origpackage" line to the control file. apt-cache show foo-origpackage lists the "Provides" entry just fine. However, when I want to install a package depending on origpackage, it fails ("Package origpackage not installed"). Is there some distinction between "real" and virtual packages I'm missing?
    EDIT: To be precise, what I want to replace is xen-utils-common for Squeeze. My tao-xen-utils-common has the following control file:

        Source: tao-xen-utils-common
        Section: kernel
        Priority: optional
        Maintainer: Creshal <[email protected]>
        Build-Depends: debhelper
        Standards-Version: 3.8.0
        Homepage: http://tao.at

        Package: tao-xen-utils-common
        Architecture: all
        Depends: gawk, lsb-base, udev, xenstore-utils, tao-firewall
        Provides: xen-utils-common
        Conflicts: xen-utils-common
        Replaces: xen-utils-common
        Description: Xen administrative tools - common files (modified)
         The userspace tools to manage a system virtualized through the
         Xen virtual machine monitor. Modified for use with TAO Firewall.

    Installing xen-utils-4.0 fails, however:

        foo@bar# apt-cache showpkg tao-xen-utils-common
        Package: tao-xen-utils-common
        Versions:
        4.0.0-1tao1 (/var/lib/apt/lists/repo.tao.at_dists_stable_main_binary-amd64_Packages) (/var/lib/dpkg/status)

        Description Language:
                File: /var/lib/apt/lists/repo.tao.at_dists_stable_main_binary-amd64_Packages
                 MD5: 7c2503f563fca13b33b4eb3cbcb3c129

        Reverse Depends:
          tao-firewall,tao-xen-utils-common
          tao-firewall,tao-xen-utils-common
        Dependencies:
        4.0.0-1tao1 - gawk (0 (null)) lsb-base (0 (null)) udev (0 (null)) xenstore-utils (0 (null)) tao-firewall (0 (null)) xen-utils-common (0 (null)) xen-utils-common (0 (null))
        Provides:
        4.0.0-1tao1 - xen-utils-common
        Reverse Provides:

        foo@bar# apt-get install xen-utils-4.0
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          xen-utils-common
        Suggested packages:
          xen-docs-4.0
        The following packages will be REMOVED:
          tao-xen-utils-common
        The following NEW packages will be installed:
          xen-utils-4.0 xen-utils-common

    EDIT:

        foo@bar# apt-cache policy xen-utils-4.0
        xen-utils-4.0:
          Installed: (none)
          Candidate: 4.0.1-4
          Version table:
             4.0.1-4 0
                500 http://ftp.at.debian.org/debian/ stable/main amd64 Packages
                500 http://security.debian.org/ stable/updates/main amd64 Packages

  • Running a VM off a USB 2.0 Flash Drive - Mac/Parallels/XP

    - by geerlingguy
    I use a MacBook Air as my primary machine, and the 128GB SSD means space is precious. To save about 10 GB, I've been running Parallels with a Windows XP VM off an external USB hard drive, which performs as well in everyday use as running the VM off the internal SSD. So, I bought a tiny 32GB USB 2.0 flash drive, plugged it into the MacBook Air, formatted it first as ExFAT (which was slow), then as Mac OS Extended (Journaled) (which was also slow), copied over my VM file, and ran Parallels off it. My full experience is documented here: http://www.midwesternmac.com/blogs/jeff-geerling/running-windows-xp-vm
    Straight file copies are really fast: 30 MB/sec read (solid the whole time) and 10-11 MB/sec write (solid the whole time). But I noticed that once XP started running, the disk access rates were in the low KB ranges. Are USB flash drives really that poor at random access, or could I possibly be missing something (the format of the flash drive, etc.)? Of note, I've tried the following, to no great effect:
    - Formatting the drive as either ExFAT or Mac OS Extended (Journaled).
    - Unplugging all other USB devices and turning off Bluetooth (which runs on the right-side-port USB bus).
    - Plugging the flash drive directly into the right side port, or the left side port, or into a USB 2.0 hub.

  • Postfix does not work after setting server hostname from plesk

    - by Michael
    I have recently set my server hostname from localhost.localdomain to xx.mydomain.com from the Plesk control panel. However, after this change postfix has stopped working; I tried restarting and regenerating the config files, but to no avail. I am not familiar with postfix, but I believe there is a setting to be changed in main.cf. Here are the relevant errors I receive:

        postfix/postfix-script: starting the Postfix mail system
        postfix/master: daemon started -- version 2.8.4, configuration /etc/postfix
        postfix/cleanup: fatal: host/service localhost/12768 not found: Name or service not known
        postfix/pickup: warning: maildrop/E5559996219: error writing 4FA2C996217: queue file write error
        postfix/master: warning: process /usr/libexec/postfix/cleanup pid 15334 exit status 1
        postfix/master: warning: /usr/libexec/postfix/cleanup: bad command startup -- throttling

    Any ideas?
    EDIT: Setting it back to localhost.localdomain makes it work again. The only references to localhost/12768 I can find in main.cf are:

        smtpd_milters = inet:localhost:12768
        non_smtpd_milters = inet:localhost:12768

    Should something be changed here? These two lines stay the same when I change the hostname.
    EDIT: If I comment out the two mail filter lines (the ones shown above) and restart, postfix works with the new hostname. However, this is obviously not the ideal solution...
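
    The fatal line is a name-resolution failure: Postfix can no longer resolve the host part of inet:localhost:12768. A hostname change made through a control panel sometimes rewrites /etc/hosts, so it is worth checking that the loopback entry survived; a sketch of a typical layout (the public address is a placeholder):

        127.0.0.1      localhost.localdomain localhost
        203.0.113.10   xx.mydomain.com xx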

  • Git clone/push/pull - where does that username come from?

    - by Kuroki Kaze
    I've set up gitosis and am able to pull/push through ssh. Gitosis is installed on a Debian Lenny server; I'm using git from a Windows machine (msysgit). The strange thing is that if I enable loglevel = DEBUG in gitosis.conf, I see something like this when doing any action against the gitosis server:

        D:\Kaze\source\test-project>git pull origin master
        DEBUG:gitosis.serve.main:Got command "git-upload-pack 'test_project.git'"
        DEBUG:gitosis.access.haveAccess:Access check for '[email protected]' as 'writable' on 'test_project.git'...
        DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'test_project.git', new value 'test_project'
        DEBUG:gitosis.group.getMembership:found '[email protected]' in 'test'
        DEBUG:gitosis.access.haveAccess:Access ok for '[email protected]' as 'writable' on 'test_project'
        DEBUG:gitosis.access.haveAccess:Using prefix 'repositories' for 'test_project'
        DEBUG:gitosis.serve.main:Serving git-upload-pack 'repositories/test_project.git'
        From 192.168.175.128:test_project
         * branch            master     -> FETCH_HEAD
        Already up-to-date.

    The question is: why am I [email protected]? This email is in the global user.email config variable, too. Yesterday, when gitosis was installed, it saw me as kaze@KAZE; this is the name under which I was added to the gitosis-admin group (and it worked). But today git (or gitosis) started to see me as [email protected]. This is true for all repositories I push or clone. I had to add this address to gitosis.conf directly on the server to be able to edit the configs again (it worked). There are two public keys in keydir: [email protected] and [email protected]; their content is identical and they have kaze@KAZE at the end. The origin URL looks like git@lennyserver:test_project. Now, the question is: why did git (or gitosis) suddenly decide to call me by email instead of name@machinename? I've changed a couple of things while trying to set up gitosis (updated git on the server to 1.6.0, for example), but maybe I broke something in my local git installation?
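
    One detail that may explain the flip: gitosis writes one forced-command line per keydir file into ~git/.ssh/authorized_keys, and when two files contain the byte-identical key, sshd simply uses the first matching line, so the identity you are given depends on line order rather than on anything local (user.email only affects commit metadata, never authentication). If that is what is happening, giving each identity its own key pair removes the ambiguity; a sketch (the output path is hypothetical):

        ssh-keygen -t rsa -C "kaze@KAZE" -f ~/.ssh/id_rsa_kaze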

  • Multi-IP address zimbra server DNS PTR records and spam

    - by David Fraser
    We have a mail server running Zimbra (ZCS 6.0.8). The server has 5 active public IP addresses in the same subnet (.226-.230). I currently have A records for each of these (host0.domain.com..host4.domain.com), with the main host.domain.com of the machine pointing to .226. Our host has ended up being listed on the SORBS DUHL list (even though it's in a server farm). According to them, you can get removed quickly by checking that your host has an MX record, an A record, and a PTR record that points back to the hostname given in the MX record. I tried setting the PTR records so that each of these addresses resolved back to its A record (i.e. .228 had a PTR to host2.domain.com). However, I then got mail being rejected from other servers, because when Postfix (under Zimbra control) sends out mail, it uses the main hostname for the HELO - there doesn't seem to be any way to override it. So the PTR records currently say host.domain.com for all 5 IP addresses. What's the correct way to handle this? Should I have an A record for the domain that points to all the IP addresses (for round-robin handling)? I'm nervous of changes that could cause problems, so I'm wondering what the standard way to handle a multiple-IP-address mail server is.

  • Error upgrading Ubuntu server from Intrepid to Jaunty

    - by Martin
    I'm trying to upgrade an old Ubuntu server from 8.10 (Intrepid) to 9.04 (Jaunty), but it fails:

        root@server1:/# do-release-upgrade
        Checking for a new ubuntu release
        Failed Upgrade tool signature
        Failed Upgrade tool
        Done downloading
        extracting 'jaunty.tar.gz'
        Failed to extract
        Extracting the upgrade failed. There may be a problem with the network or with the server.

    Does anyone have an idea why I get this error and how to fix it?
    UPDATE: I think I might have tracked the problem down. My /etc/update-manager/meta-release looks like this:

        [METARELEASE]
        URI = http://changelogs.ubuntu.com/meta-release
        URI_LTS = http://changelogs.ubuntu.com/meta-release-lts
        URI_UNSTABLE_POSTFIX = -development
        URI_PROPOSED_POSTFIX = -proposed

    If I go to http://changelogs.ubuntu.com/meta-release, it has this info for Jaunty:

        Dist: jaunty
        Name: Jaunty Jackalope
        Version: 9.04
        Date: Thu, 23 Apr 2009 12:00:00 UTC
        Supported: 0
        Description: This is the 9.04 release
        Release-File: http://archive.ubuntu.com/ubuntu/dists/jaunty/Release
        ReleaseNotes: http://changelogs.ubuntu.com/EOLReleaseAnnouncement
        UpgradeTool: http://archive.ubuntu.com/ubuntu/dists/jaunty-proposed/main/dist-upgrader-all/0.111.8/jaunty.tar.gz
        UpgradeToolSignature: http://archive.ubuntu.com/ubuntu/dists/jaunty-proposed/main/dist-upgrader-all/0.111.8/jaunty.tar.gz.gpg

    The links pointing at archive.ubuntu.com are broken since Jaunty is EOL. I guess I could fix this by copying this file, replacing "archive" with "old-releases", hosting the modified file somewhere, and changing the URL in the meta-release file. Is this a good solution, or will it make me run into worse problems?
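
    The proposed substitution matches how Ubuntu handles end-of-life releases: the package archives move to old-releases.ubuntu.com, so the same host swap is usually needed in sources.list as well (a sketch):

        sed -i 's/archive.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
        apt-get update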

  • Firefox 3.6 and above: always show one tab even if all tabs are closed, like Firefox 3.0

    - by Jayapal Chandran
    I very much got used to Firefox 3.0. In it, to free memory, I close all tabs, but the main Firefox window still does not close. In Firefox 3.6 and later, however, if I close all tabs then Firefox exits completely. This was not the case with 3.0. How do I stop Firefox from closing the main process when I close all tabs? Also, the autocomplete dropdown in the address bar of Firefox 3.6 and greater is a dark blue color, which annoys me greatly. With my environment and the monitor glare it is inducing anger in me, so how can the color be changed to be like Firefox 3.0? You know black and white are a neutral and good combination, and since I have been working in Firefox 3.0 (and earlier versions) for a long time, this new color change and other uncomfortable options are making me sick. To check CSS3 I need to use Firefox 3.5 and greater. Besides, I like Firefox because it implements the W3C's recommendations, so I can learn and test new recommendations from the W3C.
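
    On the first point, later Firefox builds expose a hidden preference for exactly this behaviour; if it is present in your version, it can be toggled in about:config (an untested sketch):

        browser.tabs.closeWindowWithLastTab = false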

  • How to use different scaling for every monitor?

    - by Mikael Koskinen
    One of the new features in Windows 8.1 is the new "desktop display scaling", which allows the user to configure scaling per monitor. I've been trying to get this working in the preview, but with no success. If I configure the scaling, it always affects all of my monitors. I have two monitors, the main one with a higher resolution and the secondary with a "normal" resolution. The secondary monitor is used in portrait mode. I would like to configure the main monitor's scaling, as the text is currently too small. (Screenshot of the default configuration screen omitted.) Now if I adjust the scaling, click apply and log in again, everything is bigger - on both of my monitors. I haven't clicked "Let me choose one scaling level for all my displays", but still the slider seems to affect both of them. If I check "Let me choose one scaling level", the UI changes to look similar to what we have in Windows 8 (screenshot omitted), yet the problem persists: the scaling is applied to both of my monitors. So, it doesn't matter if I check the box or not, the scaling is always applied to all the displays. Any idea how I could get this to work in the preview? I've read some comments which seem to indicate that this should work in the preview, though Paul Thurrott mentioned in his Winsupersite article that he didn't get this to work either.

  • gentoo zlib removed = portage corrupted

    - by Shamanu4
    Hello, I haven't been using Gentoo for long, and I have made a serious mistake: I removed the zlib package from the system. Now my Portage system is corrupted:

        # emerge --sync
        Traceback (most recent call last):
          File "/usr/bin/emerge", line 36, in <module>
            from _emerge.main import emerge_main
          File "/usr/lib64/portage/pym/_emerge/main.py", line 41, in <module>
            from _emerge.actions import action_config, action_sync, action_metadata, \
          File "/usr/lib64/portage/pym/_emerge/actions.py", line 44, in <module>
            from _emerge.depgraph import backtrack_depgraph, depgraph, resume_depgraph
          File "/usr/lib64/portage/pym/_emerge/depgraph.py", line 40, in <module>
            from _emerge.FakeVartree import FakeVartree
          File "/usr/lib64/portage/pym/_emerge/FakeVartree.py", line 11, in <module>
            from portage.dbapi.vartree import vartree
          File "/usr/lib64/portage/pym/portage/dbapi/vartree.py", line 56, in <module>
            import re, shutil, stat, errno, copy, subprocess
          File "/usr/lib64/python2.6/subprocess.py", line 430, in <module>
            import pickle
          File "/usr/lib64/python2.6/pickle.py", line 1258, in <module>
            import binascii as _binascii
        ImportError: libz.so.1: cannot open shared object file: No such file or directory

    How can I reinstall the zlib package and repair the system?
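
    A minimal recovery sketch, assuming an amd64 box and that a compatible libz.so.1 can be borrowed from another Gentoo machine or extracted from a stage3 tarball (paths are examples, not guaranteed): once the Python interpreter can load zlib again, Portage works and the package can be reinstalled cleanly.

        # copy the missing library from a compatible machine (example paths)
        scp otherbox:/lib64/libz.so.1* /lib64/
        # then reinstall zlib properly through Portage
        emerge --oneshot zlib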
