Search Results

Search found 18422 results on 737 pages for 'down town'.

  • Typical outbound port list for guest access?

    - by Steve
    I manage a weekly rental house that includes wireless Internet access. I've allowed all outbound ports on my router, but my ISP has disabled my Internet access twice now because guests have downloaded (or served up) copyrighted content. So I'd like to institute some port filtering to discourage p2p sharing (see disclaimer below), without inconveniencing the 99.9% of folks who keep things above-board.
    My question is: what outbound ports are typically open for rental/hotel wireless Internet access, or where can I find such a list? TCP 80, 443, 25 and 110 at a minimum. My own email service uses 995 and 465 for SSL, some guests may use IMAP, and I personally use SSH and FTP, so I'll open those too. Roughly, I figure I need to open access to the privileged ports and close 1024 and above. Is there a whitelist I should institute for commonly used high ports? And does it make sense to block UDP 1024?
    Disclaimer: I realize anyone replying to this message could circumvent the port filtering and share content to their heart's content. I do not need comprehensive p2p blocking, which requires more than a port whitelist. Anyone staying at the house shoulders the responsibility for their Internet use, per the rental contract. Also, anyone savvy enough to circumvent the port filters would hopefully be savvy enough to use some sort of peer blocking, thereby preventing the ISP from taking down the service.
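    For illustration only, a minimal sketch of this kind of outbound whitelist, assuming the router happens to run Linux with iptables (the post doesn't say what the router actually is, and the port list is just an example, not a recommendation):
        # Hypothetical guest LAN interface; adjust to the real router
        GUEST_IF=br-lan
        iptables -N GUEST_OUT
        iptables -A FORWARD -i "$GUEST_IF" -j GUEST_OUT
        # DNS, then the usual web/mail/remote-access ports discussed above
        iptables -A GUEST_OUT -p udp --dport 53 -j ACCEPT
        iptables -A GUEST_OUT -p tcp -m multiport --dports 21,22,25,53,80,110,143,443,465,587,993,995 -j ACCEPT
        iptables -A GUEST_OUT -j REJECT --reject-with icmp-port-unreachable
    Note that a hard block on everything above 1024 tends to break passive FTP and some messaging clients, so a whitelist like this usually ends up being adjusted per complaint.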

    Read the article

  • Unable to create linen from mapped file console error

    - by TheLearner
    Does anyone know what the following logged error messages from Console are all about? I am trying to track down the cause of a kernel panic I experienced yesterday:
        13/01/2011 09:59:26 SpringBoard[2048] Unable to create linen from mapped file
        13/01/2011 09:59:26 SpringBoard[2048] Unable to create linen from mapped file
        13/01/2011 09:59:27 SpringBoard[2048] Unable to create linen from mapped file
        13/01/2011 09:59:27 SpringBoard[2048] Unable to create linen from mapped file
        (the same message repeats about two dozen times in total across those two seconds)

    Read the article

  • Need help with custom init script

    - by churnd
    I'm trying to set up an init script for a process on Red Hat Linux:
        #!/bin/sh
        #
        # Startup script for Conquest
        #
        # chkconfig: 345 85 15     - start or stop process definition within the boot process
        # description: Conquest DICOM Server
        # processname: conquest
        # pidfile: /var/run/conquest.pid

        # Source function library. This creates the operating environment for the process to be started
        . /etc/rc.d/init.d/functions

        CONQ_DIR=/usr/local/conquest

        case "$1" in
        start)
                echo -n "Starting Conquest DICOM server: "
                cd $CONQ_DIR && daemon --user mruser ./dgate -v   # - Starts only one process of a given name.
                echo
                touch /var/lock/subsys/conquest
                ;;
        stop)
                echo -n "Shutting down Conquest DICOM server: "
                killproc conquest
                echo
                rm -f /var/lock/subsys/conquest
                rm -f /var/run/conquest.pid   # - Only if process generates this file
                ;;
        status)
                status conquest
                ;;
        restart)
                $0 stop
                $0 start
                ;;
        reload)
                echo -n "Reloading process-name: "
                killproc conquest -HUP
                echo
                ;;
        *)
                echo "Usage: $0 {start|stop|restart|reload|status}"
                exit 1
        esac
        exit 0
    However, the cd $CONQ_DIR is getting ignored, because the script errors out:
        # ./conquest start
        Starting Conquest DICOM server: -bash: ./dgate: No such file or directory
        [FAILED]
    For some reason, I have to run dgate as ./dgate; I cannot specify the full path /usr/local/conquest/dgate. The software came with an init script for a Debian system, so that script uses start-stop-daemon with the option --chdir to where dgate is, but I haven't found a way to do this with the Red Hat daemon function.
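    One possible workaround, sketched and untested here: on many Red Hat releases the daemon() function hands its command string to a shell via runuser/su when --user is given, so the directory change can be folded into that string instead of being done in the caller's shell. The fragment below would replace the start) branch of the script above:
        start)
                echo -n "Starting Conquest DICOM server: "
                # the cd now happens inside the shell that daemon spawns for mruser
                daemon --user mruser "cd $CONQ_DIR && ./dgate -v"
                echo
                touch /var/lock/subsys/conquest
                ;;
    If that doesn't behave with the installed version of /etc/rc.d/init.d/functions, a small wrapper script inside /usr/local/conquest that does the cd and then exec ./dgate, started by daemon with its full path, achieves the same effect.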

    Read the article

  • windows xp blue screen dumping physical memory

    - by dotnet-practitioner
    I get the following blue screen after running my laptop for an hour...
        A problem has been detected and Windows has been shut down to prevent damage to your computer.
        If this is the first time you've seen this stop error screen, restart your computer. If this screen appears again, follow these steps:
        Check to be sure you have adequate disk space. If a driver is identified in the stop message, disable the driver or check with the manufacturer for driver updates. Try changing video adapters. Check with your hardware vendor for any BIOS updates. Disable BIOS memory options such as caching or shadowing. If you need to use safe mode to remove or disable components, restart your computer, press F8 to select advanced startup options, and then select safe mode.
        Technical Information:
        * STOP 0x0000008E (0xc0000005, 0x805B03F5, 0xF703DC7C, 0x00000000)
        Beginning dump of physical memory
        Physical memory dump complete.
        Contact your system administrator or technical support group for further assistance.
    So... if this is faulty memory, where could I buy RAM for the following laptop: TOSHIBA SATELLITE A45-S250? My local Fry's store does not carry memory for this laptop.

    Read the article

  • Compressing and copying large files on Windows Server?

    - by Aaron
    I've been having a hard time copying large database backups from the database server to a test box at another site. I'm open to any ideas that would help me get this database moved without having to resort to a USB hard drive and the mail.
    The database server is running Windows Server 2003 R2 Enterprise, 16 GB of RAM and two quad-core 3.0 GHz Xeon X5450s. Files are SQL Server 2005 backup files between 100 GB and 250 GB. The pipe is not the fastest, and SQL Server backup files typically compress down to 10-40% of the original, so it made sense to me to compress the files first. I've tried a number of methods, including:
        gzip 1.2.4 (UnxUtils) and 1.3.12 (GnuWin)
        bzip2 1.0.1 (UnxUtils) and 1.0.5 (Cygwin)
        WinRAR 3.90
        7-Zip 4.65 (7za.exe)
    I've attempted to use WinRAR and 7-Zip options for splitting into multiple segments. 7za.exe has worked well for me for database backups on another server, which has ~50 GB backups. I've also tried splitting the .BAK file first with various utilities and compressing the resulting segments. No joy with that approach either: no matter the tool I've tried, it ends up butting against the size of the file. Especially frustrating is that I've transferred files of similar size on Unix boxes without problems using rsync+ssh. Installing an SSH server is not an option for the situation I'm in, unfortunately.
    For example, this is how 7-Zip dies:
        H:\dbatmp>7za.exe a -t7z -v250m -mx3 h:\dbatmp\zip\db-20100419_1228.7z h:\dbatmp\db-20100419_1228.bak
        7-Zip (A) 4.65  Copyright (c) 1999-2009 Igor Pavlov  2009-02-03
        Scanning
        Creating archive h:\dbatmp\zip\db-20100419_1228.7z
        Compressing  db-20100419_1228.bak
        System error: Unspecified error
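    For reference, one workflow worth trying (a sketch only, not verified against this particular failure; file names are placeholders): build the split archive, then let 7-Zip test all volumes and hash them before the transfer, so a corrupt 250 MB chunk is caught before anything crosses the slow pipe:
        7za a -t7z -v250m -mx3 h:\dbatmp\zip\db.7z h:\dbatmp\db.bak
        7za t h:\dbatmp\zip\db.7z.001
        md5sum db.7z.*  > db.7z.md5     (run from the zip directory under Cygwin/UnxUtils)
    Re-running md5sum -c against that list on the receiving side confirms each segment arrived intact.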

    Read the article

  • transparent git-svn gateway

    - by azatoth
    Currently we have a Subversion repository with the following layout:
        /trunc
            /group1
                /proj1
                /proj2
            /group2
                /proj3
                /etc..
        /tags
            /group1
                /proj1
                /proj2
            /group2
                /proj3
                /etc..
        /branch
            /anything temporary
    I believe this is a rather bad layout, but at the moment it's difficult to change it fully. Personally I dislike Subversion, mostly due to the long time it takes to check history, and also because branching and merging are cumbersome, so I really want to use git instead. Sadly we can't just switch to git, as the mental leap for some might be too overwhelming, so I was looking into git-svn to see if I could practically use it to solve the issue. Sadly that directly ends up in a bad situation, as I want to break down each project into one git repo, and I don't want to have to recreate the git-svn checkout on each computer I work on.
    So I thought perhaps there is a possibility to create some sort of transparent git <-> svn proxy/gateway, so that a push to that repo "commits" to the svn repo, and a commit to the svn repo updates the git repo. Google hasn't been my friend here; I have only found generic usage help for git-svn, so I ask if you have some good ideas to accomplish this.
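    For what it's worth, a rough sketch of the svn-to-git half of such a gateway, assuming a dedicated git-svn working clone per project on a gateway machine plus a bare repo that developers pull from (all paths hypothetical); the other direction, pushing git commits back with git svn dcommit, is deliberately left out because that is where two-way gateways usually fall apart:
        #!/bin/sh
        # run periodically from cron on the gateway machine
        GATEWAY=/srv/gateway/proj1        # git-svn clone of one project
        BARE=/srv/git/proj1.git           # bare repo developers clone/pull from

        cd "$GATEWAY" || exit 1
        git svn rebase                    # fetch new svn revisions and rebase the local branch onto them
        git push "$BARE" master:master    # publish the updated history
    This only stays sane while commits flow one way; once people push to the bare repo directly, dcommit's history rewriting has to be dealt with.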

    Read the article

  • SQL Server Backup modes, and a huge log file

    - by Matt Dawdy
    Okay, I'm not a server administrator, a network guy, or a DBA. I'm merely a programmer helping out a small company. They have an IT guy who isn't MS-centric (most stuff is on Mac), and he and I are trying to figure out a solution here.
    We've got 1 main database. We run nightly full backups. I know they are full backups because I can take the latest file, or any of the daily backups, go to a completely new machine, "restore" the backup to an empty database, and our app runs perfectly fine off of it. The backups have grown from 60 MB to 250 MB over 4 months. When running, the log file is 1.7 GB, and the data file is only 200-300 MB. Yes, the recovery model is set to full.
    So, my question, after all of that: if we are keeping daily backups, and we don't have the need / aren't smart enough to roll the DB back to a certain point in time, and I change the recovery model to simple, am I really losing anything? And if I do change it to simple, will it completely dump the log file, or at least reduce it way the hell down? And will that make our database run faster? I know that it'll make my life easier when I copy a relatively recent backup to my local machine to do development and testing...
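    For reference, a sketch of the switch itself run through sqlcmd; the database name and the logical log-file name below are placeholders, and shrinking the log only sticks once the recovery model really is SIMPLE:
        rem MyDb and MyDb_log stand in for the real database and logical log name
        sqlcmd -S .\SQLINSTANCE -E -Q "ALTER DATABASE MyDb SET RECOVERY SIMPLE"
        sqlcmd -S .\SQLINSTANCE -E -Q "DBCC SHRINKFILE (MyDb_log, 1024)"
    Switching to SIMPLE gives up point-in-time restores between the nightly fulls, which is exactly the trade-off described above; it won't make queries faster by itself, but the log stops growing without bound.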

    Read the article

  • How to access Windows Server 2008 R2 file shares from a different subnet

    - by Lloyd Cotten
    We have a couple of servers that used to be Windows Server 2003 that we recently upgraded to Windows Server 2008 R2. A couple of details to set the situation up:
        We wiped the OS and re-installed.
        These servers are on one subnet (172.16.x.x) and we are trying to access some file shares on them from another subnet (10.34.x.x).
        Firewall is disabled on these servers.
        Trying to access with UNC "\\172.16.x.x\sharename" and net use \\172.16.x.x
    However, we're having problems doing this. We are getting "The network path was not found". Here are some of the things we've tried so far and the results:
        Tried accessing the share from other (non-2008) servers on the same subnet... Success!
        Ping servers from different subnet... Success!
        Telnet connection into port 139 from different subnet... Success!
        Took a scan through Local Security Policies to see if something obvious needed to be enabled / disabled / configured... Fail
    I'm not sure where to look next. I know that the router between the two subnets is locked down pretty tight, but this did work for our 2003 servers. Has anything changed in the way of ports used for UNC / file share access in 2008? Maybe I'm missing some security policy setting? Hoping somebody can take pity on a poor programming guy that can't figure out something really simple. :-) Thanks!
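    One quick check worth doing from the 10.34.x.x side, since the telnet test above only covered 139: Windows Server 2008 serves SMB directly over TCP 445 by preference, so if the router between the subnets only passes 139 the share can still be unreachable. A trivial probe from any host on the far subnet:
        telnet 172.16.x.x 445
    If 445 turns out to be blocked between the subnets, opening it on the router is the usual fix.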

    Read the article

  • Prevent Exchange Server from advertising itself on domain

    - by Justin Shin
    I'm in the middle of setting up an Exchange 2010 server. Currently, we use a SaaS provider for Exchange 2007 services. Some (but not all) of my users have been reporting that they are receiving Outlook/Exchange prompts to log in to the new Exchange server. This is happening without any intervention on the clients' machines. The Exchange server is a member of the domain and connects to the domain site remotely through a site-to-site VPN.
    What can I do to prevent these login prompts from appearing? Will shutting down the new server until it is time to switch resolve these issues?
    A little more info: I found that on one of the client computers, all of the settings for Outlook over HTTP had been changed (automatically) from webmail.provider.com to mail.company.com (the latter being the new server). This happened when I enabled Outlook Anywhere access on Exchange 2010. I changed the client's settings back, and everything was groovy. But when I disabled Outlook Anywhere again, the logon prompt came back.

    Read the article

  • Lenovo S10 Ideapad will not boot while original hard drive is installed, neither from hard drive or

    - by aki
    Hello, first time posting here so I'll try to be very clear. I have a Lenovo S10 IdeaPad netbook which fails to boot to an OS. It shows the Lenovo splash screen and can get to the BIOS, but it doesn't get to GRUB (it was dual booting Ubuntu 9.10 and Win 7 and had been working fine for months, i.e. this isn't a new dual boot gone bad). After the splash screen it displays a flashing cursor in the upper left corner. Power cycled to no avail.
    Here is what I have done trying to narrow the problem down: the machine will boot to Ubuntu using an install/live USB drive, but only if ANOTHER hard drive is installed or NO hard drive is installed. The boot order always lists USB first. Also, there is a 2 GB RAM upgrade, but I think that's fine; the Ubuntu USB drive boots fine with it, and "free" sees the whole 2 GB of memory. So it seems like the hard drive is bad. I was able to put the bad drive in a different laptop and mount it to recover files.
    I'm ready to replace the bad hard drive, but I would like to know if this situation makes any sense. If the hard drive is bad, shouldn't I still be able to boot with the Ubuntu USB drive while the bad drive is installed? I would have expected the machine to boot into Ubuntu anyway, even with a bad drive, since the boot order lists USB first. But it seems that when the bad drive is installed, the machine ignores the USB drive and hangs with the flashing cursor.
    Thanks for any ideas! Sorry for the long post, I just want to put all the info I have up front. Basically I'm going to buy a new drive, but I am mostly curious whether this is a typical, or at least not unusual, situation.

    Read the article

  • Can Vista Windows Explorer be repaired/fixed?

    - by gurun8
    I'm running Windows Vista Home Premium on a Sony Vaio laptop. I think somehow my Windows Explorer has become corrupted. I don't recall any particular inciting incident, like installing a disreputable 3rd-party app or a hardware failure, but just recently my system won't wake after it's been left idle longer than 20 minutes and goes to sleep. I've also had problems launching certain apps, like Adobe InDesign CS3, that basically freeze the system but leave mouse movement functioning for a short time before the entire system freezes, which requires a hard reboot to resolve. The system seems to run normally when used, but I fear there's a looming possibility that this is a house of cards and will all come crashing down soon.
    My question is this: can Windows Explorer be repaired/fixed? Before reformatting the system and starting over, which is most likely what I'm going to be forced to do, I'd like to explore (no pun intended) my options for fixing the problem with a patch or reinstall or something of that nature. Reformatting my system will eat up a day or two of my time, and I just don't have the time to spare right now.
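    Before a full reformat, one low-risk thing to try (a long shot for the sleep/freeze symptoms, but it does repair corrupted system files in place) is System File Checker from an elevated command prompt, then pulling its results out of the CBS log:
        sfc /scannow
        findstr /c:"[SR]" %windir%\Logs\CBS\CBS.log > %userprofile%\Desktop\sfcdetails.txt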

    Read the article

  • Using Toshiba 22EL833 as PC display through HDMI input

    - by Oleg V. Volkov
    I had another Toshiba TV - a 19SL738 - connected to this same PC and video card (GTX 8800) through DVI<-HDMI (DVI on the PC side, HDMI on the TV) before, and that was working perfectly at its native resolution of 1360x768. Some time ago I had to change to the 22EL833 and immediately faced a problem: the Windows 7 control panel and the NVIDIA control panel both report the native resolution for the new TV as 1080i, 1920x1280, despite the TV documentation saying that it has the same 1360x768 as the previous one. Practical tests confirmed that the true native resolution is indeed 1360x768: plugging in through DVI<-VGA and setting a custom resolution through the NVIDIA panel showed clear colors and a crisp image, while setting anything different with either DVI<-VGA or DVI<-HDMI produced horribly distorted or squished images, with almost unreadable slim lines (as in letters, for example).
    Now, my problem is that there are no drivers for this TV and I'm unable to get a good image while connecting it through DVI<-HDMI directly. The best I've achieved is editing the EDID/driver manually to persuade the system that the native resolution should be 1360x768, and while the image became mostly clear, the colors turned into a strange washed-out effect, with pools of pure yellow, cyan and magenta here and there filling the place of other colors. Gradients also became noticeably striped. Somehow it looks like dithering gone bad, and it makes me suspect that the image is still down/upscaled several times internally somewhere along the line.
    How can I connect this TV to the DVI output of my video card to get the best possible clear image, correct colors and the correct native resolution?

    Read the article

  • Logging into Local Statusnet instance on Apache causes browser to download a file

    - by DilbertDave
    I've installed StatusNet 0.9.1 on a Windows Server via the WAMP stack and on the whole it seems to be fine. However, when logging in using IE7 or Chrome the browsers invoke a file download, i.e. the File Download dialog is displayed. In IE7 the file is called notice, with the content below (some parts starred out):
        <?xml version="1.0" encoding="UTF-8"?>
        <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
          <ShortName>Mumble Notice Search</ShortName>
          <Contact>david.carson@*****.com</Contact>
          <Url type="text/html" method="get" template="http://voice.*****.com/mumble/search/notice?q={searchTerms}"></Url>
          <Image height="16" width="16" type="image/vnd.microsoft.icon">http://voice.*****.com/mumble/favicon.ico</Image>
          <Image height="50" width="50" type="image/png">http://voice.******.com/mumble/theme/cloudy/logo.png</Image>
          <AdultContent>false</AdultContent>
          <Language>en_GB</Language>
          <OutputEncoding>UTF-8</OutputEncoding>
          <InputEncoding>UTF-8</InputEncoding>
        </OpenSearchDescription>
    In Chrome (Linux and Windows!) the file is called people and contains similar XML. This is not an issue when logging in using Firefox. This is obviously a configuration issue, but I'm not having much luck tracking it down. I tested the previous version of StatusNet on an Ubuntu Server VM on our network and it worked fine for months. Thanks in advance.

    Read the article

  • Merging cuesheet chapter halves into single track for an audiobook

    - by TheSavo
    I have an audiobook that I have ripped and I need some help constructing chapters. I have already made some cue sheets:
        TITLE "Bookname"
        PERFORMER "the Author"
        FILE "File1.FLAC" wave ; 23971906.667 milliseconds
          TRACK 01 AUDIO
            TITLE "_Intro"
            INDEX 01 00:00:00
          TRACK 02 AUDIO
            TITLE "CH 01"
            INDEX 01 24:15:50
          TRACK 03 AUDIO
            TITLE "CH 02"
            INDEX 01 66:21:00
          TRACK 04 AUDIO
            TITLE "CH 03"
            INDEX 01 87:05:00
    The audiobook is in two files. The chapter at the end of the first file is continued in the second file. However, the second file restates:
        The publisher
        Book Title
        Blah blah blah
    I would like to merge the two 'halves' of the chapter into one seamless track. The only way I can think to do this would be:
        Bulk cut down the tracks.
        Drop the junk info into a junk track.
        Continue the track listings as normal.
        Take the two "halves" of the target chapter and build a separate cue sheet for it.
    I know there has to be an easier way. I am OK with making the 'junk' info a 'gap' or something. These are FLAC files that will be converted to MP3 for my phone and other portable devices. I have read the primers on cue sheets, but I am just not getting it.
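    If the end goal is simply one seamless chapter file, a hedged sketch using shntool (assuming it is installed with FLAC support; the cue and output file names below are placeholders): split each FLAC at its cue points, then join the final piece of file 1 to the continuation piece of file 2, leaving the restated intro out of the join.
        shnsplit -f File1.cue -o flac -t "%n-%t" File1.FLAC
        shnsplit -f File2.cue -o flac -t "%n-%t" File2.FLAC
        # join the two halves of the chapter into a single track (names are whatever shnsplit produced)
        shnjoin -o flac 04-CH_03.flac 02-CH_03_continued.flac
    The joined FLAC can then be listed as a single TRACK in a fresh cue sheet or converted straight to MP3.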

    Read the article

  • Shared files folder in Amazon Elastic Beanstalk environment

    - by por
    I'm working on a Drupal application which is planned to be hosted in an Amazon Elastic Beanstalk environment. Basically, Elastic Beanstalk enables the application to scale automatically by starting additional web server instances based on predefined rules. The shared database is running on an Amazon RDS instance, which all instances can access properly. The problem is the shared files folder (sites/default/files).
    We're using git as SCM, and with it we're able to deploy new versions by executing $ git aws.push. In the background, Elastic Beanstalk automatically deletes ($ rm -rf) the current codebase from all servers running in the environment, and deploys the new version. The plan was to use S3 (s3fs) for shared files in the staging environment, and NFS in the production environment. We've managed to set up the environment to the extent where the shared files folder is mounted properly after a reboot. But... the problem is that, in this setup, the deployment of new versions on running instances fails because $ rm -rf can't remove the mounted directory, and as a result the entire environment goes down and we need to restart it, which isn't really an elegant solution.
    Question #1: what would be the proper way to manage shared files in this kind of deployment? Are you running such an environment? How did you solve the problem?
    By looking at the Elastic Beanstalk Hostmanager code (Ruby) there seems to be a way to hook our functionality (unmount if mounted in pre-deploy and mount in post-deploy) into Hostmanager (/opt/hostmanager/srv/lib/elasticbeanstalk/hostmanager/applications/phpapplication.rb), but the scripts defined in the file (i.e. /tmp/php_post_deploy_app.sh) don't seem to be working. That might be because our Ruby skills are non-existent.
    Question #2: did you manage to hook your functionality into Hostmanager in a portable way (i.e. by not changing the core Hostmanager files)?
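    In case it helps anyone attempting the same hook, a minimal sketch of what the pre/post-deploy scripts would need to do; the deploy path and the reliance on an /etc/fstab entry are assumptions, and this has not been tested against Hostmanager:
        #!/bin/sh
        # pre-deploy: get the shared files dir out of the way so the rm -rf of the codebase succeeds
        FILES_DIR=/var/app/current/sites/default/files     # hypothetical deploy path
        mountpoint -q "$FILES_DIR" && umount "$FILES_DIR"

        # post-deploy: recreate the mount point in the fresh codebase and remount
        mkdir -p "$FILES_DIR"
        mount "$FILES_DIR"        # assumes an fstab entry for this path (s3fs or NFS)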

    Read the article

  • Textmate add multiline text at end of line

    - by Yuval
    In TextMate, I am able to add text to several lines at once by clicking and holding the Option key and dragging with the mouse. Say I have the following lines:
        foo 1:
        foo 2:
        foo 3:
    I can easily click and hold Option, drag down to select the text at the end of each line, and then type "bar" once and it will be added to all lines, as such:
        foo 1: bar
        foo 2: bar
        foo 3: bar
    Fantastic. The problem I run into is when my lines aren't the same length, as such:
        foo 19:
        foo 37842342346:
        foo 503:
    Now if I want to add text to the end of each line, I have to either do it manually, or choose enough space so that the longest line is not overwritten, as such:
        foo 19:            bar
        foo 37842342346:   bar
        foo 503:           bar
    This results in a lot of unwanted whitespace in lines that don't need it. Granted, I could easily run a regular expression search to replace all multiple occurrences of a space with a single one, but I was wondering if there's a way to select all line endings at once without having to do that. Any ideas? Thanks!
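    Inside the editor, the same idea can be expressed as a regex Find/Replace (find $, replace with " bar", with regular expressions enabled). Outside TextMate it is a one-liner; a sketch with BSD sed on OS X (note the empty string after -i), appending " bar" to every line of a hypothetical file:
        sed -i '' 's/$/ bar/' lines.txt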

    Read the article

  • iptables, blocking large numbers of IP Addresses

    - by Twirrim
    I'm looking to block IP addresses in a relatively automated fashion if they look to be 'screen scraping' content from websites that we host. In the past this was achieved by some ingenious Perl scripts and OpenBSD's pf. pf is great in that you can feed it nice tables of IP addresses and it will efficiently handle blocking based on them. However, for various reasons (before my time) the decision was made to switch to CentOS.
    iptables doesn't natively provide the ability to block large numbers of addresses (I'm told it wasn't unusual to be blocking 5000+), and I'm a bit cautious about adding that many rules to an iptables ruleset. ipt_recent would be awesome for doing this, plus it provides a lot of flexibility for just severely slowing down access, but there is a bug in the CentOS kernel that is stopping me from using it (reported, but awaiting a fix). Using ipset would entail compiling a more up-to-date version of iptables than the one that comes with CentOS, which, whilst I'm perfectly capable of doing it, I'd rather not do from a patching, security and consistency perspective. Other than those two, it looks like nfblock is a reasonable alternative.
    Is anyone aware of other ways of achieving this? Are my concerns about several thousand IP addresses in iptables as individual rules unfounded?
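    For comparison, this is roughly what the ipset route looks like once a recent enough ipset/iptables is in place (the set name and list file are arbitrary; older releases spell the commands ipset -N and -m set --set instead):
        ipset create scrapers hash:ip maxelem 65536
        # populate from an existing list, one address per line
        while read ip; do ipset add scrapers "$ip"; done < /etc/blocked-ips.txt
        # a single rule covers the whole set
        iptables -I INPUT -m set --match-set scrapers src -j DROP
    The kernel hashes the set lookups, so several thousand addresses in one set is far cheaper than several thousand discrete iptables rules.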

    Read the article

  • AD domain on web servers behind NAT - DNS issues?

    - by Ant
    I'm trying to set up an AD domain to manage the security between two Windows Server 2008 web servers that will sooner or later use NLB to balance website requests. I've hit a problem which I think has a simple solution and is down to DNS. My website domain is mydomain.com. The two servers are running behind a NAT firewall on the 10.0.0.0 IP range. I've set up the AD domain to be called ad.mydomain.com (as recommended by MS and a few other answers to questions on here).
    The second web server, however, doesn't want to join the domain, and gives an error pinning the problem on DNS - "ensure that the domain name is typed correctly" - even though it queries the SRV record successfully and gets the correct DC back - dc.ad.mydomain.com. Doing a dcdiag /test:dns on the DC gives the delegation error 'DNS Server dc.mydomain.com Missing glue A record'.
    I have a feeling I need to add something to the public DNS so that it in some way knows about ad.mydomain.com. Can anyone suggest whether I'm on the right track in adding something to the public DNS? Or whether it's something else? Many thanks

    Read the article

  • Laptop hangs on POST and does not finish except on rare occasions

    - by user1049697
    I have an old Toshiba Satellite A100 laptop that hangs on POST when I try to start it. On rare occasions it does finish the POST and boots Windows successfully, but most times it only finishes it partially and continues to hang. I can enter the BIOS when it has frozen, though I have to open the DVD drive first for some reason. The keyboard is not quite right either, and I can't navigate the BIOS properly because the arrow keys don't work. I tried an external keyboard, but the problem persisted. I have tried removing the memory, hard drive, and battery to see if any of these were the problem, but it did not solve it. The one logical thing left to do would be to remove the CMOS battery, but the "brilliant" engineers at Toshiba have placed it such that a complete disassembly of the machine is necessary.
    What this all boils down to is basically the question of whether I can "save" this machine and get it to boot properly, or if I should just send it off to recycling. I suspect it might need costly repairs, but I can't bring myself to throw it away before I have made sure it's completely dead.

    Read the article

  • ESXi Guests will not boot on IBM x3550 M3

    - by Adrian
    I have a problem with guests not booting under VMware ESXi 5.0 on my IBM x3550 M3 server.
    VM host server: IBM x3550 M3 7944AC1 server w/ 2x Intel Xeon E5607 2.27 GHz CPUs
        ESXi 5.0.0 Build 623860, built for IBM hardware, downloaded from IBM
        Storage: 2x 500 GB SAS local storage
        8 GB RAM
        VT is verified to be ENABLED
        Server health status: Normal
    The ESXi host boots just fine. The client connects just fine. Guests can be configured but do not successfully boot. The initial guest memory consumption jumps up to 560 MB and drops down to 40 MB after a few seconds. Initial CPU usage is 1 full CPU (3000 GHz per the chart) and immediately drops down to 29 MHz. Guests do not display any output in the Console tab but show a state of 'Powered On'. VMs are listed as Version 7 and the behavior is duplicated across all available guest OS flavors. The problem is also duplicated when the server is booted up in Legacy Only mode. Logs do not contain anything particularly suspicious.
    Edit: No firewalls, routers, or VLANs in between the client and server.
    Edit 2: We have tried to boot a guest into the BIOS screen via the "Next Boot" checkbox in the guest settings. Was not successful.
    Edit 3: 500 GB datastore with one 40 GB VM on it. Plenty of space.

    Read the article

  • Collect temperature and fan speed with munin from Windows 7 PC?

    - by mfn
    Hi, I'm quite fond of munin and I'm using it at home as well to monitor my PCs. What was super-duper easy under Linux is pretty much unsolvable for me under Windows: I'd like to monitor CPU and motherboard temperatures as well as fan speed. On Linux I'm using lm-sensors, and the plugin for munin was basically already there. I already access some information from my Windows machine via SNMP (disk space, CPU usage, memory usage); the graphs are simple, as is the information exposed via SNMP, but they do their job. But when it comes to temperature and fan speed I'm running against a wall.
    My research so far suggests that Windows does not provide, out of the box, the ability to retrieve temperature/fan speed data. Third-party applications are necessary which have the know-how to communicate with the motherboard chips. The best I came up with is that SpeedFan exposes a shared-memory interface, and there exists a library which hooks into the Windows SNMP facility and bridges over to SpeedFan's shared-memory interface; it's called SFSNMP (site currently down). Unfortunately the library doesn't work; there's a bug report about it open at SpeedFan, but it's currently not moving (although the SFSNMP author is active there).
    So, unless that's going to work anytime soon, are there any alternatives? I'm not fond of buying any software to get that feature, given that I take it for granted that my system exposes the information needed to properly monitor it, but anyway, don't just not answer because of this.
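    Should the SFSNMP bridge (or any other SNMP exporter) ever start answering, the munin side is only a few lines of shell; this sketch uses a made-up OID, host and community string purely to show the plugin protocol (config output vs. value output):
        #!/bin/sh
        # /etc/munin/plugins/windows_cpu_temp  -- hypothetical OID, host and community
        HOST=192.168.0.10
        OID=.1.3.6.1.4.1.99999.1.1.0

        case "$1" in
        config)
            echo "graph_title CPU temperature"
            echo "graph_vlabel degrees C"
            echo "temp.label CPU"
            ;;
        *)
            echo "temp.value $(snmpget -v 2c -c public -Oqv "$HOST" "$OID")"
            ;;
        esac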

    Read the article

  • OpenVZ with bridged interfaces and VLAN

    - by Deimosfr
    Hi, I've got a problem with OpenVZ with bridged VLAN. Here is my configuration, in short:
        WAN ---> OpenBSD router/firewall ---> Debian OpenVZ host (br0, br1)
        br0  ---> VE101
        br1  ---> VE102
        VLAN br0.110 ---> VE103.110
    I can't make VLAN work on br0 (br0.110) and I would like to understand why. I don't have any switch, so there's no unmanageable-switch problem here. I've configured a VLAN interface on OpenBSD in /etc/hostname.vlan110:
        inet 192.168.110.254 255.255.255.0 NONE vlan 110 vlandev sis1
    And it seems to be working fine. I've also adapted my PF configuration to work with VLAN, but I don't see any incoming traffic. On my Debian Lenny box, here is my interfaces configuration:
        # The loopback network interface
        auto lo
        iface lo inet loopback

        # br0
        auto br0
        iface br0 inet static
            address 192.168.100.1
            netmask 255.255.255.0
            gateway 192.168.100.254
            network 192.168.100.0
            broadcast 192.168.100.255
            bridge_ports eth0
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off

        # VLAN 110
        auto br0.110
        iface br0.110 inet static
            address 192.168.110.1
            netmask 255.255.255.0
            network 192.168.110.0
            gateway 192.168.110.254
            broadcast 192.168.110.255
            pre-up vconfig add br0 110
            post-down vconfig rem br0.110
    It looks OK, but when I start my VE, here is the message:
        ...
        Configure veth devices: veth103.0
        Adding interface veth103.0 to bridge br0.110 on CT0 for VE103
        can't add veth103.0 to bridge br0.110: Operation not supported
        VE start in progress...
    So I've got one error here. I've followed this documentation http://wiki.openvz.org/VLAN but it doesn't work. I've certainly missed something, but I don't know what. Could someone help me please? Thanks
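    For what it's worth, the error itself is telling: br0.110 is a VLAN interface created on top of the bridge, not a bridge, so nothing can enslave veth103.0 to it. The usual layout is the other way round: tag the VLAN on the physical NIC and build a dedicated bridge on top of that. A sketch, reusing the addresses above and assuming the same interface names:
        # VLAN 110 on the physical interface, then a dedicated bridge for it
        vconfig add eth0 110
        brctl addbr br110
        brctl addif br110 eth0.110
        ifconfig eth0.110 up
        ifconfig br110 192.168.110.1 netmask 255.255.255.0 up
        # then point the VE's veth at br110 instead of br0.110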

    Read the article

  • Disc drive busy on MacBook, with disc stuck inside.

    - by ayaz
    I have a white MacBook running Snow Leopard 10.6.3 with the latest updates. I popped in a DVD that the system failed to mount; I did not see any conspicuous errors. As a result of this failure, the DVD got stuck in the drive. Neither pressing the eject button on the keyboard nor running the diskutil eject command caused the DVD to come out. The commands drutil eject and drutil tray open could not get the DVD to budge at all. The 'mount' and 'eject' buttons in the Disk Utility window are dimmed out, and Disk Utility reports that that particular disc drive is busy.
    This is not the first time this has happened to me. I know that I will ultimately have to resort to rebooting the system and holding down the eject button to get the DVD to come out. But is there any workaround that does not involve rebooting the system or prying the disc out? The drive on this MacBook does not have a needle-pin reset button -- at least, I couldn't find it anywhere. Any help will be greatly appreciated. Many thanks.

    Read the article

  • "Error: Unknown error" when trying to start virtual machine from VMware server

    - by slhck
    Problem We are running VMware Server 2.0.0 build-116503 on a Ubuntu 10.04 LTS server. There is a virtual machine installed, running Lotus Domino on Windows Server 2003. Ever since a sudden power failure last week, the virtual machine won't properly start up. When I run the command: vmrun -T server -h https://127.0.0.1:8333/sdk -u root -p jk2x2208 start "[standard] lotus/test.vmx" … after 30 seconds it displays: Error: Unknown error That's about everything I get. I know the command is right, since that's what we've used all the time. This has happened last Saturday after a scheduled backup shutdown, and somehow I was able to start it again. This week, it happened again, and I can't get it back up. Occasionally, I also get: Error: Cannot connect to the virtual machine When I get this, and I run the start command, it seemingly works. Why is this so random? Which configuration could have been messed up? What I've tried / other info I already shut down VMware itself with /etc/init.d/vmware stop. This works. I tried to start VMware again with /etc/init.d/vmware start. It complains that it's "not configured", which is why I had to rm /etc/vmware/not_configured, and then try to start again. There have been no software updates on the machine, and no configuration changes

    Read the article

  • Looking for suitable backup solution Mac OS X to offsite Centos 6 server 1TB of working data

    - by Brady
    I'll start by saying what we have in place currently:
        An on-site file server (Mac OS X Server) used by GFX designers; they have a working set of 1 TB of data.
        An offsite server with 2 TB of available storage (CentOS 6).
        The Mac OS X server rsyncs data to the offsite server every 6 hours (rsync -avz --delete --progress -e ssh ...).
        The Mac OS X server does a full data backup to LTO-4 tape on a 10-day recycle (Mon-Fri for 2 weeks).
        rsync pushes about 60 GB of file changes a day.
    The problem:
        The onsite tape backup is failing, as 1 TB of graphics files doesn't compress well enough to fit onto an 800 GB LTO-4 tape.
        A full backup is incredibly slow.
        It's a pain in the backside getting people to remember to change the tape; it often gets forgotten, etc.
    The quick solution: buy an LTO-5 drive and tapes. However, this has been turned down because of the cost...
    What I would like:
        Something that works the same way rsync works: only changed data is sent over the wire, and it can be scheduled to run multiple times during the day.
        Data that is sent is compressed and sent over SSH.
        Something that keeps a 14-day retention but doesn't keep duplicate data.
    So, as an example, if I have 1 TB of working data and 60 GB of changes are made each day, then I expect around 1.84 TB of data to be stored on the offsite server. It needs to:
        Work with the Mac OS X server and the CentOS 6 server.
        Not cost an arm and a leg; it must be cheaper than buying an LTO-5 drive with tapes (around £1500).
        Be able to be set up to run autonomously.
        Have some sort of control panel that will allow an admin to easily restore a file/folder.
    Any recommendations? A sketch of one candidate approach follows below.
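    The wish list above (rsync-style transfers, only changes over the wire, 14-day retention without duplicating unchanged files) is essentially what rsync --link-dest snapshots give you, and what packaged tools built on the same idea (rsnapshot, rdiff-backup) wrap up with rotation and restore helpers. A rough sketch, with the host and paths made up; unchanged files become hard links on the offsite server, so each day costs roughly the 60 GB of changes rather than another full 1 TB:
        #!/bin/sh
        # run from cron on the Mac OS X server
        SRC="/Volumes/Data/GFX/"
        DEST="backup@offsite.example.com:/backups/gfx"
        TODAY=$(date +%Y-%m-%d)

        # --link-dest is resolved on the receiving side, relative to today's directory
        rsync -az --delete -e ssh --link-dest=../latest "$SRC" "$DEST/$TODAY/"

        # repoint 'latest' and drop snapshots older than 14 days (simplified pruning)
        ssh backup@offsite.example.com "cd /backups/gfx && ln -sfn $TODAY latest && \
            find . -maxdepth 1 -type d -name '20*' -mtime +14 -exec rm -rf {} +"
    On the very first run there is no 'latest' yet, so rsync just does a full copy; after that, restores are plain directory copies from any dated snapshot.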

    Read the article
