Search Results

Search found 14311 results on 573 pages for 'stan note'.


  • HP UX can not boot from Ignite Tape

    - by Spirit
    We have an HP rp2470 server running HP-UX 11.00, with LVM mirroring. For redundancy we have a second rp2470 with the same hardware (same two processors, same RAM, same two HDDs, same number of LAN cards). I want to clone the first one to the second. For that purpose I am making an ignite tape with the following command: make_tape_recovery -x inc_entire=vg00 The ignite tape finishes without problems. When I boot the second server from this ignite tape, the server starts to boot and the ignite restore finishes without any errors, only a few notes, which are normal. However vmunix does not boot: when the restore finishes, it drops to the ISL prompt, and from there I cannot boot /stand/vmunix. I tried to run the recovery shell but had no success. When the recovery shell asks to run frecover to restore critical files, I receive the error: frecover(5405): unable to open /dev/rmt/0m At first I thought that the problem might be the difference in firmware versions between the servers: the production server is at Firmware Version 43.50 and the backup server was at Firmware Version 42.19. So I did a firmware upgrade of the backup server so that both servers are at 43.50 and tried the recovery again, but I still can't boot the system. Next I made another archive tape with the -I (interactive) flag: make_tape_recovery -I -x inc_entire=vg00 and tried a recovery with it - again no good. I cannot find any errors or warnings in the ignite log, and I cannot boot HP-UX; I am stuck at the ISL prompt. This is what I've noticed in the GSP logs: ************* SYSTEM ALERT ************** SYSTEM NAME: mcnfwim1 DATE: 07/27/2003 TIME: 10:18:49 ALERT LEVEL: 6 = Boot possible, pending failure - action required REASON FOR ALERT SOURCE: 8 = I/O SOURCE DETAIL: 6 = disk SOURCE ID: 0 PROBLEM DETAIL: 0 = no problem detail LEDs: RUN ATTENTION FAULT REMOTE POWER FLASH OFF ON ON ON LED State: Boot Failed. Running non-OS code. Check Chassis and Console Logs for error messages. 0x00000060860010B0 00000000 00000000 - type 0 = Data Field Unused 0x58000860860010B0 00006706 1B0A1231 - type 11 = Timestamp 07/27/2003 10:18:49 And another GSP log: Log Entry # 3 : SYSTEM NAME: mcnfwim1 DATE: 07/27/2003 TIME: 10:12:20 ALERT LEVEL: 6 = Boot possible, pending failure - action required SOURCE: 8 = I/O SOURCE DETAIL: 6 = disk SOURCE ID: 0 PROBLEM DETAIL: 0 = no problem detail CALLER ACTIVITY: 1 = test STATUS: 0 CALLER SUBACTIVITY: 0B = implementation dependent REPORTING ENTITY TYPE: 0 = system firmware REPORTING ENTITY ID: 00 0x00000060860010B0 00000000 00000000 type 0 = Data Field Unused 0x58000860860010B0 00006706 1B0A0C14 type 11 = Timestamp 07/27/2003 10:12:20 Type CR for next entry, - CR for previous entry, Q CR to quit. Please note that I cannot change anything on the production server; I can only make changes to the backup server. Any help is appreciated.
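
    If it helps, here is a minimal set of commands to try from the ISL prompt once the restore has finished, assuming the restored root disk is the primary boot device (the device path is an assumption; take the real one from a "search" at the BCH menu). This is only a troubleshooting sketch, not a guaranteed fix:

        ISL> hpux ll /stand              # confirm the kernel actually exists on the restored volume
        ISL> hpux                        # run the secondary loader with defaults (boots /stand/vmunix)
        ISL> hpux -is /stand/vmunix      # boot to single-user mode for troubleshooting
        ISL> hpux (;0)/stand/vmunix      # name the boot device and kernel explicitly

    If "hpux ll /stand" shows no vmunix, the restore never laid down a bootable kernel and the LIF/boot area is the thing to investigate rather than the ISL syntax.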

    Read the article

  • Best way to handle PHP sessions across Apache vhost wildcard domains

    - by joshholat
    I'm currently running a site that allows users to use custom domains (i.e. so instead of mysite.com/myaccount, they could have myaccount.com). They just change the A record of their domain and we then use a wildcard vhost on Apache to catch the requests from the custom domains. The setup is basically as seen below: the first vhost catches the mysite.com/myaccount requests and the second is used for myaccount.com. As you can see, they have the exact same path and PHP cookie_domain. I've noticed some weird behavior surrounding the line below "# The line below me". When it is active, the custom domains get a new session_id on every page load (one that isn't the same as the non-custom-domain session). However, when I comment that line out, the user keeps the same session_id on each page load, but that session_id is still not the same as the one they'd see on the non-custom-domain site, despite everything being on the same server. There is a sort of "hack" workaround involving redirecting the user to mysite.com/myaccount, getting the session ID, redirecting back to myaccount.com, and then using that ID on myaccount.com. But that can get kind of messy (i.e. if the user logs out of mysite.com/myaccount, how does myaccount.com know?). For what it's worth, I'm using a database to manage the sessions (so there are no issues with being on different servers, etc., but that's irrelevant since we only use one server to handle all requests currently anyway). I'm fairly certain it is related to some sort of CSRF browser protection thing, but shouldn't it be smart enough to know it's on the same server? Note: these are not subdomains; they're entirely separate domains (but on the same server).
    <VirtualHost *:80>
        DocumentRoot "/opt/local/www/mysite.com"
        ServerName mysite.local
        ErrorLog "/opt/local/apache2/logs/mysite.com-error.log"
        CustomLog "/opt/local/apache2/logs/mysite.com-access.log" common
        <Directory "/opt/local/www/mysite.com">
            AllowOverride All
            #php_value session.save_path "/opt/local/www/mysite.com/sessions"
            php_value session.cookie_domain "mysite.local"
            php_value auto_prepend_file "/opt/local/www/mysite.com/core.php"
        </Directory>
    </VirtualHost>
    #Wildcard (custom domain) vhost
    <VirtualHost *:80>
        DocumentRoot "/opt/local/www/mysite.com"
        ServerName default
        ServerAlias *
        ErrorLog "/opt/local/apache2/logs/mysite.com-error.log"
        CustomLog "/opt/local/apache2/logs/mysite.com-access.log" common
        <Directory "/opt/local/www/mysite.com">
            AllowOverride All
            #php_value session.save_path "/opt/local/www/mysite.com/sessions"
            # The line below me
            php_value session.cookie_domain "mysite.local"
            php_value auto_prepend_file "/opt/local/www/mysite.com/core.php"
        </Directory>
    </VirtualHost>
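
    As a hedged sketch (not a tested configuration): one likely culprit is that the wildcard vhost hard-codes session.cookie_domain to "mysite.local", which never matches myaccount.com, so the browser drops the session cookie and PHP issues a fresh session on every request. Dropping that directive in the wildcard vhost only, so the cookie defaults to whatever host the request arrived on, would look something like this (paths copied from the config above):

        <Directory "/opt/local/www/mysite.com">
            AllowOverride All
            # no session.cookie_domain here: let the Set-Cookie domain default to the requesting host
            php_value auto_prepend_file "/opt/local/www/mysite.com/core.php"
        </Directory>

    Even then, browsers will never share a cookie between mysite.com and myaccount.com, so true single sign-on across the two still needs a server-side token handoff of the kind described in the question.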

    Read the article

  • Tomcat and IIS 7 both on different ip's and different ports

    - by n00b
    I have Tomcat and IIS 7 installed together on a Windows 2008 server. The machine has two IPs (134.133.1.1 and 134.133.2.2). I want Tomcat to handle 134.133.1.1, on port 80, and IIS to handle both 134.133.2.2, on port 80 AND 134.133.1.1, on port 443, but can't seem to get the last two together (I can get one or the other by themselves on IIS, along with the first IP address on Tomcat). I have configured Tomcat to successfully listen to ip 134.133.1.1, on port 80 with this configuration; <Connector port="80" protocol="HTTP/1.1" address="134.133.1.1" connectionTimeout="20000" redirectPort="8443" /> I also have a site configured in IIS bound to ip 134.133.1.1, on port 443 (SSL). When I turn on IIS, after Tomcat, I can reach both 134.133.1.1:80 (Tomcat) and 134.133.1.1:443 (IIS) successfully (as desired). The problem now comes when I want to introduce a new site via IIS, at the new ip address. In IIS I have setup a new site at IP 134.133.2.2, port 80. I can not start the site. The event log shows this error; Unable to bind to the underlying transport for [::]:80. The IP Listen-Only list may contain a reference to an interface which may not exist on this machine. The data field contains the error number. I think this is because IIS 7 tries to listen to port 80 on all IPs, and it cant because Tomcat is taking port 80 for 134.133.1.1. From reading, the resolution is to specify the IP address you want IIS to bind on port 80. The problem is, when I add 134.133.2.2 to the iplisten list, then I get a 404 when I try navigating to 134.133.1.1:443. I assume this is because IIS is no longer listening to ANY port on 134.133.1.1. How do I resolve this such that IIS will return both sites? EDIT: Per request my IIS binding for site A is 134.133.2.2 on port 80 (http) and 134.133.2.2 on port 443. For site B in IIS, the binding is 134.133.1.1 on port 443 (https). Note the IPs in this example are just for example purposes, but consistent with my setup.
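
    A hedged sketch of the http.sys listen-list change, assuming the goal is for IIS to own only 134.133.2.2:80 and 134.133.1.1:443 while Tomcat keeps 134.133.1.1:80 (run from an elevated prompt, and check the list before and after):

        netsh http show iplisten
        netsh http add iplisten ipaddress=134.133.2.2
        netsh http add iplisten ipaddress=134.133.1.1
        iisreset

    With both addresses in the list, http.sys should only bind the address/port pairs that the IIS site bindings actually reference, so it is worth double-checking that neither site is bound to "All Unassigned" on port 80 - that would still collide with Tomcat.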

    Read the article

  • server 2008 r2 - wbadmin systemstatebackup - system writer not found in the backup

    - by TWood
    I am trying to manually run a systemstatebackup command on my server 2008 r2 box and I am getting an error code '2155347997' when I view the backup event log details. The command line tells me that I have log files written to the c:\windows\logs\windowsserverbackup\ path but I have no files of the .log type there. My command window tells me "System Writer is not found in the backup". However when I run vssadmin list writers I find System Writer in the list and it shows normal status with no last errors stored. I am running this from an elevated command prompt as well as from a logged on administrator account. My backup target path has permission for network service to have full control and it has plenty of free space. Looking in eventlog I have two VSS error 8194 that happen immediately before the Backup error 517 which has the errorcode 2155347997 listed. All three of these errors are a result of trying to run the command for the systemstatebackup. It's my belief that some VSS related permission is failing and exiting the backup process before it ever gets started. Because of this the initial code that creates the log files must not be running and this is why I have no files. When running the systemstatebackup command from the command prompt and watching the windowsserverbackup directory I do see that I have a Wbadmin.0.etl file which gets created but it is deleted when the backup errors out and stops. I have looked online and there are numerous opinions as to the cause of this error. These are the things I have corrected to try and fix this issue before posting here: Machine runs a HP 1410i smart array controller but at one time also used a LSI scsi card. Used networkadminkb.com's kb# a467 to find one LSI_SCSI entry in HKLMSysCurrentControlSetServices which start was set to 0x0 and I modified to 0x3. No changes. In HKLMSystemCurrentControlSetServicesVSSDiag I gave network service full control where it previously only had "Special Permission". No changes. I followed KB2009272 to manually try to fix system writer. These are all of the things I have tried. What else should I look at to resolve this issue? It may be important to note that I run Mozy Pro on this server and that was known in the past to use VSS for copying operations and it occasionally threw an error. However since an update last year those error event log entries have stopped.
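
    A few hedged diagnostics that may help show whether the System Writer is healthy at the moment the backup actually starts, rather than when vssadmin is run by hand (elevated prompt; the backup target drive letter is just an example):

        vssadmin list writers
        vssadmin list providers
        wbadmin start systemstatebackup -backupTarget:E: -quiet
        vssadmin list writers

    Comparing the writer list immediately before and immediately after the failing run, together with the Backup > Operational log in Event Viewer, usually narrows it down to either a writer that goes away under load or a VSS permission problem of the kind you already suspect.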

    Read the article

  • Some sites won't load on Ubuntu/Mint

    - by Or W
    I have a REALLY weird problem with either my network or my OS. Last week I've suddenly had difficulties loading some websites or even more odd some parts of different websites. For example, I could load gmail.com, login and view the list of emails in my inbox but when I clicked one of them it would just time out. Another example is http://www.ynet.co.il, I can view the home page but going into any one of the articles fails (times out). I've tried Chrome, Firefox and Opera, all fail the same way. If I take a URL of a page I cannot load via the browser and try to wget it though the console I get the file just fine. I've formatted my machine (Used to run Ubuntu 13.04) and installed Mint Linux this time, it worked fine for a few days and now, again, having the same exact issues. Important to note that I have other machines connected either directly or via Wi-Fi to the router and they are all working fine (two win7 machines and 1 raspberry pi). Another strange behavior is that I can ftp or ssh to remote machines but cannot send files via ftp (times out) even if I set passive mode ON and when using ssh I can do just about anything but I cannot paste text into the remote machine, for example if I nano a file on the remote machine and try to paste anything from my clipboard it freezes. What I've tried so far: Disable IPv6 on the networking admin (and on firefox disabling ipv6 on the about:config page) Changing the port and the network cable I went to the store and bought a new standalone PCIe network adapter Connected my win7 laptop using the same cable and router port (sites that were not working on my Mint are working just fine on the win7 machine) Loaded Mint from a livecd, got the same result Tried changing the MTU (was 1500, tried 1492) Some observations: When I clear my browser cache and go to facebook.com for example, the homepage loads but I fail to load any profile/group page. If I refresh facebook.com homepage a couple of times it stops and fails to load until I clear my browser cache. I changed the chrome cache folder permissions to 0777 but that did not help. When I run netstat -n I see A LOT of connections that are in 'FIN_WAIT' mode (I'm guessing that's when I try to refresh pages that are not working and timing out), I have no idea what it means or if it helps anyone figure out what's wrong. The sites that are not loading correctly are always that same, they don't vary or anything and they fail to load exactly the same way on all three browsers that I've tried. When I Googled 'Ubuntu some sites not loading' I see a huge amount of complaints just like mine, but none of them that I could find actually says what the problem is or how they fixed it. Technical stuff: netstat -n ps aux netstat -nr

    Read the article

  • How do I permanently delete e-mail messages in the sendmail queue and keep them from coming back?

    - by Steven Oxley
    I have a pretty annoying problem here. I have been testing an application and have created some test e-mails to bogus e-mail addresses (not to mention that my server isn't really set up to send e-mail anyway). Of course, sendmail is not able to send these messages and they have been getting stuck in the sendmail queue. I want to manually delete the messages that have been building up in the queue instead of waiting the 5 days that sendmail usually takes to stop retrying. I am using Ubuntu 10.04 and /var/spool/mqueue/ is the directory in which every how-to I have read says the e-mails that are queued up are kept. When I delete the files in this directory, sendmail stops trying to process the e-mails until what appears to be a cron script runs and re-populates this directory with the messages I don't want sent. Here are some lines from my syslog: Jun 2 17:35:19 sajo-laptop sm-mta[9367]: o530SlbK009365: to=, ctladdr= (33/33), delay=00:06:27, xdelay=00:06:22, mailer=esmtp, pri=120418, relay=e.mx.mail.yahoo.com. [67.195.168.230], dsn=4.0.0, stat=Deferred: Connection timed out with e.mx.mail.yahoo.com. Jun 2 17:35:48 sajo-laptop sm-mta[9149]: o4VHn3cw003597: to=, ctladdr= (33/33), delay=2+06:46:45, xdelay=00:34:12, mailer=esmtp, pri=3540649, relay=mx2.hotmail.com. [65.54.188.94], dsn=4.0.0, stat=Deferred: Connection timed out with mx2.hotmail.com. Jun 2 17:39:02 sajo-laptop CRON[9510]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -print0 | xargs -n 200 -r -0 rm) Jun 2 17:39:43 sajo-laptop sm-mta[9372]: o52LHK4s007585: to=, ctladdr= (33/33), delay=03:22:18, xdelay=00:06:28, mailer=esmtp, pri=1470404, relay=c.mx.mail.yahoo.com. [206.190.54.127], dsn=4.0.0, stat=Deferred: Connection timed out with c.mx.mail.yahoo.com. Jun 2 17:39:50 sajo-laptop sm-mta[9149]: o51I8ieV004377: to=, ctladdr= (33/33), delay=1+06:31:06, xdelay=00:03:57, mailer=esmtp, pri=6601668, relay=alt4.gmail-smtp-in.l.google.com. [74.125.79.114], dsn=4.0.0, stat=Deferred: Connection timed out with alt4.gmail-smtp-in.l.google.com. Jun 2 17:40:01 sajo-laptop CRON[9523]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Does anyone know how I can get rid of these messages permanently? As a side note, I'd also like to know if there is a way to set up sendmail to "fake" sending e-mail. Is there?
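
    For what it's worth, here is a hedged sketch of clearing both queues on a Debian/Ubuntu-style sendmail install, where the MTA queue lives in /var/spool/mqueue and the MSP (client submission) queue in /var/spool/mqueue-client - the cron-msp job in the log above is the MSP flushing its queue back into the MTA, which is likely why the messages keep coming back. Verify the paths on your box before deleting anything:

        sudo service sendmail stop
        mailq                                    # see what is still queued
        sudo rm -f /var/spool/mqueue/*           # MTA queue
        sudo rm -f /var/spool/mqueue-client/*    # MSP queue that keeps re-injecting messages
        sudo service sendmail start
        mailq

    As for "fake" sending during tests, one common approach is to point the application at a local mail catcher or a discard-only smart host instead of letting sendmail attempt real delivery; the details depend on the application, so treat that as a pointer rather than a recipe.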

    Read the article

  • A dusty server room

    - by pauska
    Here's the story: the owners of the building we lease office space from decided to do a renovation of the exterior. This involved some pretty heavy work at the level where our server room is, including exchanging windows which are fitted inside a concrete wall. My red alert went off when I heard that they were going to do the same thing with our server room (yes, our server room has a window; we're a small shop with 3 racks, and the window is secured with steel bars). I explicitly told the contractor that they needed to put up a temporary wall between our racks and the original wall, and to make sure that the temporary wall was 100% air- and water-tight. They promised to do so. The temporary wall has a small door in it, so that workers can go in and out during the day (through our server room, which was the only option...). On several occasions I found the small door only half-way shut while working evenings/nights. I locked the door and thought that they would hopefully get the point soon and keep the door shut. I even gave an electrician a mouthful when I saw that he didn't close the door properly. By this point I bet most of you get a picture of what happened: yes, they probably left the door open while drilling in the concrete. I present to you our 4-week-old EMC VNX. I'll even throw in a little bonus: here is the APC UPS one rack further away from the temporary wall. See the nice little landing strip from my finger? What should I do? The only thing that comes to mind is to either call all our suppliers (EMC, HP, Dell, Cisco) and get them to send technicians to check out all the gear in the server room, or get some kind of certified 3rd-party consultant to check all of it. Would you run production systems on this gear? How long? Edit: I should also note that our air conditioning isn't exactly enterprise-grade, given the nature of our small room. It's just a single inverter, which has failed once before I started working here (failed inverters usually lead to water dripping out).

    Read the article

  • Intel RST accidentally selected wrong drive as system drive -- how to fix?

    - by Sean Killeen
    Question / TL;DR If Intel RST has marked a drive other than my RAID set as the system drive, how can I get it so that the RAID set is now seen as the system drive, and catch it up to my drive now? What Happened NOTE: Some perhaps unwise decisions are ahead. This is as best as I can recall the order of things. I had a 2x1TB RAID1 config. I bought the drives around the same time, and they started to die around the same time. I replaced 1st drive with a 2 TB drive before the other one's SMART errors got more serious. I waited for the RAID to replicate, then replaced the 2nd drive with a manufacturer's replacement. I got a second manufacturer's drive replacement and used it as a spare. so I now I had a 1TB/2TB drive in a RAID1 and another 1TB as a spare. The 1TB drive in the replacement set was bad from the manufacturer. Rather than mess with their refurbished stuff, I bought another 2 TB drive an upped the config to a 2x2TB RAID1 with the other, functioning manufacturer's drive as a spare. I made the mistake of trying to bring the other drive online to clean it out and the signatue clash killed my machine. When the machine rebooted, that drive was marked as the system drive. So, I have a 2x2TB RAID1 that is apparently offline, and 1 spare 1 TB refurbished drive that everything is being run from. Not a great idea. Options I'm considering Bring the 2x2TB drive back online, and then unplug the spare until I can format it in another system. This would involve some data loss, but the more I think about it, I actually think I haven't modified any data that isn't backed up or synced somewhere (go me!) Anything that isn't is likely trivial, enough that I'm willing to take the risk. One downside here is that if the 2 TB doesn't have data on it for some reason, I could be screwed trying to put the other drive back in, no? Try to somehow get the RAID1 updated with the data from the current system drive. Option 3?

    Read the article

  • XP shared folders not accessible after BIOS changed

    - by stijn
    Here's what worked for over a year: PC A runs Windows 7, PC B runs Windows XP. Both are on the same subnet behind a router. A uses user account X, but logs in to PC B using the Administrator account. PC B is a Dell Precision 470. A known problem with these is that sometimes when plugging in their power cable they somehow loses all BIOS settings. This happened yesterday. After this happens Windows won't boot, because the default BIOS setting is 'RAID ON' while there is no RAID configured. No problem though, changing the BIOS settings to 'RAID OFF' makes it boot without problems. Note that in the meantime, nothing config-related was changed on machine A. It wasn't even on. Indeed after doing this, everything is fine. Everything includes all normal operations, remote desktop from PC A to PC B, running Synergy between A and B, accessing shared folders from B to A. But accessing the shared folders on B from A does not work any more. I tried pretty much everything I found via Google (fiddling with policies/registry kes/...) but no avail. > ping -a 192.168.2.2 Pinging A [192.168.2.2] with 32 bytes of data: Reply from 192.168.2.2: bytes=32 time<1ms TTL=128 > net view \\192.168.2.2 System error 5 has occurred. Access is denied. > net use /persistent:no K: \\A\myshare /user:A\USERNAME PASSWORD > net use /persistent:no K: \\192.168.2.2\myshare /user:192.168.2.2\USERNAME PASSWORD > net use /persistent:no K: \\192.168.2.2\myshare /user:USERNAME PASSWORD System error 86 has occurred. The specified network password is not correct. A solution to this would be great: I haven't been able to do any work since yesterday ;] update after taking the hard drive out of B and putting it in another Precision 470 with almost exactly the same hardware (at first sight, only the video card differs) the shared folders work.. Putting the disk back into A, same problem remains. Why does this depend on hardware, and more important, on which hardware?

    Read the article

  • Problems with Windows XP Plug and Play devices, maybe relating to MSVCR71.dll

    - by Richard
    I believe this question is unanswered as of now, so I apologize if I've overlooked it. I have been having trouble with some external devices in Windows recently and I'm trying my darnedest to get to the bottom of it. At first, my Zero Tension USB mouse would stop working... as in, the laser on the bottom of it would be on and would register movement, but the pointer on the screen wouldn't budge even an inch. At first this would happen randomly and then correct itself; as time went on, it became more and more frequent. At some point, the computer would make the "doo doo" sound of plugging or unplugging a USB device when the mouse stopped/started working. I dealt with it for a while, and usually if I rebooted my machine the mouse would work again for a day or two. As more time has gone on, the computer does not recognize the mouse AT ALL... I have another mouse that I use with the computer that works just fine, and I cannot figure out why my Zero Tension mouse has failed. I tried plugging the Zero Tension mouse into my Mac and, lo and behold, it works without hesitation and never stops on me... Needless to say, I am stumped. I figured that because I had another mouse I could deal with the loss of my fancy one for now... until my speakers stopped being recognized. I have a set of Logitech speakers plugged into my sound card. Again, every now and again the audio devices would cease to be recognized by my computer, but a reboot would fix the problem. Now my speakers do not work at all with my computer, and I feel like it's time to ask for help. My computer seems to be having a neural shutdown... I can plug in devices and the computer doesn't seem to notice anything wrong, but none of the devices work. I hope this doesn't get any worse! Please help! Also, on a potentially (un)related note, when I start up my machine I get the message "This application has failed to start because Msvcp71.dll was not found. Re-installing the application may fix this problem." in reference to qbupdate.exe. I don't know if that DLL being messed up has anything to do with my mouse or speakers, but I figure it might... anyway, thanks in advance for an answer and let me know if I need to clarify anything. To sum up: the Zero Tension mouse gradually stopped working, the Logitech speakers gradually stopped working, and MSVCR71.dll seems to be messed up. I don't know if any of those are related, but any help would be much appreciated.

    Read the article

  • How do i enable innodb on ubuntu server 10.04

    - by Matt
    Here is my entire my.cnf [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] key_buffer = 224M sort_buffer_size = 4M read_buffer_size = 4M read_rnd_buffer_size = 4M myisam_sort_buffer_size = 12M query_cache_size = 44M # # * Basic Settings # # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. bind-address = 127.0.0.1 # # * Fine Tuning # #key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #max_connections = 100 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M #query_cache_size = 16M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! #general_log_file = /var/log/mysql/mysql.log #general_log = 1 log_error = /var/log/mysql/error.log # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log expire_logs_days = 10 max_binlog_size = 100M #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. 
    # !includedir /etc/mysql/conf.d/
    And here is my show engines call... I have no idea what I need to do to enable InnoDB:
    show engines;
    +------------+---------+----------------------------------------------------------------+--------------+------+------------+
    | Engine     | Support | Comment                                                        | Transactions | XA   | Savepoints |
    +------------+---------+----------------------------------------------------------------+--------------+------+------------+
    | MyISAM     | DEFAULT | Default engine as of MySQL 3.23 with great performance         | NO           | NO   | NO         |
    | MRG_MYISAM | YES     | Collection of identical MyISAM tables                          | NO           | NO   | NO         |
    | BLACKHOLE  | YES     | /dev/null storage engine (anything you write to it disappears) | NO           | NO   | NO         |
    | CSV        | YES     | CSV storage engine                                             | NO           | NO   | NO         |
    | MEMORY     | YES     | Hash based, stored in memory, useful for temporary tables      | NO           | NO   | NO         |
    | FEDERATED  | NO      | Federated MySQL storage engine                                 | NULL         | NULL | NULL       |
    | ARCHIVE    | YES     | Archive storage engine                                         | NO           | NO   | NO         |
    +------------+---------+----------------------------------------------------------------+--------------+------+------------+
    7 rows in set (0.00 sec)
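
    A hedged checklist (not a definitive fix): on MySQL 5.1 the InnoDB engine silently disappears from SHOW ENGINES if it fails to initialise at startup, so the error log is usually the place that says why. Paths below match the my.cnf above:

        grep -Ri innodb /etc/mysql/                         # make sure nothing in conf.d sets skip-innodb
        sudo tail -n 50 /var/log/mysql/error.log            # look for InnoDB startup errors
        ls -lh /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile*

    If the error log complains about an ib_logfile size mismatch (a common cause when innodb_log_file_size has ever been changed), one commonly suggested remedy - at your own risk, and after taking a backup - is to stop mysqld, move the old ib_logfile0/ib_logfile1 aside so InnoDB can recreate them, and start mysqld again.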

    Read the article

  • Missing startup screens and slow bootup/login after using WinClone to expand Bootcamp partition

    - by user26453
    I used WinClone to backup my Bootcamp partition, which was a Windows 7 Ultimate install, on my late 2006 Macbook Pro. I desired to expand the Bootcamp partition's size. It worked reasonably well with some hiccups along the way and some remaining issues. First issue I ran into was the Bootcamp Assistant utility - it would not recreate the partition. This was due to a lack of contiguous space that is required for the Bootcamp partition. As a result I wiped the whole drive and reinstalled Snow Leopard, did the minimum amount of system updates, and created and formated a new Bootcamp partition. WinClone restored the image without complaint and the image was automatically resized to the new partition's size. Second issue I ran into was after the first boot into Windows. The first thing I noticed was that instead of the newer "slick" startup screen (4 colors wisping around, a Windows 7 title), there was more of an old school style startup screen (a progress bar with block increments, yellow/greenish color, nothing else really). The initial bootup to a login screen was slow, perhaps as Windows dealt with the partition changes. After logging in, the screen goes blank and the computer seems to hang for a minute, before completing the login. After subsequent restarts, the slick screen is still missing, boot to login screen is normal, but the time from login to desktop active is still very slow. As a side note, this behavior of a long time from login to the desktop finally loaded I've previously only seen when the computer would try to hibernate and fail (battery is really bad). On the next startup, I would see this behavior, but not subsequently. So a potential cause: I imaged the partition after hibernating out of Windows. From reading some posts/guides on the subject, this was not recommended, and perhaps shouldn't even have worked? Could the partition be stuck in some weird mode as a result that makes the boot issues appear? I've attempted to disable hibernation and restart, trying to delete the .sys file that hibernation uses. Other fixes I'm thinking of attempting are booting a Win7 disc and repairing the install/partition. I can't shake the nagging feeling something isn't right as a result of the modified boot screens and the slow login process.

    Read the article

  • Anyone else experiencing high rates of Linux server crashes during a leap second day?

    - by Bron Gondwana
    POSTMORTEM Anticlimax: only thing that died was my VPN (openvpn) link to the cluster, so there was an exciting few seconds while it re-established. Everything else was fine. Starting back ntp everywhere. If you look at Marco's blog at http://my.opera.com/marcomarongiu/blog/2012/06/01/an-humble-attempt-to-work-around-the-leap-second - he has a solution for phasing the time change over 24 hours using ntpd -x to avoid the 1 second skip. Give that a go if it matters to you. For the systems I run, the jump isn't a problem. Just today, Sat June 30th - starting soon after the start of the day GMT. We've had a handful of blades in different datacentres as managed by different teams all go dark - not responding to pings, screen blank. They're all running Debian Squeeze - with everything from stock kernel to custom 3.2.21 builds. Most are Dell M610 blades, but I've also just lost a Dell R510 and other departments have lost machines from other vendors too. There was also an older IBM x3550 which crashed and which I thought might be unrelated, but now I'm wondering. The one crash which I did get a screen dump from said: [3161000.864001] BUG: spinlock lockup on CPU#1, ntpd/3358 [3161000.864001] lock: ffff88083fc0d740, .magic: dead4ead, .owner: imapd/24737, .owner_cpu: 0 Unfortunately the blades all supposedly had kdump configured, but they died so hard that kdump didn't trigger - and they had console blanking turned on. I've disabled console blanking now, so fingers crossed I'll have more information after the next crash. Just want to know if it's a common thread or "just us". It's really odd that they're different units in different datacentres bought at different times and run by different admins (I run the FastMail.FM ones)... and now even different vendor hardware. Most of the machines which crashed had been up for weeks/months and were running 3.1 or 3.2 series kernels. The most recent crash was a machine which had only been up about 6 hours running 3.2.21. THE WORKAROUND Ok people, here's how I worked around it. disabled ntp: /etc/init.d/ntp stop created http://linux.brong.fastmail.fm/2012-06-30/fixtime.pl (code stolen from Marco, see blog posts in comments) ran fixtime.pl without an argument to see that there was a leap second set ran fixtime.pl with an argument to remove the leap second NOTE: depends on adjtimex. I've put a copy of the squeeze adjtimex binary at http://linux.brong.fastmail.fm/2012-06-30/adjtimex - it will run without dependencies on a squeeze 64 bit system. If you put it in the same directory as fixtime.pl, it will be used if the system one isn't present. Obviously if you don't have squeeze 64 bit... find your own. I'm going to start ntp again tomorrow. As an anonymous user suggested - an alternative to running adjtimex is to just set the time yourself, which will presumably also clear the leapsecond counter.
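
    For reference, the "set the time yourself" alternative mentioned at the end can be as simple as the following sketch (with ntp stopped, re-set the clock to its current value, which on affected kernels clears the pending leap-second flag; whether it also avoids the hard lockups described above is not certain):

        /etc/init.d/ntp stop
        date -s "$(date +"%F %T")"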

    Read the article

  • Scientific Linux - mysql and apache fail to start on reboot

    - by Derek Deed
    Both mysqld and httpd fail to restart following a reboot of the server, although chkconfig --list shows both daemons set to on for run levels 2,3,4 & 5 All control is being exectuted via Webmin Reboot server – MySQl and Apache not running MySQL Database Server MySQL version 5.1.69 MySQL is not running on your system - database list could not be retrieved. ________________________________________ Click this button to start the MySQL database server on your system with the command /etc/rc.d/init.d/mysqld start. This Webmin module cannot administer the database until it is started. Apache Webserver Apache version 2.2.15 Start Apache Search Docs.. Global configuration Existing virtual hosts Create virtual host Select all. | Invert selection. Default Server Defines the default settings for all other virtual servers, and processes any unhandled requests. Address Any Port Any Server Name Automatic Document Root /var/www/drupal Virtual Server Processes all requests on port 443 not handled by other virtual servers. Address Any Port 443 Server Name Automatic Document Root /var/www/drupal Select all. | Invert selection. chkconfig --list mysqld mysqld 0:off 1:off 2:on 3:on 4:on 5:on 6:off chkconfig --list httpd httpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off Manually Restart Apache chkconfig --list httpd httpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off Manually Restart MySQL chkconfig --list mysqld mysqld 0:off 1:off 2:on 3:on 4:on 5:on 6:off Everything now running okay; but no difference in the chkconfig outputs above. I tried: chkconfig --levels 235 httpd on /etc/init.d/httpd start and the same for mysqld but no change in operation. Log files show that the shutdown has been completed successfully; but there is no indication of the service restarting until it is executed manually: 131112 13:59:15 InnoDB: Starting shutdown... 131112 13:59:16 InnoDB: Shutdown completed; log sequence number 0 881747021 131112 13:59:16 [Note] /usr/libexec/mysqld: Shutdown complete 131112 13:59:16 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended 131112 14:09:52 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql 131112 14:09:52 InnoDB: Initializing buffer pool, size = 8.0M 131112 14:09:52 InnoDB: Completed initialization of buffer pool And the Apache logs: [Tue Nov 12 13:59:13 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Tue Nov 12 13:59:13 2013] [notice] Digest: generating secret for digest authentication ... [Tue Nov 12 13:59:13 2013] [notice] Digest: done [Tue Nov 12 13:59:14 2013] [notice] Apache/2.2.15 (Unix) DAV/2 PHP/5.3.3 mod_ssl/2.2.15 OpenSSL/1.0.0-fips configured -- resuming normal operations [Tue Nov 12 13:59:14 2013] [notice] caught SIGTERM, shutting down [Tue Nov 12 14:27:13 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Tue Nov 12 14:27:13 2013] [notice] Digest: generating secret for digest authentication ... [Tue Nov 12 14:27:13 2013] [notice] Digest: done [Tue Nov 12 14:27:13 2013] [notice] Apache/2.2.15 (Unix) DAV/2 PHP/5.3.3 mod_ssl/2.2.15 OpenSSL/1.0.0-fips configured -- resuming normal operations Is anyone able to shed any light on this problem?
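
    A few hedged checks that may show whether the rc symlinks are actually in place for the runlevel the machine boots into - chkconfig --list can report "on" even when something else (a broken symlink, or a service that starts before its dependencies are up) stops the daemon from coming up at boot. Service names match the output above:

        runlevel
        ls -l /etc/rc3.d/ /etc/rc5.d/ | grep -Ei 'httpd|mysqld'
        chkconfig --del httpd  && chkconfig --add httpd  && chkconfig httpd on
        chkconfig --del mysqld && chkconfig --add mysqld && chkconfig mysqld on
        grep -iE 'httpd|mysqld' /var/log/boot.log /var/log/messages | tail -n 40

    If the S-links exist but the services still do not start, the boot.log/messages excerpt around boot time is the next place to look for a startup error that Webmin would not show.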

    Read the article

  • Why is Windows Task Scheduler trying to launch multiple instances?

    - by Paul H
    We have a number of Windows Scheduled tasks that run on one Server 2008 Webserver (not R2) which is in a cluster. We recently moved from an original webserver Cluster to a new webserver Cluser (Server 2008 - not R2). The new webserver (in the cluster) running the Windows Tasks is setup the same as on the original we believe. BUT we now find that on the new Windows Server the Windows Task Scheduler seems to want to instantly start each task three times. If we set the option to queue up a new task we get: Event ID 324 Task Scheduler queued instance "{9a1a8411-b042-45ff-8e6b-89874df230d7}" of task "\Client Reporting" and will launch it as soon as instance "{2bcc3df6-ea3b-4453-90c2-75b8b1946388}" completes. If we set the option to stop an existing task we get: Event ID 323 Task Scheduler stopped instance "{e685a910-b32b-414e-85fd-96bbe54314a2}" of task "\Client Reporting" in order to launch new instance "{4db66265-1f51-4ede-8535-ac7c3cb5c4c1}" . Ticked settings: Allow task to be run on demand. Run task as soon as possible after a scheduled start is missed. Stop the task if running for longer than 1 hour. If the running task does not end when requested force it to stop. Start the task only if the computer is on AC power. Stop the task if the computer switches to battery power. Selected option: If the task is already running - stop the existing instance. Note: We moved the tasks from one server to another in the cluster to see if it the Task Scheduler on the particular server we'd picked causing the problem. Same behaviour. Could it be something to do with the build of the new servers? We have very similar tasks set up on another server cluster that work OK without all this multiple starting. Comparing those tasks to the ones here - there does not seem to be anything obviously different in terms of settings available to us through the options within the Task Scheduler. Trigger: The task is scheduled to be triggered daily, once an hour - and to be stopped if it exceeds this time. Action: Runs a .bat file. What could be causing this/where we can look to see what logic is causing the tasks to start multiple times in this way?
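
    One hedged way to compare the two environments directly is to dump the full task definition on both the old and the new server and diff them (task name taken from the events above):

        schtasks /query /tn "Client Reporting" /v /fo LIST > task-definition.txt
        schtasks /query /tn "Client Reporting" /xml        > task-definition.xml

    In particular, the MultipleInstancesPolicy element and the trigger's repetition settings are worth comparing, since an hourly repetition on a task that is also set to stop after one hour can easily produce the queue/stop events shown above.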

    Read the article

  • DNS NS and domain clarification

    - by thejartender
    I am really trying to get my home web server up and I don't seem to be succeeding. My web server withing my host system is running my web application and is viewable at the current isp ip 88.89.190.171 over WAN indicating that the webapp is fine and that router ports are forwarded. I have set up a DNS on this system with a single name server in the network and I manage to ping it with ping ns.thejarbar.org I have registered this private name server at my current hosting provider. My domain (thejarbar.org) is obviously registered and I have pointed it to my name server. My question here is if it is simply a matter of waiting on propagation for me to be able to ping my domain? Another way of asking this is if the fact that my name server is discoverable indicates that I have set it up correctly to be used? I have tested with dig and dig -x on my host and have A records for the name server. The server is not the Authorative server so I am concerned that this may be the reason why my site is not discoverable. Is there anything else I may need to so still? I only have one ns. currently, but should this succeed I will be purchasing a more stable secondary system to host my development applications. This is my best chance at getting work (freelance development) due to illness) and this I feel is the last step I need to succeed. Please note that this is temporarily a home server and I will most likely be using it as part of a professional setup very soon I will likely have to repeat this question therefore in a prefessional context in a few weeks as nothing will be different other than the fact that I am going to have a server running elsewhere. I am using bind9 and Ubuntu 12.10 and my records are: $TTL 3D @ IN SOA ns.thejarbar.org. email. ( 13112012 28800 3600 604800 38400 ); thejarbar.org. IN A 10.0.0.42 @ IN NS ns.thejarbar,org. yuccalaptop IN A 10.0.0.19 ns IN A 10.0.0.42 gw IN A 10.0.0.138 www IN CNAME thejarbar.org. $TTL 3D 0.0.10.in-addr.arpa. IN SOA ns.thejarbar.org. email. ( 13112012 28800 3600 604800 38400 ); 0.0.10.in-addr.arpa. IN NS ns.thejarbar.org. 42 IN PTR thejarbar.org. 19 IN PTR yuccalaptop.thejarbar.org. 138 IN PTR gw.thejarbar.org. My localhost IP is 10.0.0.42 I wish for this to be my host and name server.
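
    A few hedged dig checks, assuming the zone above is loaded on 10.0.0.42 and the aim is to confirm that the outside world can actually follow the delegation (the public IP is taken from the question):

        dig @10.0.0.42 thejarbar.org SOA +norecurse       # does the local server answer authoritatively?
        dig @10.0.0.42 ns.thejarbar.org A +norecurse
        dig thejarbar.org NS +trace                       # follow the delegation from the roots
        dig @88.89.190.171 thejarbar.org A                # is port 53 on the public IP reachable at all?

    Two things worth checking beyond propagation: the glue for ns.thejarbar.org at the registrar has to point at the public IP (88.89.190.171) with UDP/TCP port 53 forwarded to 10.0.0.42, because outside resolvers cannot reach the 10.0.0.x addresses in the zone; and the NS record in the zone above reads "ns.thejarbar,org." with a comma - if that is not just a transcription slip, named may be refusing to load the zone at all.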

    Read the article

  • IIS httpTracing setting has no effect

    - by digahill
    I'm trying to troubleshoot some performance issues we are having on a specific ASP.NET page with Microsoft's Perfecto Tool on IIS 7.5. Perfecto uses the ETW hooks build in to IIS to report on specific HTTP request, and is working quite well. However, I only want IIS to emit traces for one specific page, say "Default.aspx" in my TestApp Web Application. Following the instructions on the httpTracing man page, I should be able to add the traceUrls element to my root web.config file for TestApp. This doesn't seem to affect tracing whatsoever when I do so. For example, I've used the following settings in the web.config file and every request that hits the IIS server is sending tracing messages that are in turn picked up by Perfecto. (In the System.WebServer section) <httpTracing> <traceUrls> <add value="/Default.aspx" /> </traceUrls> </httpTracing> I then found that the applicationHost.config file on the server had an empty element. I tried removing this element, as well as the httpTracing element in the web.config. After a machine reboot, I was still getting tracing messages! My understanding is that the presense of the httpTracing element is what controlls whether ETW tracing is on or not. I ensured there was no reference to httpTracing in the machine.config, too. At a loss, I decided to remove the IIS Tracing feature with Server Manager. After a reboot, I no longer got ETW tracing. I then reinstalled IIS Tracing feature with Server Manager. As expected, the httpTracing element reappeared in the applicationhost.config file. Tracing messages began sending again for all sites and pages. I then tried to use the traceUrls element at the applicationhost.config level. This also didn't filter out and traces. I must be misunderstanting something key with how httpTracing works. There aren't many resources on the web to help me, either. Can anyone tell me if what I'm trying should work? Has anyone else had success filtering tracing message per page with traceUrls? I should note that I also tried changing with the following setting in applicationhost.config to "allow". It didn't seem to help. <section name="httpTracing" overrideModeDefault="Allow" />

    Read the article

  • VirtualBox - multiple guests, each with a single bridged adapter?

    - by Martin
    I am running a dedicated server (located at Hetzner, Germany) that runs VirtualBox in order to virtualize several services accross multiple virtual guests. Those guests are supposed to communicate with each other (for instance, a virtual web server has to access a virtual database server); to be reachable from the dedicated server (for instance, SSH access); and to access the Internet via the dedicated server (for instance, to download security updates) Currently, this is achieved by having host-only adapter vboxnet0 on the dedicated server and two virtual interfaces on each guest. There, virtual adapter eth0 is attached to vboxnet0 (to achieve (1) and (2)), virtual adapter eth1 is attached to VirtualBox' NAT (to achieve (3)). Via eth0, the guests have access to a DHCP and a DNS server, both running on the dedicated server (there, bound to vboxnet0). This allows me to assign custom IP addresses and names. Via eth1, VirtualBox pushes a proper route that enables each guest to access the Internet (via eth0 on the dedicated server). This setup with two virtual adapters frequently leads to problems and at leasts complicates many things. For instance, on the dedicated server there is OpenVPN which allows to access the virtual machines via the Internet; futhermore, there is Shorwall that controls the incoming and outgoing network traffic between the Internet, the dedicated server, and the individual virtual machines. Not to mention automatic installation of servers via PXE... Therefore, I would prefer to have only one single virtual adapter on each guest which would be used for both incoming and outgoing connections. As far as I understand, one would basically use a bridged interface for that very purpose. Now the question arises: Which interface on the dedicated server would the bridge use? eth0 on the host server is not an option, as this is prohibited by the provider. A virtual interface eth0:0 would not make any sense, as a bridge always uses a physical interface (eth0 in this case). Would it be possible to create a bridged interface in each virtual machine that would "dangle in the air"? Thus, without a complement on the dedicated server? How would I have to set up the routing on the host server? Please note that the host / dedicated server has only one network adapter (eth0) which is connected to the provider's network. Regards, Martin
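
    A hedged sketch of the single-adapter alternative: a bridged adapter always needs a physical host interface to attach to, so a bridge that "dangles in the air" is not really an option - but the same goal can usually be reached by keeping one host-only adapter per guest and doing the routing/NAT on the dedicated server itself. The subnet below is the VirtualBox default for vboxnet0 and is an assumption; Shorewall would need matching masquerade/policy entries:

        # on the dedicated server (host)
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s 192.168.56.0/24 -o eth0 -j MASQUERADE

        # per guest: a single host-only NIC, default gateway = the host's vboxnet0 address
        VBoxManage modifyvm "guest1" --nic1 hostonly --hostonlyadapter1 vboxnet0

    With that in place each guest keeps one interface for guest-to-guest traffic, host access and Internet access, and OpenVPN/PXE only ever have to deal with vboxnet0.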

    Read the article

  • Wireless Network Performance Issues

    - by colithium
    My brand new Dell XPS system has been running flawlessly except its abysmal download speeds. I have tried isolating every variable I could possibly think of but I can't figure out the problem. I've talked to Dell and Belkin without making progress (thought I'd try). Here are the speeds: Note that most of the time, upload speeds are actually much faster than download speeds (around 4.0 Mb/s which is better than most other devices on the network) It's not the ISP. The slowdown happens even when transferring files inside the network. Plus every other wireless device gets approximately this: It's not the wireless router. It's a Lynksis WRT160N v1 with the latest firmware (1.02.2). Plus everything else connected to it has normal speeds. It's not the browser. Speeds are the same in IE, FF, and when transferring files with Windows between computers. It's not the wireless adapter. I've tried a Belkin N Wireless USB Adapter (which works fine on another computer) and a Dell Wireless Draft 802.11n WLAN Mini-Card. They have the same slow speeds when connected to the problem computer. It's not the adapter connection. One adapter used USB and the other is a Mini-Card. It's not antenna placement. With the same antenna position and the same device, I get different speeds when connected to the problem computer vs a good computer. Plus everything reports the connection speed as at least 11Mbps and good signal strength. I've tried disabling IPv6 since it sometimes causes weird problems. I've tried disabling Windows Firewall/anti-virus. I've ensured the computer has updated drivers for both adapters. I've ensured that Windows is up to date and so is the BIOS. For the USB adapter I ensured that that USB port functioned at normal speeds with other USB devices. What else could it possibly be? I finally received my copy of Windows 7 and will be trying that. I'd rather not install Windows 7 because of a particular program that will stop working so a solution besides that is welcome. Specs: Vista x64 Core i7 920 6GB RAM 500GB HD GTX 260

    Read the article

  • Chrome Problems on Windows 8

    - by Akshat Mittal
    There are a lot of problems with Chrome (24.0.1312.14 beta, though all of this happened before the update as well) on Windows 8. The problems and explanations are listed below. Google Chrome re-draw time: when I switch tabs, the window retains the content of the previous tab and displays that even if I move my mouse; it only refreshes (re-draws) when there is a change on the webpage (like on hover) or I do a select-all (or scroll). One thing to note is that the hover and select happen on the real page and not on the retained image-like thing of the older webpage. Chrome is slow and laggy: websites such as Facebook and Twitter (and more) have become extremely laggy in Chrome on Windows 8. When I was using Windows 7, I never experienced any lag. Also, when using HTML5 websites, transitions (the -webkit-transition in CSS) sometimes run extremely slowly. Plugins crash: plugins like Flash Player and Shockwave Player that are built into Chrome crash a lot, even when doing simple tasks like playing YouTube videos or displaying ads. Chrome crashes: Chrome has crashed over 100 times in the past month; it just crashes randomly, or at least I don't know the reason. Random page crashes: Chrome shows chrome://crash/ (copy-paste this into the address bar) on random pages even when the page has only just loaded. I understand that this can happen on heavy HTML5 or JS websites, but what about HTML-only websites! Most of the things above happen on Super User as well, and Super User never had any problem when using Chrome on Windows 7. UPDATE 1: @magicandre1981 suggested trying to disable hardware acceleration. I tried it, and it somewhat helped but didn't fix the problem; I am still experiencing all the above issues, only less frequently (maybe because Chrome restarted completely). UPDATE 2: @avirk asked me to try a stable version of Chrome and Firefox. I didn't experience any lag in Firefox, and only a little (negligible) lag in Chrome 22 (maybe because it's a fresh copy of Chrome; I haven't used it much). Is anybody else experiencing such issues? Does anybody have a solution to any of these? Any help is appreciated. Thank you!

    Read the article

  • Pxe net install Centos with Static IP

    - by Stu2000
    I seem to be unable to perform a kickstart installation of centos5.8 with a netinstall. It correctly gets into the text installer, but keeps sending out a request for the dhcp server and failing. I have tried to manually set the IP everywhere. Here is my pxelinux.cfg file DEFAULT menu PROMPT 0 MENU TITLE Ubuntu MAAS TIMEOUT 200 TOTALTIMEOUT 6000 ONTIMEOUT local LABEL centos5.8-net kernel /images/centos5.8-net/vmlinuz MENU LABEL centos5.8-net append initrd=/images/centos5.8-net/initrd.img ip=192.168.1.163 netmask=255.255.255.0 hostname=client101 gateway=192.168.1.1 ksdevice=eth0 dns=8.8.8.8 ks=http://192.168.1.125/cblr/svc/op/ks/profile/centos5.8-net MENU end and here is my kickstart file: # Kickstart file for a very basic Centos 5.8 system # Assigns the server ip: 192.211.48.163 # DNS 8.8.8.8, 8.8.4.4 # London TZ install url --url http://mirror.centos.org/centos-5/5.8/os/i386 lang en_US.UTF-8 keyboard us network --device=eth0 --bootproto=static --ip=192.168.1.163 --netmask=255.255.255.0 --gateway=192.168.1.1 --nameserver=8.8.8.8,8.8.4.4 --hostname=client1-server --onboot=on rootpw --iscrypted $1$Snrd2bB6$CuD/07AX2r/lHgVTPZyAz/ firewall --enabled --port=22:tcp authconfig --enableshadow --enablemd5 selinux --enforcing timezone --utc Europe/London bootloader --location=mbr --driveorder=xvda --append="console=xvc0" # The following is the partition information you requested # Note that any partitions you deleted are not expressed # here so unless you clear all partitions first, this is # not guaranteed to work part /boot --fstype ext3 --size=100 --ondisk=xvda part pv.2 --size=0 --grow --ondisk=xvda volgroup VolGroup00 --pesize=32768 pv.2 logvol swap --fstype swap --name=LogVol01 --vgname=VolGroup00 --size=528 --grow --maxsize=1056 logvol / --fstype ext3 --name=LogVol00 --vgname=VolGroup00 --size=1024 --grow %packages @base @core @dialup @editors @text-internet keyutils iscsi-initiator-utils trousers bridge-utils fipscheck device-mapper-multipath sgpio emacs Here is my dhcp file: ddns-update-style interim; allow booting; allow bootp; ignore client-updates; set vendorclass = option vendor-class-identifier; subnet 192.168.1.0 netmask 255.255.255.0 { host tower { hardware ethernet 50:E5:49:18:D5:C6; fixed-address 192.168.1.163; option routers 192.168.1.1; option domain-name-servers 8.8.8.8,8.8.4.4; option subnet-mask 255.255.255.0; filename "/pxelinux.0"; default-lease-time 21600; max-lease-time 43200; next-server 192.168.1.125; } } Is it impossible to prevent it asking for a dynamic ip before trying to install from the net? Perhaps there is an error in of my files? My dhcp server is set to ignore client-updates, and is set to only works with one mac address whilst testing.

    Read the article

  • Port forwarding using a BT Home Hub 2.0 (Supplied to new BT Infinity Customers in the UK)

    - by Jasarien
    I don't usually have trouble with port forwarding, I've been able to do it successfully on a number of different routers, including Linksys, Belkin, Netgear and Apple (Time Capsule / Airport Extreme). So I'm quite confused here. I had been using my Apple Time Capsule as my router for a few years now, with several port mappings all working fine. But it died recently, so I've had to resort to using the BT Home Hub 2.0 that was supplied with my BT Infinity broadband subscription. The forwarding interface for the Home Hub is simplified for the most part, allowing you to select an application or game and assign it to a particular computer on the network which you choose from a list that the Home Hub has 'discovered'. My Mac Pro has a manually assigned static IP 192.168.1.4 and my router is static at 192.168.1. I have chosen SSH from the list of applications and assigned it to my Mac Pro (the only computer in the list currently). The Home Hub also has a feature to keep a DNS service updated, and I have set it to keep my external IP address updated on my hostname. This is how I had it setup in the past with other routers and not had trouble before. I am able to ping my hostname (and external IP) from outside the network and get a response. But when I try to connect using SSH, the connection times out. The Home Hub also has "Firewall settings". The currently selected setting is: Default: Allow all outgoing connections and block all incoming traffic. Games and application sharing is allowed. But I've tried changing it to: Disabled: All traffic is allowed to pass through your BT Home Hub to your devices. Note: you’ll still need to use the games and application sharing feature to make sure that certain applications work properly. And the connection still times out... So frustrating. The OS X firewall on my Mac is disabled, so I don't think that's in the way. I have tried setting the port forwarding manually, instead of relying on the preset "SSH" option (incase it's not using the port I expect). So I set up my own "application" (as the Home Hub calls it) and forwarded external port 22 TCP to internal port 22 TCP to 192.168.1.4 - but that just gives the same result - unable to connect. Next, with the router's firewall disabled and OS X's firewall disabled, I ran the Shields Up test (https://www.grc.com/x/ne.dll?bh0bkyd2) and the result was that all my service ports (0 - 1055) are in 'Stealth' mode. I.e. nothing even exists at my IP as far as any outsider is concerned... Strange. The only thing that seems to work is setting my Mac Pro as the DMZ - which I don't want to do for obvious reasons. Any help with this would be extremely appreciated, thanks.

    Read the article

  • Cannot change default application association of certain file types

    - by H.B.
    After associating my MP3 files with MPlayer, I can no longer change that association using the "Choose default program..." dialog; the "Always use this [...]" checkbox is always greyed out (Control Panel > Default Programs > Associate a file type or protocol with a program does not let me change it either). The same happened for MP4s but not for MKVs, for example, and if I associate my MP3s with other applications like VLC it does not get blocked. I would really like to know why that is and whether I can avoid it beforehand (thankfully I already know ways to fix it afterwards). Edit: another observation: the blocking programs (I managed to block it with an association to Visual Studio as well) do not appear in the Recommended Programs of the open-with dialog (and Explorer said: "The current program is not recommended, but I won't let you change it, ha!"). Edit: a screenshot as requested: as you can see in the top left (if you know the icon of MPlayer), the file is currently associated with MPlayer. Edit: ways to fix it (note: this question is not about fixing it). Using the Default Programs control panel: Control Panel > Default Programs > Set Default Programs, select WMP, Choose defaults for this program, check .mp3. This should reassociate the files with WMP and you can then create a new association in Explorer. Using the registry (as always, keep your hands off it unless you know what you are doing or are fine with accidentally breaking your system): HKEY_CURRENT_USER > Software > Microsoft > Windows > CurrentVersion > Explorer > FileExts > .mp3. Here you could, for example, clean up the open-with list; the current default program also seems to be saved here in the key UserChoice, where you can change the ProgId string to another application - you can associate it with WMP by entering WMP11.AssocFile.MP3 or just pick another application right away. You may need to mess with permissions on the key, though, if you cannot change the ProgId value.
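
    A hedged sketch of inspecting and clearing the per-user override from the command line, matching the registry path mentioned above (export the key first so it can be put back):

        reg query  "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.mp3\UserChoice"
        reg export "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.mp3" mp3-fileext-backup.reg
        reg delete "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.mp3\UserChoice" /f

    After UserChoice is removed, the "Always use this..." checkbox is normally selectable again the next time Open with... is used - though, as the question says, that is the fix-afterwards path rather than an explanation of why certain programs trigger the lock-out in the first place.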

    Read the article

  • How to get wireless working (properly) with Sitecom Wireless USB micro adapter 300N on Windows 7?

    - by Timo
    The question says it all, but more detail follows ;) I've got a new computer that runs Windows 7 64-bit (Home Edition) and I'd like to connect it to my wireless home network (Sitecom wireless gigabit router 300N wl-352 v1 002) with a Sitecom wireless USB micro adapter 300 wl-352 V2 001.

    After installing the router (i.e. connecting it to the modem and power) and ensuring that wireless is indeed enabled, I installed the driver for the USB adapter on the new computer described above. After the installation (drivers and utility on CD) completed successfully, I rebooted my computer and inserted the USB adapter. After discovering the right network and connecting to it using the network key, a connection is successfully made (using the Sitecom 300N USB Wireless LAN utility). In the LAN utility I can see that the signal strength is approximately 50% and the connection quality is approximately 80%.

    Judging from these numbers I assumed that all was fine and started to use the connection (reading news on nu.nl, a Dutch news site), but I noticed that the connection was lost several times in a very short time span. Each time the connection was resumed, returning to the 50/80 percent numbers described above; however, the website was not loaded completely and often a timeout would be reported. When inspecting the drivers through Device Management (Windows' Apparaatbeheer in Dutch) there were no errors or warnings; everything seemed to be in order. In an attempt to solve this, I downloaded the latest drivers for the USB adapter, but the problems remained.

    Finally I tried to connect the computer with a Siemens Gigaset USB Adapter 108. This process was troublesome, since I had to download a driver (from the site above) and tell Windows 7 to use the Windows Vista driver when installing the new hardware, as there is (was) no Windows 7 driver available. This resulted in a usable connection, although not a very stable one when reconfiguring the router - which took the form of selecting a different wireless channel on the router, even using the Sitecom utility mentioned above to check whether other networks were communicating on that channel (and thus picking a channel not used by other networks). Changing back to the Sitecom USB adapter again gave no result. Note that this means (I think) that I could use the internet connection with the Siemens adapter, so the problem is not in the router.

    So: how do I get wireless working (properly) with the Sitecom Wireless USB micro adapter 300N on Windows 7?

    PS Sorry - I should be able to post more than one link; I had links in place for the USB adapter, router and the Siemens adapter as well, but I'm not (yet) allowed to post these... (The site says I can post one link, but only when no links are present will it allow me to post the question...)
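    To put a number on how often the link actually drops (rather than relying on the utility's signal/quality figures), one rough approach is to probe the router's web interface at a fixed interval and log any failures with a timestamp. The sketch below assumes Python 3 and a router reachable at 192.168.1.1 on port 80 - both are assumptions, so adjust them to the actual network.

    import socket, time, datetime

    GATEWAY = "192.168.1.1"   # assumed router address - adjust to your network
    PORT = 80                 # the router's web interface
    INTERVAL = 5              # seconds between probes

    while True:
        stamp = datetime.datetime.now().strftime("%H:%M:%S")
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(2)
        try:
            s.connect((GATEWAY, PORT))
            print(stamp, "ok")
        except OSError as err:
            print(stamp, "DROP:", err)   # a burst of these marks a reconnect cycle
        finally:
            s.close()
        time.sleep(INTERVAL)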

    Read the article

  • Impact of the L3 cache on performance - worth a dual-processor system?

    - by Dan Nissenbaum
    I will be purchasing a new high-end system, and I would like to have a better sense of whether a dual-processor Xeon system (I am looking at the new, high-end Xeon E5-2687W) might, realistically, provide a noticeable performance improvement due to the doubling of the L3 cache (20 MB per CPU). (This is in addition to the occasional added advantage of doubling the cores and RAM.)

    My usage scenario is, roughly, that I have many background applications running at any time - 3 or 4 data compression/backup applications, a low-impact web server, one or two virtual machines at any given time (usually fairly idle), and perhaps 20 utility programs that utilize a noticeable (but small) portion of the CPU cores. In total, when I am not actively using the computer, about 25% of the total CPU power is utilized on my current i7-970 6-core (12-thread) system. When I am doing routine work, the CPU utilization often exceeds 50%, and occasionally hits 75%-80%.

    The Xeon E5-2687W is not only a second-generation i7 (so it should improve performance for that reason), but also has 8 cores (16 threads) rather than 6. For this reason, I expect to run into the 75% CPU range even less frequently. Nonetheless, the ability to double the cores and the RAM is a consideration. In the end, however, I believe this decision comes down to whether the doubling of the L3 cache will provide a noticeable improvement.

    There are many benchmarks, and a lot of discussion, regarding CPU power. However, I find very little discussion of L3 cache utilization, and of how increases in the L3 cache (such as doubling it with dual processors) affect performance. For example: if there are only two processes running, but each benefits from a large L3 cache (as might be the case for background processes that frequently scan the file system), perhaps the overall system performance might noticeably improve with dual CPUs - even if only a single core is active on each CPU - due to each process having double the effective L3 cache.

    I am hoping that someone has a sense of the benefits of increasing (or doubling) the L3 cache size. Note: the CPU I am considering (the Xeon E5-2687W) has 20 MB of L3 cache, so a system with dual CPUs would have 40 MB of L3 cache.
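    One rough way to get a feel for the effect is to time random accesses over working sets that straddle the 20 MB L3 boundary. The following is an approximate micro-benchmark sketch, assuming Python 3 with NumPy is available; interpreter and NumPy overhead blur the absolute numbers, but the jump in per-access cost once the array no longer fits in L3 is usually still visible.

    import time
    import numpy as np

    rng = np.random.default_rng(0)

    # Working-set sizes from well inside a 20 MB L3 to well beyond it.
    for mb in (4, 16, 32, 64, 128):
        n = (mb * 2**20) // 8                      # number of float64 elements
        data = np.ones(n)
        idx = rng.integers(0, n, size=2_000_000)   # random indices defeat the prefetcher
        data[idx].sum()                            # warm-up pass
        t0 = time.perf_counter()
        data[idx].sum()
        dt = time.perf_counter() - t0
        print(f"{mb:4d} MB working set: {dt / len(idx) * 1e9:6.2f} ns per access")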

    Read the article
