Search Results

Search found 9667 results on 387 pages for 'hardware monitoring'.


  • SmartOS reboots spontaneously

    - by Alex
    I run a SmartOS system on a Hetzner EX4S (Intel Core i7-2600, 32G RAM, 2x3Tb SATA HDD). There are six virtual machines on the host: [root@10-bf-48-7f-e7-03 ~]# vmadm list UUID TYPE RAM STATE ALIAS d2223467-bbe5-4b81-a9d1-439e9a66d43f KVM 512 running xxxx1 5f36358f-68fa-4351-b66f-830484b9a6ee KVM 1024 running xxxx2 d570e9ac-9eac-4e4f-8fda-2b1d721c8358 OS 1024 running xxxx3 ef88979e-fb7f-460c-bf56-905755e0a399 KVM 1024 running xxxx4 d8e06def-c9c9-4d17-b975-47dd4836f962 KVM 4096 running xxxx5 4b06fe88-db6e-4cf3-aadd-e1006ada7188 KVM 9216 running xxxx5 [root@10-bf-48-7f-e7-03 ~]# The host reboots several times a week with no crash dump in /var/crash and no messages in the /var/adm/messages log. Basically /var/adm/messages looks like there was a hard reset: 2012-11-23T08:54:43.210625+00:00 10-bf-48-7f-e7-03 rsyslogd: -- MARK -- 2012-11-23T09:14:43.187589+00:00 10-bf-48-7f-e7-03 rsyslogd: -- MARK -- 2012-11-23T09:34:43.165100+00:00 10-bf-48-7f-e7-03 rsyslogd: -- MARK -- 2012-11-23T09:54:43.142065+00:00 10-bf-48-7f-e7-03 rsyslogd: -- MARK -- 2012-11-23T10:14:43.119365+00:00 10-bf-48-7f-e7-03 rsyslogd: -- MARK -- 2012-11-23T10:34:43.096351+00:00 10-bf-48-7f-e7-03 rsyslogd: -- MARK -- 2012-11-23T10:54:43.073821+00:00 10-bf-48-7f-e7-03 rsyslogd: -- MARK -- 2012-11-23T10:57:55.610954+00:00 10-bf-48-7f-e7-03 genunix: [ID 540533 kern.notice] #015SunOS Release 5.11 Version joyent_20121018T224723Z 64-bit 2012-11-23T10:57:55.610962+00:00 10-bf-48-7f-e7-03 genunix: [ID 299592 kern.notice] Copyright (c) 2010-2012, Joyent Inc. All rights reserved. 2012-11-23T10:57:55.610967+00:00 10-bf-48-7f-e7-03 unix: [ID 223955 kern.info] x86_feature: lgpg 2012-11-23T10:57:55.610971+00:00 10-bf-48-7f-e7-03 unix: [ID 223955 kern.info] x86_feature: tsc 2012-11-23T10:57:55.610974+00:00 10-bf-48-7f-e7-03 unix: [ID 223955 kern.info] x86_feature: msr 2012-11-23T10:57:55.610978+00:00 10-bf-48-7f-e7-03 unix: [ID 223955 kern.info] x86_feature: mtrr 2012-11-23T10:57:55.610981+00:00 10-bf-48-7f-e7-03 unix: [ID 223955 kern.info] x86_feature: pge 2012-11-23T10:57:55.610984+00:00 10-bf-48-7f-e7-03 unix: [ID 223955 kern.info] x86_feature: de 2012-11-23T10:57:55.610987+00:00 10-bf-48-7f-e7-03 unix: [ID 223955 kern.info] x86_feature: cmov 2012-11-23T10:57:55.610995+00:00 10-bf-48-7f-e7-03 unix: [ID 223955 kern.info] x86_feature: mmx 2012-11-23T10:57:55.611000+00:00 10-bf-48-7f-e7-03 unix: [ID 223955 kern.info] x86_feature: mca 2012-11-23T10:57:55.611004+00:00 10-bf-48-7f-e7-03 unix: [ID 223955 kern.info] x86_feature: pae 2012-11-23T10:57:55.611008+00:00 10-bf-48-7f-e7-03 unix: [ID 223955 kern.info] x86_feature: cv8 The problem is that sometimes the host loses the network interface on reboot so we need to perform a manual hardware reset to bring it back. We do not have physical or virtual access to the server console - no KVM, no iLO or anything like this. So, the only way to debug is to analyze crash dumps/log files. I am not a SmartOS/Solaris expert so I am not sure how to proceed. Is there any equivalent of Linux netconsole for SmartOS? Can I just redirect the console output to the network port somehow? Maybe I am missing something obvious and crash information is located somewhere else.
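
    Since there is no crash dump in /var/crash, a first step might be to confirm that crash dumps are actually configured and to check the illumos fault manager, which records panics and hardware faults across reboots. A minimal sketch of those checks (standard illumos commands; exact output and dump-device paths will vary on SmartOS):

    ```sh
    # Show the current crash dump configuration: dump device, savecore directory,
    # and whether savecore is enabled at all.
    dumpadm

    # List fault management error events (CPU/memory/PCI errors, panics) retained
    # across reboots, most recent last.
    fmdump -eV | tail -50

    # Try to extract any unsaved dump still sitting on the dump device.
    savecore -v
    ```

    If dumpadm shows dumps disabled or pointing at a missing device, that alone would explain an empty /var/crash after a panic.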


  • How to Eliminate Tape Backup and Off-site Storage Service?

    - by Daniel Lucas
    PLEASE READ UPDATE AT THE BOTTOM. THANKS! ;) Environment Info (all Windows): 2 sites 30 servers site #1 (3TB of backup data) 5 servers site #2 (1TB of backup data) MPLS backbone tunnel connecting site #1 and site #2 Current Backup Process: Online Backup (disk-to-disk) Site #1 has a server running Symantec Backup Exec 12.5 with four 1TB USB 2.0 disks. BE jobs for full backups run nightly on all servers in site #1 to these disks. Site #2 backs up to a central file server there using software they already had when we purchased them. A BE job pulls that data nightly to site #1 and stores them on said disks. Off-site Backup (tape) Connected to our backup server is a tape drive. BE backs up the external disks to tape once a week which gets picked up by our off-site storage company. Obviously we rotate two tape libraries, one is always here and one is always there. Requirements: Eliminate the need for tape and off-site storage service by doing disk-to-disk at each site and replicating site #1 to site #2 and vice versa. Software based solution as hardware options have been too pricey (ie, SonicWall, Arkeia). Agents for Exchange, SharePoint, and SQL. Some Ideas So Far: Storage DroboPro at each site with an initial 8TB of storage (these are expandable up to 16TB at present). I like these because they are rackmountable, allow disparate drives, and have iSCSI interfaces. They are relatively cheap too. Software Symantec Backup Exec 12.5 already has all the agents and licenses we need. I'd like to keep using it unless there is a better solution, similarly priced, that does everything BE does plus deduplication and replication. Server Because there is no more need for a SCSI adapter (for tape drive) we are going to virtualize our backup server as it is currently the only physical machine save for SQL boxes. Problems: When replicating between sites we want as little data as possible to go across the pipe. There is no deduplication or compression in what I have laid out here so far. The files being replicated are BE's virtual tape libraries from our disk-to-disk backup. Because of this each of those huge files will go across the wire every week because they change every day. And Finally, the Question: Is there any software out there that does deduplication, or at least compression, to handle just our site-to-site replication? Or, looking at our setup, is there any other solution that I am missing that might be cheaper, faster, better? Thanks. Sorry so long. UPDATE 2: I've set a bounty on this question to get it more attention. I'm looking for software that will handle replication of data between two sites using the least amount of data possible (either compression, deduplication, or some other method). Something similar to rsync would work but it needs to be native to Windows and not a port involving shenanigans to get up and running. Prefer a GUI based product and I don't mind shelling out a few bones if it works. Please, answers that meet the above criteria only. If you don't think one exists or if you think I'm being to restrictive keep it to yourself. If after seven days there is no answer at all, so be it. Thanks again everyone. UPDATE 2: I really appreciate everyone coming forward with suggestions. There is no way for me to try all of these before the bounty expires. For now I'm going to let this bounty run out and whoever has the most votes will get the 100 rep points. Thanks again!
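
    The core requirement here is delta transfer plus compression between the two sites. Not a Windows-native answer to the question, but purely to illustrate the mechanism being asked about, this is roughly what the equivalent looks like with rsync (paths and hostnames are placeholders):

    ```sh
    # Delta-transfer the virtual tape library files to site #2.
    # --inplace rewrites only the changed blocks of the large VTL files instead of
    # recreating them, and -z compresses the data on the wire.
    rsync -avz --inplace --partial --progress \
        /backups/vtl/ backupuser@site2:/backups/vtl/
    ```

    Any Windows product that advertises block-level or byte-level replication with compression is doing essentially the same thing under the hood.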


  • Ubuntu server loses exactly 5 minutes once in a while

    - by Harold Smith
    I noticed that my server, an Ubuntu server 12.04, was losing time. I figured the hardware clock was off or maybe dying due to a faulty CMOS battery. I installed NTP to ensure the drift would be corrected, but to no avail. During a day it would lose 20 minutes or so. To debug, I created a small cron job to check against a remote servers time, which I knew to be correct. The script calculates the difference in seconds between local and remote time. The result was interesting. It seems to be losing exactly 5 minutes several times during the day. Look at this log (difference from remote server noted in seconds): Tue Oct 23 03:30:02 CEST 2012: 284 Tue Oct 23 03:35:02 CEST 2012: 284 Tue Oct 23 03:40:01 CEST 2012: 285 Tue Oct 23 03:45:02 CEST 2012: 285 Tue Oct 23 03:50:02 CEST 2012: 285 Tue Oct 23 03:55:02 CEST 2012: 284 Tue Oct 23 04:00:02 CEST 2012: 284 Tue Oct 23 04:05:01 CEST 2012: 285 Tue Oct 23 04:10:01 CEST 2012: 285 Tue Oct 23 04:15:02 CEST 2012: 585 Tue Oct 23 04:20:02 CEST 2012: 584 Tue Oct 23 04:25:02 CEST 2012: 584 Tue Oct 23 04:30:02 CEST 2012: 584 Tue Oct 23 04:35:01 CEST 2012: 585 Tue Oct 23 04:40:01 CEST 2012: 585 Tue Oct 23 04:45:02 CEST 2012: 585 Tue Oct 23 04:50:02 CEST 2012: 584 Tue Oct 23 04:55:02 CEST 2012: 584 Tue Oct 23 05:00:02 CEST 2012: 584 Tue Oct 23 05:05:01 CEST 2012: 585 Tue Oct 23 05:10:01 CEST 2012: 585 Tue Oct 23 05:15:02 CEST 2012: 585 Tue Oct 23 05:20:02 CEST 2012: 584 Tue Oct 23 05:25:02 CEST 2012: 584 Tue Oct 23 05:30:02 CEST 2012: 584 Tue Oct 23 05:35:01 CEST 2012: 585 Tue Oct 23 05:40:01 CEST 2012: 585 Tue Oct 23 05:45:02 CEST 2012: 584 Tue Oct 23 05:50:02 CEST 2012: 584 Tue Oct 23 05:55:02 CEST 2012: 584 Tue Oct 23 06:00:02 CEST 2012: 584 Tue Oct 23 06:05:03 CEST 2012: 584 Tue Oct 23 06:10:02 CEST 2012: 584 Tue Oct 23 06:15:01 CEST 2012: 585 Tue Oct 23 06:20:02 CEST 2012: 584 Tue Oct 23 06:25:02 CEST 2012: 584 Tue Oct 23 06:30:02 CEST 2012: 584 Tue Oct 23 06:35:02 CEST 2012: 584 Tue Oct 23 06:40:02 CEST 2012: 584 Tue Oct 23 06:45:01 CEST 2012: 585 Tue Oct 23 06:50:02 CEST 2012: 584 Tue Oct 23 06:55:01 CEST 2012: 585 Tue Oct 23 07:00:02 CEST 2012: 584 Tue Oct 23 07:05:02 CEST 2012: 584 Tue Oct 23 07:10:02 CEST 2012: 584 Tue Oct 23 07:15:02 CEST 2012: 584 Tue Oct 23 07:20:02 CEST 2012: 584 Tue Oct 23 07:25:02 CEST 2012: 584 Tue Oct 23 07:30:01 CEST 2012: 585 Tue Oct 23 07:35:02 CEST 2012: 584 Tue Oct 23 07:40:02 CEST 2012: 584 Tue Oct 23 07:45:02 CEST 2012: 584 Tue Oct 23 07:50:02 CEST 2012: 584 Tue Oct 23 07:55:02 CEST 2012: 584 Tue Oct 23 08:00:01 CEST 2012: 585 Tue Oct 23 08:05:02 CEST 2012: 584 Tue Oct 23 08:10:02 CEST 2012: 584 Tue Oct 23 08:15:02 CEST 2012: 584 Tue Oct 23 08:20:02 CEST 2012: 584 Tue Oct 23 08:25:02 CEST 2012: 584 Tue Oct 23 08:30:01 CEST 2012: 585 Tue Oct 23 08:35:02 CEST 2012: 584 Tue Oct 23 08:40:02 CEST 2012: 584 Tue Oct 23 08:45:02 CEST 2012: 584 Tue Oct 23 08:50:02 CEST 2012: 584 Tue Oct 23 08:55:02 CEST 2012: 584 Tue Oct 23 09:00:02 CEST 2012: 584 Tue Oct 23 09:05:03 CEST 2012: 584 Tue Oct 23 09:10:02 CEST 2012: 584 Tue Oct 23 09:15:02 CEST 2012: 584 Tue Oct 23 09:20:02 CEST 2012: 584 Tue Oct 23 09:25:02 CEST 2012: 584 Tue Oct 23 09:30:01 CEST 2012: 584 Tue Oct 23 09:35:02 CEST 2012: 584 Tue Oct 23 09:40:02 CEST 2012: 584 Tue Oct 23 09:45:02 CEST 2012: 584 Tue Oct 23 09:50:02 CEST 2012: 584 Tue Oct 23 09:55:02 CEST 2012: 584 Tue Oct 23 10:00:01 CEST 2012: 584 Tue Oct 23 10:05:02 CEST 2012: 584 Tue Oct 23 10:10:07 CEST 2012: 584 Tue Oct 23 10:15:02 CEST 2012: 584 Tue Oct 23 10:20:02 CEST 2012: 884 Tue Oct 23 10:25:02 CEST 
2012: 884 Tue Oct 23 10:30:02 CEST 2012: 883 Tue Oct 23 10:35:01 CEST 2012: 884 Tue Oct 23 10:40:02 CEST 2012: 884 Tue Oct 23 10:45:02 CEST 2012: 884 Tue Oct 23 10:50:02 CEST 2012: 884 Tue Oct 23 10:55:02 CEST 2012: 1184 Tue Oct 23 11:00:02 CEST 2012: 1183 Tue Oct 23 11:05:01 CEST 2012: 1184 Tue Oct 23 11:10:02 CEST 2012: 1184 Tue Oct 23 11:15:02 CEST 2012: 1184 Tue Oct 23 11:20:02 CEST 2012: 1184 This does not seem to be faulty CMOS battery in my opinion. But what do you think?
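
    For reference, the cron check described above is not shown; a minimal sketch of what such a script might look like, assuming passwordless SSH to a reference host whose clock is trusted ("timehost" is a placeholder):

    ```sh
    #!/bin/sh
    # Log the offset, in seconds, between this machine and a trusted remote host.
    remote_s=$(ssh timehost date +%s)
    local_s=$(date +%s)
    echo "$(date): $((remote_s - local_s))" >> /var/log/timedrift.log
    ```

    Run every five minutes from cron, a script like this produces exactly the kind of log shown above. Repeated jumps of exactly 300 seconds look more like the running system clock being stalled or stepped (lost timer interrupts, a paused VM, or similar) than like CMOS drift, since the RTC battery only matters at boot time.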


  • Asus P8P67 Rev. 3.1 Motherboard issues powering on and saving settings

    - by Scott
    Edit: New Information Have some updated information from the old question below: So basically my issue right now is somewhat similar, but I've been able to rule out a couple of things. I don't think this has anything to do with light on the motherboard. No matter what lights are on/off on the motherboard when the computer is off, they don't affect this issue. The main power LED on the Mobo is always lit when the power supply is turned on, and that's what matters anyway. Even when the main power LED is on, the PC will NOT boot up the first time I hit the power switch. I have to go reset the power supply (make all lights turn off on the Mobo and back on), and THEN hit the power switch. Then everything boots up. Also, the BIOS settings are reset every time this happens. Asus Tech Support told me to try jumping the power with something metal to try and rule out that it's a problem with the connectors getting power, or if it's a problem with the case power switch pins - haven't done that yet though. Any ideas? This is a lot simpler than it was before when I thought it had to do with certain LED indicators for RAM, EPU, etc. Original Question So I built my new desktop just about 3 weeks ago. I've been having a few issues which I think are all related to my motherboard, an Asus P8P67 Revision 3.1, but I'm not 100% sure as this is really the first from-scratch build I've ever done. I've posted these questions on the Asus forums, Asus Tech Support, and the Corsair forums as well as I thought it might have something to do with my power supply at one point. None of these avenues have solved my issue until now completely, so I thought I'd come here to see what you guys think. Here's what's happening: My computer is off, and I go to power it on. I press the power switch on the case (Antec Nine Hundred), and nothing seems to happen. Upon further inspection, I see that what this actually does is simply turn on the EPU LED on my motherboard, but doesn't actually boot anything up. I then have to go and flip the main power switch on the power supply off and back on. What this does is turn off all lights on the Motherboard after a few seconds, and turn them all back on (including the EPU LED that was off before I hit the power switch the first time). Now, hitting the power switch works. The machine boots up fine, and starts going through the boot up process. As a side note: My Motherboard is set to "Force BIOS", and every single time I change this to do the opposite, the next time my computer boots up that change reverts itself. I think this may be due to the fact that I am doing the hard reset on the power supply each time, but I'm not sure. I had thought that the Motherboard would keep its BIOS settings unless you did something to the Mobo itself - so this may be a related issue, or something else completely. That's basically it. Once it's on, it's on. It works fine, recognizes all of my hardware, and runs great. All fans/lights in the case work great, and I'm getting standard readings. The next time I go to shut the computer down however, I can expect the same exact process getting it up and running, including being forced to go into BIOS and exit again before I can load Windows. Another side note: If I power on my computer using the power switch DIRECTLY after shutting it down, it powers right back on (I think this is because the EPU LED light doesn't have time to turn off). 
It looks as if as long as the EPU LED is lit up on the motherboard before I hit the power switch on the case, the thing will boot up fine (although this doesn't explain the "Force BIOS" issue, at least it's something). Any ideas? Thanks guys. P.S. - System Specs Asus P8P67 Rev. 3.1 Motherboard Intel Core i7 2600K Processor 16GB (4x4GB) G-Skill 1600 RAM NVIDIA EVGA GTX 570 Video Card Crucial 128GB SSD HD Corsair 850W Power Supply Seagate 2TB HDD


  • mysqld service crashes on restart, after importing mysqldump #innodb

    - by ubunut
    I have 2 mysql servers. Let's call them server01 & server02. Both have the same configuration: mysqladmin Ver 8.42 Distrib 5.1.61, for redhat-linux-gnu on x86_64 [client] default-character-set=utf8 [mysqld] datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock user=mysql # Disabling symbolic-links is recommended to prevent assorted security risks symbolic-links=0 max_allowed_packet = 16M default-character-set=utf8 default-collation=utf8_unicode_ci character-set-server=utf8 collation-server=utf8_unicode_ci default-storage-engine = InnoDB innodb_data_home_dir = /var/lib/mysql innodb_log_group_home_dir = /var/lib/mysql innodb_data_file_path = ibdata1:10M:autoextend innodb_additional_mem_pool_size = 2M innodb_log_file_size = 5M innodb_log_buffer_size = 8M innodb_lock_wait_timeout = 50 innodb_flush_log_at_trx_commit = 1 innodb_buffer_pool_size = 700M table_cache = 300 thread_cache_size = 4 query_cache_size = 200m query_cache_limit = 10m [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid I make a mysqldump on server01: mysqldump -uuser -ppassword --all-databases testservers.sql (most tables in these databases are innodb, some of the mysql.* tables are Innodb too) Then I import the testservers.sql on server02: mysql -uuser < testservers.sql (mysqld has been started with --skip-network). So far so good, I can login into mysql & everything seems to be ok. BUT when I exit to the shell and execute service mysqld restart, The service fails to start. stack-trace in /var/log/mysqld.log: 121022 14:53:19 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql 121022 14:53:19 [Warning] '--default-character-set' is deprecated and will be removed in a future release. Please use '--character-set-server' instead. 121022 14:53:19 [Warning] '--default-collation' is deprecated and will be removed in a future release. Please use '--collation-server' instead. 12:53:19 UTC - mysqld got signal 11 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=8384512 read_buffer_size=131072 max_used_connections=0 max_threads=151 thread_count=0 connection_count=0 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 338324 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. Thread pointer: 0x267e630 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 7fff3efe0be0 thread_stack 0x40000 /usr/libexec/mysqld(my_print_stacktrace+0x29) [0x84bd89] /usr/libexec/mysqld(handle_fatal_signal+0x483) [0x6a0be3] /lib64/libpthread.so.0() [0x338d60f500] /usr/libexec/mysqld(ha_resolve_by_name(THD*, st_mysql_lex_string const*)+0x81) [0x6956e1] /usr/libexec/mysqld(open_table_def(THD*, st_table_share*, unsigned int)+0xe0a) [0x60e5ba] /usr/libexec/mysqld(get_table_share(THD*, TABLE_LIST*, char*, unsigned int, unsigned int, int*)+0x20b) [0x602b0b] /usr/libexec/mysqld() [0x603597] /usr/libexec/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x7a1) [0x6079a1] /usr/libexec/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5d0) [0x608570] /usr/libexec/mysqld(open_and_lock_tables_derived(THD*, TABLE_LIST*, bool)+0x6a) [0x60877a] /usr/libexec/mysqld(plugin_init(int*, char**, int)+0x622) [0x715af2] /usr/libexec/mysqld() [0x5bd3b2] /usr/libexec/mysqld(main+0x1b3) [0x5bfc93] /lib64/libc.so.6(__libc_start_main+0xfd) [0x338d21ecdd] /usr/libexec/mysqld() [0x5087b9] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (0): is an invalid pointer Connection ID (thread ID): 0 Status: NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. 121022 14:53:19 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended A typical mysqdump entry looks like this: DROP TABLE IF EXISTS `adodb_logsql`; /*!40101 SET @saved_cs_client = @@character_set_client */; /*!40101 SET character_set_client = utf8 */; CREATE TABLE `adodb_logsql` ( `id` bigint(10) unsigned NOT NULL AUTO_INCREMENT, `created` datetime NOT NULL, `sql0` varchar(250) NOT NULL DEFAULT '', `sql1` text, `params` text, `tracer` text, `timer` decimal(16,6) NOT NULL DEFAULT '0.000000', PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='to save some logs from ADOdb'; /*!40101 SET character_set_client = @saved_cs_client */; IF I change all occurrences of "ENGINE=InnoDB" to "ENGINE=MyISAM" before import, then the service has no problem restarting. I'm quite puzzled as to what's happening, maybe I'm just an idiot, then by all means tell me so. Any help would be greatly appreciated!
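
    The backtrace dies in ha_resolve_by_name/open_table_def during plugin_init, i.e. while the freshly started server is reading its own mysql.* system tables, which the --all-databases dump overwrote on server02. One hedged way to test that theory is to re-import without touching the destination's system schema and then let MySQL check what remains (database names below are placeholders):

    ```sh
    # Dump only the application schemas, leaving server02's own mysql.* tables alone.
    mysqldump -uuser -p --databases app_db1 app_db2 > appdbs.sql

    # Import on server02, then check/upgrade the system tables for good measure.
    mysql -uuser -p < appdbs.sql
    mysql_upgrade -uuser -p
    ```

    If mysqld restarts cleanly after that, the crash was coming from the imported system tables rather than from the InnoDB application data.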


  • Symantec Protection Suite and System Recovery 2011 Desktop Edition

    - by rihatum
    I am re-posting this as my previous question was being treated as if I am "Shopping or seeking Product Recommendations" even though I was NOT - BTW they have deleted my comments too which were not offensive in nature. anyway - I have re-phrased some parts of my question and I hope SF Admins "Do Not Modify / Edit" this one - will be most grateful for that. I have a lot of respect for the People who visit this SITE and help others ! Just To clarify : Just to go by SF rules - I am not seeking someone to Design this solution, I am simply seeking real world examples, experiences, technical expert opinions / suggestions, any tips or tricks they may have or any problems they may have faced while doing something similar above with these products. I am also not asking for Capacity Planning for Storage, We have done some research and I am seeking Expert Assurance / Suggestions. We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000 - 4000 workstations (Windows7 32 and 64) with a few 100s with Windows XP 32/64 Bit. I have read the implementation guide for SEP and have read tech-notes for Desktop Recovery 2011. Our team have planned to deploy this as follows : 1 x dedicated SQL 2008R2 for Symantec Endpoint Protection (Instead of using the Embedded Database) 1 x Dedicated SQL 2008R2 for Symantec Desktop Recovery 2011 (Instead of using the Embedded Database) 1 x Dedicated W2K8 R2 Box for the SEPM (Symantec Endpoint Protection Manager - Mgmt. APP) 1 x Dedicated W2K8 R2 Box for the Symantec Desktop Recovery 2011 Management Application Agent Deployment : As per Symantec Documentation for both of the above, an agent can be pushed via the Mgmt. Application (provided no firewalls are blocking ports required etc. - we have Windows firewall disabled already). Server Hardware : Per SQL Server : 16GB RAM + SAS DISKS + Dual XEON, RAID-10 for the SQL DB or I can always mount a LUN from our existing Hitachi or EMC SAN. SEPM Server : 16GB RAM + SAS DISKS + DUAL XEON System Recovery MGMT SERVER : 16GB RAM + SAS DISKS + DUAL XEON Above is the initial plan we have for 3000 - 4000 client workstation (Windows) Now my Questions :-) a) If we had these users distributed amongst two sites with AD DC / GC in each site, How would I restrict SEPM and Desktop Mgmt. solution to only check for users in their respective site ? b) At present all users are under one building but we are going to move some dept. to a new location (with dedicated connectivity), How would we control which SEPM / MGMT Server is responsible for which site ? c) We have netbackup in our environment backing up other servers, I am planning to protect these 4 (2 x SQL, 1 x SEPM, 1 x System Recovery Mgmt. Server) via netbackup or I can use System recovery 2011 server edition on all 4 of these boxes as well. (License is not an issue as we have the complete symantec portfolio included in our license). d) Now - Saving Desktop backups - What strategies have you implemented ? Any best practice recommendation for a large user base ? I was thinking to either mount a LUN from our Hitachi SAN on the Symantec Recovery Server itself or backup to the users hard drive locally and then copy it over to a network location ? Suggestions welcome :-) If you have anything to add / correct - that will be really helpful before diving into the actual implementation phase. Will be most grateful with your suggestions, recommendations and corrections with above - Many Thanks !


  • What seems to be plaguing my hard drives?

    - by Craig
    I'm in a bit of a tech nightmare here. I know oodles about software, but not much more than the above-average user about hardware.

    I recently had to toss an old desktop of mine. It was gradually getting slower and slower, and after being shut down for long periods it would choke up on startup: sometimes a disk read error, sometimes "no OS found", and so on. Restarting it about 5-15 times would eventually get it to boot properly. Weird. I also noticed that startup programs were going missing, Dropbox was reindexing my entire folder, and Backblaze was backing up fewer files than it should. This led me to believe it was probably a hard drive issue. I wondered why I'd have hard drive issues and settled on the recent power surges and outages as the likely cause; I'm sure those do a number on a drive.

    I bought a new desktop recently. It's not a beast or anything, but it's enough for what I do: an eMachines (I know, I know) Ultra-Slim (http://www.amazon.com/eMachines-Ultra-Slim-ER1401-57-Desktop-PC/dp/B00475OG9U). It's ideal for me because it's small and portable, and it comes with an AC adapter and battery, like a laptop. Just to be safe, I bought an uninterruptible power supply on top of that, so it's essentially protected from any outages that might scramble the drive.

    I set it up a few days ago and have spent the time since perfecting settings, downloading the usual applications, etc. Two days ago I noticed Dropbox reindexing my entire Dropbox folder. I installed both Dropbox and Backblaze on this system, but it is much more lightweight than the old one, with only about 15 third-party applications installed. I thought Dropbox and Backblaze might be stressing the system, so I turned off Backblaze; Dropbox still runs into this endless reindex issue. On a recent reboot, two applications also failed to start on startup. And, much like my old desktop, every 3rd or 4th reboot I'm forced into a chkdsk. This makes me incredibly nervous.

    What could possibly be wrong with my old, years-old desktop that would immediately cause the same issues on my new one? I've covered the basics: the room is well air-conditioned, I run routine virus scans, and I'd like to think I take good care of my systems. What is this issue that is haunting me? There's always the possibility that the new desktop shipped with a junk hard drive, but that seems far too coincidental.
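
    Before blaming software, SMART data from the drive would settle whether the disk itself is the problem on either machine. A quick sketch, assuming smartmontools is installed (builds exist for Windows as well; /dev/sda is a placeholder for the drive in question):

    ```sh
    # Print SMART health status, attribute table and error log for the drive.
    # Reallocated_Sector_Ct, Current_Pending_Sector and the SMART error log are
    # the values most worth checking on a disk that keeps forcing chkdsk runs.
    smartctl -a /dev/sda

    # Optionally start a long self-test and re-run `smartctl -a` when it finishes.
    smartctl -t long /dev/sda
    ```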


  • Linux: Apple Wireless A1314 Fn key not registered, looks like software bug

    - by ramplank
    I'm trying to set up my Apple Wireless Keyboard with my Kubuntu systems. These are PC hardware powered by Intel Atom and Intel i5 respectively. The keyboard has a US keyboard layout and has model number A1314 written on the back. It takes two AA batteries. I'm saying that because it appears there are multiple types of model A1314. I have tried this on a 10.04, 11.04, 11.10 and 12.04 system with no success. Every time using a bluetooth dongle and the KDE bluetooth notification tray applet, the keyboard can be connected. In both cases it shows up as "Apple Wireless Keyboard". Almost everything works as expected, in fact, I'm typing on it right now. But one thing doesn't: The Fn key. I'd like to use Fn + Down Arrow as PgDn / Page Down, I understand this is default behaviour on Apple keyboards. And of course I'd like the same for Page Up, Home and End. I'll stick to Page Down in my example. I used the xev tool to see the keycodes the system receives, and if I press on Fn nothing happens, and nothing is registered. If I press Fn + Down Arrow, xev only registers the down arrow. Here's the output from my 11.04 system to illustrate: Press just the Fn key: no output Press Down Arrow key: KeyPress event, serial 36, synthetic NO, window 0x4400001, root 0x15d, subw 0x4400002, time 2699773, (44,45), root:(1352,298), state 0x10, keycode 116 (keysym 0xff54, Down), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: False KeyRelease event, serial 36, synthetic NO, window 0x4400001, root 0x15d, subw 0x4400002, time 2699860, (44,45), root:(1352,298), state 0x10, keycode 116 (keysym 0xff54, Down), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False Press Fn+Down Arrow Keys together: KeyPress event, serial 36, synthetic NO, window 0x4400001, root 0x15d, subw 0x4400002, time 2701548, (44,45), root:(1352,298), state 0x10, keycode 116 (keysym 0xff54, Down), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: False KeyRelease event, serial 36, synthetic NO, window 0x4400001, root 0x15d, subw 0x4400002, time 2701623, (44,45), root:(1352,298), state 0x10, keycode 116 (keysym 0xff54, Down), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False I've been searching this forum and other Linux-related forums for hours but I still have not found a solution. I mostly found advice on how to fix this when using an actual apple laptop or desktop, but I don't have that. They said to try something like the following echo 2 > /sys/module/hid_apple/ ... But since there's no hid_apple directory present on my systems, I've needed to modprobe hid_apple first. That didn't help either. I'm cool with changing some config files, or compiling my own patched kernel if that's necessary. I currently have a 10.04 and 12.04 system available to test. The same issue occurs when hooked up to Windows 7. Fn key still does nothing, not by itself or in combination with other keys. With some AutoHotkey fiddling, I was able to confirm the key is registered as pressed, but ignored by default. A custom AutoHotkey script can fix that. But AutoHotkey is only for Windows, I want my problem fixed on Linux. Hooked up to an iPad 2 it only works in combination with the F1-F12 keys. Not with the arrow keys. If the screen of the ipad is off, and I press just the Fn key, the screen will come on, so the key itself is registered as pressed. 
So to sum up my question: Can anyone help me get Page Up, Page Down, Home and End to work on this keyboard, when that requires me to use an Fn key which is currently not registered?
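
    For what it's worth, the `echo 2 > /sys/module/hid_apple/ ...` advice quoted above refers to the hid_apple driver's fnmode parameter; that driver is also what translates Fn+arrow into PageUp/PageDown/Home/End on Apple keyboards. A hedged sketch of the usual form, including making it persistent, is below. Note this only helps if hid_apple is actually the driver bound to this keyboard, which may not be the case here given that the Fn key never even reaches xev:

    ```sh
    # fnmode: 0 = Fn key disabled, 1 = media keys by default, 2 = F1-F12 by default.
    sudo modprobe hid_apple
    echo 2 | sudo tee /sys/module/hid_apple/parameters/fnmode

    # Persist the setting across reboots.
    echo "options hid_apple fnmode=2" | sudo tee /etc/modprobe.d/hid_apple.conf
    sudo update-initramfs -u
    ```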


  • Where is my VMware-ws FreeNAS CIFS(ZFS) bottle-neck?

    - by maka
    Background: I'm building a quiet HTPC + NAS that is also supposed to be used for general computer usage. I'm so far generally happy with things, it was just that I was expecting a little better IO performance. I have no clue if my expectations are unreal. The NAS is there as a general purpose file storage and as a media server for XBMC and other devices. ZFS is a requirement. Question: Where is my bottle-neck, and is there anything I can do config wise, to improve my performance? I'm thinking VM-disk settings could be something but I really have no idea where to go since I'm neither experienced with FreeNAS nor VMware-WS. Tests: When I'm on the host OS and copy files (from the SSD) to the CIFS share, I get around 30 Mbytes/sec read and write. When I'm on my laptop laptop, wired to the network, I get about the same specs. The test I've done are with a 16 GB ISO, and with about 200 MB of RARs and I've tried avoiding the RAM-cache by reading different files than the ones I'm writing ( 10 GB). It feels like having less CPU cores is a lot more efficient, since the resource manager in Windows reports less CPU-usage. With 4 cores in VMware, CPU usage was 50-80%, with 1 core it was 25-60%. EDIT: HD ActiveTime was quite high on SSD so I moved the page file, disabled hibernate and enabled Win DiskCache both on SSD and RAID. This resulted in no real performance difference for one file, but if i transferred 2 files the total speed went up to 50 Mbytes/s vs ~40. The ActiveTime avg also went down a lot (to ~20%) but has now higher bursts. DiskIO is on ~ 30-35 Mbytes/s avgs, with ~100Mb bursts. Network is on 200-250Mbits/s with ~45 active TCP connections. Hardware Asus F2A85-M Pro A10-5700 16GB DDR3 1600 OCZ Vertex 2 128GB SSD 2x Generic 1tb 7200 RPM drives as RAID0 (in win7) Intel Gigabit Desktop CT Software Host OS: Win7 (SSD) VMware Worksation 9 (SSD) FreeNAS 8.3 VM (20GB VDisk on SSD) CPU: I've tried 1, 2 and 4 cores. Virtualisation engine, Preferred mode: Automatic 10,24Gb ram 50Gb SCSI VDisk on the RAID0, VDisk is formatted as ZFS and exposed through CIFS through FreeNAS. NIC Bridge, Replicate physical network state Below are two typical process print-outs while I'm transfering one file to the CIFS share. last pid: 2707; load averages: 0.60, 0.43, 0.24 up 0+00:07:05 00:34:26 32 processes: 2 running, 30 sleeping Mem: 101M Active, 53M Inact, 1620M Wired, 2188K Cache, 149M Buf, 8117M Free Swap: 4096M Total, 4096M Free PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND 2640 root 1 102 0 50164K 10364K RUN 0:25 25.98% smbd 1897 root 6 44 0 168M 74808K uwait 0:02 0.00% python last pid: 2746; load averages: 0.93, 0.60, 0.33 up 0+00:08:53 00:36:14 33 processes: 2 running, 31 sleeping Mem: 101M Active, 53M Inact, 4722M Wired, 2188K Cache, 152M Buf, 5015M Free Swap: 4096M Total, 4096M Free PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND 2640 root 1 76 0 50164K 10364K RUN 0:52 16.99% smbd 1897 root 6 44 0 168M 74816K uwait 0:02 0.00% python I'm sorry if my question isn't phrased right, I'm really bad at these kind of things, and it is the first time I post here at SU. I also appreciate any other suggestions to something, I could have missed.
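
    To locate the bottleneck it helps to measure each layer in isolation: the virtual network path, and the ZFS pool with Samba out of the picture. A rough sketch (pool, file and host names are placeholders; iperf can be run from FreeNAS or any client if available):

    ```sh
    # 1. Raw TCP throughput from the Windows host or laptop to the FreeNAS VM
    #    (start `iperf -s` inside the VM first).
    iperf -c freenas-vm -t 30

    # 2. Raw pool write and read speed inside the VM, bypassing CIFS entirely.
    dd if=/dev/zero of=/mnt/tank/testfile bs=1M count=8192
    dd if=/mnt/tank/testfile of=/dev/null bs=1M
    ```

    If both of those are well above 30 MB/s, the limit is in Samba/CIFS or the VMware virtual NIC; if the dd numbers are already low, it is the virtual disk layout on the host RAID 0.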


  • Rosewill RSV-S5 and its transfer speeds

    - by DoomStone
    I have just bought a Rosewill RSV-S5 and installed 5 x 1.5 TB Western Digital Green disks in it. I then created a RAID 5 across all of them with the software that came with the hardware. The RAID itself works fine, but it is SLOW: I can only get a maximum of about 25 MB/s, and if SABnzbd+ is downloading at 5 MB/s it struggles to stream a normal DivX (700 MB) movie. Is this normal, or is something wrong?

    Edit: It should be able to handle 3 Gbps = 384 megabytes/second.

    Edit 2: As you can see, I'm only downloading at 3.76 MB/s and trying to watch V s02e08 (720p), but it is completely unwatchable: I can see 30 seconds, then it buffers for 20 seconds.

    Edit: Other information that might be required: I'm running Windows Server 2008 R2, optimized for program performance. Windows is installed on a 60 GB SSD. I have a 50 Mb/s internet connection and a 1 Gb/s LAN, all connected with Cat6 Ethernet cables. The MCE is using a Gigabyte EP35C-DS3R motherboard with 2 GB DDR2 RAM.

    Edit 3: I used a chunk size of 128 KB.

    Edit 4: I found this review on Newegg:

    "Pros: Enclosure for 5x2TB hard drive is fine. This is basically a rebranded San Digital TR5M-B product. For support Rosewill tells you to contact San Digital. No direct support from Silicon Image for the computer raid card.

    Cons: Includes computer Silicon Image 3132 raid card, extremely slow raid 5 write (our tests ~10MB/s). Compare to regular internal local drive write 30-60MB/s. We basically dumped the Sil3132 card and replaced with High Point RocketRaid 622 card for extra $69.99. Note for RR622, turn off ECRC (end to end CRC check) for card to work on IBM xserver. What took 12hrs to copy now took 2-3hrs. San Digital realized the problem and has the newer model TR5M-BP TowerRaid Plus that comes with High Point RocketRaid 622 card. Rosewill should discontinue this product and go with TR5M-BP. Could not get Silicon Image raid management software to work with complicated 2008R2 server with 10 NICs, application doesn't know how to talk to localhost port with all those NICs. No updates from Silicon Image and support from San Digital ignored. Gave up on Sil3132 card. Save yourself from a lot of headaches, get the RR622 card too if you are going to buy this product.

    Other Thoughts: The newer model is TR5M-BP TowerRaid Plus, comes with High Point RocketRaid 622 raid card for the PC instead of Silicon Image Sil3132. According to San Digital, raid 5 performance for Sil3132 read 80MB/s write 19MB/s, and RR622 read 154MB/s write 149MB/s. Our RR622 tests gave (8TB raid 5) write ~80-110MB/s copying 40GB file took 8mins."

    So I have now ordered a HighPoint RocketRAID 622 2P ext SATA III and hope it will solve my problems.


  • BSOD Code 16, artifacts all over the place (gtx 260)

    - by belinea
    I have a following, quite dated rig E8400 Core 2 Duo cpu Intel Dragontail Peak DP35DP motherboard on Intel Bearlake P35 chipset 4GB ram Geforce 260gtx Corsair 650W PSU Windows 7 64 bit The following things has happened in the last few days. I first decided to update my Nvidia drivers to the latest version. That was 4 days ago. PC worked fine for 2 days and I was able to play few games as well without any problems. Then 2 days ago a first crash happened while playing the new XCOM game. BSOD code 16. Just the blue screen, no artifacts. PC rebooted and worked well again, I continued playing this game for another 2 hours and went to sleep. Next evening I tried to play some BF3 multiplayer (use to play on LOW settings). Approx. 10 minutes into the game red/pink-ish artifacts appeared on the screen and game quit to desktop. Restarted the game and another 3-4 minutes afterwards another crash to desktop but this time followed shortly by BSOD Code 16. From that moment I started to seeing artifacts on random startups, including Windows loading screen and the BIOS itself. Windows would still load but soon enough it would BSOD on a simple task like opening a Internet browser. Today I get tons of artifacts (little small red dashes all over the screen) on BIOS, loading screen, normal Windows mode as well as safe mode. I suspect it wouldn't be drivers but I tried removing and sweeping them entirely in the safe mode. PC would still start with artifacts all over the place but would load the normal mode, just without the driver, in the default lowest resolution. As soon as proper Nvidia drivers are installed though and PC rebooted, Windows doesn't load at all as BSOD now appears on loading screen. However, again, if I go to safe mode and remove drivers, normal mode launches fine. So obviously crash happens only on high resolution. I opened my machine this morning and gave it a proper cleaning even though it wasn't heavily dusted. It didn't help and number of artifacts seems to increase with every PC restart. I write these words in safe mode, which works, but I have to look through all the red dots and dashes. I don't have built in GPU chipset so I can't try removing my Geforce card nor can I borrow A GPU from anyone else. What are my options? I was looking into getting a completely new rig around Christmas so I'm not freaking out about this. If everything points to hardware issue I may simply decide to get the new machine earlier and don't bother with fixing this one. However it would be great to learn more if this is indeed situation that has slim chances of getting sorted. I realize BSOD Code 16 is rather popular topic online but every story seems a bit different and there can be number of issues with it. Hence a new thread.


  • Clarification On Write-Caching Policy, Its Underlying Options And How It Applies To Hard Drives And Solid-State Drives

    - by Boris_yo
    In last week after doing more research on subject matter, I have been wondering about what I have been neglecting all those years to understand write-caching policy, always leaving it on default setting. Write-caching policy improves writing performance and consists of write-back caching and write-cache buffer flushing. This is how I understand all the above, but correct me if I erred somewhere: Write-through cache / Write-through caching itself is not a part of write caching policy per se and it's when data is written to both cache and storage device so if Windows will need that data later again, it is retrieved from cache and not from storage device which means only improved read performance as there is no need for waiting for storage device to read required data again. Since data is still written to storage device, write performance isn't improved and represents no risk of data loss or corruption in case of power failure or system crash while only data in cache gets lost. This option seems to be enabled by default and is recommended for removable devices with no need to use function of "Safely Remove Hardware" on user's part. Write-back caching is similar to above but without writing data to storage device, periodically releasing data from cache and writing to storage device when it is idle. In my opinion this option improves both read and write performance but represents risk if power failure or system crash occurs with the outcome of not only losing data eventually to be written to storage device, but causing file inconsistencies or corrupted file system. Write-back caching cannot be enabled together with write-through caching and it is not recommended to be enabled if no backup power supply is availabe. Write-cache buffer flushing I reckon is similar to write-back caching but enables immediate release and writing of data from cache to storage device right before power outage occurs but I don't know if it applies also to occasional system crash. This option seem to be complementary to write-back cache reducing or potentially eliminating risk of data loss or corruption of file system. I have questions about relevance of last 2 options to today's modern SSDs in order to get best performance and with less wear on SSDs: I know that traditional hard drives come with onboard cache (I wonder what type of cache that is), but do SSDs also come with cache? Assuming they do, is this cache faster than their NAND flash and system RAM and worth taking the risk of utilizing it by enabling write-back cache? I read somewhere that generally storage device's cache is faster than RAM, but I want to be sure. Additionally I read that write-caching should be enabled since current data that is to be written later to NAND flash is kept for a while in cache and provided there is data that gets modified a lot before finally being written, holding of this data and its periodic release reduces its write times to SSD thereby reducing its wearing. Now regarding to write-cache buffer flushing, I heard that SSD controllers are so fast by themselves that enabling this option is not required, because they manage flushing. However, once again, I don't know if SSDs have their own onboard cache and whether or not it is faster than their NAND flash and system RAM because if it is, keeping this option enabled would make sense. 
Recently I have posted question about issue with my Intel 330 SSD 120GB which was main reason to do deeper research having suspicion of write-caching policy being the culprit of SSD's freezing issue assuming data being released is what causes freezes. Currently I have write-cache enabled and write-cache buffer flushing disabled because I believe SSD controller's management of write-cache flushing and Windows write-cache buffer flushing are conflicting with each other: Since I want to troubleshoot in small steps to finally determine the source of issue, I have decided to start with write-caching policy and the move to drivers, switching to AHCI later on and finally disabling DIPM (device initiated power management) through registry modification thanks to @TomWijsman
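
    As a side-by-side illustration of the same two concepts outside Windows (not a fix for the Intel 330 issue): on Linux the device's volatile write-back cache is a single switch, and the "buffer flushing" question is whether the OS issues cache-flush commands to that cache. A hedged sketch with hdparm, using /dev/sda as a placeholder:

    ```sh
    # Query whether the drive's volatile write cache is enabled
    # (the analogue of Windows' "Enable write caching on the device").
    hdparm -W /dev/sda

    # Enable or disable it. Disabling is safer on power loss but slows writes;
    # with it enabled, the OS relies on flush commands, which corresponds to
    # leaving Windows' write-cache buffer flushing turned on.
    hdparm -W1 /dev/sda
    hdparm -W0 /dev/sda
    ```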


  • Choice of an OS for a home ZFS NAS

    - by OlafM
    I am preparing a home NAS with an old Athlon 64 X2 3800+, 4 GB ECC RAM, Asus M2V MX motherboard, and a single 3 TB WDC Green (another one as mirror may be installed in the future). It's the cheapest solution I found that includes ECC memory and the higher energy consumption is offset by the lower (zero) cost of acquisition. The system will be used for: music storage and stream to other desktop computers; storage of the scanned dia slides (3-4k slides, 180 MB TIFF each one plus reduced quality JPEG version); stream of these photos to a local iPad 2 (maybe Plex App? not yet sure); (one additional) remote backup via rsync/ssh or ZFS send/receive. It will be controlled via remote ssh, maybe VNC, no monitor attached. Absolute requirement is a reliable ZFS solution, plus the ability to easily install packets/software/virtual machines and to update remotely (I will be the admin and I don't live near the NAS). I have mainly three options: NAS4free/FreeNAS OpenIndiana Solaris Express 11 (yeah yeah I know the license requirements, I will write a perl script on it to count it as development machine). Problems: NAS4free/FreeNAS (I tested only NAS4free) required embedded installation for remote upgrading, but full install for easy addition of software packets. Since I need at least AirVideo Server (linux/win) and Plex App (win/linux) to stream the photos and some videos to iPad (they both require virtualbox), but I cannot be there to install updates, NAS4free/FreeNAS are excluded. http://www.nas4free.org/general_information.html explains the issue: embedded can be remotely updated, full cannot. Solaris has also another advantage: Crashplan client supports Solaris and I'm already using it for other backups. I would like to leave the option open, even if I will be doing backups probably through zfs send/receive. NexentaStor was left out because zfs send/receive are not included in the free version. The question is now Solaris 11 Express over OpenIndiana. To ease the management, I will be using http://www.napp-it.org Which one would you suggest and why? I found lots of informations and it's difficult for me to decide. I think (from the napp-it manual) that Solaris has some additional options for SMB shares, but are they really needed at home? I think I won't even use ACLs, since normal unix-style permissions are enough. OpenIndiana has maybe more frequent updates (Solaris offers only security updates between releases), but again, do I need them? I don't think so. Moreover, this is a NAS that has to work and nothing else, I cannot risk having problems that require me to access the server. Isn't OpenIndiana a bit more... cutting edge (in the Solaris world)? I'm just asking, no need to focus on this for the answer :-) I would limit myself to these two options (SE11.1/OI) also because I will be making a NAS for me in the future (where high performances with Mac shares are also required) and Solaris has kernel support for AFP. I will use this server to gather experience as well. After this long question, thanks in advance! If you need additional info, let me know and I will update this post. UPDATES Given the first answers, I will strongly suggest the person paying the hardware to insert a second HD. Better 2x2TB than 1x3TB (3 TB is oversized anyway). I was trying to keep the initial costs down to spread them over a longer period, but better having something good from the beginning.
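
    For the remote-backup requirement, zfs send/receive works the same way on OpenIndiana and Solaris 11 Express, so it does not really separate the two options. A minimal sketch of full and incremental replication over SSH (pool, dataset, snapshot and host names are placeholders):

    ```sh
    # Initial full replication of the dataset to the remote machine.
    zfs snapshot tank/media@2013-01-01
    zfs send tank/media@2013-01-01 | ssh backuphost zfs receive -F backup/media

    # Later: send only the blocks changed since the previous snapshot.
    zfs snapshot tank/media@2013-01-08
    zfs send -i tank/media@2013-01-01 tank/media@2013-01-08 | \
        ssh backuphost zfs receive backup/media
    ```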


  • Setting up home DNS with Ubuntu Server

    - by Zeophlite
    I have a webserver (with static IP 192.168.1.5), and I want to have my machines on my local network to be able to access it without modifying /etc/hosts (or equivalent for Windows/OSX). My router has Primary DNS server 192.168.1.5 Secondary DNS server 8.8.8.8 (Google's public DNS). Nginx is set up to server websites externally as *.example.com Internally, I want *.example.local to point to the server. My webserver has BIND9 installed, but I'm unsure of the settings. I've been through various contradicting tutorials, and so most of my settings have been clobbered. I've stripped out the lines which I'm confused about. The tutorials I looked at are http://tech.surveypoint.com/blog/installing-a-local-dns-server-behind-a-hardware-router/ and http://ubuntuforums.org/showthread.php?t=236093 . They mostly differ on what should be put in /etc/bind/zones/db.example.local and /etc/bind/zones/db.192, so I've left the conflicting lines out below. Can someone suggest what the correct lines are to give my above behaviour (namely *.example.local pointing to 192.168.1.5)? /etc/network/interfaces auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 192.168.1.5 netmask 255.255.255.0 broadcast 192.168.1.255 gateway 192.168.1.254 /etc/hostname avalon /etc/resolv.conf # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN /etc/bind/named.conf.options options { directory "/var/cache/bind"; forwarders { 8.8.8.8; 8.8.4.4; }; dnssec-validation auto; auth-nxdomain no; # conform to RFC1035 listen-on-v6 { any; }; }; /etc/bind/named.conf.local zone "example.local" { type master; file "/etc/bind/zones/db.example.local"; }; zone "1.168.192.in-addr.arpa" { type master; file "/etc/bind/zones/db.192"; }; /etc/bind/zones/db.example.local $TTL 604800 @ IN SOA avalon.example.local. webadmin.example.local. ( 5 ; Serial, increment each edit 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 604800 ) ; Negative Cache TTL /etc/bind/zones/db.192 $TTL 604800 @ IN SOA avalon.example.local. webadmin.example.local. ( 4 ; Serial, increment each edit 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 604800 ) ; Negative Cache TTL ; What do I need to add to the above files so that on a laptop on the internal network, I can type in webapp.example.local, and be served by my webserver? EDIT I made several changes to the above files on the webserver. /etc/network/interfaces (end of file) dns-nameservers 127.0.0.1 dns-search example.local /etc/bind/zones/db.example.local (end of file) @ IN NS avalon.example.local. @ IN A 192.168.1.5 avalon IN A 192.168.1.5 webapp IN A 192.168.1.5 www IN CNAME 192.168.1.5 /etc/bind/zones/db.192 (end of file) IN NS avalon.example.local. 73 IN PTR avalon.example.local. As a side note, my spare Win7 machine was able to connect directly to webapp.example.local, but for a Ubuntu 13.10 machine, I had to make the following changes as well (not on the webserver, but on a separate machine): /etc/nsswitch.conf before hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4 after hosts: files dns /etc/NetworkManager/NetworkManager.conf before dns=dnsmasq after #dns=dnsmasq The issue remains that its not wildcard DNS, and so I have to add entries to /etc/bind/zones/db.example.local for webapp1, webapp2, ...
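
    For the "*.example.local points to 192.168.1.5" goal specifically, the zone needs NS and A records plus a wildcard entry; the stripped-out lines are likely where those lived. A hedged sketch of the records one might append to db.example.local, remembering to bump the Serial each time:

    ```sh
    # Append host and wildcard records to the zone file.
    printf '%s\n' \
        '@        IN  NS  avalon.example.local.' \
        'avalon   IN  A   192.168.1.5' \
        '*        IN  A   192.168.1.5' \
        >> /etc/bind/zones/db.example.local

    # Sanity-check the zone and reload BIND.
    named-checkzone example.local /etc/bind/zones/db.example.local
    rndc reload
    ```

    One note on the later EDIT: a CNAME must point at a hostname rather than an IP address, so `www IN CNAME 192.168.1.5` would normally be written `www IN CNAME avalon.example.local.` (or simply as another A record).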


  • Windows 8.1 Update 1 Disk Usage 100%

    - by Gookjin Jeong
    Background Information / Computer Specs I have a 14-inch Samsung Series 5 Ultra. Core i5 CPU, 750GB HDD, 8GB RAM, Intel HD Graphics 4000. I've had the computer for about 1.5 years with no major problems. Problem The issue appeared at the beginning of April this year, when I updated the OS to Windows 8.1 Update 1 (not from 8 to 8.1). After being on continually (except for at night, when I put it on sleep mode) for about 48 hours, the disk usage as seen by Task Manager hits 100%. When this happens, everything from opening/closing applications to typing and even bringing up the start screen by pressing the Windows key becomes extremely slow. The only way to make the disk usage decrease is to restart the computer. Then the problem repeats. I've used my current laptop (as well as my previous laptops) this way -- putting it on sleep mode at night and restarting it only when Windows needs to install updates -- for a long time. So I know the 100% disk usage is not due to the way I use the computer. The thing that causes the spike varies. Sometimes it's System, sometimes it's one of the various applications I installed (e.g. Chrome, Evernote, Spotify, Wunderlist, iTunes, etc.), and sometimes it's Antimalware Service Executable, etc. Tried Solutions I think I tried almost every solution out there for this problem: Running the check disk command (chkdsk /b /f /v /scan c:) from Admin Command Prompt Running Windows Memory Diagnostic Disabling Superfetch and Windows Search from services.msc Running "Fix problems with Windows Update" from Control Panel -- Troubleshooting Updating and rolling back the graphics driver (Intel HD 4000) Disabling "Use hardware acceleration when available" from Chrome settings Disabling Intel Rapid Storage Technology Running the SFC /SCANNOW command as recommended here Running a quick scan & a full scan from Windows Defender (no threats found) Taking the hard drive out and putting it back Refreshing the computer, from the Update and recovery -- Recovery option in Windows settings NONE of the above worked for me. I was about to give up but then noticed that one of the main culprits of the disk usage spike, as shown in the "Disk Activity" section of the Resource Monitor, was C:\System (pagefile.sys). I googled around and found that one of the recommended solutions was to disable pagefile. I then went to **Control Panel -- System and Security -- System -- Advanced system settings -- Advanced tab -- Performance settings -- Advanced tab -- "Change" under Virtual memory and discovered that the number for "Currently allocated" at the bottom was 1280MB, although the number for "Recommended" was 4533MB. I immediately changed it to 4533MB and checked my family members' computers to see what the numbers were like. All of theirs had a currently allocated space that was only slightly smaller than the recommended space. See screenshot below: This might fix the problem. I'll have to wait a couple more days.But if it doesn't, what in the world should I do next? I'm guessing the hard drive isn't failing because This computer is less than 2 years old; and Speccy says that the status of the HDD is good. Update 5/27/2014 The "4533MB" solution did not work. I had to reboot the computer about 30 minutes ago because the disk usage again hit 100%. When I opened Resource Monitor the C:\System (pagefile.sys) again was shown to be the culprit. I have now disabled pagefile entirely via the same window shown above in the screenshot. The number for "currently allocated" is now 0MB. 
Will update again in a couple days, or if the problem occurs again, whichever comes sooner.


  • WNDR3700 Router + Cisco SG200-08 + LACP + Dual Uplink

    - by kobaltz
    Background I have a storage server that has several virtual machine images stored on them. I would store them locally, but I have limited space on my desktop (using SSD storage). I would like to increase the bandwidth between the desktop and the storage server by using two NICs on each computer. My original configuration allowed about 55MBps between the desktop and storage server. This storage server also has several TBs of documents, pictures, movies, vms, and ISO/programs. The storage server has 8 1.5TB hard drives in a RAID 10 configuration with a hardware RAID controller. The benchmarks on the RAID 10 are about 300MBps. Configuration In short, I am trying to bridge my switch and router. The switch is a small 8 port Cisco smart switch that supports 802.3ad LACP. I have two computers plugged into the switch, each with 2 Intel Gigabit NICs. The first computer is a Windows 7 machine that has the Intel ANS software installed. I have LACP configured with the computer and now show 3 NICs (2 Physical + 1 TEAM Virtual @ 2Gbps). It looks like this computer is configured correctly. I trunked the two ports that this computer is plugged into with the switch's web interface. The second computer is a homebrew storage box running debian. I also have the bonding enabled on this machine and the switch configured with LACP. Without having the WNDR3700 router in the picture yet, I am able to communicate between the Windows 7 machine and the debian box since they both have static IP addresses. With LACP enabled on both machines I am getting about 106-108MBps speeds. Issue I plug in a network cable from the switch into the router and enable DHCP on the desktop. I saw no need to have a static address on the desktop. My transfer rates are still from 106MBps-108MBps. While this is still a boost, I am trying to figure out how to get about 140-180MBps. I am thinking that I need to increase the bandwidth from the router to the switch. My switch allows 4 groups for port trunking. I plugged in a second network cable from the router to the switch. My question is, what is the proper way to fix this issue. Should I port trunk the two ports that are going from the switch to the router? Keep in mind that the router is a WNDR3700 and is unsure whether or not it supports LACP. I do have OpenWRT installed on the router, but it still wasn't clear in any documentation that I found if it supported 802.3ad LACP standards. I am also wondering if there needs to be anything changed within the Cisco settings. [Edit] - Corrected some numbers, wasn't really paying attention. It looks like the speeds though at least two NICs are bonded with LACP is still reaching the max bandwidth of one port. Is there a way to configure the switch so that I can increase this bandwidth? Also, on the storage server, I had a couple of extra NICs laying around and threw them on there as well. Another EDIT and More Findings I happened to look at the traffic of each individual NIC and think that I see the problem. I tested with a simple transfer for a 4GB file. I noticed that only one of the NICs was taking the load of the traffic. I then copied the file back to the Storage Server and noticed that the other NIC was sending out the traffic. I have 802.3ad LACP enabled on the two NICs and I see that it gets enabled dynamically on the switch's interface. Should I be using Static Link Aggregation?
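
    The per-NIC observation at the end is expected with 802.3ad: LACP hashes each flow onto a single member link, so one TCP stream never exceeds one port's bandwidth; only multiple simultaneous flows spread across the links, and only if the hash policy distinguishes them. A hedged sketch of what one might check and change on the Debian storage box (bond name is a placeholder):

    ```sh
    # Confirm the bond actually negotiated 802.3ad and inspect per-slave counters.
    cat /proc/net/bonding/bond0

    # Hash on layer 3+4 (IP addresses + ports) instead of MAC only, so parallel
    # transfers between the same two hosts can land on different member links.
    # (Usually set while the bond is down, or via bond-xmit-hash-policy in
    # /etc/network/interfaces with the ifenslave package.)
    echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy
    ```

    On the Linux side, only balance-rr (static aggregation rather than LACP) can push a single stream past one link's speed, at the cost of possible packet reordering, and it would have to be configured as a static LAG on the switch as well.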

    Read the article

  • "ID 046d:c50e Logitech, Inc. Cordless Mouse Receiver" wheel-click is wrong

    - by sputnick
    I use this mouse under archlinux x86_64 with 3.2.8-1-ARCH kernel. I have some problems to select and then paste with the wheel-click in some applications like konversation, not in a terminal nor an editor. I don't know if it's a hardware problem or a software one. $ lsusb -v Bus 002 Device 110: ID 046d:c50e Logitech, Inc. Cordless Mouse Receiver Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 8 idVendor 0x046d Logitech, Inc. idProduct 0xc50e Cordless Mouse Receiver bcdDevice 25.10 iManufacturer 1 Logitech iProduct 2 USB RECEIVER iSerial 0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 34 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xa0 (Bus Powered) Remote Wakeup MaxPower 70mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 1 Boot Interface Subclass bInterfaceProtocol 2 Mouse iInterface 0 HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.11 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 95 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0008 1x 8 bytes bInterval 10 Device Status: 0x0000 (Bus Powered) When I see what's happens in xev, the output is different compared to another mouse My buggy Logitech mouse : ButtonPress event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x4400002, time 170350700, (48,52), root:(1491,75), state 0x10, button 11, same_screen YES EnterNotify event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x0, time 170350700, (48,52), root:(1491,75), mode NotifyGrab, detail NotifyInferior, same_screen YES, focus YES, state 16 KeymapNotify event, serial 40, synthetic NO, window 0x0, keys: 90 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ButtonPress event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x4400002, time 170350716, (48,52), root:(1491,75), state 0x10, button 6, same_screen YES ButtonRelease event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x4400002, time 170350716, (48,52), root:(1491,75), state 0x10, button 6, same_screen YES ButtonRelease event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x4400002, time 170350988, (48,52), root:(1491,75), state 0x10, button 11, same_screen YES LeaveNotify event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x0, time 170350988, (48,52), root:(1491,75), mode NotifyUngrab, detail NotifyInferior, same_screen YES, focus YES, state 16 a working mouse (dell) : ButtonPress event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x4400002, time 170245131, (46,32), root:(1489,55), state 0x10, button 2, same_screen YES EnterNotify event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x0, time 170245131, (46,32), root:(1489,55), mode NotifyGrab, detail NotifyInferior, same_screen YES, focus YES, state 528 KeymapNotify event, serial 40, synthetic NO, window 0x0, keys: 90 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ButtonRelease event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x4400002, time 170245411, (46,32), root:(1489,55), state 0x210, button 2, same_screen YES 
LeaveNotify event, serial 40, synthetic NO, window 0x4400001, root 0x15a, subw 0x0, time 170245411, (46,32), root:(1489,55), mode NotifyUngrab, detail NotifyInferior, same_screen YES, focus YES, state 16 A demo of the problem when I use konversation (IRC) : http://www.youtube.com/watch?v=lhmr92M7NCc I tried to modify the button map with xmodmap like this with no success (one at a time) : xmodmap -e "pointer = 1 0 3" xmodmap -e "pointer = 1 1 3" xmodmap -e "pointer = 1 2 3" xmodmap -e "pointer = 1 3 3" xmodmap -e "pointer = 1 4 3" xmodmap -e "pointer = 1 5 3" xmodmap -e "pointer = 1 6 3" xmodmap -e "pointer = 1 7 3" xmodmap -e "pointer = 1 8 3" xmodmap -e "pointer = 1 9 3" xmodmap -e "pointer = 1 10 3" xmodmap -e "pointer = 1 11 3" xmodmap -e "pointer = 1 12 3" xmodmap -e "pointer = 1 13 3" xmodmap -e "pointer = 1 14 3" xmodmap -e "pointer = 1 15 3" xmodmap -e "pointer = 1 16 3" xmodmap -e "pointer = 1 17 3" xmodmap -e "pointer = 1 18 3" xmodmap -e "pointer = 1 19 3" xmodmap -e "pointer = 1 20 3" xmodmap -e "pointer = 1 21 3" xmodmap -e "pointer = 1 22 3" xmodmap -e "pointer = 1 23 3" xmodmap -e "pointer = 1 24 3" xmodmap -e "pointer = 1 25 3" Any clue ? I would like to avoid buying a new mouse just for a paste problem.
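    From the xev traces above, the wheel click on the Logitech receiver arrives as button 11 (with a stray button 6 event) rather than button 2, which is what primary-selection paste expects. A hedged software-side workaround, instead of xmodmap, is to remap the button at the X input layer; the device name below is a placeholder for whatever `xinput list` reports for the receiver, and this may not help if the receiver itself is misreporting:

    # find the exact device name/id of the receiver
    xinput list

    # make physical button 11 emit logical button 2 (middle click / paste);
    # "Logitech USB RECEIVER" is a placeholder, use the name or id shown by `xinput list`
    xinput set-button-map "Logitech USB RECEIVER" 1 2 3 4 5 6 7 8 9 10 2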

    Read the article

  • Mysqld not starting due to apparent db corruption

    - by pitosalas
    I am very new at admining mysql, and bad for me, something caused the db to get clobbered. There are many error messages in the log that I am not sure how to safely proceed. Can you give some tips? Here's the log: 110107 15:07:15 mysqld started 110107 15:07:15 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 110107 15:07:15 InnoDB: Starting log scan based on checkpoint at InnoDB: log sequence number 35 515914826. InnoDB: Doing recovery: scanned up to log sequence number 35 515915839 InnoDB: 1 transaction(s) which must be rolled back or cleaned up InnoDB: in total 1 row operations to undo InnoDB: Trx id counter is 0 1697553664 110107 15:07:15 InnoDB: Starting an apply batch of log records to the database... InnoDB: Progress in percents: 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 InnoDB: Apply batch completed InnoDB: Starting rollback of uncommitted transactions InnoDB: Rolling back trx with id 0 1697553198, 1 rows to undoInnoDB: Error: trying to access page number 3522914176 in space 0, InnoDB: space name ./ibdata1, InnoDB: which is outside the tablespace bounds. InnoDB: Byte offset 0, len 16384, i/o type 10 110107 15:07:15InnoDB: Assertion failure in thread 3086403264 in file fil0fil.c line 3922 InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/mysql/en/Forcing_recovery.html InnoDB: about forcing recovery. mysqld got signal 11; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=0 read_buffer_size=131072 max_used_connections=0 max_connections=100 threads_connected=0 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 217599 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd=(nil) Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... Cannot determine thread, fp=0xbffc55ac, backtrace may not be correct. Stack range sanity check OK, backtrace follows: 0x8139eec 0x83721d5 0x833d897 0x833db71 0x832aa38 0x835f025 0x835f7a3 0x830a77e 0x8326b57 0x831c825 0x8317b8d 0x82a9e66 0x8315732 0x834fc9a 0x828d7c3 0x81c29dd 0x81b5620 0x813d9fe 0x40fdf3 0x80d5ff1 New value of fp=(nil) failed sanity check, terminating stack trace! Please read http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html and follow instructions on how to resolve the stack trace. 
Resolved stack trace is much more helpful in diagnosing the problem, so please do resolve it The manual page at http://www.mysql.com/doc/en/Crashing.html contains information that should help you find out what is causing the crash. 110107 15:07:15 mysqld ended
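    The log itself points at InnoDB's forced-recovery procedure, and that is the usual route out of this kind of page corruption: copy the raw data directory aside first, start mysqld with innodb_force_recovery set just high enough that the server stays up, dump everything, and rebuild from the dump. The sketch below is hedged; the paths are common Linux defaults and the recovery level may need to be raised cautiously (levels above 3 become increasingly destructive):

    # 1. Preserve the raw files before touching anything (path is the usual default; adjust as needed)
    cp -a /var/lib/mysql /var/lib/mysql.corrupt-backup

    # 2. In my.cnf under [mysqld], start low and raise only if the server still will not stay up:
    #      innodb_force_recovery = 1

    # 3. Once mysqld starts, dump every database, then drop/recreate and reload from the dump
    mysqldump --all-databases > all-databases.sql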

    Read the article

  • Network update solutions for a company of ~20 (5 local, 15 remote)?

    - by Margaret
    Hi all This is probably going to be a bit up in the air, because we're still in the "reaching towards solutions" phase, but I figured I'd see what you guys had to say. Plus I honestly know very little about systems and what is good and bad pratice. My organisation has always more or less worked on the concept of local machines; since it primarily employed contractors who were working from home, each of those people was largely responsible for their own machine and backup procedures and the like. We're now expanding, though we're still reasonably small (we're up to about 20 staff members). Most people still work remotely, but we have a central office where about five people are working. But we're getting large enough that we're starting to think it would be a good idea to have a central file server, and things like that - if someone gets hit by a bus, we want someone else to know where to look for the files to continue their work. A lot of the people who work for us remotely work on projects for other companies as well, so I don't want to force them to log in to our server whenever they're on a network. But I do want to make connection to be as painless as possible to do so, to improve utilisation. The other thing is that we're getting more people who would like to remote into the office server and do their work there. Our current remote connection application is an SSH install that allows people access to the network; the problem is, it's a black box to me, and I've never understood how to even connect to it (despite supposedly being de facto sysadmin). Thus far I've been able to bounce questions about how to get it working to the guy who does know it well, but he's leaving the company soon. So we probably need a solution for this that I actually understand. We were knocking around the idea of implementing a VPN with some form of remote desktop, and someone mentioned that this was largely a matter of purchasing a router capable of it; I'm not sure of the truth of that statement. This is what we have in the office: Two shiny new i7 servers, each running Windows Server 2008. Precise eventual layout is still being debated, a little, but the current suggestion is that one is primary database crunching, while the other is a warm backup of the databases, along with running Reporting Services. They currently have SQL Server 2008 installed on them, which is being connected to via the 'sa' account. We're hoping to make each person use their own account (preferably one tied to the 'central' password we set up, so we can use Windows Authentication). An older server, running XP Pro, that we are currently using as a test bed for a project that requires access to older versions of software. This machine is also being used to take backups, but I'm thinking of moving that functionality elsewhere. A spare desktop from a guy who left the company (XP Pro). We're thinking of bumping up the hard disk space and using it as the magical file server that's going to solve one particular everything. Assorted desktops, laptops, etc, at least one for each person in the office (mix of Win XP and Win 7; occasionally a person who normally works remotely might drop in to the office and bring a laptop bearing Vista, but it's pretty rare). All are set up as local user accounts at the moment; I don't know if it's the best arrangement. Purchasing more hardware is not a big problem, but we figure we might as well make use of what we've got first. 
Is Active Directory a big magic wand that's going to solve all the world's problems? Or is there some other arrangement we should be looking at instead?

    Read the article

  • Hard drive after PCB swap strange stuff

    - by ramyy
    I’ve done a PCB swap to my HDD. The HDD model is: WD6400AAKS-00A7B2. The original PCB PN matches the new one (first three letter groups), though the cache mismatches (16MB original, 8MB new). The Hardware store that made the swap told me it was hard to do the swap, they have done firmware adaptation. I can see that this firmware version does not match the original, (01.03B01 original, 05.04E05 new). Still I can see that the serial number and model of the drive is correct, the hard drive appeared normal in the BIOS, all the partitions show and everything appears normal. I have encountered three things though, I have left the drive non operated for 2-3 weeks after the swap to avoid corrupting the data or anything else the new PCB might cause, until I buy a new drive and backup the data. I got a drive, and when I powered the old drive manually (I have a laptop, I use a normal desktop power supply and a USB/SATA connector), I heard the motor start and I could hear ticking as if the motor’s somehow struggling to start, and then the motor sound starts again then the ticking, and so on.. I tried powering again it happened again. The third time it started normally and I could see everything normally. I took the chance and copied all the data over to the new drive. When I was done, I powered off the drive (after more than 25 hours of continuous operation), tried to power it up again and it did so normally, and so are the times I powered it up later; but I got very suspicious now. What could be the problem here? And what happened new, it used to power normally after the swap directly? The second thing that happened is that I found size differences with some files; some include movies, songs, (.iso) files for games, and programs. I could find the size is the same, but size on disk is a little more on the new drive for these files. . I’ve tried some of those files (with size differences) they worked fine. They are not too much but still make you suspicious of the integrity of the data copied; one cannot try if all files are working for about (580 GB) worth of data. I tried copying these files on the same partition they exist of the old drive; they are the same in size as when copied to the new drive (allocation unit size not the issue). I took an image of a partition (sector by sector including empty ones) and when I explore it, these file sizes are equal to the original (old drive); I copy them anywhere else their size on disk, increases, i.e becomes equal to the ones I copy from the old drive itself anywhere. Why the size difference and can one trust the integrity of the data?? The third thing is that when I connect my new external USB HDD, the partitions of the old HDD unmount and then mount again. Connected are: (USB mouse + Old HDD) then external HDD. Why that happens?? Considering the following: I compared the SMART reports from after the swap directly and after the copying, no error readings or reallocated sectors where reported. Here they are: http://www.image-share.com/ijpg-1939-219.html I later ran both WD data life guard tests and they came out passed. I’m worried for this drive since I must be sure the data is fine and safe on the new one, and I will consider it backup for the new one, since you can’t trust anything anymore. I hope you can forgive me for the length of the post, but couldn’t ignore any of the details, this hard drive contains very important data to me and I have to deal with the situation with great care.
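    For the integrity worry, comparing hashes of the suspect files on the old and new drives is the definitive check; differences in "size on disk" alone can come from sparse or compressed file attributes being dropped during a copy and do not by themselves mean the data changed. A hedged sketch using the certutil tool that ships with Windows (the paths are placeholders for one of the files whose size on disk differs):

    REM hash the same file on both drives and compare the two digests
    certutil -hashfile D:\old-drive\example.iso SHA1
    certutil -hashfile E:\new-drive\example.iso SHA1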

    Read the article

  • MySQL crash. Unknown cause. Signal 11

    - by fortmac
    This is a database that I installed ~6 months ago and had been running fine. This is currently running in Ubuntu 12.04. Attempting to connect to MySQL causes this error: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111) Then theres: $ sudo mysqld which returns: 130702 15:38:54 [Note] Plugin 'FEDERATED' is disabled. 130702 15:38:54 InnoDB: The InnoDB memory heap is disabled 130702 15:38:54 InnoDB: Mutexes and rw_locks use GCC atomic builtins 130702 15:38:54 InnoDB: Compressed tables use zlib 1.2.3.4 130702 15:38:54 InnoDB: Initializing buffer pool, size = 128.0M 130702 15:38:54 InnoDB: Completed initialization of buffer pool 130702 15:38:54 InnoDB: highest supported file format is Barracuda. InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 130702 15:38:54 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 130702 15:38:55 InnoDB: Waiting for the background threads to start 130702 15:38:56 InnoDB: 1.1.8 started; log sequence number 5201901917 130702 15:38:56 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306 130702 15:38:56 [Note] - '127.0.0.1' resolves to '127.0.0.1'; 130702 15:38:56 [Note] Server socket created on IP: '127.0.0.1'. 130702 15:38:56 [Note] Event Scheduler: Loaded 0 events 130702 15:38:56 [Note] mysqld: ready for connections. Version: '5.5.28-0ubuntu0.12.04.3' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) 19:39:02 UTC - mysqld got signal 11 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=1 max_threads=151 thread_count=1 connection_count=1 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 346681 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. Thread pointer: 0x7f9509e51530 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 7f94f1d3de60 thread_stack 0x30000 mysqld(my_print_stacktrace+0x29)[0x7f95083427b9] mysqld(handle_fatal_signal+0x483)[0x7f9508209b43] /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f9506f5bcb0] mysqld(+0x320e1c)[0x7f9508113e1c] mysqld(_ZN4JOIN15alloc_func_listEv+0x9c)[0x7f950812391c] mysqld(_ZN4JOIN7prepareEPPP4ItemP10TABLE_LISTjS1_jP8st_orderS7_S1_S7_P13st_select_lexP18st_select_lex_unit+0x918)[0x7f9508124658] mysqld(_Z12mysql_selectP3THDPPP4ItemP10TABLE_LISTjR4ListIS1_ES2_jP8st_orderSB_S2_SB_yP13select_resultP18st_select_lex_unitP13st_select_lex+0x130)[0x7f950812d060] mysqld(_Z13handle_selectP3THDP3LEXP13select_resultm+0x17c)[0x7f9508132fbc] mysqld(+0x2f6714)[0x7f95080e9714] mysqld(_Z21mysql_execute_commandP3THD+0x16d8)[0x7f95080f1178] mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x10f)[0x7f95080f5e0f] mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1380)[0x7f95080f7260] mysqld(_Z24do_handle_one_connectionP3THD+0x1bd)[0x7f950819b80d] mysqld(handle_one_connection+0x50)[0x7f950819b870] /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f9506f53e9a] /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f9506684cbd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (7f94e0004b80): is an invalid pointer Connection ID (thread ID): 1 Status: NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. I'm at a loss. What other reports would be useful in diagnosing this? /var/log/mysql.err & /var/log/mysql.log are empty.
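    Since the crash only happens once a client connects and runs a query (the backtrace goes through JOIN::prepare), one hedged way to narrow it down is to record statements with the general query log so the last statement before the signal 11 is captured. This is purely diagnostic (the log grows quickly, so switch it off again afterwards), and the path below is an assumption for a Debian/Ubuntu layout:

    # /etc/mysql/my.cnf -- temporary diagnostics only
    [mysqld]
    general_log      = 1
    general_log_file = /var/log/mysql/general.log

    Once the offending statement and table are known, CHECK TABLE (or mysqlcheck) against that table, or an innodb_force_recovery dump and reload as sketched in the earlier entry, is the usual next step.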

    Read the article

  • How to export SQL Server data from corrupted database (with disk write error)

    - by damitamit
    IT realised there was a disk write error on our production SQL Server 2005, which was causing the backups to fail. By the time they realised this the nightly backup was already old, so we were not able to just restore the backup on another server. The database is still running and being used constantly, but DBCC CHECKDB fails, and the SQL Server backup task, Copy Database and the Export Data Wizard all fail as well. It seems, however, that all the data can still be read from the tables (i.e. using bcp etc.). Another observation I have made is that the transaction log is nearly double the size of the database. (Does that mean all the changes aren't being written to the MDF?)
    What would be the best plan of attack to get the database to a state where backups are working and the data is safe? Take the database offline and use the MDF/LDF to somehow create the database on another SQL Server? Export the data using bcp, create the database on another SQL Server (using the Generate Scripts function on the corrupt db to create the schema on the new db), and use bcp again to import the data? Some other option that is the right course of action in this situation?
    The IT manager says the data is safe because, if the server fails, the data can be restored from the mdf/ldf. I'm not sure, so I insisted that we start exporting the data each night as a failsafe (using bcp for example). IT are also having issues on the hardware side of things, as supposedly the disk error is on a virtualized disk and can't be rebuilt like a normal RAID array (or something like that). Please excuse my use of incorrect terminology and incorrect assumptions about how SQL Server operates. I'm the application developer and have been called in to help (as it seems IT know less about SQL Server than I do). Many thanks, Amit
    Results of DBCC CHECKDB: Msg 1823, Level 16, State 2, Line 1 A database snapshot cannot be created because it failed to start. Msg 7928, Level 16, State 1, Line 1 The database snapshot for online checks could not be created. Either the reason is given in a previous error or one of the underlying volumes does not support sparse files or alternate streams. Attempting to get exclusive access to run checks offline. Msg 5030, Level 16, State 12, Line 1 The database could not be exclusively locked to perform the operation. Msg 7926, Level 16, State 1, Line 1 Check statement aborted. The database could not be checked as a database snapshot could not be created and the database or table could not be locked. See Books Online for details of when this behavior is expected and what workarounds exist. Also see previous errors for more details. Msg 823, Level 24, State 3, Line 1 The operating system returned error 1(error not found) to SQL Server during a write at offset 0x00000674706000 in file 'G:\AX40_Dynamics_Live.mdf'. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
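    If the bcp-based failsafe route is taken, the export/import pair is straightforward. A hedged sketch follows; the database name is only inferred from the mdf path above, and the table, file paths and server names are placeholders (-n keeps native data types, -T uses Windows authentication):

    REM export one table in native format from the damaged database
    bcp AX40_Dynamics_Live.dbo.SomeTable out C:\exports\SomeTable.dat -n -T -S PRODSERVER

    REM import it into a schema-only copy created with Generate Scripts on another server
    bcp RestoredDb.dbo.SomeTable in C:\exports\SomeTable.dat -n -T -S OTHERSERVER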

    Read the article

  • Monitor randomly shutting down, computer accepting no input, need to restart to get working

    - by Sebastian Lamerichs
    First off, spec list: OS: Windows 7 Ultimate 64-bit SP1 CPU: i7-4820k @ 3.7GHz (stock) GPU: Two 3GB Radeon HD 7970s @ 1.05GHz Mobo: AsRock X79 Extreme6 HDD: 2TB Seagate Barracuda 7200rpm RAM: 16GB quad-channel Kingston 1600MHz PSU: Antec HCG 900W Monitors: Acer S220HQL 1920x1080 + ViewSonic VA2251 1920x1080. Plugged into different GPUs. My problem is that, on a daily-ish basis, my monitors will turn off and not turn back on. My computer will still be running, GPU/CPU/case fans all still going, but the monitors will not turn back on. Additionally, it seems to cease all network activity. It doesn't seem to log any errors at all. I've verified that this is not a monitor issue, as when I press the num/caps/scroll lock buttons on my keyboard, the lights don't change, so the computer is clearly not accepting input. I have noticed a few other people on the internet with this problem, and some have claimed that it was solved by disabling PCI-Express Link State Power Management, but the issue still occurs for me after this. Whilst my CPU and GPUs both run at 100% 24/7, the temperatures are certainly not at dangerous levels, with the CPU averaging 65°C and the GPUs at 70°C and 78°C average. All components are brand new. I have tried forcing MSI Afterburner to start when Windows starts and to force a constant voltage, as this fixed the issue for a few days for another user, but he reported back saying that it had stopped working properly again, so I'm not putting too much faith in this working. Many people have said to adjust display sleep mode settings, but this will clearly not work, as the keyboard lights would still work if the monitors were the issue. The closest I can get to a log file for this issue is the following Folding@Home logs: 14:45:21:WU01:FS00:0x17:Completed 1120000 out of 2000000 steps (56%) 14:46:43:WU00:FS01:0x17:Completed 480000 out of 2000000 steps (24%) 14:46:49:WU01:FS00:0x17:Completed 1140000 out of 2000000 steps (57%) 14:48:30:WU01:FS00:0x17:Completed 1160000 out of 2000000 steps (58%) 14:49:55:WU01:FS00:0x17:Completed 1180000 out of 2000000 steps (59%) As you can see, the second GPU (FS01) stops computation approximately three and a half minutes before the issue occurs (it should be completing 1% every 80-120 seconds), and the first GPU (FS00) continues for a few minutes more before the logs just end. As far as I can tell, the computer has a network failure at the time the first GPU stops working, the latest IRC message I received from this time was at 14:47:58. That being said, there could have just not been any messages between then and 14:50:00, so I'm going to be connecting a laptop to the same bouncer to double-check if it happens again. The GPUs functioned perfectly well in another computer for a significant period of time, so I'm fairly confident that they aren't the issue, which means that this is being caused by either software or the motherboard, or possibly RAM. I really hope it's software. I heard from a forum board that there was a patch from Microsoft that fixed this problem, but "I've forgot which KB it was or the google search terms I used to find the patch, LOL.", so that's not much help. Haven't seen it mentioned by anyone else on about a dozen threads about this issue either. The computer is plugged in via a surge-protected power board, and I've run several other computers and pieces of hardware through it with no issues, so that is not the cause. I have just set the hard disk to never turn off, although I don't believe that that will solve the issue. 
Strangely, this has only happened when I'm not at the computer (which is actually a minority of the time). Until today it had only happened when I had not been actively using the computer for 6 hours, but today it happened within 10-30 minutes of me last using the computer actively. I have enabled file logging from MSI Afterburner, so hopefully this will shed some light on the issue, but I'm not too optimistic. I've heard that it could be a motherboard problem, but I figured I should ask around before RMAing it. Any help?
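    Even when nothing is logged at the moment of the hang, the System event log normally records the subsequent hard reset as Kernel-Power event 41, and any machine-check (hardware) errors show up under the WHEA-Logger provider; pulling those entries around the times of the freezes can at least help separate a hardware fault from a driver hang. A hedged sketch with the built-in wevtutil:

    REM most recent unexpected-reset records (Kernel-Power, event ID 41)
    wevtutil qe System /q:"*[System[Provider[@Name='Microsoft-Windows-Kernel-Power'] and (EventID=41)]]" /c:10 /rd:true /f:text

    REM any hardware machine-check records from the WHEA logger
    wevtutil qe System /q:"*[System[Provider[@Name='Microsoft-Windows-WHEA-Logger']]]" /c:10 /rd:true /f:text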

    Read the article

  • My 3D Games crashing in these scenarios

    - by desaivv
    I have a situation here which I am unable to solve. I bought a PC in March last year; here are my specs:
    Intel Core i3 550 @ 3GHz, 4 GB RAM @ 400 MHz, XFX GeForce 9500 GT graphics card 1GB @ 550MHz, 500 GB HDD.
    Lately, as soon as I load my save game in Skyrim, it crashes. I have been playing Skyrim since I joined the Gaming.SE site. By "crashes" I mean the entire scene gets red lines; I cannot ALT+TAB back out or even use CTRL+ALT+DEL. My only recourse is a hard reset via the power button, so I cannot take a screenshot either. I have the latest Forceware 296.10 drivers. This has been happening for the last 2 weeks. I always use Driver Sweeper to clean out my old drivers, since that is what XFXForce recommends before installing new ones.
    I installed MSI Afterburner recently to watch my GPU temperature. My GPU is at default clocks; I have never overclocked it. In MSI Afterburner I cannot adjust the fan speed; it is greyed out, and there is no fan tab in the settings. With normal Internet browsing the GPU stays at 51 C. I ran Memtest86 overnight at level 11; it took about 13 hours but found no errors in my RAM. I even re-installed my OS with just the 296 drivers. The GPU fan does come on. I can play Diablo 2, but I cannot get past Warcraft 3's menu selection. There WAS some dust in my machine, but I always try to keep everything clean, since dust is an issue in my home town, and I always keep the whole PC cabinet cool.
    My friend came over with his working graphics card (we bought our PCs at the same time with the exact same specifications) and his card did not work either; the scene froze with red lines in the same way. I did do my research before posting here; that is how I learned about MSI Afterburner, Driver Sweeper, SpeedFan etc. I also followed posts on Tom's Hardware from people with similar problems. One person suggested baking the card in an oven, and others who tried that reported it worked. Since I bought the machine I have played Diablo 2 for months, the Starcraft 2 campaign for months, and recently Skyrim for months; I bought ME3 as well. I am at my wits' end and do not know what else to do. I could go out and buy a card, but my friend's card did not work either. I can use the machine for Eclipse or VS2010 development just fine; just not for 3D gaming.
    I originally posted this question on Gaming.SE but was directed here. I have browsed the SU database for my problem and found this, this, and this, but none of them cover my question. My machine is only one year old. Can some experienced superuser(s) shed some light on this scenario? Is it a 3D graphics card problem? Will a brand new card work? What else can I try to pinpoint the problem? Could it be the motherboard? Thanks.

    Read the article

  • Using FiddlerCore to capture HTTP Requests with .NET

    - by Rick Strahl
    Over the last few weeks I’ve been working on my Web load testing utility West Wind WebSurge. One of the key components of a load testing tool is the ability to capture URLs effectively so that you can play them back later under load. One of the options in WebSurge for capturing URLs is to use its built-in capture tool which acts as an HTTP proxy to capture any HTTP and HTTPS traffic from most Windows HTTP clients, including Web Browsers as well as standalone Windows applications and services. To make this happen, I used Eric Lawrence’s awesome FiddlerCore library, which provides most of the functionality of his desktop Fiddler application, all rolled into an easy to use library that you can plug into your own applications. FiddlerCore makes it almost too easy to capture HTTP content! For WebSurge I needed to capture all HTTP traffic in order to capture the full HTTP request – URL, headers and any content posted by the client. The result of what I ended up creating is this semi-generic capture form: In this post I’m going to demonstrate how easy it is to use FiddlerCore to build this HTTP Capture Form.  If you want to jump right in here are the links to get Telerik’s Fiddler Core and the code for the demo provided here. FiddlerCore Download FiddlerCore on NuGet Show me the Code (WebSurge Integration code from GitHub) Download the WinForms Sample Form West Wind Web Surge (example implementation in live app) Note that FiddlerCore is bound by a license for commercial usage – see license.txt in the FiddlerCore distribution for details. Integrating FiddlerCore FiddlerCore is a library that simply plugs into your application. You can download it from the Telerik site and manually add the assemblies to your project, or you can simply install the NuGet package via:       PM> Install-Package FiddlerCore The library consists of the FiddlerCore.dll as well as a couple of support libraries (CertMaker.dll and BCMakeCert.dll) that are used for installing SSL certificates. I’ll have more on SSL captures and certificate installation later in this post. But first let’s see how easy it is to use FiddlerCore to capture HTTP content by looking at how to build the above capture form. Capturing HTTP Content Once the library is installed it’s super easy to hook up Fiddler functionality. Fiddler includes a number of static class methods on the FiddlerApplication object that can be called to hook up callback events as well as actual start monitoring HTTP URLs. In the following code directly lifted from WebSurge, I configure a few filter options on Form level object, from the user inputs shown on the form by assigning it to a capture options object. In the live application these settings are persisted configuration values, but in the demo they are one time values initialized and set on the form. 
Once these options are set, I hook up the AfterSessionComplete event to capture every URL that passes through the proxy after the request is completed and start up the Proxy service:void Start() { if (tbIgnoreResources.Checked) CaptureConfiguration.IgnoreResources = true; else CaptureConfiguration.IgnoreResources = false; string strProcId = txtProcessId.Text; if (strProcId.Contains('-')) strProcId = strProcId.Substring(strProcId.IndexOf('-') + 1).Trim(); strProcId = strProcId.Trim(); int procId = 0; if (!string.IsNullOrEmpty(strProcId)) { if (!int.TryParse(strProcId, out procId)) procId = 0; } CaptureConfiguration.ProcessId = procId; CaptureConfiguration.CaptureDomain = txtCaptureDomain.Text; FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete; FiddlerApplication.Startup(8888, true, true, true); } The key lines for FiddlerCore are just the last two lines of code that include the event hookup code as well as the Startup() method call. Here I only hook up to the AfterSessionComplete event but there are a number of other events that hook various stages of the HTTP request cycle you can also hook into. Other events include BeforeRequest, BeforeResponse, RequestHeadersAvailable, ResponseHeadersAvailable and so on. In my case I want to capture the request data and I actually have several options to capture this data. AfterSessionComplete is the last event that fires in the request sequence and it’s the most common choice to capture all request and response data. I could have used several other events, but AfterSessionComplete is one place where you can look both at the request and response data, so this will be the most common place to hook into if you’re capturing content. The implementation of AfterSessionComplete is responsible for capturing all HTTP request headers and it looks something like this:private void FiddlerApplication_AfterSessionComplete(Session sess) { // Ignore HTTPS connect requests if (sess.RequestMethod == "CONNECT") return; if (CaptureConfiguration.ProcessId > 0) { if (sess.LocalProcessID != 0 && sess.LocalProcessID != CaptureConfiguration.ProcessId) return; } if (!string.IsNullOrEmpty(CaptureConfiguration.CaptureDomain)) { if (sess.hostname.ToLower() != CaptureConfiguration.CaptureDomain.Trim().ToLower()) return; } if (CaptureConfiguration.IgnoreResources) { string url = sess.fullUrl.ToLower(); var extensions = CaptureConfiguration.ExtensionFilterExclusions; foreach (var ext in extensions) { if (url.Contains(ext)) return; } var filters = CaptureConfiguration.UrlFilterExclusions; foreach (var urlFilter in filters) { if (url.Contains(urlFilter)) return; } } if (sess == null || sess.oRequest == null || sess.oRequest.headers == null) return; string headers = sess.oRequest.headers.ToString(); var reqBody = sess.GetRequestBodyAsString(); // if you wanted to capture the response //string respHeaders = session.oResponse.headers.ToString(); //var respBody = session.GetResponseBodyAsString(); // replace the HTTP line to inject full URL string firstLine = sess.RequestMethod + " " + sess.fullUrl + " " + sess.oRequest.headers.HTTPVersion; int at = headers.IndexOf("\r\n"); if (at < 0) return; headers = firstLine + "\r\n" + headers.Substring(at + 1); string output = headers + "\r\n" + (!string.IsNullOrEmpty(reqBody) ? 
reqBody + "\r\n" : string.Empty) + Separator + "\r\n\r\n"; BeginInvoke(new Action<string>((text) => { txtCapture.AppendText(text); UpdateButtonStatus(); }), output); } The code starts by filtering out some requests based on the CaptureOptions I set before the capture is started. These options/filters are applied when requests actually come in. This is very useful to help narrow down the requests that are captured for playback based on options the user picked. I find it useful to limit requests to a certain domain for captures, as well as filtering out some request types like static resources – images, css, scripts etc. This is of course optional, but I think it’s a common scenario and WebSurge makes good use of this feature. AfterSessionComplete like other FiddlerCore events, provides a Session object parameter which contains all the request and response details. There are oRequest and oResponse objects to hold their respective data. In my case I’m interested in the raw request headers and body only, as you can see in the commented code you can also retrieve the response headers and body. Here the code captures the request headers and body and simply appends the output to the textbox on the screen. Note that the Fiddler events are asynchronous, so in order to display the content in the UI they have to be marshaled back the UI thread with BeginInvoke, which here simply takes the generated headers and appends it to the existing textbox test on the form. As each request is processed, the headers are captured and appended to the bottom of the textbox resulting in a Session HTTP capture in the format that Web Surge internally supports, which is basically raw request headers with a customized 1st HTTP Header line that includes the full URL rather than a server relative URL. When the capture is done the user can either copy the raw HTTP session to the clipboard, or directly save it to file. This raw capture format is the same format WebSurge and also Fiddler use to import/export request data. While this code is application specific, it demonstrates the kind of logic that you can easily apply to the request capture process, which is one of the reasonsof why FiddlerCore is so powerful. You get to choose what content you want to look up as part of your own application logic and you can then decide how to capture or use that data as part of your application. The actual captured data in this case is only a string. The user can edit the data by hand or in the the case of WebSurge, save it to disk and automatically open the captured session as a new load test. Stopping the FiddlerCore Proxy Finally to stop capturing requests you simply disconnect the event handler and call the FiddlerApplication.ShutDown() method:void Stop() { FiddlerApplication.AfterSessionComplete -= FiddlerApplication_AfterSessionComplete; if (FiddlerApplication.IsStarted()) FiddlerApplication.Shutdown(); } As you can see, adding HTTP capture functionality to an application is very straight forward. FiddlerCore offers tons of features I’m not even touching on here – I suspect basic captures are the most common scenario, but a lot of different things can be done with FiddlerCore’s simple API interface. Sky’s the limit! The source code for this sample capture form (WinForms) is provided as part of this article. 
Adding Fiddler Certificates with FiddlerCore One of the sticking points in West Wind WebSurge has been that if you wanted to capture HTTPS/SSL traffic, you needed to have the full version of Fiddler and have HTTPS decryption enabled. Essentially you had to use Fiddler to configure HTTPS decryption and the associated installation of the Fiddler local client certificate that is used for local decryption of incoming SSL traffic. While this works just fine, requiring to have Fiddler installed and then using a separate application to configure the SSL functionality isn’t ideal. Fortunately FiddlerCore actually includes the tools to register the Fiddler Certificate directly using FiddlerCore. Why does Fiddler need a Certificate in the first Place? Fiddler and FiddlerCore are essentially HTTP proxies which means they inject themselves into the HTTP conversation by re-routing HTTP traffic to a special HTTP port (8888 by default for Fiddler) and then forward the HTTP data to the original client. Fiddler injects itself as the system proxy in using the WinInet Windows settings  which are the same settings that Internet Explorer uses and that are configured in the Windows and Internet Explorer Internet Settings dialog. Most HTTP clients running on Windows pick up and apply these system level Proxy settings before establishing new HTTP connections and that’s why most clients automatically work once Fiddler – or FiddlerCore/WebSurge are running. For plain HTTP requests this just works – Fiddler intercepts the HTTP requests on the proxy port and then forwards them to the original port (80 for HTTP and 443 for SSL typically but it could be any port). For SSL however, this is not quite as simple – Fiddler can easily act as an HTTPS/SSL client to capture inbound requests from the server, but when it forwards the request to the client it has to also act as an SSL server and provide a certificate that the client trusts. This won’t be the original certificate from the remote site, but rather a custom local certificate that effectively simulates an SSL connection between the proxy and the client. If there is no custom certificate configured for Fiddler the SSL request fails with a certificate validation error. The key for this to work is that a custom certificate has to be installed that the HTTPS client trusts on the local machine. For a much more detailed description of the process you can check out Eric Lawrence’s blog post on Certificates. If you’re using the desktop version of Fiddler you can install a local certificate into the Windows certificate store. Fiddler proper does this from the Options menu: This operation does several things: It installs the Fiddler Root Certificate It sets trust to this Root Certificate A new client certificate is generated for each HTTPS site monitored Certificate Installation with FiddlerCore You can also provide this same functionality using FiddlerCore which includes a CertMaker class. 
Using CertMaker is straight forward to use and it provides an easy way to create some simple helpers that can install and uninstall a Fiddler Root certificate:public static bool InstallCertificate() { if (!CertMaker.rootCertExists()) { if (!CertMaker.createRootCert()) return false; if (!CertMaker.trustRootCert()) return false; } return true; } public static bool UninstallCertificate() { if (CertMaker.rootCertExists()) { if (!CertMaker.removeFiddlerGeneratedCerts(true)) return false; } return true; } InstallCertificate() works by first checking whether the root certificate is already installed and if it isn’t goes ahead and creates a new one. The process of creating the certificate is a two step process – first the actual certificate is created and then it’s moved into the certificate store to become trusted. I’m not sure why you’d ever split these operations up since a cert created without trust isn’t going to be of much value, but there are two distinct steps. When you trigger the trustRootCert() method, a message box will pop up on the desktop that lets you know that you’re about to trust a local private certificate. This is a security feature to ensure that you really want to trust the Fiddler root since you are essentially installing a man in the middle certificate. It’s quite safe to use this generated root certificate, because it’s been specifically generated for your machine and thus is not usable from external sources, the only way to use this certificate in a trusted way is from the local machine. IOW, unless somebody has physical access to your machine, there’s no useful way to hijack this certificate and use it for nefarious purposes (see Eric’s post for more details). Once the Root certificate has been installed, FiddlerCore/Fiddler create new certificates for each site that is connected to with HTTPS. You can end up with quite a few temporary certificates in your certificate store. To uninstall you can either use Fiddler and simply uncheck the Decrypt HTTPS traffic option followed by the remove Fiddler certificates button, or you can use FiddlerCore’s CertMaker.removeFiddlerGeneratedCerts() which removes the root cert and any of the intermediary certificates Fiddler created. Keep in mind that when you uninstall you uninstall the certificate for both FiddlerCore and Fiddler, so use UninstallCertificate() with care and realize that you might affect the Fiddler application’s operation by doing so as well. When to check for an installed Certificate Note that the check to see if the root certificate exists is pretty fast, while the actual process of installing the certificate is a relatively slow operation that even on a fast machine takes a few seconds. Further the trust operation pops up a message box so you probably don’t want to install the certificate repeatedly. Since the check for the root certificate is fast, you can easily put a call to InstallCertificate() in any capture startup code – in which case the certificate installation only triggers when a certificate is in fact not installed. Personally I like to make certificate installation explicit – just like Fiddler does, so in WebSurge I use a small drop down option on the menu to install or uninstall the SSL certificate:   This code calls the InstallCertificate and UnInstallCertificate functions respectively – the experience with this is similar to what you get in Fiddler with the extra dialog box popping up to prompt confirmation for installation of the root certificate. 
Once the cert is installed you can then capture SSL requests. There’s a gotcha however… Gotcha: FiddlerCore Certificates don’t stick by Default When I originally tried to use the Fiddler certificate installation I ran into an odd problem. I was able to install the certificate and immediately after installation was able to capture HTTPS requests. Then I would exit the application and come back in and try the same HTTPS capture again and it would fail due to a missing certificate. CertMaker.rootCertExists() would return false after every restart and if re-installed the certificate a new certificate would get added to the certificate store resulting in a bunch of duplicated root certificates with different keys. What the heck? CertMaker and BcMakeCert create non-sticky CertificatesI turns out that FiddlerCore by default uses different components from what the full version of Fiddler uses. Fiddler uses a Windows utility called MakeCert.exe to create the Fiddler Root certificate. FiddlerCore however installs the CertMaker.dll and BCMakeCert.dll assemblies, which use a different crypto library (Bouncy Castle) for certificate creation than MakeCert.exe which uses the Windows Crypto API. The assemblies provide support for non-windows operation for Fiddler under Mono, as well as support for some non-Windows certificate platforms like iOS and Android for decryption. The bottom line is that the FiddlerCore provided bouncy castle assemblies are not sticky by default as the certificates created with them are not cached as they are in Fiddler proper. To get certificates to ‘stick’ you have to explicitly cache the certificates in Fiddler’s internal preferences. A cache aware version of InstallCertificate looks something like this:public static bool InstallCertificate() { if (!CertMaker.rootCertExists()) { if (!CertMaker.createRootCert()) return false; if (!CertMaker.trustRootCert()) return false; App.Configuration.UrlCapture.Cert = FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.cert", null); App.Configuration.UrlCapture.Key = FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.key", null); } return true; } public static bool UninstallCertificate() { if (CertMaker.rootCertExists()) { if (!CertMaker.removeFiddlerGeneratedCerts(true)) return false; } App.Configuration.UrlCapture.Cert = null; App.Configuration.UrlCapture.Key = null; return true; } In this code I store the Fiddler cert and private key in an application configuration settings that’s stored with the application settings (App.Configuration.UrlCapture object). These settings automatically persist when WebSurge is shut down. The values are read out of Fiddler’s internal preferences store which is set after a new certificate has been created. Likewise I clear out the configuration settings when the certificate is uninstalled. In order for these setting to be used you have to also load the configuration settings into the Fiddler preferences *before* a call to rootCertExists() is made. 
I do this in the capture form’s constructor:public FiddlerCapture(StressTestForm form) { InitializeComponent(); CaptureConfiguration = App.Configuration.UrlCapture; MainForm = form; if (!string.IsNullOrEmpty(App.Configuration.UrlCapture.Cert)) { FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.key", App.Configuration.UrlCapture.Key); FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.cert", App.Configuration.UrlCapture.Cert); }} This is kind of a drag to do and not documented anywhere that I could find, so hopefully this will save you some grief if you want to work with the stock certificate logic that installs with FiddlerCore. MakeCert provides sticky Certificates and the same functionality as Fiddler But there’s actually an easier way. If you want to skip the above Fiddler preference configuration code in your application you can choose to distribute MakeCert.exe instead of certmaker.dll and bcmakecert.dll. When you use MakeCert.exe, the certificates settings are stored in Windows so they are available without any custom configuration inside of your application. It’s easier to integrate and as long as you run on Windows and you don’t need to support iOS or Android devices is simply easier to deal with. To integrate into your project, you can remove the reference to CertMaker.dll (and the BcMakeCert.dll assembly) from your project. Instead copy MakeCert.exe into your output folder. To make sure MakeCert.exe gets pushed out, include MakeCert.exe in your project and set the Build Action to None, and Copy to Output Directory to Copy if newer. Note that the CertMaker.dll reference in the project has been removed and on disk the files for Certmaker.dll, as well as the BCMakeCert.dll files on disk. Keep in mind that these DLLs are resources of the FiddlerCore NuGet package, so updating the package may end up pushing those files back into your project. Once MakeCert.exe is distributed FiddlerCore checks for it first before using the assemblies so as long as MakeCert.exe exists it’ll be used for certificate creation (at least on Windows). Summary FiddlerCore is a pretty sweet tool, and it’s absolutely awesome that we get to plug in most of the functionality of Fiddler right into our own applications. A few years back I tried to build this sort of functionality myself for an app and ended up giving up because it’s a big job to get HTTP right – especially if you need to support SSL. FiddlerCore now provides that functionality as a turnkey solution that can be plugged into your own apps easily. The only downside is FiddlerCore’s documentation for more advanced features like certificate installation which is pretty sketchy. While for the most part FiddlerCore’s feature set is easy to work with without any documentation, advanced features are often not intuitive to gleam by just using Intellisense or the FiddlerCore help file reference (which is not terribly useful). While Eric Lawrence is very responsive on his forum and on Twitter, there simply isn’t much useful documentation on Fiddler/FiddlerCore available online. If you run into trouble the forum is probably the first place to look and then ask a question if you can’t find the answer. The best documentation you can find is Eric’s Fiddler Book which covers a ton of functionality of Fiddler and FiddlerCore. The book is a great reference to Fiddler’s feature set as well as providing great insights into the HTTP protocol. 
The second half of the book that gets into the innards of HTTP is an excellent read for anybody who wants to know more about some of the more arcane aspects and special behaviors of HTTP – it’s well worth the read. While the book has tons of information in a very readable format, it’s unfortunately not a great reference as it’s hard to find things in the book and because it’s not available online you can’t electronically search for the great content in it. But it’s hard to complain about any of this given the obvious effort and love that’s gone into this awesome product for all of these years. A mighty big thanks to Eric Lawrence  for having created this useful tool that so many of us use all the time, and also to Telerik for picking up Fiddler/FiddlerCore and providing Eric the resources to support and improve this wonderful tool full time and keeping it free for all. Kudos! Resources FiddlerCore Download FiddlerCore NuGet Fiddler Capture Sample Form Fiddler Capture Form in West Wind WebSurge (GitHub) Eric Lawrence’s Fiddler Book© Rick Strahl, West Wind Technologies, 2005-2014Posted in .NET  HTTP   Tweet !function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs"); (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();

    Read the article

< Previous Page | 372 373 374 375 376 377 378 379 380 381 382 383  | Next Page >