Search Results

Search found 17437 results on 698 pages for 'nick long'.


  • SQL Server Licensing in a VMware vSphere Cluster

    - by Helvick
    If I have SQL Server 2008 instances running in virtual machines on a VMware vSphere cluster with vMotion\DRS enabled so that the VM's can (potentially) run on any one of the physical servers in the cluster what precisely are the license requirements? For example assume that I have 4 physical ESX Hosts with dual physical CPU's and 3 separate single vCPU Virtual Machines running SQL Server 2008 running in that cluster. How many SQL Standard Processor licenses would I need? Is it 3 (one per VM) or 12 (one per VM on each physical host) or something else? How many SQL Enterprise Processor licenses would I need? Is it 3 (one per VM) or 8 (one for each physical CPU in the cluster) or, again, something else? The range in the list prices for these options goes from $17k to $200k so getting it right is quite important. Bonus question: If I choose the Server+CAL licensing model do I need to buy multiple Server instance licenses for each of the ESX hosts (so 12 copies of the SQL Server Standard server license so that there are enough licenses on each host to run all VM's) or again can I just license the VM and what difference would using Enterprise per server licensing make? Edited to Add Having spent some time reading the SQL 2008 Licensing Guide (63 Pages! Includes Maps!*) I've come across this: • Under the Server/CAL model, you may run unlimited instances of SQL Server 2008 Enterprise within the server farm, and move those instances freely, as long as those instances are not running on more servers than the number of licenses assigned to the server farm. • Under the Per Processor model, you effectively count the greatest number of physical processors that may support running instances of SQL Server 2008 Enterprise at any one time across the server farm and assign that number of Processor licenses And earlier: ..For SQL Server, these rule changes apply to SQL Server 2008 Enterprise only. By my reading this means that for my 3 VM's I only need 3 SQL 2008 Enterprise Processor Licenses or one copy of Server Enterprise + CALs for the cluster. By implication it means that I have to license all processors if I choose SQL 2008 Standard Processor licensing or that I have to buy a copy of SQL Server 2008 Standard for each ESX host if I choose to use CALs. *There is a map to demonstrate that a Server Farm cannot extend across an area broader than 3 timezones unless it's in the European Free Trade Area, I wasn't expecting that when I started reading it.

    Read the article

  • Migrate from MySQL to PostgreSQL on Linux (Kubuntu)

    - by Dave Jarvis
    Storyline Trying to migrate a database from MySQL to PostgreSQL. All the documentation I have read covers, in great detail, how to migrate the structure. I have found very little documentation on migrating the data. The schema has 13 tables (which have been migrated successfully) and 9 GB of data. MySQL version: 5.1.x PostgreSQL version: 8.4.x I want to use the R programming language to analyze the data using SQL select statements; PostgreSQL has PL/R, but MySQL has nothing (as far as I can tell). A long time ago in a galaxy far, far away... Create the database location (/var has insufficient space; also dislike having the PostgreSQL version number everywhere -- upgrading would break scripts!): sudo mkdir -p /home/postgres/main sudo cp -Rp /var/lib/postgresql/8.4/main /home/postgres sudo chown -R postgres.postgres /home/postgres sudo chmod -R 700 /home/postgres sudo usermod -d /home/postgres/ postgres All good to here. Next, restart the server and configure the database using these installation instructions: sudo apt-get install postgresql pgadmin3 sudo /etc/init.d/postgresql-8.4 stop sudo vi /etc/postgresql/8.4/main/postgresql.conf Change data_directory to /home/postgres/main sudo /etc/init.d/postgresql-8.4 start sudo -u postgres psql postgres \password postgres sudo -u postgres createdb climate pgadmin3 Use pgadmin3 to configure the database and create a schema. A New Hope The episode began in a remote shell known as bash, with both databases running, and the installation of a command with a most unusual logo: SQL Fairy. perl Makefile.PL sudo make install sudo apt-get install perl-doc (strangely, it is not called perldoc) perldoc SQL::Translator::Manual Extract a PostgreSQL-friendly DDL and all the MySQL data: sqlt -f DBI --dsn dbi:mysql:climate --db-user user --db-password password -t PostgreSQL > climate-pg-ddl.sql mysqldump --skip-add-locks --complete-insert --no-create-db --no-create-info --quick --result-file="climate-my.sql" --databases climate --skip-comments -u root -p The Database Strikes Back Recreate the structure in PostgreSQL as follows: pgadmin3 (switch to it) Click the Execute arbitrary SQL queries icon Open climate-pg-ddl.sql Search for TABLE " replace with TABLE climate." (insert the schema name climate) Search for on " replace with on climate." (insert the schema name climate) Press F5 to execute This results in: Query returned successfully with no result in 122 ms. Replies of the Jedi At this point I am stumped. Where do I go from here (what are the steps) to convert climate-my.sql to climate-pg.sql so that they can be executed against PostgreSQL? How to I make sure the indexes are copied over correctly (to maintain referential integrity; I don't have constraints at the moment to ease the transition)? How do I ensure that adding new rows in PostgreSQL will start enumerating from the index of the last row inserted (and not conflict with an existing primary key from the sequence)? Resources A fair bit of information was needed to get this far: https://help.ubuntu.com/community/PostgreSQL http://articles.sitepoint.com/article/site-mysql-postgresql-1 http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL http://pgfoundry.org/frs/shownotes.php?release_id=810 http://sqlfairy.sourceforge.net/ Thank you!
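
    A hedged sketch of one way to get the data itself across (the structure is already migrated): have mysqldump emit PostgreSQL-flavoured INSERTs, load them with psql, then bump each serial sequence past the highest imported id so new rows don't collide with existing primary keys. The schema, table and column names here (climate.station, id) are placeholders, and --compatible=postgresql only gets part of the way, so some quoting may still need hand-fixing:

      # dump data only, in a form psql can mostly swallow
      mysqldump --compatible=postgresql --no-create-info --skip-add-locks \
                --complete-insert -u root -p climate > climate-data.sql

      # load it into the structure already created from climate-pg-ddl.sql
      psql -U postgres -d climate -f climate-data.sql

      # for each table, move its sequence past the highest imported id
      psql -U postgres -d climate -c \
        "SELECT setval(pg_get_serial_sequence('climate.station', 'id'), (SELECT COALESCE(MAX(id), 1) FROM climate.station));"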

    Read the article

  • How can I run supervisord without using root?

    - by Jason Baker
    I seem to be having trouble figuring out why supervisord won't run as a non-root user. If I start it with the user set to jason (pid 1000), I get the following in the log file: 2010-05-24 08:53:32,143 CRIT Set uid to user 1000 2010-05-24 08:53:32,143 WARN Included extra file "/home/jason/src/tsched/celeryd.conf" during parsing 2010-05-24 08:53:32,189 INFO RPC interface 'supervisor' initialized 2010-05-24 08:53:32,189 WARN cElementTree not installed, using slower XML parser for XML-RPC 2010-05-24 08:53:32,189 CRIT Server 'unix_http_server' running without any HTTP authentication checking 2010-05-24 08:53:32,190 INFO daemonizing the supervisord process 2010-05-24 08:53:32,191 INFO supervisord started with pid 3444 ...then the process dies for some unknown reason. If I start it without sudo (under the user jason), I get similar output: 2010-05-24 08:51:32,859 INFO supervisord started with pid 3306 2010-05-24 08:52:15,761 CRIT Can't drop privilege as nonroot user 2010-05-24 08:52:15,761 WARN Included extra file "/home/jason/src/tsched/celeryd.conf" during parsing 2010-05-24 08:52:15,807 INFO RPC interface 'supervisor' initialized 2010-05-24 08:52:15,807 WARN cElementTree not installed, using slower XML parser for XML-RPC 2010-05-24 08:52:15,807 CRIT Server 'unix_http_server' running without any HTTP authentication checking 2010-05-24 08:52:15,808 INFO daemonizing the supervisord process 2010-05-24 08:52:15,809 INFO supervisord started with pid 3397 ...and it still doesn't run. If it's any help, here's the supervisord.conf file I'm using: [unix_http_server] file=/tmp/supervisor.sock ; path to your socket file [supervisord] logfile=./supervisord.log ; supervisord log file logfile_maxbytes=50MB ; maximum size of logfile before rotation logfile_backups=10 ; number of backed up logfiles loglevel=debug ; info, debug, warn, trace pidfile=./supervisord.pid ; pidfile location nodaemon=false ; run supervisord as a daemon minfds=1024 ; number of startup file descriptors minprocs=200 ; number of process descriptors user=jason ; default user childlogdir=./supervisord/ ; where child log files will live [rpcinterface:supervisor] supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface [supervisorctl] serverurl=unix:///tmp/supervisor.sock ; use unix:// schem for a unix sockets. [include] # Uncomment this line for celeryd for Python files=celeryd.conf # Uncomment this line for celeryd for Django. ;files=django/celeryd.conf ...and here's celeryd.conf: [program:celery] command=bin/celeryd --loglevel=INFO --logfile=./celeryd.log environment=PYTHONPATH='./tsched_worker', JIVA_DB_PLATFORM='oracle', ORACLE_HOME='/usr/lib/oracle/xe/app/oracle/product/10.2.0/server', LD_LIBRARY_PATH='/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/lib', TNS_ADMIN='/home/jason', CELERY_CONFIG_MODULE='tsched_worker.celeryconfig' directory=. user=jason numprocs=1 stdout_logfile=/var/log/celeryd.log stderr_logfile=/var/log/celeryd.log autostart=true autorestart=true startsecs=10 ; Need to wait for currently executing tasks to finish at shutdown. ; Increase this if you have very long running tasks. stopwaitsecs = 600 ; if rabbitmq is supervised, set its priority higher ; so it starts first priority=998 Can anyone help me figure out what's going on?
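
    A guess grounded in the two logs: when supervisord itself is started by jason, the user=jason directives ask it to drop privileges it never had ("Can't drop privilege as nonroot user"), and daemonizing hides whatever it prints as it dies; the /var/log/celeryd.log paths in celeryd.conf are also unwritable for a non-root user. A quick way to see the real error (the config path is assumed from the included celeryd.conf shown above):

      # -n keeps supervisord in the foreground so the fatal error reaches the terminal
      supervisord -n -c /home/jason/src/tsched/supervisord.conf

      # the log written next to the config (logfile=./supervisord.log) carries the same detail
      tail -n 50 ./supervisord.log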

    Read the article

  • Windows 7 extremely slow login, Exchange performance, printer enumeration, etc...

    - by Jeff
    Background: I have a fresh copy of Windows 7 Professional x64 on a Dell Latitude E6500. The laptop has 8GB RAM, 250GB drive, and all Intel peripherals (net/wifi/graphics). All available Windows updates, as well as hardware drivers are installed. The IT folks where I work joined the computer to our Windows 2003-based Active Directory domain. There are no errors in any logs that we've looked at, and Group Policy templates appear to have applied properly. Problem: Every time I turn on or reboot the computer, it takes between 2 to 10 (all times are actual) minutes after successfully typing my username/password to get to my desktop. My login script does not always run. Sometimes I get a black screen, and a couple of minutes later the login script will pop up and take up to 10 minutes to complete. I can get around this by hitting cntrl-shift-esc and running explorer.exe from the Task Manager. The login script continues to hang, but I can minimize it and go on about my business. Either way, it generally throws errors prior to completing. I often get slow or failed connectivity to Exchange via Outlook. When I bring up printer dialogs, they take several minutes to populate, and block the calling app while doing so. Copies to SMB shares are very slow. On my home network, everything works fine. On both the work network and home network, I can use remote internet resources just fine. Web pages pull up, remote VPN's are fine, I can max out bandwidth on SpeakEasy Speed Test. I can get almost max bandwidth transferring FTP/HTTP over a LAN. Another symptom of the problem is that when I first log in, the work network shows as "Identifying" for a long time in the Network and Sharing Center, and will often then change to the name of the work domain, but say "Unauthenticated Network". Note that this computer previously ran Windows Vista with none of these problems. Attempts to Fix: Installed the Win7 admin pack Uninstalled/reinstalled all hardware drivers Verified Active Directory DNS settings (Vista works relatively well on the same network) Reset all TCP/IP settings on all adapters using the netsh commands to do so Disabled ipv6 on all adapters Disable wifi adapter while on work network Locked the network card to 100/Full, 1000/Full; also tried Auto Added various important addresses to hosts file (exchange, dns, ad) -- removed when didn't help My background is a jpeg (sounds unrelated but there is apparently a win7 login bug related to solid color background) More I have forgotten The IT staff at my company indicated they believe this is due to having Windows 2003 AD servers and not having any Windows 2008 R2 AD servers. Other than that, they have no advice or assistance to offer other than a rebuild (already tried that once with similar symptoms), or downgrade to Vista. Any thoughts out there?

    Read the article

  • Automating video generation by adding an intro and a trailing video to the main video

    - by DevDewboy
    I have a video project I am trying to compile. Here is the overview: I have many videos which are 5 minute training sessions - Main video. The Intro Video will be a standard 5 second video that will have the Video title and Author. This will be concatenated to the main video. The Trailing Video will pretty much be a stock video that will be concatenated to the main video and have all the legaleze etc. The Intro Vid will smoothly fade into the main vid as well as when you get to end of the main video it will fade into the Trailing video nicely. The product is a new video with a Intro, Main & Trailer video all in one! The concept is really that simple. In fact I found an example of a person who has solved this and is doing exactly what I want. This solution is a Bash script that takes a config file that has the title, author, etc. and generates the Intro, the Ending and creates the resulting video with them concatenated. I am using Ubuntu 12.04 Server. I have been trying to take this as a sample and just running it with no luck because of incompatibility errors. I even attempted to convert it using .MP4 containers or .MKV. I am running into error after error or incompatibility issues. I went as far as changing out the ffmpeg binary using the 25 Oct 2013 version from http://ffmpeg.gusari.org/static/64bit/ which I like as I don't have to worry about rebuilding the binary. Almost successful but again I have some error which I cannot solve. I know part of the problem is the fact that video production, codecs, formats is a completely new field for me so I am attempting to work through this new territory. Perhaps an expert here has something similar that I can use as a guideline that uses MP4 or h.264 format. Or take the solution above from the URL and make it work with a more up-to-date version of ffmpeg. I will include the script and its parameter file and the output (abbreviated because of limitation) below. Basically as the script stands right now, when run I get the error [matroska,webm @ 0x27bbee0] Read error. This error is return from the 'reasembleVideo' routine from the first ffmpeg command. The following is the Parameter File: #!/bin/bash INPUTFILE="ssh_main.mp4" LOGO="logo.png" LOGOLENGTH="1" SPEAKER="Jason" TITLE="Basic SSH Video" DATE="October 28, 2013" SCENESTART="00:00:01" SCENEDURATION="00:00:09" OUTPUTFILE="ssh_basic_1" } The following is the script I am running. The ${OUTPUTFILE} being used is a small 2 minute video I create in screen-o-matic in MP4 format. Script on PasteBin (too long for Super User post)
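
    For what it's worth, with a 2013-era static ffmpeg the concat demuxer is usually the least painful way to join the three parts, provided the intro, main and trailer have already been encoded with the same codec, resolution and frame rate; the crossfades would still need filter work on top of this. A sketch with assumed file names:

      # playback order for the concat demuxer
      printf "file '%s'\n" intro.mp4 main.mp4 trailer.mp4 > parts.txt

      # stream-copy the parts into one MP4 (no re-encode, so no generation loss)
      ffmpeg -f concat -i parts.txt -c copy lesson_full.mp4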

    Read the article

  • Disk is spinning down each minute, unable to disable it

    - by lzap
    I played with spindown and APM settings of my Samsung discs and now they spin down every minute. I want to disable it, but it seems it does not accept any of the spindown time or APM values. Nothing works, it's all the same. Please help what values should be proper for it. I do not want it to spin down at all. /dev/sda: ATA device, with non-removable media Model Number: SAMSUNG HD154UI Serial Number: S1Y6J1KZ206527 Firmware Revision: 1AG01118 Standards: Used: ATA-8-ACS revision 3b Supported: 7 6 5 4 Configuration: Logical max current cylinders 16383 16383 heads 16 16 sectors/track 63 63 -- CHS current addressable sectors: 16514064 LBA user addressable sectors: 268435455 LBA48 user addressable sectors: 2930277168 Logical/Physical Sector size: 512 bytes device size with M = 1024*1024: 1430799 MBytes device size with M = 1000*1000: 1500301 MBytes (1500 GB) cache/buffer size = unknown Capabilities: LBA, IORDY(can be disabled) Queue depth: 32 Standby timer values: spec'd by Standard, no device specific minimum R/W multiple sector transfer: Max = 16 Current = 16 Advanced power management level: 60 Recommended acoustic management value: 254, current value: 0 DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6 udma7 Cycle time: min=120ns recommended=120ns PIO: pio0 pio1 pio2 pio3 pio4 Cycle time: no flow control=120ns IORDY flow control=120ns Commands/features: Enabled Supported: * SMART feature set Security Mode feature set * Power Management feature set * Write cache * Look-ahead * Host Protected Area feature set * WRITE_BUFFER command * READ_BUFFER command * NOP cmd * DOWNLOAD_MICROCODE * Advanced Power Management feature set Power-Up In Standby feature set * SET_FEATURES required to spinup after power up SET_MAX security extension Automatic Acoustic Management feature set * 48-bit Address feature set * Device Configuration Overlay feature set * Mandatory FLUSH_CACHE * FLUSH_CACHE_EXT * SMART error logging * SMART self-test Media Card Pass-Through * General Purpose Logging feature set * 64-bit World wide name * WRITE_UNCORRECTABLE_EXT command * {READ,WRITE}_DMA_EXT_GPL commands * Segmented DOWNLOAD_MICROCODE * Gen1 signaling speed (1.5Gb/s) * Gen2 signaling speed (3.0Gb/s) * Native Command Queueing (NCQ) * Host-initiated interface power management * Phy event counters * NCQ priority information DMA Setup Auto-Activate optimization Device-initiated interface power management * Software settings preservation * SMART Command Transport (SCT) feature set * SCT Long Sector Access (AC1) * SCT LBA Segment Access (AC2) * SCT Error Recovery Control (AC3) * SCT Features Control (AC4) * SCT Data Tables (AC5) Security: Master password revision code = 65534 supported not enabled not locked frozen not expired: security count supported: enhanced erase 326min for SECURITY ERASE UNIT. 326min for ENHANCED SECURITY ERASE UNIT. Logical Unit WWN Device Identifier: 50024e900300cca3 NAA : 5 IEEE OUI : 0024e9 Unique ID : 00300cca3 Checksum: correct I have the very same disc which I did not "tuned" and it does not spin. But I do not know where to read the settings from. The hdparm only shows this: Advanced power management level: 60 Recommended acoustic management value: 254, current value: 0 Edit: It seems the issue was tuned daemon in RHEL6. It was too aggressive, I turned off disc tuning and it seems they are no longer spinning down.
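
    With the tuned daemon out of the picture, the drive's own APM and standby timer can also be pinned explicitly so nothing asks it to spin down. A sketch for /dev/sda: 254 is the highest APM level that never spins down (255 turns APM off where supported), and none of this persists across reboots unless it also goes into /etc/hdparm.conf or a udev rule:

      hdparm -B 254 /dev/sda    # APM: maximum performance, no spin-down
      hdparm -S 0 /dev/sda      # standby (spindown) timeout: 0 = disabled

      # confirm what the drive now reports
      hdparm -I /dev/sda | grep -i -A1 'power management'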

    Read the article

  • How to set up a mini call center?

    - by Ralph
    I'm trying to figure out how to set up a miniature call center for a small business. Like, for 1-4 people to take calls, but hopefully expandable to more. We want to accept nation-wide calls, and then I guess distribute the calls among the available agents. If no one is available, I guess it should either put them in a queue and play some annoying music for them, or forward to call to an agent who has least-recently taken a call who can then quickly answer and say "please hold" until they're done their call. We want to have one phone number that customers can call. I guess we then need some kind of ACD system which would take each call and forward it to an agent based on some algorithm? Then we would need to purchase a separate phone line for each agent, plus one just for the distributor? Or do we need several "extra" lines to maintain a queue (one for each customer waiting too)? This "ACD" thing, is it just a device that you would plug a phone line into (or several?), and maybe connect to a computer, aided by some software? Or is a subscription thing that I would need from my local telephone provider? Next, the business we're running, the callers will be repeat customers. It would be helpful to automatically pull up their profile based on the incoming number. The "software" our agents will be running will just be a website where they can log in (preferably from home) and then enter some information they would obtain through the call. So, the system would have to somehow interface with the website if possible. If not, we'll just have to ask each customer for an identifier (phone number, username, customer number, or something). Is this possible? I guess each computer would need a device that the call would pass through, and then if I can somehow hook into that, then I can write some software that will interface with the site. So, where do I start? What hardware do we need to buy? What subscriptions do we need? We were thinking this magicJack might help us in accepting long-distance calls for cheaper, but my understanding is that they provide you with some weird-looking number, is there a way we could "mask" it with our toll-free number? And then pass the incoming calls through the distributor system, which would then get passed to the call-accepting device which would both allow an agent to answer the call and have a software hook? (I realize this might be partially out of the scope of SU, but I wasn't sure where else to ask. It is about computer(-aided) hardware and software anyway.) P.S.: I don't need any of that "press 1 to talk to..." or "say xyz to..." junk. Just a straight-forward, connect-to-next-available-agent system.

    Read the article

  • Touch gestures in IE not working without explorer.exe being run once

    - by Michael
    Edit: Rephrasing my question: Upon further troubleshooting, I can conclude that: Touch gestures (dragging, pinch to zoom, touch-and-hold right click) in Internet Explorer start to work when: The system has been running for ~2 minutes. This coincides with the delayed start of services. Explorer.exe is being run, then killed. I assume Explorer.exe starts some services? The services with delayed start are as follows: Security Center Software Protection Windows Defender, Search and Update Windows Font Cache Service Microsoft .NET Framework NGEN v4.0.30319_X64 and X86 I see no connection between these services and touch gestures, but just in case, I manually tried starting these services, but without luck. What else happens delayed after system boot, which also happens when explorer is started? Old question: Details: Internet Explorer 9 and Windows 7 Professional, running on a HP TouchSmart (touch screen PC). It is going to be a kiosk PC (running a custom GUI for displaying websites). Scenario 1: When running Internet Explorer as a normal program in Windows 7, touch functions work perfectly. I can scroll the website by dragging it with my finger, I can pinch zoom and I can touch-and-hold right click. I now change the default shell in Windows to Internet Explorer (ie. IE starts instead of explorer.exe). Internet Explorer of course starts up when logging in. However, touch functions are reduced to basic clicking (no dragging, no pinch zooming, no touch-and-hold right click). Then I manually start explorer.exe, and the touch functions work again! And here is the weird part: When I kill explorer.exe, the touch functions keeps working - even if I close IE and start a new instance. Scenario 2: The exact same, but instead of changing the default shell to Internet Explorer, I change it to my own program, which uses an embedded Internet Explorer ("WebBrowser"). Same thing happens. What I've tried: Autorun programs: When explorer.exe launches, it launches all the autorun programs. There are no relevant programs being run by explorer, but just in case, I have manually started all the autorun programs, so that it is identical (but without explorer.exe) to a normal login. It still does not work (until I launch explorer.exe). Specifically TabTip.exe, TabTip32.exe and wisptis.exe are all running. All services are also started. To sum it up Running explorer.exe once changes something in the touch capabilities of Internet Explorer. It doesn't matter if explorer.exe is running - as long as it has been run once. Does anyone know what causes this behavior? Or how I can circumvent it neatly?

    Read the article

  • postfix cannot get my domain name?

    - by Kossel
    Hi, I'm trying to set up webmin+postfix+dovecot+roundcube; for the moment I want things to be as simple as possible, so I'm using Linux users as email accounts. I can send/receive within the same domain, i.e. [email protected] can send/receive to/from [email protected]. I tested SMTP/IMAP with Outlook and it reports no problem. If I send a mail from Gmail it is rejected with this error: Technical details of temporary failure: The recipient server did not accept our requests to connect. When I log in with Roundcube, the email address displayed in the right corner is something like user1@com and I get this error message in the logs: [11-Nov-2012 07:39:03 +0400]: IMAP Error: Login failed for user1 from 187.150.xx.xx. Could not connect to com:143: php_network_getaddresses: getaddrinfo failed: Name or service not known in /var/www/webmail/program/include/rcube_imap.php on line 191 (POST /webmail/?_task=login&_action=login) It says Could not connect to com:143, so it looks like it cannot read the domain name. I used http://mxtoolbox.com/ to check the MX record and it says it can find the server mail.mydomain.com. I'm quite sure the problem is in postfix or my server configs, but I have looked through every config file and cannot find the cause. Any suggestion is appreciated. Here are some of my configs (I don't want to make this question too long; I can provide any other information needed to solve this): postfix main.cf #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtpd_sasl_security_options = noanonymous smtpd_sasl_auth_enable = yes smtpd_sasl_type = dovecot smtpd_sasl_path = private/auth # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. mydomain = mydomain.com myhostname = mail.mydomain.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases mydestination = $mydomain, $myhostname mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 recipient_delimiter = + virtual_alias_domains = mydomain.com smtpd_recipient_restrictions = permit_mynetworks reject_unauth_destination permit_sasl_authenticated myorigin = $mydomain roundcube conf // ---------------------------------- // IMAP // ---------------------------------- $rcmail_config['default_host'] = '%d'; $rcmail_config['default_port'] = 143; $rcmail_config['imap_auth_type'] = null; $rcmail_config['imap_delimiter'] = null; $rcmail_config['imap_ns_personal'] = null; $rcmail_config['imap_ns_other'] = null; $rcmail_config['imap_ns_shared'] = null; $rcmail_config['imap_force_caps'] = false; $rcmail_config['imap_force_lsub'] = false; $rcmail_config['imap_force_ns'] = false; $rcmail_config['imap_timeout'] = 0; $rcmail_config['imap_auth_cid'] = null; $rcmail_config['imap_auth_pw'] = null; $rcmail_config['imap_cache'] = null; $rcmail_config['messages_cache'] = false;
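
    If I read the Roundcube placeholders correctly, default_host = '%d' is derived from the HTTP host name with its first label stripped, so browsing to the webmail on mydomain.com leaves it trying to reach an IMAP host literally called "com", which matches the log line; pointing default_host at 'localhost' (or mail.mydomain.com) is the usual fix. The Gmail bounce is a separate issue: it means Google never managed to connect to port 25. Two quick checks:

      # IMAP itself answers when given a real host instead of the mangled "com"
      nc -vz localhost 143

      # what Gmail sees: the MX target, and whether port 25 is reachable from outside the box
      dig +short mx mydomain.com
      nc -vz mail.mydomain.com 25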

    Read the article

  • Accidentally Removed Permissions for the XP SP3 Registry key HKEY_CLASSES_ROOT! Workaround Please!

    - by Praveen Kumar
    This is Praveen and I am using Microsoft Windows XP SP3 Build 2600. I had problems using Microsoft Office 2010: it kept saying, "Please wait while Windows configures Microsoft Office Professional Plus 2010". After seeing this link: http://social.answers.microsoft.com/Forums/en-US/outlookcontact/thread/6e9c3f2e-010a-4b74-b433-0c41548ee468?prof=required I decided to give the registry key HKEY_CLASSES_ROOT full access permissions for Everyone. I went into the registry, right-clicked on HKEY_CLASSES_ROOT and clicked on Permissions. I added Everyone, gave it Full Control in the ACL and clicked Apply. I also checked "Replace permission entries on all child objects with entries shown here that apply to child objects." After a long time it failed with an error saying it could not replace the permissions on a few entries. Now the key HKEY_CLASSES_ROOT is not accessible to any user, and my system no longer starts up. Some of my friends suggested trying the Last Known Good Configuration; that did not work either. When I tried to boot into Safe Mode, only one driver loaded, and even that did not load properly. Another friend suggested putting in the setup disk and reinstalling the OS. I tried that, and after the "Installing Device Drivers" part completes, the system restarts as soon as "Installing Network" starts. Setup is now stuck halfway through, and if I boot into Safe Mode to try System Restore, it pops up a message saying, "Setup cannot run under Safe Mode. Setup will restart now." and the system restarts. All my official files and my software, which I developed for Registry Security, reside on this system. I am unable to access the system and I need it working to submit the project; the deadline is this week. I have not found a better solution elsewhere. Can anyone please help me out with this issue? If it is possible to open the registry hive file from another system and restore the access permissions, I hope that would solve the problem. Thanking you, Praveen.

    Read the article

  • Why is Excel 2010/2013 taking 10 seconds to open any file?

    - by jbkly
    I have a fast Windows 7 PC with two SSDs and 16GB of RAM, so I'm used to programs loading very fast. But recently, for no reason I can figure out, Excel has started taking way too long to open Excel files (of any size--even blank files). This is occurring with Excel 2010 and with Excel 2013 after I upgraded, hoping to solve the problem. Here a couple scenarios: If I start Excel directly, it opens almost instantly. No problem there. If I start Excel directly, and then open any Excel file (.xls or .xlsx), it loads almost instantly. Still no problem BUT if I attempt to open any Excel file directly, with Excel not running, it consistently takes 10-11 seconds for Excel to start. I get no error messages, just a spinning cursor for 10-11 seconds, and then the file opens. During the delay while Excel is trying to start, I'm not really seeing any discernible spike in CPU or memory usage, other than explorer.exe. This problem is only occurring with Excel, not Word or any other program I'm aware of. I've searched around quite a bit on this question and found various others who have experienced it, but the solutions that worked for them are not working for me. For a few people it was a problem with scanning network drives, but my problem is purely with local files; I have no network drives, and the problem persists even with all network connections disabled. Some people suggested worksheets with corrupted formulas or links, but I'm experiencing this with ANY Excel file: even blank worksheets. Others thought it was a problem with add-ins, but I have all Excel add-ins disabled (as far as I can tell). One person solved it by disabling a "clipboard manager" process that was running in the background, but I don't have that. I've disabled as many startup and background processes as I can, but the problem persists. I've run malware scans, disk cleanup, CCleaner, and installed Excel 2013. I've deleted temporary files, enabled SuperFetch, and edited registry keys. Still can't get rid of the problem. Any ideas? My system details: Windows 7 Professional SP1 64-bit, Excel 2013 32-bit, 16GB RAM, all programs installed on SSD.

    Read the article

  • Monitoring slow nginx/unicorn requests

    - by injekt
    I'm currently using Nginx to proxy requests to a Unicorn server running a Sinatra application. The application only has a couple of routes defined, those of which make fairly simple (non costly) queries to a PostgreSQL database, and finally return data in JSON format, these services are being monitored by God. I'm currently experiencing extremely slow response times from this application server. I have another two Unicorn servers being proxied via Nginx, and these are responding perfectly fine, so I think I can rule out any wrong doing from Nginx. Here is my God configuration: # God configuration APP_ROOT = File.expand_path '../', File.dirname(__FILE__) God.watch do |w| w.name = "app_name" w.interval = 30.seconds # default w.start = "cd #{APP_ROOT} && unicorn -c #{APP_ROOT}/config/unicorn.rb -D" # -QUIT = graceful shutdown, waits for workers to finish their current request before finishing w.stop = "kill -QUIT `cat #{APP_ROOT}/tmp/unicorn.pid`" w.restart = "kill -USR2 `cat #{APP_ROOT}/tmp/unicorn.pid`" w.start_grace = 10.seconds w.restart_grace = 10.seconds w.pid_file = "#{APP_ROOT}/tmp/unicorn.pid" # User under which to run the process w.uid = 'web' w.gid = 'web' # Cleanup the pid file (this is needed for processes running as a daemon) w.behavior(:clean_pid_file) # Conditions under which to start the process w.start_if do |start| start.condition(:process_running) do |c| c.interval = 5.seconds c.running = false end end # Conditions under which to restart the process w.restart_if do |restart| restart.condition(:memory_usage) do |c| c.above = 150.megabytes c.times = [3, 5] # 3 out of 5 intervals end restart.condition(:cpu_usage) do |c| c.above = 50.percent c.times = 5 end end w.lifecycle do |on| on.condition(:flapping) do |c| c.to_state = [:start, :restart] c.times = 5 c.within = 5.minute c.transition = :unmonitored c.retry_in = 10.minutes c.retry_times = 5 c.retry_within = 2.hours end end end Here is my Unicorn configuration: # Unicorn configuration file APP_ROOT = File.expand_path '../', File.dirname(__FILE__) worker_processes 8 preload_app true pid "#{APP_ROOT}/tmp/unicorn.pid" listen 8001 stderr_path "#{APP_ROOT}/log/unicorn.stderr.log" stdout_path "#{APP_ROOT}/log/unicorn.stdout.log" before_fork do |server, worker| old_pid = "#{APP_ROOT}/tmp/unicorn.pid.oldbin" if File.exists?(old_pid) && server.pid != old_pid begin Process.kill("QUIT", File.read(old_pid).to_i) rescue Errno::ENOENT, Errno::ESRCH # someone else did our job for us end end end I have checked God status logs but it appears CPU and Memory Usage are never out of bounds. I also have something to kill high memory workers, which can be found on the GitHub blog page here. When running a tail -f on the Unicorn logs I see some requests, but they're far and few between, when I was at around 60-100 a second before this trouble seemed to have arrived. This log also shows workers being reaped and started as expected. So my question is, how would I go about debugging this? What are the next steps I should be taking? I'm extremely baffled that the server will sometimes respond quickly, but at others time it's very slow, for long periods of time (which may or may not be peak traffic times). Any advice is much appreciated.
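
    When God's memory and CPU conditions never trip, it can help to look at what a slow worker is actually blocked on at the OS level. A rough sketch (the PID placeholders are whatever ps reports on your box):

      # find the unicorn master and its 8 workers
      ps aux | grep '[u]nicorn'

      # attach to one worker and time every system call; long gaps usually point at the
      # PostgreSQL socket, DNS lookups, or some other blocking outbound call
      strace -f -tt -T -p <worker_pid>

      # what that worker holds open right now (sockets, files, the DB connection)
      lsof -n -p <worker_pid>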

    Read the article

  • openstack, bridging, netfilter and dnat

    - by Craig Sanders
    In a recent upgrade (from Openstack Diablo on Ubuntu Lucid to Openstack Essex on Ubuntu Precise), we found that DNS packets were frequently (almost always) dropped on the bridge interface (br100). For our compute-node hosts, that's a Mellanox MT26428 using the mlx4_en driver module. We've found two workarounds for this: Use an old lucid kernel (e.g. 2.6.32-41-generic). This causes other problems, in particular the lack of cgroups and the old version of the kvm and kvm_amd modules (we suspect the kvm module version is the source of a bug we're seeing where occasionally a VM will use 100% CPU). We've been running with this for the last few months, but can't stay here forever. With the newer Ubuntu Precise kernels (3.2.x), we've found that if we use sysctl to disable netfilter on bridge (see sysctl settings below) that DNS started working perfectly again. We thought this was the solution to our problem until we realised that turning off netfilter on the bridge interface will, of course, mean that the DNAT rule to redirect VM requests for the nova-api-metadata server (i.e. redirect packets destined for 169.254.169.254:80 to compute-node's-IP:8775) will be completely bypassed. Long-story short: with 3.x kernels, we can have reliable networking and broken metadata service or we can have broken networking and a metadata service that would work fine if there were any VMs to service. We haven't yet found a way to have both. Anyone seen this problem or anything like it before? got a fix? or a pointer in the right direction? Our suspicion is that it's specific to the Mellanox driver, but we're not sure of that (we've tried several different versions of the mlx4_en driver, starting with the version built-in to the 3.2.x kernels all the way up to the latest 1.5.8.3 driver from the mellanox web site. The mlx4_en driver in the 3.5.x kernel from Quantal doesn't work at all) BTW, our compute nodes have supermicro H8DGT motherboards with built-in mellanox NIC: 02:00.0 InfiniBand: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE] (rev b0) we're not using the other two NICs in the system, only the Mellanox and the IPMI card are connected. Bridge netfilter sysctl settings: net.bridge.bridge-nf-call-arptables = 0 net.bridge.bridge-nf-call-iptables = 0 net.bridge.bridge-nf-call-ip6tables = 0 Since discovering this bridge-nf sysctl workaround, we've found a few pages on the net recommending exactly this (including Openstack's latest network troubleshooting page and a launchpad bug report that linked to this blog-post that has a great description of the problem and the solution)....it's easier to find stuff when you know what to search for :), but we haven't found anything on the DNAT issue that it causes.
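
    A couple of hedged suggestions rather than a fix: persist the bridge-nf settings so a reboot doesn't silently undo the workaround, and watch the packet counters on the metadata DNAT rule to confirm whether it really stops matching once bridge-nf is off:

      # persist the workaround
      printf '%s\n' \
        'net.bridge.bridge-nf-call-arptables = 0' \
        'net.bridge.bridge-nf-call-iptables = 0' \
        'net.bridge.bridge-nf-call-ip6tables = 0' >> /etc/sysctl.conf
      sysctl -p

      # if these counters stay at zero while a VM polls 169.254.169.254,
      # the metadata redirect is indeed being bypassed
      iptables -t nat -L PREROUTING -v -n | grep 169.254.169.254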

    Read the article

  • VMWare Network bug in multiple VMWare Workstation versions if using a hardcoded IP address

    - by onyxruby
    I'm having a very tricky problem with some of my VM sessions being unable to reach the Internet or even ping the gateway. I have just set up a new VMware Workstation (7) installation on a W2K8 64-bit server (I'll be converting to ESXi 4 once I can find a decent book on it, so in the meantime I use Workstation). I have imported a number of VMs and set up some new ones on the server. In short, the problem with some of the VMs being unable to reach the Internet is that they can't reach the gateway. I've looked at a number of things and can pretty safely rule out the following: switch, router, DHCP server, DNS, client IP configuration, routes and typos. The problem is that some of the new clients cannot reach the gateway if their IP address is hardcoded; they can't even ping it by IP address. Now, if I allow them to get their IP address by DHCP they can reach the gateway and Internet without issue. The interesting thing is that this behavior occurs even if I leave the DNS information hardcoded under TCP/IP settings. It doesn't work unless the IP and gateway are handed out by DHCP, even though the same IP information is being used by the host. Fundamentally, from the standpoint of the clients, they are trying to reach the exact same gateway using the exact same IP information regardless of whether they are hardcoded or assigned by DHCP. Here's an example of one client: IP Address 192.168.7.66 - Subnet Mask 255.255.255.0 - Gateway 192.168.7.254 - DNS1 192.168.7.44 - DNS2 192.168.7.254. The issue occurs across six different Microsoft operating systems; Windows 7 and Windows 2008 variants all have the issue. My W2K3, XP, Vista and W98 clients all work without issue with hardcoded IP addresses. I have tried things like rearranging the DNS order, flushing DNS and so on. It's not a routing or switch issue, as the clients work just fine if they get their IP by DHCP. It's not a parameter issue, as the exact same parameters are handed out by DHCP as I enter by hand. It's not a DNS issue, as clients can't reach other clients even using IP addresses only. I have run a tracert to the gateway by IP address and it times out on the very first hop before failing on hop 3 with destination host unreachable. If I get the IP address by DHCP, the tracert finds the gateway (and Internet) without issue. I have read a few other posts in forums describing this problem randomly occurring over the years in other VM versions as well, so I suspect some kind of long-standing bug. Does anyone have any ideas on this? Is it possibly a bug with Windows 7 and W2K clients under VM?

    Read the article

  • vagrant up command very slow on OS X Lion

    - by Andy Hume
    When I run vagrant up to provision a new VM on Lion it takes an extremely long time, during which the entire Mac is very laggy and unresponsive. The output is as follows, the key point being the "notice: Finished catalog run in 754.28 seconds" > vagrant up [default] Importing base box 'lucid64'... [default] The guest additions on this VM do not match the install version of VirtualBox! This may cause things such as forwarded ports, shared folders, and more to not work properly. If any of those things fail on this machine, please update the guest additions and repackage the box. Guest Additions Version: 4.1.0 VirtualBox Version: 4.1.6 [default] Matching MAC address for NAT networking... [default] Clearing any previously set forwarded ports... [default] Forwarding ports... [default] -- ssh: 22 => 2222 (adapter 1) [default] -- web: 80 => 4567 (adapter 1) [default] Creating shared folders metadata... [default] Running any VM customizations... [default] Booting VM... [default] Waiting for VM to boot. This can take a few minutes. [default] VM booted and ready for use! [default] Mounting shared folders... [default] -- v-root: /vagrant [default] -- v-data: /var/www [default] -- manifests: /tmp/vagrant-puppet/manifests [default] Running provisioner: Vagrant::Provisioners::Puppet... [default] Running Puppet with lucid64.pp... [default] stdin: is not a tty [default] notice: /Stage[main]/Lucid64/Exec[apt-update]/returns: executed successfully [default] [default] notice: /Stage[main]/Lucid64/Package[apache2]/ensure: ensure changed 'purged' to 'present' [default] [default] notice: /Stage[main]/Lucid64/File[/etc/motd]/ensure: defined content as '{md5}a25e31ba9b8489da9cd5751c447a1741' [default] [default] notice: Finished catalog run in 754.28 seconds [default] [default] err: /File[/var/lib/puppet/rrd]/ensure: change from absent to directory failed: Could not find group puppet [default] [default] err: Could not send report: Got 1 failure(s) while initializing: change from absent to directory failed: Could not find group puppet [default] [default] Running provisioner: Vagrant::Provisioners::Puppet... [default] Running Puppet with lucid64.pp... [default] stdin: is not a tty [default] notice: /Stage[main]/Lucid64/Exec[apt-update]/returns: executed successfully [default] [default] notice: Finished catalog run in 2.05 seconds [default] [default] err: /File[/var/lib/puppet/rrd]: Could not evaluate: Could not find group puppet [default] [default] err: Could not send report: Got 1 failure(s) while initializing: Could not evaluate: Could not find group puppet [default] [default] Running provisioner: Vagrant::Provisioners::Puppet... [default] Running Puppet with lucid64.pp... [default] stdin: is not a tty [default] notice: /Stage[main]/Lucid64/Exec[apt-update]/returns: executed successfully [default] [default] notice: Finished catalog run in 1.36 seconds [default] [default] err: /File[/var/lib/puppet/rrd]: Could not evaluate: Could not find group puppet [default] [default] err: Could not send report: Got 1 failure(s) while initializing: Could not evaluate: Could not find group puppet [default] >
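
    Two of the symptoms can be chased separately; a hedged sketch, with the manifest path taken from the mount shown in the output above. The trailing failures are the guest simply missing a puppet group, and the 754 seconds can be broken down by re-running the manifest with per-resource timing, which usually points at the apt-get update exec and the apache2 install on a cold cache:

      # create the group the rrd/report steps keep complaining about
      vagrant ssh -c 'sudo groupadd puppet'

      # re-run the manifest with per-resource timing to see where the 754 seconds went
      vagrant ssh -c 'sudo puppet apply --evaltrace /tmp/vagrant-puppet/manifests/lucid64.pp'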

    Read the article

  • uploading via http post (multipart/form-data) silently fails with big files

    - by matteo
    When uploading multipart/form-data forms via an HTTP POST request to my Apache web server, very big files (e.g. 30 MB) are silently discarded. On the server side it looks as if the attached file was received with a size of 0 bytes. On the client side it looks like the upload succeeded (it takes the expected long time to upload and the browser gives no error message). On the server, nothing is logged to the error log. An entry is logged to the access log as if everything was fine (a POST request and a 200 OK response). These uploads are being posted to a PHP script. In the PHP script, if I print_r $_FILES, I see the following information for the relevant file: [file5] => Array ( [name] => MOV023.3gp [type] => video/3gpp [tmp_name] => /tmp/phpgOdvYQ [error] => 0 [size] => 0 ) Note both [error] = 0 (which should mean no error) and [size] = 0 (as if the file was empty). My PHP script runs fine and receives all the rest of the data except these files. move_uploaded_file succeeds on these files and actually copies them as 0-byte files. I've already changed the PHP directives max_upload_size to 50M and post_max_size to 200M, so neither the single file nor the request exceeds any size limit. max_execution_time is not relevant, because the time to transfer the data does not count; and I've increased max_input_time to 1000 seconds, though this shouldn't be necessary since that is the time taken to parse the input data, not the time taken to upload it. Is there any Apache configuration, prior to PHP, that could be causing these files to be discarded even before PHP executes? Some limit in size or in upload time? I've read about a default 300-second timeout limit, but that should apply to the time the connection is idle, not the time spent actually transferring data, right? Needless to say, uploads under exactly identical conditions (including file format, client and everything) except a smaller file size work seamlessly, so the issue is clearly related to the file or request size, or to the time it takes to send it.
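
    A file that merely exceeds upload_max_filesize normally comes back with [error] => 1 rather than error 0 and size 0, so this does look like the body being truncated before PHP ever sees it. Two quick checks; the directive names are the standard ones (upload_max_filesize, post_max_size, max_input_time), and /etc/apache2 is the Debian-style layout, so adjust the path for other distros:

      # what the PHP CLI has in effect (mod_php may read a different php.ini, so cross-check phpinfo() too)
      php -r 'foreach (array("upload_max_filesize","post_max_size","max_input_time") as $k) echo "$k = ", ini_get($k), PHP_EOL;'

      # any Apache-side body cap or aggressive timeout that could cut the transfer short
      grep -RinE 'LimitRequestBody|Timeout' /etc/apache2/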

    Read the article

  • gmail dkim=neutral (no signature)

    - by Bretticus
    After testing much and retracing my steps, I still cannot get google mail to validate. My mail server is Debian 5.0 with exim Exim version 4.72 #1 built 31-Jul-2010 08:12:17 Copyright (c) University of Cambridge, 1995 - 2007 Berkeley DB: Berkeley DB 4.8.24: (August 14, 2009) Support for: crypteq iconv() IPv6 PAM Perl Expand_dlfunc GnuTLS move_frozen_messages Content_Scanning DKIM Old_Demime Lookups: lsearch wildlsearch nwildlsearch iplsearch cdb dbm dbmnz dnsdb dsearch ldap ldapdn ldapm mysql nis nis0 passwd pgsql sqlite Authenticators: cram_md5 cyrus_sasl dovecot plaintext spa Routers: accept dnslookup ipliteral iplookup manualroute queryprogram redirect Transports: appendfile/maildir/mailstore/mbx autoreply lmtp pipe smtp Fixed never_users: 0 Size of off_t: 8 GnuTLS compile-time version: 2.4.2 GnuTLS runtime version: 2.4.2 Configuration file is /var/lib/exim4/config.autogenerated My remote smtp transport configuration: remote_smtp: debug_print = "T: remote_smtp for $local_part@$domain" driver = smtp helo_data = mailer.mydomain.com dkim_domain = mydomain.com dkim_selector = mailer dkim_private_key = /etc/exim4/dkim/mailer.mydomain.com.key dkim_canon = relaxed .ifdef REMOTE_SMTP_HOSTS_AVOID_TLS hosts_avoid_tls = REMOTE_SMTP_HOSTS_AVOID_TLS .endif .ifdef REMOTE_SMTP_HEADERS_REWRITE headers_rewrite = REMOTE_SMTP_HEADERS_REWRITE .endif .ifdef REMOTE_SMTP_RETURN_PATH return_path = REMOTE_SMTP_RETURN_PATH .endif .ifdef REMOTE_SMTP_HELO_FROM_DNS helo_data=REMOTE_SMTP_HELO_DATA .endif The path to my private key is correct. I see a DKIM header in my messages as they end up in my gmail account: DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=mydomain.com; s=mailer; h=Content-Type:MIME-Version:Message-ID:Date:Subject:Reply-To:To:From; bh=nKgQAFyGv<snip>tg=; b=m84lyYvX6<snip>RBBqmW52m1ce2g=; However, gmail headers always report dkim=neutral (no signature): dkim=neutral (no signature) [email protected] My DNS results: dig +short txt mailer._domainkey.mydomain.com mailer._domainkey. mydomain.com descriptive text "v=DKIM1\; k=rsa\; t=y\; p=LS0tLS1CRUdJ<snip>M0RRRUJBUVV" "BQTRHTkFEQ0J<snip>GdLamdaaG" "JwaFZkai93b3<snip>laSCtCYmdsYlBrWkdqeVExN3gxN" "mpQTzF6OWJDN3hoY21LNFhaR0NjeENMR0FmOWI4Z<snip>tLQo=" Note that the base64 public key is 364 chars long so I had to break up the key using bind9. $ORIGIN _domainkey. mydomain.com. mailer TXT ("v=DKIM1; k=rsa; t=y; p=LS0tLS1CRUdJTiBQVUJM<snip>U0liM0RRRUJBUVV" "BQTRHTkFEQ0JpUUtCZ1<snip>15MGdLamdaaG" "JwaFZkai93b3lDK21MR<snip>YlBrWkdqeVExN3gxN" "mpQTzF6OWJDN3hoY21L<snip>Ci0tLS0tRU5E" "IFBVQkxJQyBLRVktLS0tLQo=") Can anyone point me in the right direction? I would really appreciate it.
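
    One thing that stands out, and is worth double-checking because it would break verification outright: the p= value above starts with LS0tLS1CRUdJ..., which is "-----BEGI..." base64-encoded again; in other words the whole PEM file, armor lines included, appears to have been published rather than just the key material between the BEGIN/END lines, so verifiers cannot parse the key. A sketch of extracting the right value and of checking what resolvers actually see once the zone is fixed:

      # p= must be only the base64 between the PEM BEGIN/END lines, with no newlines
      openssl rsa -in /etc/exim4/dkim/mailer.mydomain.com.key -pubout -outform PEM \
        | grep -v '^-----' | tr -d '\n'; echo

      # the quoted chunks of the TXT record, reassembled the way a verifier sees them
      dig +short txt mailer._domainkey.mydomain.com | tr -d '" '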

    Read the article

  • ESXi 5.1 ghettoVCB stuck at Clone: 10% done

    - by stormdrain
    Trying to run ghettoVCB for the first time here. I am using a NAS that is set up as a datastore on the host. I did a dry run and it completed without error. The VM is ~500GB and there is only one on the host that I'm trying to backup. I proceeded to start the actual backup: ./ghettoVCB.sh -m vmname -g ghettoVCB.conf It goes though the config and looks like it's taking off: 2013-10-24 11:43:19 -- info: CONFIG - USING GLOBAL GHETTOVCB CONFIGURATION FILE = ghettoVCB.conf 2013-10-24 11:43:19 -- info: CONFIG - VERSION = 2013_01_11_0 2013-10-24 11:43:19 -- info: CONFIG - GHETTOVCB_PID = 17398616 2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/nas2tb-001/esxi4 2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3 2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_DIR_NAMING_CONVENTION = 2013-10-24_11-43-18 2013-10-24 11:43:19 -- info: CONFIG - DISK_BACKUP_FORMAT = thin 2013-10-24 11:43:19 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0 2013-10-24 11:43:19 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0 2013-10-24 11:43:19 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 4 2013-10-24 11:43:19 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5 2013-10-24 11:43:19 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15 2013-10-24 11:43:19 -- info: CONFIG - LOG_LEVEL = info 2013-10-24 11:43:19 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB-2013-10-24_11-43-18-17398616.log 2013-10-24 11:43:19 -- info: CONFIG - ENABLE_COMPRESSION = 0 2013-10-24 11:43:19 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0 2013-10-24 11:43:19 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0 2013-10-24 11:43:19 -- info: CONFIG - ALLOW_VMS_WITH_SNAPSHOTS_TO_BE_BACKEDUP = 0 2013-10-24 11:43:19 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all 2013-10-24 11:43:19 -- info: CONFIG - VM_SHUTDOWN_ORDER = 2013-10-24 11:43:19 -- info: CONFIG - VM_STARTUP_ORDER = 2013-10-24 11:43:19 -- info: CONFIG - EMAIL_LOG = 0 2013-10-24 11:43:19 -- info: 2013-10-24 11:43:22 -- info: Initiate backup for vmname 2013-10-24 11:43:22 -- info: Creating Snapshot "ghettoVCB-snapshot-2013-10-24" for serv2 Destination disk format: VMFS thin-provisioned Cloning disk '/vmfs/volumes/esxi4-storage/vmname/vmname_1.vmdk'... Clone: 10% done. and it's been that way for over an hour now. Stuck at Clone: 10% done.. Thing is: I can see the vmdk on the NAS. And it looks like almost the whole thing is there. On the NAS it's showing ~430GB but on vSphere Client Summary is shows as 507GB. I don't see the vmdk on the NAS growing any more. The logfile mimics some of the above and is sitting at "Creating Snapshot..." and nothing else is coming in. Is the vmdk on the NAS showing all those GB because of the provisioning or something? i.e. is the size of the file not necessarily indicative of the amount of actual data that has been copied? Is there are reason it might be "Stuck" at 10%? i.e. could it really be taking this long? Any other tips? Thanks. Edit: as soon as I hit the Submit button, I glance over to see that it has incremented to 11% done. Good to know it'll be complete sometime around when the sun explodes.
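
    On the "is it actually copying?" question: for a thin-provisioned destination, ls reports the provisioned size while du reports the blocks actually written, so du is the number that should keep creeping up while vmkfstools clones; a 500 GB copy over a busy NAS link genuinely can sit on one percentage step for a long time. A rough watch loop for the ESXi shell (the path is whichever backup directory ghettoVCB created under /vmfs/volumes/nas2tb-001/esxi4):

      while true; do
        echo "--- $(date)"
        ls -lh /vmfs/volumes/nas2tb-001/esxi4/vmname/*/*.vmdk   # provisioned size
        du -h  /vmfs/volumes/nas2tb-001/esxi4/vmname/*/*.vmdk   # blocks actually written
        sleep 60
      done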

    Read the article

  • How can I troubleshoot a "Hardware Malfunction" blue screen?

    - by AaronSieb
    My computer has suddenly started crashing to a blue screen with the following text: hardware malfunction call your hardware vendor for support *the system has halted* The crash occurs randomly during normal use. I have thus far always been able to reproduce it by transferring the contents of a large folder... But I'm not sure if this is caused by the file transfer, or simply because the transfer takes long enough for something else to trigger it. A bit about my hardware I have an dual core Intel CPU, and Asus motherboard. Video card is by nVidia, and connects via PCIe. My hard drives are in pairs, and connect via SATA to a RAID controller on the motherboard. They are configured to use a RAID0 configuration. What I've tried so far There is nothing in the Windows Event Log. WhoCrashed was unable to find any crash records. ScanDisk runs to completion (it launches prior to Windows load) and reports no errors. MemTest reports no errors (to 200% coverage). System temperatures are in the range of 40 to 50 degrees Celsius, with video card temperatures in the range of 60 to eighty degrees Celsius. I have stripped the system down to a minimal configuration (hard drive, video card, one memory module, motherboard, CPU, power supply). The problem still occurrs. However, this has allowed me to rule out a few components: It is not the video card because the problem still occurred after replacing the video card another one I had on hand. It is not the hard drive or anything software related because the problem occurred after a fresh installation of Windows on a replacement hard drive. It is not the hard drive cables because I replaced those with new ones and still had the problem. It is not the power supply because the problem still occurred after replacing the power supply with another one I had on hand. It is probably not the memory because I've tried three different memory modules in three different memory slots and was still able to replicate the issue. Is there anything I can do to confirm what's causing the issue? At the moment it seems as though it must be either the motherboard or CPU, but those are both difficult components to replace... In addition, both components are relatively new (two to three years old). I will gladly edit in any additional information I can get my hands on, and/or focus the question as I can find more details...

    Read the article

  • How to properly add .NET assemblies to Powershell session?

    - by amandion
    I have a .NET assembly (a dll) which is an API to backup software we use here. It contains some properties and methods I would like to take advantage of in my Powershell script(s). However, I am running into a lot of issues with first loading the assembly, then using any of the types once the assembly is loaded. The complete file path is: C:\rnd\CloudBerry.Backup.API.dll In Powershell I use: $dllpath = "C:\rnd\CloudBerry.Backup.API.dll" Add-Type -Path $dllpath I get the error below: Add-Type : Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information. At line:1 char:9 + Add-Type <<<< -Path $dllpath + CategoryInfo : NotSpecified: (:) [Add-Type], ReflectionTypeLoadException + FullyQualifiedErrorId : System.Reflection.ReflectionTypeLoadException,Microsoft.PowerShell.Commands.AddTypeComma ndAdd-Type : Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information. Using the same cmdlet on another .NET assembly, DotNetZip, which has examples of using the same functionality on the site also does not work for me. I eventually find that I am seemingly able to load the assembly using reflection: [System.Reflection.Assembly]::LoadFrom($dllpath) Although I don't understand the difference between the methods Load, LoadFrom, or LoadFile that last method seems to work. However, I still seem to be unable to create instances or use objects. Each time I try, I get errors that describe that Powershell is unable to find any of the public types. I know the classes are there: $asm = [System.Reflection.Assembly]::LoadFrom($dllpath) $cbbtypes = $asm.GetExportedTypes() $cbbtypes | Get-Member -Static ---- start of excerpt ---- TypeName: CloudBerryLab.Backup.API.BackupProvider Name MemberType Definition ---- ---------- ---------- PlanChanged Event System.EventHandler`1[CloudBerryLab.Backup.API.Utils.ChangedEventArgs] PlanChanged(Sy... PlanRemoved Event System.EventHandler`1[CloudBerryLab.Backup.API.Utils.PlanRemoveEventArgs] PlanRemoved... CalculateFolderSize Method static long CalculateFolderSize() Equals Method static bool Equals(System.Object objA, System.Object objB) GetAccounts Method static CloudBerryLab.Backup.API.Account[], CloudBerry.Backup.API, Version=1.0.0.1, Cu... GetBackupPlans Method static CloudBerryLab.Backup.API.BackupPlan[], CloudBerry.Backup.API, Version=1.0.0.1,... ReferenceEquals Method static bool ReferenceEquals(System.Object objA, System.Object objB) SetProfilePath Method static System.Void SetProfilePath(string profilePath) ----end of excerpt---- Trying to use static methods fail, I don't know why!!! [CloudBerryLab.Backup.API.BackupProvider]::GetAccounts() Unable to find type [CloudBerryLab.Backup.API.BackupProvider]: make sure that the assembly containing this type is load ed. At line:1 char:42 + [CloudBerryLab.Backup.API.BackupProvider] <<<< ::GetAccounts() + CategoryInfo : InvalidOperation: (CloudBerryLab.Backup.API.BackupProvider:String) [], RuntimeException + FullyQualifiedErrorId : TypeNotFound Any guidance appreciated!!

    Read the article

  • ActiveMQ Pure Master / Slave - Out of sync

    - by pico
    What I have: one master broker and one slave broker, both running ActiveMQ 5.4.0. What I use: waitForSlave on the master side and a failover URI on the slave side (in the master connector URI). What I want to do: I want to wait for some delay (like 5 seconds) in case of tiny network failures between master and slave before starting the slave transport connectors. So I put this in the slave config: <broker xmlns="http://activemq.apache.org/schema/core" brokerName="slave" dataDirectory="${activemq.base}/data" useJmx="true" persistent="true" populateJMSXUserID="true" masterConnectorURI="failover://(tcp://master:61616)?initialReconnectDelay=1000&amp;maxReconnectDelay=30000" shutdownOnMasterFailure="false" advisorySupport="false"> It seems to work, but after a network hang between master and slave, the slave reconnects successfully and then the master logs a lot of: 2010-10-18 17:08:44,421 | ERROR | Slave Failed | org.apache.activemq.broker.ft.MasterBroker | ActiveMQ Task java.lang.IllegalStateException: Cannot lookup a connection that had not been registered: ID:master-1040-634226732611718750-0:0 at org.apache.activemq.broker.MapTransportConnectionStateRegister.lookupConnectionState(MapTransportConnectionStateRegister.java:93) at org.apache.activemq.broker.TransportConnection.lookupConnectionState(TransportConnection.java:1412) at org.apache.activemq.broker.TransportConnection.processRemoveConsumer(TransportConnection.java:561) at org.apache.activemq.command.RemoveInfo.visit(RemoveInfo.java:76) at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:309) at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:185) at org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116) at org.apache.activemq.transport.TransportFilter.onCommand(TransportFilter.java:69) at org.apache.activemq.transport.vm.VMTransport.iterate(VMTransport.java:218) at org.apache.activemq.thread.DedicatedTaskRunner.runTask(DedicatedTaskRunner.java:98) at org.apache.activemq.thread.DedicatedTaskRunner$1.run(DedicatedTaskRunner.java:36) On the slave side everything is fine. So after that, I tried to stop the master to see if the slave is capable of turning master after these "network hangs".
The master took a long time to shut down (10 seconds) and then some error messages appeared in the slave logs: 2010-10-18 17:09:32,915 | WARN | Async error occurred: java.lang.IllegalStateException: Cannot lookup a connection that had not been registered: ID:master-1049-634226732657812500-0:3 | org.apache.activemq.broker.TransportConnection.Service | VMTransport: vm://slave#5 java.lang.IllegalStateException: Cannot lookup a connection that had not been registered: ID:master-1049-634226732657812500-0:3 at org.apache.activemq.broker.MapTransportConnectionStateRegister.lookupConnectionState(MapTransportConnectionStateRegister.java:93) at org.apache.activemq.broker.TransportConnection.lookupConnectionState(TransportConnection.java:1412) at org.apache.activemq.broker.TransportConnection.processRemoveSession(TransportConnection.java:600) at org.apache.activemq.command.RemoveInfo.visit(RemoveInfo.java:74) at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:309) at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:185) at org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116) at org.apache.activemq.transport.TransportFilter.onCommand(TransportFilter.java:69) at org.apache.activemq.transport.vm.VMTransport.iterate(VMTransport.java:218) at org.apache.activemq.thread.DedicatedTaskRunner.runTask(DedicatedTaskRunner.java:98) at org.apache.activemq.thread.DedicatedTaskRunner$1.run(DedicatedTaskRunner.java:36) Are there any ways to keep my KahaDB stores (they are individual stores) synchronised? The main problem is that the slave never turns master after a master failure; it stays blocked on this message: 2010-10-18 17:09:33,681 | WARN | Transport (master/172.21.60.61:61616) failed to tcp://master:61616 , attempting to automatically reconnect due to: java.net.SocketException: Software caused connection abort: socket write error | org.apache.activemq.transport.failover.FailoverTransport | ActiveMQ Transport: tcp://master/172.21.60.61:61616 I'm totally stuck with these sync problems, any help welcome! Regards

    Read the article

  • Can RPM packages be installed into Cygwin?

    - by Dejian Zhao
    I noticed that there is a command - rpm - under Cygwin 1.7. Does that mean RPM packages can be installed into Cygwin? I tried to install ncbi-blast-2.2.26+-3.i686.rpm (see: ftp://ftp.ncbi.nlm.nih.gov/blast/executables/blast+/LATEST/ ) into Cygwin 1.7.13 with the command "rpm -i ncbi-blast-2.2.26+-3.i686.rpm". However, an error message appeared, as shown below. I tried to search for the missing libs using Cygwin's setup.exe. It seems that some of them are not present, such as libc.so.6, libdl.so.2, libm.so.6, libnsl.so.1, and libz.so.1. Where can I get these libs? Thanks! $ rpm -i ncbi-blast-2.2.26+-3.i686.rpm error: Failed dependencies: /usr/bin/perl is needed by ncbi-blast-2.2.26+-3 libbz2.so.1 is needed by ncbi-blast-2.2.26+-3 libc.so.6 is needed by ncbi-blast-2.2.26+-3 libc.so.6(GLIBC_2.0) is needed by ncbi-blast-2.2.26+-3 libc.so.6(GLIBC_2.1) is needed by ncbi-blast-2.2.26+-3 libc.so.6(GLIBC_2.1.2) is needed by ncbi-blast-2.2.26+-3 libc.so.6(GLIBC_2.1.3) is needed by ncbi-blast-2.2.26+-3 libc.so.6(GLIBC_2.2) is needed by ncbi-blast-2.2.26+-3 libc.so.6(GLIBC_2.3) is needed by ncbi-blast-2.2.26+-3 libdl.so.2 is needed by ncbi-blast-2.2.26+-3 libdl.so.2(GLIBC_2.0) is needed by ncbi-blast-2.2.26+-3 libdl.so.2(GLIBC_2.1) is needed by ncbi-blast-2.2.26+-3 libgcc_s.so.1 is needed by ncbi-blast-2.2.26+-3 libgcc_s.so.1(GCC_3.0) is needed by ncbi-blast-2.2.26+-3 libgcc_s.so.1(GLIBC_2.0) is needed by ncbi-blast-2.2.26+-3 libm.so.6 is needed by ncbi-blast-2.2.26+-3 libnsl.so.1 is needed by ncbi-blast-2.2.26+-3 libpthread.so.0 is needed by ncbi-blast-2.2.26+-3 libpthread.so.0(GLIBC_2.0) is needed by ncbi-blast-2.2.26+-3 libpthread.so.0(GLIBC_2.1) is needed by ncbi-blast-2.2.26+-3 libpthread.so.0(GLIBC_2.2) is needed by ncbi-blast-2.2.26+-3 libpthread.so.0(GLIBC_2.3.2) is needed by ncbi-blast-2.2.26+-3 librt.so.1 is needed by ncbi-blast-2.2.26+-3 libstdc++.so.6 is needed by ncbi-blast-2.2.26+-3 libstdc++.so.6(CXXABI_1.3) is needed by ncbi-blast-2.2.26+-3 libstdc++.so.6(GLIBCXX_3.4) is needed by ncbi-blast-2.2.26+-3 libstdc++.so.6(GLIBCXX_3.4.5) is needed by ncbi-blast-2.2.26+-3 libz.so.1 is needed by ncbi-blast-2.2.26+-3 perl(Archive::Tar) is needed by ncbi-blast-2.2.26+-3 perl(Digest::MD5) is needed by ncbi-blast-2.2.26+-3 perl(File::Temp) is needed by ncbi-blast-2.2.26+-3 perl(File::stat) is needed by ncbi-blast-2.2.26+-3 perl(Getopt::Long) is needed by ncbi-blast-2.2.26+-3 perl(Net::FTP) is needed by ncbi-blast-2.2.26+-3 perl(Pod::Usage) is needed by ncbi-blast-2.2.26+-3 perl(constant) is needed by ncbi-blast-2.2.26+-3 perl(strict) is needed by ncbi-blast-2.2.26+-3 perl(warnings) is needed by ncbi-blast-2.2.26+-3
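    Those missing libraries (libc.so.6, libdl.so.2, and so on) are glibc shared objects that only exist on Linux, so Cygwin's setup.exe will never provide them, and a Linux binary RPM generally cannot be made to run under Cygwin even if rpm installs it. If the goal is only to look inside the package, one option is to unpack its payload instead of installing it. A sketch, assuming the rpm2cpio and cpio utilities are available in the Cygwin installation:

    ```bash
    # Unpack the RPM payload into the current directory without "installing" it.
    # The extracted Linux ELF binaries still will not run under Cygwin; this is
    # only useful for inspecting the files (data, scripts, documentation).
    mkdir -p /tmp/blast && cd /tmp/blast
    rpm2cpio /path/to/ncbi-blast-2.2.26+-3.i686.rpm | cpio -idmv
    ls usr/bin
    ```

    For working BLAST binaries on this machine, the NCBI FTP directory linked above should also carry native Windows builds of BLAST+, which would sidestep the dependency problem entirely.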

    Read the article

  • Multi-Role Domain Controllers for Small Offices (< 50 clients)

    - by kce
    Warning: I'm a Linux/*NIX admin, so this is all new to me. I understand that it's not considered a good idea to have only a single domain controller, and that it is also probably a good idea for a domain controller to only do AD/DHCP/DNS. We have two offices, location A with 30 users and location B with 10 users. Our two offices are separated by a WAN that is not particularly robust, so I have been instructed that we need to have standalone services in each office. This means that according to "best practices" we will need to build a domain controller and a separate file server in each office. Again, I am not knowledgeable in the ways of Windows, but this seems a little unnecessary for an organization of 40 users. People have commented that I could "get away with" running file services on the domain controller as long as the "load is light". That just seems to generate more questions than it answers. What constitutes light load? What are the potential consequences of mixing these roles? Ideally I would prefer to only have one physical machine at each location. The one in location A (the location with IT staff) can act as the primary domain controller and the one in the smaller office can act as the backup domain controller. If either domain controller fails we can still use the other one for authentication (albeit with some latency), and if the WAN connection fails each office still has access to their respective "local" domain controller. If the file services are ALSO run on each server (and synchronized with something like DFS), a similar arrangement in terms of redundancy can be had without having to purchase, build and install two additional separate servers. It's not that I'm averse to that (well, any more averse than I am to the whole thing to begin with), but to my simple mind it just seems, well, a bit overkill. I can definitely see the benefits of functional separation when we're talking larger organizations, but I need to consider the additional overhead too. None of this excludes having a DRP setup for the domain controller(s). I assume you can lose two domain controllers just as easily as one.

    Read the article

  • Tuning Linux IP routing parameters -- secret_interval and tcp_mem

    - by Jeff Atwood
    We had a little failover problem with one of our HAProxy VMs today. When we dug into it, we found this: Jan 26 07:41:45 haproxy2 kernel: [226818.070059] __ratelimit: 10 callbacks suppressed Jan 26 07:41:45 haproxy2 kernel: [226818.070064] Out of socket memory Jan 26 07:41:47 haproxy2 kernel: [226819.560048] Out of socket memory Jan 26 07:41:49 haproxy2 kernel: [226822.030044] Out of socket memory Which, per this link, apparently has to do with low default settings for net.ipv4.tcp_mem. So we increased them by 4x from their defaults (this is Ubuntu Server, not sure if the Linux flavor matters): current values are: 45984 61312 91968; new values are: 183936 245248 367872. After that, we started seeing a bizarre error message: Jan 26 08:18:49 haproxy1 kernel: [ 2291.579726] Route hash chain too long! Jan 26 08:18:49 haproxy1 kernel: [ 2291.579732] Adjust your secret_interval! Shh.. it's a secret!! This apparently has to do with /proc/sys/net/ipv4/route/secret_interval, which defaults to 600 and controls periodic flushing of the route cache. The secret_interval instructs the kernel how often to blow away ALL route hash entries regardless of how new/old they are. In our environment this is generally bad. The CPU will be busy rebuilding thousands of entries per second every time the cache is cleared. However, we set this to run once a day to keep memory leaks at bay (though we've never had one). While we are happy to reduce this, it seems odd to recommend dropping the entire route cache at regular intervals, rather than simply pushing old values out of the route cache faster. After some investigation, we found /proc/sys/net/ipv4/route/gc_elasticity, which seems to be a better option for keeping the route table size in check: gc_elasticity can best be described as the average bucket depth the kernel will accept before it starts expiring route hash entries. This will help maintain the upper limit of active routes. We adjusted elasticity from 8 to 4, in the hopes of the route cache pruning itself more aggressively. The secret_interval does not feel correct to us. But there are a bunch of settings and it's unclear which are really the right way to go here. /proc/sys/net/ipv4/route/gc_elasticity (8) /proc/sys/net/ipv4/route/gc_interval (60) /proc/sys/net/ipv4/route/gc_min_interval (0) /proc/sys/net/ipv4/route/gc_timeout (300) /proc/sys/net/ipv4/route/secret_interval (600) /proc/sys/net/ipv4/route/gc_thresh (?) rhash_entries (kernel parameter, default unknown?) We don't want to make the Linux routing worse, so we're kind of afraid to mess with some of these settings. Can anyone advise which routing parameters are best to tune for a high-traffic HAProxy instance?
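    For reference, a sketch of how these knobs are typically changed at runtime and persisted; the numbers are simply the ones discussed above, not a recommendation for any particular workload:

    ```bash
    # Apply at runtime (values are illustrative, taken from the question above).
    sysctl -w net.ipv4.tcp_mem="183936 245248 367872"
    sysctl -w net.ipv4.route.gc_elasticity=4

    # To make the settings survive a reboot, the same keys go in /etc/sysctl.conf:
    #   net.ipv4.tcp_mem = 183936 245248 367872
    #   net.ipv4.route.gc_elasticity = 4
    # and are reloaded with:
    sysctl -p
    ```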

    Read the article

  • Need help making an ODBC MySQL Connection

    - by Andy Moore
    Short Version: How do I connect from PowerShell to MySQL through the ODBC 5.1 driver? I can't seem to find any connection strings that accurately have a "Provider" field for this particular instance. (See the bottom of this question for examples/errors.) ===== Long Version: I'm not a server guy, and I've been handed the task of setting up PowerGadgets on our network. I have a MySQL server running on a Linux box that is configured for remote access and has a user defined for remote access as well. On my Windows desktop PC, I have PowerGadgets installed. I installed the MySQL ODBC 5.1 connector and went to Control Panel > Data Sources to set up a User DSN connection to the database. The connection, user, and password seem to be correct because it lists the tables of the database in the Windows Control Panel. Where I'm running into trouble is in 3 places in PowerGadgets: When selecting a data source, I can select "SQL Server". Inputting the server's IP address does not work and I can't get this option to work at all. When selecting a data source, I can select "OleDB". This screen has a wizard that appears to populate all the correct information (including database table names!) for me. "Test Connection" runs great. But if I try to complete the wizard, I get the error "The .NET Framework data provider for OLEDB does not support the MS Ole DB provider for ODBC Drivers." When selecting a data source, I can select "ODBC". This screen does not have a wizard and I cannot figure out a "connection string" that works. Typically it will respond with the error "The field 'Provider' is missing". Googling ODBC connection strings doesn't reveal any examples with a "Provider" field and I have no idea what to put in there. The connection string (for #2) above contains "SQLOLEDB" as a provider, and upon inputting that value into this connection string I get the same connection error that #2 gets. I believe I can solve my problems by figuring out a connection string for #3 but don't know where to get started. (PowerGadgets also allows for PowerShell support but I believe I will run into the same problem there.) == Here's my current PowerShell connection that doesn't work: invoke-sql -connection "Driver={MySQL ODBC 5.1 Driver};Initial Catalog=hq_live;Data Source=HQDB" -sql "Select * FROM accounts" Spits back the error: "Invoke-Sql : An OLE DB Provider was not specified in the ConnectionString. An example would be, 'Provider=SQLOLEDB;'. == Another string that doesn't work: invoke-sql -connection "Provider=MSDASQL.1;Persist Security Info=False;Data Source=HQDB;Initial Catalog=hq_live" -sql "select * from accounts" And the error: The .Net Framework Data Provider for OLEDB (System.Data.OleDb) does not support the Microsoft OLE DB Provider for ODBC Drivers (MSDASQL). Use the .Net Framework Data Provider for ODBC (System.Data.Odbc).
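    Since the last error explicitly points at the .NET ODBC provider (System.Data.Odbc), one way around PowerGadgets' connection-string formats is to query the existing DSN from PowerShell with that provider directly. A sketch, assuming the "HQDB" User DSN and the hq_live accounts table described above; the Uid/Pwd values are placeholders, and whether they need to be repeated here depends on how the DSN was saved:

    ```powershell
    # Sketch: query MySQL through the ODBC 5.1 driver via the .NET ODBC provider.
    # "HQDB" is the DSN from the question; user/password below are placeholders.
    $conn = New-Object System.Data.Odbc.OdbcConnection
    $conn.ConnectionString = "DSN=HQDB;Uid=user;Pwd=password;"
    $conn.Open()

    $cmd     = New-Object System.Data.Odbc.OdbcCommand("SELECT * FROM accounts", $conn)
    $adapter = New-Object System.Data.Odbc.OdbcDataAdapter($cmd)
    $table   = New-Object System.Data.DataTable
    [void]$adapter.Fill($table)
    $conn.Close()

    $table | Format-Table -AutoSize
    ```

    A DSN-less string such as "Driver={MySQL ODBC 5.1 Driver};Server=<host>;Database=hq_live;Uid=<user>;Pwd=<password>;" should also work with the same code; the key point is that an ODBC connection string uses Driver= (or DSN=) rather than Provider=.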

    Read the article
