Search Results

Search found 4808 results on 193 pages for 'reserved instances'.

Page 89 of 193

  • But what version is the database now?

    - by BuckWoody
    When you upgrade your system to SQL Server 2008 R2, you'll know that the instance is at that version by using the standard commands like SELECT @@VERSION or EXEC xp_msver. My system came back with this info when I typed those:

        Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (Intel X86)
        Apr  2 2010 15:53:02
        Copyright (c) Microsoft Corporation
        Developer Edition on Windows NT 6.0 <X86> (Build 6002: Service Pack 2) (Hypervisor)

        Index  Name                 Internal_Value  Character_Value
        1      ProductName          NULL            Microsoft SQL Server
        2      ProductVersion       655410          10.50.1600.1
        3      Language             1033            English (United States)
        4      Platform             NULL            NT INTEL X86
        5      Comments             NULL            SQL
        6      CompanyName          NULL            Microsoft Corporation
        7      FileDescription      NULL            SQL Server Windows NT
        8      FileVersion          NULL            2009.0100.1600.01 ((KJ_RTM).100402-1540 )
        9      InternalName         NULL            SQLSERVR
        10     LegalCopyright       NULL            Microsoft Corp. All rights reserved.
        11     LegalTrademarks      NULL            Microsoft SQL Server is a registered trademark of Microsoft Corporation.
        12     OriginalFilename     NULL            SQLSERVR.EXE
        13     PrivateBuild         NULL            NULL
        14     SpecialBuild         104857601       NULL
        15     WindowsVersion       393347078       6.0 (6002)
        16     ProcessorCount       1               1
        17     ProcessorActiveMask  1               1
        18     ProcessorType        586             PROCESSOR_INTEL_PENTIUM
        19     PhysicalMemory       2047            2047 (2146934784)
        20     Product ID           NULL            NULL

    But database properties are separate from the instance. After an upgrade, you always want to make sure that the compatibility options (which have much to do with how NULLs and other objects are treated) are at what you expect. For the most part, as long as the application can handle it, I set my compatibility levels to the latest version. For SQL Server 2008, that was "10.0" or "10". You can do this with the ALTER DATABASE command, or you can just right-click the database and select "Properties" and then "Database Options" in SQL Server Management Studio. To check the database compatibility level, I use this query:

        SELECT name, cmptlevel FROM sys.sysdatabases

    When I did that this morning I saw that the databases (all of them) were at 10.0 – not 10.5 like the instance. That's expected – we didn't revise the database format up with the instance for this particular release. Didn't want to catch you by surprise on that. While your databases should be at the "proper" level for your situation, you can't rely on the compatibility level to indicate the instance level. More info on the ALTER DATABASE command in SQL Server 2008 R2 is here: http://technet.microsoft.com/en-us/library/bb510680(SQL.105).aspx
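
    For reference, a minimal Python sketch of the same check (an illustration added here, not from BuckWoody's post). It assumes the pyodbc package, a trusted connection, and the "ODBC Driver 17 for SQL Server" driver name, and uses the newer sys.databases catalog view; the database name in the ALTER DATABASE line is a placeholder.

        import pyodbc

        # Connect to the local instance; adjust driver/server to your environment.
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;Trusted_Connection=yes"
        )
        cur = conn.cursor()

        # List every database and its compatibility level.
        for name, level in cur.execute("SELECT name, compatibility_level FROM sys.databases"):
            print(name, level)

        # Raise one (placeholder) database to the SQL Server 2008 level if needed.
        conn.autocommit = True
        cur.execute("ALTER DATABASE [MyAppDb] SET COMPATIBILITY_LEVEL = 100")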

    Read the article

  • Moved to SSD and now getting "the disk drive for / is not ready yet"

    - by dmt0
    I moved my Ubuntu 12.04 install over to an SSD drive. I copied all directories except for the ones most often written to - var, tmp, ... - and reinstalled grub onto the SSD by booting with a live CD and following the commands in this post: How to move Ubuntu to an SSD. This seemed to work fine, because when I press "e" in the grub menu, I see the expected UUIDs. But right after grub I get:

        could not log bootup: Address already in use
        the disk drive for / is not ready yet or not present

    If I skip, I get the same for /tmp, /run, and other dirs. If I go into manual recovery and do mount -n -o remount,rw / it turns out that everything can mount no problem. Can't get my head around this one. My fstab seems right. grub is right. AHCI in the BIOS is enabled. Why is this happening? What can I do to fix it? When I do drop into a shell from this error and get to mount things manually, how do I get the OS to continue loading? Thank you guys for any ideas you can give me. Here's what my fstab looks like right now:

        # <file system> <mount point>   <type>  <options>       <dump>  <pass>
        proc            /proc           proc    defaults        0       0
        UUID=67fc8a7a-f1db-485c-88bd-e007c214244f /    ext4 defaults,noatime,discard 0 1
        # swap was on /dev/sda3 during installation
        UUID=6bc9cd6c-46b7-43a0-bfac-bd04cc26cfb6 none swap sw 0 0
        UUID=7397729b-2125-4b1d-b5eb-28866898d773 /hdd ext4 errors=remount-ro 0 1
        /hdd/home /home          none bind 0 0
        /hdd/run  /run           none bind 0 0
        /hdd/tmp  /tmp           none bind 0 0
        /hdd/var  /var           none bind 0 0
        /dev/scd0 /media/cdrom0  udf,iso9660 user,noauto,exec,utf8 0 0

    Output from blkid:

        /dev/sda1: LABEL="System Reserved" UUID="EABC56C1BC568849" TYPE="ntfs"
        /dev/sda2: UUID="7CCC6124CC60D9C2" TYPE="ntfs"
        /dev/sda3: UUID="6bc9cd6c-46b7-43a0-bfac-bd04cc26cfb6" TYPE="swap"
        /dev/sda5: UUID="7397729b-2125-4b1d-b5eb-28866898d773" TYPE="ext4"
        /dev/sdb1: UUID="67fc8a7a-f1db-485c-88bd-e007c214244f" TYPE="ext4"

    The relevant part of fdisk -l:

        Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1         2048   115345407    57671680   83  Linux
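
    As a quick cross-check of exactly this situation (an added illustration, not part of the original question), the sketch below compares every UUID= entry in /etc/fstab with the UUIDs blkid currently reports. It assumes a Linux shell with blkid in the PATH and usually needs root so blkid can see all devices.

        import re
        import subprocess

        # UUIDs the kernel can see right now.
        blkid_out = subprocess.run(["blkid"], capture_output=True, text=True, check=True).stdout
        present = set(re.findall(r'UUID="([^"]+)"', blkid_out))

        # Compare them with every UUID= entry in fstab.
        with open("/etc/fstab") as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                device = line.split()[0]
                if device.startswith("UUID="):
                    uuid = device[len("UUID="):]
                    status = "ok" if uuid in present else "MISSING"
                    print(status.ljust(8), line)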

    Read the article

  • What causes Multi-Page allocations?

    - by SQLOS Team
    Writing about changes in the Denali Memory Manager in his last post, Rusi mentioned: "In previous SQL versions only the 8k allocations were limited by the 'max server memory' configuration option. Allocations larger than 8k weren't constrained." In SQL Server versions before Denali, single-page allocations and multi-page allocations are handled by different components: the Single Page Allocator (which is responsible for Buffer Pool allocations and governed by 'max server memory') and the Multi-Page Allocator (MPA), which handles allocations greater than an 8K page. If there are many multi-page allocations, this can affect how much memory needs to be reserved outside 'max server memory', which may in turn involve setting the -g memory_to_reserve startup parameter. We'll follow up with more generic articles on the new Memory Manager structure, but in this post I want to clarify what might cause these larger allocations.

    So what kinds of queries result in MPA activity? I was asked this question the other day after delivering an MCM webcast on Memory Manager changes in Denali. After asking around our Dev team I was connected to one of our test leads, Sangeetha, who had tested the plan cache and kindly provided this example of an MPA-intensive query: a workload that has stored procedures with a large number of parameters (say > 100, > 500), invoked via large ad hoc batches where each SP has different parameters, will result in a plan being cached for this "exec proc" batch. This plan will result in MPA.

        Exec proc_name @p1, ….@p500
        Exec proc_name @p1, ….@p500
        .
        .
        .
        Exec proc_name @p1, ….@p500
        Go

    Another workload would be large ad hoc batches of the form:

        Select * from t where col1 in (1, 2, 3, ….500)
        Select * from t where col1 in (1, 2, 3, ….500)
        Select * from t where col1 in (1, 2, 3, ….500)
        ...
        Go

    In Denali all page allocations are handled by an "any size page allocator" and included in 'max server memory'. The buffer pool effectively becomes a client of the any size page allocator, which in turn relies on the memory manager. - Guy

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
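
    To make the second workload concrete, here is a small Python generator for that kind of ad hoc batch (an added sketch for reproducing the pattern on a test instance; the table name, column name and sizes are placeholders, not details from the post):

        # Build N identical SELECT statements with a large IN list, ending with GO,
        # which is the shape of ad hoc batch described above.
        def big_in_list_batch(table="t", column="col1", values=500, statements=100):
            in_list = ", ".join(str(i) for i in range(1, values + 1))
            stmt = "SELECT * FROM {0} WHERE {1} IN ({2})".format(table, column, in_list)
            return "\n".join([stmt] * statements) + "\nGO"

        print(big_in_list_batch()[:200])  # preview the start of the generated batch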

    Read the article

  • Clouds Everywhere But not a Drop of Rain – Part 3

    - by sxkumar
    I was sharing with you how a broad-based transformation such as cloud will increase the agility and efficiency of an organization if process re-engineering is part of the plan. I have also stressed key enterprise requirements such as "broad and deep solutions", "running your mission critical applications" and "an automated and integrated set of capabilities". Let me walk you through some key cloud attributes such as "elasticity" and "self-service" and what they mean for an enterprise-class cloud. I will also talk about how we at Oracle have taken a very enterprise-centric view to developing cloud solutions and how our products have been specifically engineered to address enterprise cloud needs.

    Cloud Elasticity and Enterprise Application Requirements

    Easy and quick scalability for a short period of time is the signature of cloud-based solutions. It is this elasticity that allows you to dynamically redistribute your resources according to business priorities, helps increase your overall resource utilization, and reduces operational costs by allowing you to get the most out of your existing investment. Most public clouds offer an instant provisioning mechanism for compute power (CPU, RAM, disk); customers pay for the instance-hours (and bandwidth) they use, adding computing resources at peak times and removing them when they are no longer needed. This type of "just-in-time" serving of compute resources is well known for mid-tier "stateless" servers such as web application servers and web servers that just need another machine to start and run on, but what does it really mean for an enterprise application and its underlying data? Most enterprise applications are not quite as "stateless", and justifiably so. As such, how do you take advantage of cloud elasticity and make it relevant for your enterprise apps? This is where cloud meets grid computing. At Oracle, we have invested an enormous amount of time, energy and resources in creating enterprise grid solutions. All our technology products offer built-in elasticity via clustering and dynamic scaling. With products like Real Application Clusters (RAC), Automatic Storage Management, WebLogic Clustering, and Coherence In-Memory Grid, we allow all your enterprise applications to benefit from cloud elasticity - both vertically and horizontally - without requiring any application changes. A number of technology vendors take the rather simplistic route of starting up additional VMs, or removing unneeded ones, as the "cloud scale-out" solution. This may work for stateless mid-tier servers, where load balancers can handle the addition and removal of instances transparently, but following a similar approach for the database tier - often called "database sharding" - requires significant application modification and typically does not work with off-the-shelf packaged applications. Technologies like Oracle Database Real Application Clusters, Automatic Storage Management, etc., on the other hand, bring the benefits of incremental scalability and on-demand elasticity to ANY application by providing a simplified abstraction layer where the application does not need to deal with data spread over multiple database instances. Rather, it just talks to a single database and the database software takes care of aggregating resources across multiple hardware components. It is technologies like these that truly make a cloud solution relevant for enterprises. For customers who are looking for a next-generation hardware consolidation platform, our engineered systems (e.g. Exadata, Exalogic) not only provide an incredible amount of performance and capacity, they also reduce data center complexity and simplify operations.

    Assemble, Deploy and Manage Enterprise Applications for the Cloud

    Products like Oracle Virtual Assembly Builder (OVAB) resolve the complex problem of bringing cloud speed to complex multi-tier applications. With assemblies, you can not only provision all components of a multi-tier application and wire them together at the push of a button; other aspects of the application lifecycle, such as real-time application testing, scale-up/scale-down, performance and availability monitoring, etc., are also automated using Oracle Enterprise Manager. An essential criterion for an enterprise cloud to succeed is the ability to ensure business service levels, especially when business users have full visibility on the usage cost with either a "show back" or a "charge back". With Oracle Enterprise Manager 12c, we have created the most comprehensive cloud management solution in the industry, capable of managing business service levels "applications-to-disk" in an enterprise private cloud - all from a single console. It is the only cloud management platform in the industry that allows you to deliver infrastructure, platform and application cloud services out of the box. Moreover, it offers integrated and complete lifecycle management of the cloud - including planning and set up, service delivery, operations management, metering and chargeback, etc. Sounds unbelievable? Well, just watch this space for more details on how Oracle Enterprise Manager 12c is the nerve center of Oracle Cloud! Our cloud solution portfolio is also the broadest and deepest in the industry - covering public, private, hybrid, infrastructure, platform and application clouds. It is no coincidence, therefore, that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry. And to a large part, this has been made possible thanks to our years of investment in creating cloud-enabling technologies.

    Summary

    But the intent of this blog post isn't to dwell on how great our solutions are (these are just some examples to illustrate how we at Oracle have approached this problem space). Rather, it is to help you ask the right questions before you embark on your cloud journey. So to summarize, here are the key takeaways.

    - It is critical that you are clear on why you are building the cloud. Successful organizations keep business benefits as the first and foremost cloud objective. On the other hand, those who approach this purely as a technology project are more likely to fail.
    - Think about where you want to be in 3-5 years before you get started. Your long-term objectives should determine what your first step ought to be. As obvious as it may seem, more people than not make the first move without knowing where they are headed.
    - Don't make the mistake of equating cloud to virtualization and Infrastructure-as-a-Service (IaaS). Spinning up a VM on demand will give some short-term relief to your IT staff but is unlikely to solve your larger business problems. As such, even if IaaS is your first step towards a more comprehensive cloud, plan the roadmap around those higher-level services before you begin. And ask your vendors how they are going to be your partners in this journey.
    - Capabilities like self-service access and chargeback/showback are absolutely critical if you really expect your cloud to be transformational. Your business won't see the full benefits of the cloud until it empowers them with the same kind of control and transparency that they are used to when using a public cloud service.
    - Evaluate the benefits of integration, as opposed to blindly following a best-of-breed strategy. Integration is a huge challenge, and more so in a cloud environment. There are enormous costs associated with stitching a solution out of disparate components and even more in maintaining it.

    Hope you found these ideas helpful. Looking forward to hearing your thoughts and experiences.

    Read the article

  • I need help installing Ubuntu 11.10 to multi-drive system

    - by CookyMonzta
    I have a machine with 3 hard drives; the primary, which is 750GB (drive 0), and 2 others, each of which is 640GB (drives 1 and 2). On the last screen before the actual installation begins, this is how my hard drive configuration looks:

        /dev/sda [DISK0, 750GB]
            /dev/sda1   ntfs  104MB      [Win7 System Reserved]
            /dev/sda2   ntfs  499,997MB  [Windows 7 Pro]
            free space        250,052MB  [This space intended for Windows 8]
        /dev/sdb [DISK1, 640GB]
            /dev/sdb1   ntfs  400,085MB  [Windows XP Pro]
            free space        240,049MB  [This space intended for Ubuntu]
        /dev/sdc [DISK2, 640GB] [This drive intended for various backups]
            free space        160,033MB
            /dev/sdc5   ntfs  480,101MB  [Acronis Secure Zone]

    As you can see, I have 3 drives, all SATA. I have Win7 on my first drive (0), WinXP on my second drive (1) and a secure zone for daily backups on my third drive (2). I want to put Ubuntu 11.10 Oneiric Ocelot on the drive that also has XP. I've already used 400GB for XP and I have 240GB remaining, for which my intention was to create a 4GB swap file and use the rest for Ubuntu itself. This is what my second hard drive looked like, for my intended setup before installation:

        /dev/sdb
            /dev/sdb5   swap  4,095MB    [Linux swap]
            /dev/sdb6   ext4  235,951MB  [Ubuntu 11.10]

    Needless to say, this is only the second time I have attempted to install Linux. I managed to get 7.10 Gutsy Gibbon working on an old machine. I have two problems with this installation:

    1. Ubuntu asks for a location to install the boot loader (i.e., "Device for Boot Loader Installation"). I already have a boot loader; namely, Acronis OS Selector (from Acronis Disk Director 11). So I decided to put the Ubuntu boot loader in /dev/sdb6 (where I intend to install Ubuntu), to keep it from interfering with my Acronis OS Selector.
    2. Once I hit "Install now", I ended up with the following error: "No root file system is defined. Please correct this from the partitioning menu."

    What am I missing? Did I attempt to put the boot loader in the wrong place? I assume I did, because as I am writing this entry, I am looking at LinuxIdentity.com's Ubuntu 11.04 Natty Narwhal magazine, and I see a screenshot (Figure 7 on Page 13) that implies that the boot loader can be installed anywhere, including the first hard drive (in the MBR, which would obviously force me to reinstall the Acronis OS Selector) or even on a floppy. But why do I get an undefined root file system error? I thought /dev/sdb6 was the root file. Obviously I'm missing something in the installation procedure. Should I try installing it in Windows using the WUBI Installer? I assume that, if I attempt to install Ubuntu from WinXP (on the second drive), it will automatically install Ubuntu on the empty partition alongside XP. But will I have the option of creating a swap partition? And what if the WUBI Installer searches all of my drives and decides to install Ubuntu on my first drive's empty partition (which I have left empty for Win8 upon its release)?

    Read the article

  • Problem with DualBooting Ubuntu 13.10 and Win7

    - by VinArrow
    This is my first post here on AskUbuntu, though it's not my first time using it. I wanted to install Ubuntu 13.10 on my PC to have all my work stuff there and leave Win7 for gaming. So I did my research on how to dual boot when you already have Win7 installed. Here are the steps I took:

    1. Used Disk Management on Win7 and shrunk that partition, leaving 80GB free for Ubuntu.
    2. Made a bootable pendrive following the instructions on Ubuntu's website.
    3. During the installation steps there was supposed to be an "Install alongside Win7" option, but there wasn't, so I chose "Something else".
    4. Everything was fine and I was able to install Ubuntu no problem on my unallocated 80GB partition (76GB Ubuntu + 4GB swap).
    5. There was a prompt for me to restart my PC, and so I did, expecting to see the dual boot screen (grub, right?).

    Now, when I restarted my PC, grub never showed up and it booted straight to Windows. Then I did some more research and found out that that could happen. I tried three things then:

    1. Plugged in the bootable pendrive again and selected "Try Ubuntu without installing". Then I followed some instructions found here (How can I repair grub? (How to get Ubuntu back after installing Windows?)) and I could chroot into my Ubuntu install just fine. Repaired grub as instructed on that link, restarted the PC and booted straight into Win7 again.
    2. Again, used the bootable pen drive to "Try Ubuntu..." and used the Boot-Repair tool (recommended repair). Again, booted straight into Win7.
    3. Lastly, I installed EasyBCD on my Win7 and made a new entry for Ubuntu (Linux/BSD). When I rebooted the PC, there was the option to choose between Win7 and Linux; I chose Linux and it didn't work, taking me straight to a command line-like environment that read "Minimum bash-like scripting" or something, as if I didn't have a Linux OS installed.

    So, I thought I'd try and repair my Ubuntu install. And during the "Installation method" step there was the choice to install alongside Ubuntu 13.10! And that right there drove me crazy. Here is a screenshot of gparted showing how things are set up now: http://imageshack.us/f/801/77u3.png/ Notice on the left-hand side how I can access my installation files just fine. sdb1 - Win7 reserved space, sdb2 - Win7 OS, sdb3 - 76GB Ubuntu install, sdb5 - 4GB swap area. Does anyone know why my Ubuntu 13.10 is not being recognized? And what should I do to get it working? Thanks, and sorry for the long read and bad English! (BIOS = legacy)

    Read the article

  • Can't attach EC2 instance to Network Interface

    - by Ian Warburton
    When trying to attach a network interface, it says... No instances were found for this availability zone. My instance is in us-east-1c and my network interface is in us-east-1b. Is that significant? If so, how do I create the VPC in the same zone and if not then why this error? EDIT: I've re-created the VPC and the Network Interface is now us-east-1c and the EC2 instance is also us-east-1c. Same error message though!
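
    For what it's worth, here is a boto3 sketch of the rule involved (an added illustration using the current AWS SDK for Python, not something from the question): an ENI belongs to a subnet, and a subnet sits in exactly one availability zone, so the interface has to be created in a subnet in the instance's own zone. The instance ID and region below are placeholders.

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")
        instance_id = "i-0123456789abcdef0"  # placeholder

        inst = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"][0]["Instances"][0]
        print("instance AZ:", inst["Placement"]["AvailabilityZone"])

        # Create the ENI in the instance's own subnet so the zones cannot mismatch,
        # then attach it as a secondary interface.
        eni = ec2.create_network_interface(SubnetId=inst["SubnetId"])["NetworkInterface"]
        ec2.attach_network_interface(
            NetworkInterfaceId=eni["NetworkInterfaceId"],
            InstanceId=instance_id,
            DeviceIndex=1,
        )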

    Read the article

  • Mapping Drive Error - System Error 1808

    - by Julian Easterling
    A vendor is attempting to map and preserve a network drive using NT AUTHORITY\SYSTEM, so that it stays persistent when the interactive session of the server is lost. They were able to do this on one server (Windows 2008 R2) but not a second computer (also Windows 2008 R2).

        D:\PsExec.exe -s cmd.exe

        PsExec v1.98 - Execute processes remotely
        Copyright (C) 2001-2010 Mark Russinovich
        Sysinternals - www.sysinternals.com

        Microsoft Windows [Version 6.1.7600]
        Copyright (c) 2009 Microsoft Corporation. all rights reserved.

        C:\Windows\system32>whoami
        nt authority\system

        C:\Windows\system32>net use
        New connections will be remembered.

        Status       Local     Remote                    Network
        --------------------------------------------------------------------
        OK           X:        \\netapp1\share1          Microsoft Windows Network
        The command completed successfully.

        C:\Windows\system32>net use q: \\netapp1\share1
        System error 1808 has occurred.

        The account used is a computer account. Use your global user account or local user account to access this server.

        C:\Windows\system32>

    I am unsure how to set up a "machine account mapping" which will preserve the drive letter of the NetApp path being mapped, so that the service account running a Windows service can continue to access the share after the interactive logon has expired on the server. Since they were able to do this on one server but not another, I'm not sure how to troubleshoot the problem. Any suggestions?

    Read the article

  • AWStats on Plesk consumes all of CPU and crashes server - how do you disable plesk

    - by columbo
    I have Plesk 9.0.1 running on a Red Hat server. Every week or so at about 4:10 AM the server locks up. At this time the server CPU usage shoots from 4% to 90%, at the same time as a mass of awstats.pl processes start (I can't see how many, as my data only shows the top 30 processes, but all of these are awstats.pl). I turned off AWStats through the Plesk control panel for all but 5 domains, but I still get 90% CPU usage and at least 30 instances of awstats.pl at 4:10 AM as usual. Does anyone know why this may be? Does anyone know how to disable AWStats (I have stats covered using Piwik)? Or how do I uninstall AWStats without snarling up Plesk?

    Read the article

  • Why does chkdsk produce "Correcting error in index $I30 for file 33267" and then freeze?

    - by wcpro
    I am running chkdsk like this:

        chkdsk k: /x

    I get this error:

        Correcting error in index $I30 for file 33267.

    What does it mean? Here's the full problem:

        Microsoft Windows [Version 6.1.7600]
        Copyright (c) 2009 Microsoft Corporation. All rights reserved.

        C:\Windows\system32>chkdsk e: /r
        The type of the file system is NTFS.
        Volume label is RodDrobo.

        CHKDSK is verifying files (stage 1 of 5)...
          162048 file records processed.
        File verification completed.
          0 large file records processed.
          0 bad file records processed.
          0 EA records processed.
          0 reparse records processed.
        CHKDSK is verifying indexes (stage 2 of 5)...
          10 percent complete. (167130 of 208124 index entries processed)
        Correcting error in index $I30 for file 33267.
        Correcting error in index $I30 for file 33267.   <--- it always reaches this spot, then hangs forever

    Read the article

  • What is my miniport's service name?

    - by Ian Boyd
    I am trying to query the physical sector size of my drive using fsutil:

        C:\Windows\system32>fsutil fsinfo ntfsinfo c:
        NTFS Volume Serial Number :       0x78cc11b2cc116c1e
        Version :                         3.1
        Number Sectors :                  0x000000003a382fff
        Total Clusters :                  0x00000000074705ff
        Free Clusters  :                  0x00000000022fc29b
        Total Reserved :                  0x00000000000007d0
        Bytes Per Sector  :               512
        Bytes Per Physical Sector :       <Not Supported>
        Bytes Per Cluster :               4096
        Bytes Per FileRecord Segment    : 1024
        Clusters Per FileRecord Segment : 0
        Mft Valid Data Length :           0x00000000305c0000
        Mft Start Lcn  :                  0x00000000000c0000
        Mft2 Start Lcn :                  0x0000000003a382ff
        Mft Zone Start :                  0x0000000006951940
        Mft Zone End   :                  0x0000000006951c80
        RM Identifier:        19B22CBE-570D-19DE-9C72-CD758F800DDC

    You can see that the Bytes Per Physical Sector value is Not Supported:

        Bytes Per Physical Sector :       <Not Supported>

    In the KB article "Microsoft support policy for 4K sector hard drives in Windows", Microsoft says: If fsutil.exe continues to display "Bytes Per Physical Sector : <Not Supported>" after you apply the latest storage driver and the required hotfixes, make sure that the following registry path exists:

        HKLM\CurrentControlSet\Services\<miniport's service name>\Parameters\Device\
        Name:  EnableQueryAccessAlignment
        Type:  REG_DWORD
        Value: 1: Enable

    The only thing I don't know is what my miniport's service name is. What is my miniport's service name? I know that my SATA drives are in AHCI mode, and AHCI uses the msahci driver service. Is that my miniport service? "MSAHCI"?

    See also:

    - Hitachi - Advanced Format Technology Brief
    - RMPrepUSB - Advanced Format (4K sector) hard disks
    - Microsoft support policy for 4K sector hard drives in Windows
    - OSR Online - Advanced Disk Format support in Storport Virtual Miniport driver
    - Default cluster size for NTFS, FAT, and exFAT
    - Wikipedia - Advanced Format
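
    If you want to check a candidate from code (an added sketch, not from the original question), the snippet below reads the KB's value for one service name. "msahci" is only the guess the question itself raises, and note that the Services key actually lives under HKLM\SYSTEM\CurrentControlSet. Run it with Python on the machine in question (as Administrator if you later decide to create the value).

        import winreg

        service = "msahci"  # candidate miniport service name from the question
        path = r"SYSTEM\CurrentControlSet\Services\{0}\Parameters\Device".format(service)

        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
                value, vtype = winreg.QueryValueEx(key, "EnableQueryAccessAlignment")
                print(service, "EnableQueryAccessAlignment =", value)
        except FileNotFoundError:
            print(service, "key or value not present:", path)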

    Read the article

  • Error with python-setuptools doing "sudo easy_install python-graph-core"

    - by Dan
    Using easy_install, part of the python-setuptools, I get the following error:

        $ sudo easy_install python-graph-core
        [sudo] password for dan:
        Searching for python-graph-core
        Reading http://pypi.python.org/simple/python-graph-core/
        Reading http://code.google.com/p/python-graph/
        Reading http://code.google.com/p/python-graph/downloads/list?can=1
        Reading http://code.google.com/p/python-graph/downloads/list
        Best match: python-graph-core 1.7.0
        Downloading http://python-graph.googlecode.com/files/python-graph-core-1.7.0.tar.gz
        Processing python-graph-core-1.7.0.tar.gz
        Running python-graph-core-1.7.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-GwpYiM/python-graph-core-1.7.0/egg-dist-tmp-1yqbyV
        setup.py:8: Warning: 'as' will become a reserved keyword in Python 2.6
        Traceback (most recent call last):
          File "/usr/bin/easy_install", line 8, in <module>
            load_entry_point('setuptools==0.6c9', 'console_scripts', 'easy_install')()
          File "/usr/lib/python2.5/site-packages/setuptools/command/easy_install.py", line 1671, in main
            with_ei_usage(lambda:
          File "/usr/lib/python2.5/site-packages/setuptools/command/easy_install.py", line 1659, in with_ei_usage
            return f()
          File "/usr/lib/python2.5/site-packages/setuptools/command/easy_install.py", line 1675, in <lambda>
            distclass=DistributionWithoutHelpCommands, **kw
          File "/usr/lib/python2.5/distutils/core.py", line 151, in setup
            dist.run_commands()
          File "/usr/lib/python2.5/distutils/dist.py", line 974, in run_commands
            self.run_command(cmd)
          File "/usr/lib/python2.5/distutils/dist.py", line 994, in run_command
            cmd_obj.run()
          File "/usr/lib/python2.5/site-packages/setuptools/command/easy_install.py", line 211, in run
            self.easy_install(spec, not self.no_deps)
          File "/usr/lib/python2.5/site-packages/setuptools/command/easy_install.py", line 446, in easy_install
            return self.install_item(spec, dist.location, tmpdir, deps)
          File "/usr/lib/python2.5/site-packages/setuptools/command/easy_install.py", line 476, in install_item
            dists = self.install_eggs(spec, download, tmpdir)
          File "/usr/lib/python2.5/site-packages/setuptools/command/easy_install.py", line 655, in install_eggs
            return self.build_and_install(setup_script, setup_base)
          File "/usr/lib/python2.5/site-packages/setuptools/command/easy_install.py", line 930, in build_and_install
            self.run_setup(setup_script, setup_base, args)
          File "/usr/lib/python2.5/site-packages/setuptools/command/easy_install.py", line 919, in run_setup
            run_setup(setup_script, args)
          File "/usr/lib/python2.5/site-packages/setuptools/sandbox.py", line 27, in run_setup
            lambda: execfile(
          File "/usr/lib/python2.5/site-packages/setuptools/sandbox.py", line 63, in run
            return func()
          File "/usr/lib/python2.5/site-packages/setuptools/sandbox.py", line 29, in <lambda>
            {'__file__':setup_script, '__name__':'__main__'}
          File "setup.py", line 8
            except ImportError as ie:
                                ^
        SyntaxError: invalid syntax

    Any suggestions to what I may be doing wrong? Thanks, Dan
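
    A note on what the traceback shows (added here as an aside, not part of Dan's question): the paths point at Python 2.5, and setup.py line 8 uses "except ImportError as ie:", a spelling the parser only accepts from Python 2.6 onward, which is exactly what the SyntaxError and the "'as' will become a reserved keyword" warning indicate. A tiny check you can run with the same interpreter:

        import sys

        print(sys.version)
        if sys.version_info < (2, 6):
            print("This interpreter predates the 'except ... as ...' syntax; "
                  "run easy_install with Python 2.6+ or patch setup.py to use "
                  "the older 'except ImportError, ie:' form.")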

    Read the article

  • Problems setting NTP server with w32tm for a DC that is a Hyper-V guest

    - by R.Tonheim
    Hello! I have tried to set my DC to get its time from several NTP servers. I followed this answer (http://serverfault.com/questions/24298/w32time-sync-problems-for-hyper-v-guests-w32time-event-ids-38-24-29-35/24299#24299) to do it. First I disabled Time Synchronization in the Hyper-V Integration Services for each guest, then restarted the Windows Time service on the guest. Before this I had used this command:

        w32tm /config /manualpeerlist:"ntp.uio.no;timekeeper.uio.no;nissen.uio.no;0.no.pool.ntp.org;1.no.pool.ntp.org;2.no.pool.ntp.org" /syncfromflags:manual /reliable:yes /update

    And the cmd said: The command completed successfully. But the time was still 10 min wrong... I ran w32tm again after restarting the DC, without it having any effect. w32tm /query /status still says "Source: Local CMOS Clock".

    From my cmd:

        Microsoft Windows [Version 6.0.6002]
        Copyright (c) 2006 Microsoft Corporation. All rights reserved.

        C:\Users\Administrator.MHG>w32tm /query /status
        Leap Indicator: 0(no warning)
        Stratum: 1 (primary reference - syncd by radio clock)
        Precision: -6 (15.625ms per tick)
        Root Delay: 0.0000000s
        Root Dispersion: 10.0000000s
        ReferenceId: 0x4C4F434C (source name: "LOCL")
        Last Successful Sync Time: 05.09.2009 20:06:21
        Source: Local CMOS Clock
        Poll Interval: 6 (64s)

        C:\Users\Administrator.MHG>w32tm /config /manualpeerlist:"ntp.uio.no;timekeeper.uio.no;nissen.uio.no;0.no.pool.ntp.org;1.no.pool.ntp.org;2.no.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
        The command completed successfully.

        C:\Users\Administrator.MHG>w32tm /query /status
        Leap Indicator: 0(no warning)
        Stratum: 1 (primary reference - syncd by radio clock)
        Precision: -6 (15.625ms per tick)
        Root Delay: 0.0000000s
        Root Dispersion: 10.0000000s
        ReferenceId: 0x4C4F434C (source name: "LOCL")
        Last Successful Sync Time: 05.09.2009 20:06:21
        Source: Local CMOS Clock
        Poll Interval: 6 (64s)

        C:\Users\Administrator.MHG>

    Read the article

  • protobuf-net: Issues deserializing DataMember fields in lieu of read-only property

    - by Paul Smith
    I'm having issues deserializing certain properties of ORM-generated entities using protobuf-net. I suspect something in the way the ORM manages serialization attributes on read-only properties (uses public backing fields with DataMember attributes & [de]serializes) those instead of the corresponding read-only property, which has an IgnoreDataMember attribute). Guid properties might have issues of their own, but the field vs. property thing is my working theory now. Here's a simplified example of the code. Say I have a class, Account with an AccountID read-only guid, and an AccountName read-write string. I serialize & immediately deserialize a clone. In this scenario I get one of two results (depending on the entity, haven't isolated the specific commonality yet). The deserialized clone either: ...has a different AccountID from the original, or ...throws an Incorrect wire-type deserializing Guid exception while deserializing. Here's example usage... Account acct = new Account() { AccountName = "Bob's Checking" }; Debug.WriteLine(acct.AccountID.ToString()); using (MemoryStream ms = new MemoryStream()) { ProtoBuf.Serializer.Serialize<Account>(ms, acct); Debug.WriteLine(Encoding.UTF8.GetString(ms.GetBuffer())); ms.Position = 0; Account clone = ProtoBuf.Serializer.Deserialize<Account>(ms); Debug.WriteLine(clone.AccountID.ToString()); } And here's an example ORM'd class (simplified; hopefully haven't removed the cause of the issue in the process). Uses a shell game to deserialize read-only properties by exposing the backing field ("can't write" essentially becomes "shouldn't write," but we can scan code for instances of assigning to these fields, so the hack works for our purposes): [DataContract()] [Serializable()] public partial class Account { public Account() { _accountID = Guid.NewGuid(); } [XmlAttribute("AccountID")] [DataMember(Name = "AccountID", Order = 0)] public Guid _accountID; /// <summary> /// A read-only property; XML, JSON and DataContract serializers all seem /// to correctly recognize the public backing field when deserializing: /// </summary> [IgnoreDataMember] [XmlIgnore] public Guid AccountID { get { return this._accountID; } } [IgnoreDataMember] protected string _accountName; [DataMember(Name = "AccountName", Order = 1)] [XmlAttribute] public string AccountName { get { return this._accountName; } set { this._accountName = value; } } } XML, JSON and DataContract serializers all seem to serialize / deserialize matching object graphs here, so this attribute arrangement apparently causes those serializers to correctly assign to the public backing field when deserializing. I've tried protobuf-net with lists vs. single instances, different prefix styles, etc., but always either get the 'incorrect wire type ... Guid' exception, or the Guid property (field) not deserializing correctly. So the specific questions are, is there a quick workaround for this, and/or is there an explanation for both of outcomes 1 & 2 above, and/or can protobuf-net somehow be corralled into behaving like WCF in cases like this (i.e. follow the same DataMember/IgnoreDataMember semantics)? We hope not to have to create a protobuf dependency directly in the entity layer; if that's the case, we'll probably create proxy DTO entities with all public properties having protobuf attributes. (This is a subjective issue I have with all declarative serialization models; it's a ubiquitous pattern, but IMO, "normal" should be to have objects and serialization contracts decoupled.) Thanks!

    Read the article

  • protobuf-net: incorrect wire-type exception deserializing Guid properties

    - by Paul Smith
    I'm having issues deserializing certain Guid properties of ORM-generated entities using protobuf-net. Here's a simplified example of the code (reproduces most elements of the scenario, but doesn't reproduce the behavior; I can't expose our internal entities, so I'm looking for clues to account for the exception). Say I have a class, Account with an AccountID read-only guid, and an AccountName read-write string. I serialize & immediately deserialize a clone. Deserializing throws an Incorrect wire-type deserializing Guid exception while deserializing. Here's example usage... Account acct = new Account() { AccountName = "Bob's Checking" }; Debug.WriteLine(acct.AccountID.ToString()); using (MemoryStream ms = new MemoryStream()) { ProtoBuf.Serializer.Serialize<Account>(ms, acct); Debug.WriteLine(Encoding.UTF8.GetString(ms.GetBuffer())); ms.Position = 0; Account clone = ProtoBuf.Serializer.Deserialize<Account>(ms); Debug.WriteLine(clone.AccountID.ToString()); } And here's an example ORM'd class (simplified, but demonstrates the relevant semantics I can think of). Uses a shell game to deserialize read-only properties by exposing the backing field ("can't write" essentially becomes "shouldn't write," but we can scan code for instances of assigning to these fields, so the hack works for our purposes). Again, this does not reproduce the exception behavior; I'm looking for clues as to what could: [DataContract()] [Serializable()] public partial class Account { public Account() { _accountID = Guid.NewGuid(); } [XmlAttribute("AccountID")] [DataMember(Name = "AccountID", Order = 1)] public Guid _accountID; /// <summary> /// A read-only property; XML, JSON and DataContract serializers all seem /// to correctly recognize the public backing field when deserializing: /// </summary> [IgnoreDataMember] [XmlIgnore] public Guid AccountID { get { return this._accountID; } } [IgnoreDataMember] protected string _accountName; [DataMember(Name = "AccountName", Order = 2)] [XmlAttribute] public string AccountName { get { return this._accountName; } set { this._accountName = value; } } } XML, JSON and DataContract serializers all seem to serialize / deserialize these object graphs just fine, so the attribute arrangement basically works. I've tried protobuf-net with lists vs. single instances, different prefix styles, etc., but still always get the 'incorrect wire-type ... Guid' exception when deserializing. So the specific questions is, is there any known explanation / workaround for this? I'm at a loss trying to trace what circumstances (in the real code but not the example) could be causing it. We hope not to have to create a protobuf dependency directly in the entity layer; if that's the case, we'll probably create proxy DTO entities with all public properties having protobuf attributes. (This is a subjective issue I have with all declarative serialization models; it's a ubiquitous pattern & I understand why it arose, but IMO, if we can put a man on the moon, then "normal" should be to have objects and serialization contracts decoupled. ;-) ) Thanks!

    Read the article

  • Fix ACPI DSDT of Amilo Pa 1538

    - by kayahr
    I have a Fujitsu-Siemens Amilo Pa 1538 laptop and installed Ubuntu 10.04 on it (kernel 2.6.32). The main problem is that the fans of the notebook are always on, even when the temperature is low. Another problem is that the display brightness cannot be adjusted. For some years I have used Dell laptops and never experienced any ACPI problems, so I thought it was no longer needed to fix ACPI tables, but now I have this crap of a laptop and I think I have to do it. Unfortunately I need some help repairing the DSDT. The dsdt.dat and dsdt.dsl of the laptop can be found here: http://www.ailis.de/~k/permdata/20100420/dsdt/ Compiling the DSDT gives the following output:

        # iasl -sa dsdt.dsl

        Intel ACPI Component Architecture
        ASL Optimizing Compiler version 20090521 [Jun 30 2009]
        Copyright (C) 2000 - 2009 Intel Corporation
        Supports ACPI Specification Revision 3.0a

        dsdt.dsl    81:     Method (\_WAK, 1, NotSerialized)
        Warning  1080 -     ^ Reserved method must return a value (_WAK)

        dsdt.dsl   207:     Method (_L10, 0, NotSerialized)
        Warning  1087 -     ^ Not all control paths return a value (_L10)

        dsdt.dsl  2861:     Method (NVIF, 3, NotSerialized)
        Warning  1087 -     ^ Not all control paths return a value (NVIF)

        dsdt.dsl  4551:     Store (\_SB.PCI0.LPC0.PMRD (0xFA), Local0)
        Warning  1099 -     Statement is unreachable ^

        ASL Input:  dsdt.dsl - 4962 lines, 162828 bytes, 2300 keywords
        AML Output: dsdt.aml - 17627 bytes, 591 named objects, 1709 executable opcodes

        Compilation complete. 0 Errors, 4 Warnings, 0 Remarks, 519 Optimizations

    Is there anyone here with DSDT experience who can help me fix the DSDT?

    Read the article

  • Sql Server 2008 - Cannot connect to local default instance

    - by Tone
    I recently rebuilt my machine to Windows 7 x64 and installed SQL Server 2008 Enterprise. I can connect fine to other remote instances via Management Studio (be they 2000, 2005 or 2008), but I cannot find my local default instance.

    - I have verified that a directory was created for the default instance: C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER
    - I can connect to the SQLEXPRESS instance fine
    - I have rerun setup to ensure I have everything installed
    - I have verified that the SQL Server (MSSQLSERVER) service is running
    - I have tried browsing for the instance and see all the others available except my local one
    - I have tried using this for the server name: TSOUTHERLANDPC\MSSQLSERVER, the first part being my local machine name

    My issue is not the same as this post or this post. Any ideas?
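
    One thing worth noting (an editorial aside, not from the question): the default instance is addressed by the machine name alone, or "." / "(local)"; "MSSQLSERVER" is its instance/service name but never appears in the server string, unlike named instances such as ".\SQLEXPRESS". A quick connectivity check with pyodbc (the driver name and trusted login are assumptions):

        import pyodbc

        for server in (".", r".\SQLEXPRESS"):  # default instance, then the named one
            try:
                cn = pyodbc.connect(
                    "DRIVER={{SQL Server}};SERVER={0};Trusted_Connection=yes".format(server),
                    timeout=5,
                )
                print(server, "->", cn.execute("SELECT @@SERVERNAME").fetchone()[0])
            except pyodbc.Error as exc:
                print(server, "-> failed:", exc)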

    Read the article

  • HP Loadrunner failed to start .NET Diagnostics probe

    - by Johnbo
    I've got a HP Diagnostics Server (commander mode) installed in the same PC where HP Loadrunner is. I've installed the .NET probe in the web application server. When I navigate localhost:2006/registrar/health I can see the CommandingServer and three instances of the probe, all in green and connected. Then, when in LoadRunner controller I enable Diagnostics, select the probe and start the scenario, I get the next error: Failed to start J2EE/.NET Diagnostics run. (Facade error: Unable to send 'startRun' notification to probe MyAgent.1347615505142149) I've looked at the firewall logs and the rule that lets the server send commands to the probes has been matched three times. What else could it be what doesn't let me start the probe?

    Read the article

  • Amazon EC2 EBS volume scheduled backup/snapshots using puppet

    - by Ehrann Mehdan
    I am not a Linux admin, although I wish I was, and I have seen these questions:

    - Amazon EC2 Backup Strategy
    - Amazon EC2 + EBS:: Regular backup plan?
    - Simple Backup Strategy for Amazon EC2 instances / volumes?

    And this suggestion: http://alestic.com/2009/09/ec2-consistent-snapshot

    I tried using the command line + crontab (the command line works, but crontab for some reason doesn't), but I'm still pretty lost. All I want is an automated, rolling backup of my Amazon EC2 (EBS) data (by rolling I mean keep 3-4 weeks back, but delete old snapshots as new ones come, for cost control). And as things usually go, if there is something that is hard and painful, someone creates a solution for it. My question is simple: is there a way, using a tool like Puppet, to do it without a painful learning curve? (Or via other tools like http://ylastic.com.) If yes, how?
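
    Not Puppet, but for reference, a minimal boto3 sketch of the rolling-snapshot idea (an added illustration; the tag names, region and four-week retention window are assumptions): snapshot every volume carrying a chosen tag, then prune the script's own snapshots once they age out. Cron, or any scheduler Puppet manages, can run it daily.

        import datetime
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")
        RETENTION = datetime.timedelta(weeks=4)
        now = datetime.datetime.now(datetime.timezone.utc)

        # 1. Snapshot every volume tagged Backup=true (placeholder tag).
        vols = ec2.describe_volumes(Filters=[{"Name": "tag:Backup", "Values": ["true"]}])
        for vol in vols["Volumes"]:
            snap = ec2.create_snapshot(VolumeId=vol["VolumeId"],
                                       Description="scheduled rolling backup")
            ec2.create_tags(Resources=[snap["SnapshotId"]],
                            Tags=[{"Key": "rolling-backup", "Value": "true"}])

        # 2. Prune our own snapshots once they fall outside the retention window.
        snaps = ec2.describe_snapshots(
            OwnerIds=["self"],
            Filters=[{"Name": "tag:rolling-backup", "Values": ["true"]}],
        )
        for snap in snaps["Snapshots"]:
            if now - snap["StartTime"] > RETENTION:
                ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])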

    Read the article

  • QoS for Cisco Router to Prioritize Voice and Interactive Traffic

    - by TJ Huffington
    I have a Cisco 891W NATing Voice and Data to the internet over a 10mbit/2mbit connection. Voice traffic gets degraded when I upload large files. Pings time out as well. I tried to configure a QoS policy but it's basically not doing anything. Voice traffic still degrades when upload bandwidth gets saturated. Here is my current configuration:

        class-map match-any QoS-Transactional
         match protocol ssh
         match protocol xwindows
        class-map match-any QoS-Voice
         match protocol rtp audio
        class-map match-any QoS-Bulk
         match protocol secure-nntp
         match protocol smtp
         match protocol tftp
         match protocol ftp
        class-map match-any QoS-Management
         match protocol snmp
         match protocol dns
         match protocol secure-imap
        class-map match-any QoS-Inter-Video
         match protocol rtp video
        class-map match-any QoS-Voice-Control
         match access-group name Voice-Control
        policy-map QoS-Priority-Output
         class QoS-Voice
          priority percent 25
          set dscp ef
         class QoS-Inter-Video
          bandwidth remaining percent 10
          set dscp af41
         class QoS-Transactional
          bandwidth remaining percent 25
          random-detect dscp-based
          set dscp af21
         class QoS-Bulk
          bandwidth remaining percent 5
          random-detect dscp-based
          set dscp af11
         class QoS-Management
          bandwidth remaining percent 1
          set dscp cs2
         class QoS-Voice-Control
          priority percent 5
          set dscp ef
         class class-default
          fair-queue
        interface FastEthernet8
         bandwidth 1024
         bandwidth receive 20480
         ip address dhcp
         ip nat outside
         ip virtual-reassembly
         duplex auto
         speed auto
         auto discovery qos
         crypto map mymap
         max-reserved-bandwidth 80
         service-policy output QoS-Priority-Output
        crypto map mymap 10 ipsec-isakmp
         set peer 1.2.3.4 default
         set transform-set ESP-3DES-SHA
         match address 110
         qos pre-classify
        !

    fa8 is my connection to the internet. Voice traffic goes over a VPN ("mymap") to the SIP server. That's why I specified "qos pre-classify", which I believe is the way to classify traffic over the VPN. However, even when I ping a public IP while saturating upload bandwidth, the latency is exceptionally high. Is this configuration correct? Are there any suggestions that might make this work for my setup? Thanks in advance.

    Read the article

  • Using VLC as RTSP server

    - by StackedCrooked
    I'm trying to figure out how to use the server capabilities of VLC. More specifically, how to export an SDP file when RTP streaming. In chapter 4 in the section related to RTP Streaming examples for server and client are given: vlc -vvv input_stream --sout '#rtp{dst=192.168.0.12,port=1234,sdp=rtsp://server.example.org:8080/test.sdp}' vlc rtsp://server.example.org:8080/test.sdp It's not very clear to me how to make it actually work. I have tried these two commands for server and client using two cmd instances: vlc -I rc screen:// --sout=#rtp{dst=127.0.0.1,port=4444,sdp=rtsp://localhost:8080/test.sdp} vlc -I rc rtsp://localhost:8080/test.sdp Invoking the second command causes the first one to crash. The second command shows the error message "could not connect to localhost:8080".

    Read the article

  • What does it mean when ARP shows <incomplete> on eth1

    - by Geoff Dalgas
    We have been using HAProxy along with heartbeat from the Linux-HA project. We are using two Linux instances to provide a failover. Each server has its own public IP and a single IP which is shared between the two using a virtual interface (eth1:1) at IP 69.59.196.211. The virtual interface (eth1:1) IP 69.59.196.211 is configured as the gateway for the Windows servers behind them and we use ip_forwarding to route traffic. We are experiencing an occasional network outage on one of our Windows servers behind our Linux gateways. HAProxy will detect the server is offline, which we can verify by remoting to the failed server and attempting to ping the gateway:

        Pinging 69.59.196.211 with 32 bytes of data:
        Reply from 69.59.196.220: Destination host unreachable.

    Running arp -a on this failed server shows that there is no entry for the gateway address (69.59.196.211):

        Interface: 69.59.196.220 --- 0xa
          Internet Address      Physical Address      Type
          69.59.196.161         00-26-88-63-c7-80     dynamic
          69.59.196.210         00-15-5d-0a-3e-0e     dynamic
          69.59.196.212         00-21-5e-4d-45-c9     dynamic
          69.59.196.213         00-15-5d-00-b2-0d     dynamic
          69.59.196.215         00-21-5e-4d-61-1a     dynamic
          69.59.196.217         00-21-5e-4d-2c-e8     dynamic
          69.59.196.219         00-21-5e-4d-38-e5     dynamic
          69.59.196.221         00-15-5d-00-b2-0d     dynamic
          69.59.196.222         00-15-5d-0a-3e-09     dynamic
          69.59.196.223         ff-ff-ff-ff-ff-ff     static
          224.0.0.22            01-00-5e-00-00-16     static
          224.0.0.252           01-00-5e-00-00-fc     static
          225.0.0.1             01-00-5e-00-00-01     static

    On our Linux gateway instances arp -a shows:

        peak-colo-196-220.peak.org (69.59.196.220) at <incomplete> on eth1
        stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1
        peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1
        peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1
        peak-colo-196-222.peak.org (69.59.196.222) at 00:15:5d:0a:3e:09 [ether] on eth1
        peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1
        peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1

    Why would arp occasionally set the entry for this failed server as <incomplete>? Should we be defining our arp entries statically? I've always left arp alone since it works 99% of the time, but in this one instance it appears to be failing. Are there any additional troubleshooting steps we can take to help resolve this issue?

    Things we have tried:

    I added a static arp entry for testing on one of the Linux gateways, which still didn't help.

        root@haproxy2:~# arp -a
        peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1
        peak-colo-196-221.peak.org (69.59.196.221) at 00:15:5d:00:b2:0d [ether] on eth1
        stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1
        peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1
        peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1
        peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1
        peak-colo-196-220.peak.org (69.59.196.220) at 00:21:5e:4d:30:8d [ether] PERM on eth1

        root@haproxy2:~# arp -i eth1 -s 69.59.196.220 00:21:5e:4d:30:8d
        root@haproxy2:~# ping 69.59.196.220
        PING 69.59.196.220 (69.59.196.220) 56(84) bytes of data.

        --- 69.59.196.220 ping statistics ---
        7 packets transmitted, 0 received, 100% packet loss, time 6006ms

    Rebooting the Windows web server solves this issue temporarily, with no other changes to the network, but our experience shows this issue will come back.

    Swapping network cards and switches: I noticed the link light on the port of the switch for the failed Windows server was running at 100Mb instead of 1Gb on the failed interface. I moved the cable to several other open ports and the link indicated 100Mb for each port that I tried. I also swapped the cable with the same result. I tried changing the properties of the network card in Windows and the server locked up and required a hard reset after clicking apply. This Windows server has two physical network interfaces, so I have swapped the cables and network settings on the two interfaces to see if the problem follows the interface. If the public interface goes down again we will know that it is not an issue with the network card. (We also tried another switch we have on hand; no change.)

    Changing network hardware driver versions: We've had the same problem with the latest Broadcom driver, as well as the built-in driver that ships in Windows Server 2008 R2.

    Replacing network cables: As a last-ditch effort we remembered another change that occurred was the replacement of all of the patch cords between our servers / switch. We had purchased two sets, one green of lengths 1ft - 3ft for the private interfaces and another set of red cables for the public interfaces. We swapped out all of the public interface patch cables with a different brand and ran our servers without issue for a full week ... aaaaaand then the problem recurred.

    Disable checksum offload, remove TProxy: We also tried disabling TCP/IP checksum offload in the driver; no change. We're now pulling out TProxy and moving to a more traditional x-forwarded-for network arrangement without any fancy IP address rewriting. We'll see if that helps.
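
    As a small diagnostic aid (an added sketch, not something tried in the post), the script below can run on the Linux gateways to log whenever a neighbour shows up as <incomplete> in arp -an and to ping it immediately to force a fresh ARP resolution; it assumes net-tools' arp and iputils ping are installed, and at least timestamps when the entry drops out.

        import re
        import subprocess
        import time

        def incomplete_neighbours(iface="eth1"):
            out = subprocess.run(["arp", "-an"], capture_output=True, text=True).stdout
            pattern = r"\((\d+\.\d+\.\d+\.\d+)\) at <incomplete> on " + re.escape(iface)
            return re.findall(pattern, out)

        while True:
            for ip in incomplete_neighbours():
                print(time.strftime("%F %T"), "incomplete ARP entry for", ip)
                # Ping once to trigger a new ARP request for this neighbour.
                subprocess.run(["ping", "-c", "1", "-W", "1", ip],
                               stdout=subprocess.DEVNULL)
            time.sleep(30)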

    Read the article

  • Why is usable RAM less than total RAM?

    - by D Connors
    My girlfriend bought a laptop last week. It's a Core 2 Duo with 4 GB of RAM. We installed Vista 64-bit, and one of the first things we did was right-click on "My Computer" to see the properties. Immediately we noticed something strange about her RAM; the line said:

        Installed memory (RAM): 4,00 GB (3,68 GB usable)

    I told her not to worry, thinking it must be something about the laptop hardware (considering her Vista installation came from the same DVD as mine, and I never noticed anything like that on my 4 GB desktop). One hour ago, it got worse. We looked at Properties again, and it now says:

        Installed memory (RAM): 4,00 GB (2,98 GB usable)

    What does that mean? Are those 1,02 GB missing or being used by the system?

    EDIT: There is a possibility that the system information is wrong. I just noticed that it reports an Intel T6500 processor, when it's actually a T6400. How can I find out how much RAM is really available to the system?

    EDIT 2: Checking the resource monitor, it says 1003 MB are reserved for the hardware. Is that good or bad? Thanks

    Read the article

  • How do I solve MSSQL 2008 install error, "The MOF compiler could not connect with the WMI server"?

    - by nbolton
    Possibly related to: SQL Server 2008 Install fails error reading etwcls.mof After manually removing MSSQL 2008 from my system (uninstall failed to remove two instances), I receive the following error when trying to re-install: The MOF compiler could not connect with the WMI server. This is either because of a semantic error such as an incompatibility with the existing WMI repository or an actual error such as the failure of the WMI server to start. It seems that mofcomp is failing with one of the .mof files, but I'm not sure which, or why. Digging through the connect article gave some indications, but no solution. I've run winmgmt /salvagerepository, which returns "WMI repository is consistent". Currently, I'm unable to install MSSQL 2008. Please help!

    Read the article
