Search Results

Search found 14146 results on 566 pages for 'iis manager'.

  • The HTTP verb POST used to access path '[my path]' is not allowed.

    - by Jed
    I am receiving an error that states: "The HTTP verb POST used to access path '[my path]' is not allowed." The error is caused by an HTML form element that uses the POST method without explicitly naming an .aspx page in its ACTION attribute. For example:

        <form action="" method="post">
            <input type="submit" />
        </form>

    The HTML above is in a file at "/foo/default.aspx". Now, if the user points the URL at the root directory "foo" without specifying the .aspx file (i.e. "http://localhost/foo") and then submits the form, the error "The HTTP verb POST used to access path '/foo' is not allowed." is thrown. However, if the user goes to "http://localhost/foo/default.aspx" and then submits the form, all goes well (even if the ACTION attribute is left empty).

    Note: if I explicitly add the name of the .aspx page (default.aspx) to the ACTION attribute, no errors are thrown. So the example below works fine regardless of whether the user includes the file name in the URL:

        <form action="default.aspx" method="post">
            <input type="submit" />
        </form>

    I was curious why the error was being thrown, so I read a Microsoft KB article that states: "This problem occurs because a client makes an HTTP request by sending the POST method to a static HTML page. Static HTML pages do not support the POST method." The core of the explanation makes sense, but in my case the form is not being posted to a static HTML page - it is posted to the same page the form lives on (default.aspx), which is what an empty ACTION attribute implies. Is it possible to configure IIS (or something else) to allow form POSTing while keeping the ACTION attribute empty?
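    One possible workaround - a sketch, assuming the IIS URL Rewrite module is installed and that "/foo" is the directory in question - is to rewrite directory requests to the default document internally. An internal rewrite preserves the POST body, whereas a redirect would cause the browser to reissue the request as a GET:

        <!-- Hypothetical web.config fragment: internally rewrite POSTs aimed
             at the directory to its default document. -->
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Directory POST to default document">
                <match url="^foo/?$" />
                <conditions>
                  <add input="{REQUEST_METHOD}" pattern="^POST$" />
                </conditions>
                <action type="Rewrite" url="foo/default.aspx" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>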

  • MySQL + Joomla + remote c# access

    - by Jimmy
    Hello, I work on a Joomla web site, installed on a MySQL database and running on IIS7. It's all working fine. I now need to add functionality that lets (Joomla-)registered users change some configuration data. Though I haven't done this yet, it looks straightforward enough to do within Joomla. The data is private, so all external access will be done through HTTPS.

    I also need an existing C# program, running on another machine, to read that configuration data. Naturally, this data access needs to be as fast as possible. The data will be small (and filtered by query), but the latency should be kept to a minimum. A short-lived, client-side cache (less than a minute, in case a user updates his configuration data) seems like a good idea.

    I have done practically zero database/ASP programming so far, so what's the best way of doing that last step? Should the C# program access the database 'directly' (using what? LINQ?) or set up some sort of facade (SOAP?) service? If a service should be used, should it be done through Joomla or with ASP on IIS? Thanks
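    For the direct-access option, here is a minimal sketch of what the C# side could look like, assuming the MySql.Data connector and .NET 4's System.Runtime.Caching; the connection string, table, and column names are hypothetical placeholders:

        // Sketch: read one user's config value over a direct MySQL connection,
        // caching the result for under a minute as described above.
        // Requires references to MySql.Data and System.Runtime.Caching.
        using System;
        using System.Runtime.Caching;
        using MySql.Data.MySqlClient;

        class ConfigReader
        {
            static readonly MemoryCache Cache = MemoryCache.Default;

            public static string GetConfigValue(string userName)
            {
                string cacheKey = "cfg:" + userName;
                var cached = Cache.Get(cacheKey) as string;
                if (cached != null)
                    return cached;

                using (var conn = new MySqlConnection(
                    "Server=dbhost;Database=joomla;Uid=reader;Pwd=secret;"))
                using (var cmd = new MySqlCommand(
                    "SELECT config_value FROM user_config WHERE user_name = @user", conn))
                {
                    cmd.Parameters.AddWithValue("@user", userName);
                    conn.Open();
                    var value = cmd.ExecuteScalar() as string;

                    // Cache for 45 seconds so recent user edits show up quickly.
                    if (value != null)
                        Cache.Set(cacheKey, value, DateTimeOffset.Now.AddSeconds(45));
                    return value;
                }
            }
        }

    Whether this beats a facade service mostly comes down to whether you want the database port exposed to the other machine; the caching logic would look the same either way.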

  • Configuration Error , Finding assembly after I swapped referenced dll out. Visual Studio 2003

    - by TampaRich
    Here is the situation. I had a clean build of my ASP.NET web application working. I then went into the bin folder under the web app and replaced two referenced DLLs with two older versions of the same DLLs (same names, etc.). After testing, I swapped back to the new DLLs, and now my application keeps throwing a configuration error:

        === Pre-bind state information ===
        LOG: DisplayName = xxxxx.xxxx.Personalization (Partial)
        LOG: Appbase = file:///c:/inetpub/wwwroot/appname
        LOG: Initial PrivatePath = bin
        Calling assembly : (Unknown).
        LOG: Policy not being applied to reference at this time (private, custom, partial, or location-based assembly bind).

    I found this issue on the web and tried all the suggested solutions, but nothing worked. I then went into all the projects the solution references, cleared out the bin/debug folder in each, cleared out the obj folder under each, and deleted the temporary files associated with the application. I rebuilt it, and it still will not work due to this error. I'm not sure what is causing this or how to fix it. I have tried restarting IIS and stopping Indexing Services, which was said to be a known issue. This is a .NET Framework 1.1 app in Visual Studio 2003. Any suggestions would be great. Thanks.
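    The "(Partial)" in that fusion log means something is loading the assembly by partial name (no version or public key token). One thing worth trying - a sketch only; the version and public key token below are placeholders for the actual values of your new DLL - is a qualifyAssembly entry in web.config, so that partial-name binds resolve to the intended version:

        <!-- Hypothetical web.config fragment: map the partial-name reference
             to a fully qualified name so Fusion binds the new version. -->
        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <qualifyAssembly
                partialName="xxxxx.xxxx.Personalization"
                fullName="xxxxx.xxxx.Personalization, Version=2.0.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef" />
            </assemblyBinding>
          </runtime>
        </configuration>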

  • IIS7 URL Rewriting: How not to drop HTTPS protocol from rewritten URL?

    - by Scott Mitchell
    I'm working on a website that's using IIS 7's URL rewriting feature to do a permanent redirect from example.com to www.example.com, as well as rewrites from similar domain names to the "main" one, such as from www.examples.com to www.example.com. This rewrite rule - shown below - has worked well for some time now. However, we recently added HTTPS support and noticed that if users visit one of the URLs to be rewritten to www.example.com, then HTTPS is dropped. For instance, if a user visits https://example.com they get redirected to http://www.example.com, whereas we would like them to be sent to https://www.example.com. Here is the rewrite rule of interest (in Web.config):

        <rule name="Canonical Host Name" stopProcessing="true">
          <match url="(.*)" />
          <conditions logicalGrouping="MatchAny">
            <add input="{HTTP_HOST}" pattern="^example\.com$" />
            <add input="{HTTP_HOST}" pattern="^(www\.)?example\.net$" />
            <add input="{HTTP_HOST}" pattern="^(www\.)?example\.info$" />
            <add input="{HTTP_HOST}" pattern="^(www\.)?examples\.com$" />
          </conditions>
          <action type="Redirect" url="http://www.example.com/{R:1}" redirectType="Permanent" />
        </rule>

    As you can see, the action element's url attribute points directly to http://, so I get why https://example.com is redirected to http://www.example.com. My question is, how do I fix this? I tried (naively) to just drop the http:// part from the url attribute, but that didn't work. Thanks!
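    A common way to preserve the scheme - a sketch against the rule above, using the standard URL Rewrite rewrite-map idiom - is to map the {HTTPS} server variable ("on"/"off") to a protocol string and use it in the redirect URL:

        <!-- Sketch: MapProtocol turns {HTTPS} into "https"/"http", so the
             redirect keeps whichever scheme the client arrived with. -->
        <rewrite>
          <rewriteMaps>
            <rewriteMap name="MapProtocol">
              <add key="on" value="https" />
              <add key="off" value="http" />
            </rewriteMap>
          </rewriteMaps>
          <rules>
            <rule name="Canonical Host Name" stopProcessing="true">
              <match url="(.*)" />
              <conditions logicalGrouping="MatchAny">
                <add input="{HTTP_HOST}" pattern="^example\.com$" />
                <add input="{HTTP_HOST}" pattern="^(www\.)?example\.net$" />
                <add input="{HTTP_HOST}" pattern="^(www\.)?example\.info$" />
                <add input="{HTTP_HOST}" pattern="^(www\.)?examples\.com$" />
              </conditions>
              <action type="Redirect" url="{MapProtocol:{HTTPS}}://www.example.com/{R:1}"
                      redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>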

  • Designers, Expression or SharePoint Designer, and real source control

    - by David Lively
    I'm trying desperately to move from VSS to a real source control system. Options include TFS and SVN. My designers need to keep their ability to modify source files and instantly preview their changes in a browser without having to commit their changes. Using FPSE with VSS, this works flawlessly, since saving a file causes the copy in the working folder on the dev server to be updated, so they can just save and refresh their browser, which is pointed at the dev server.

    The site in question consists of 350k+ lines of classic ASP code and some new ASP.NET MVC. The designers only need to be able to modify views within the MVC code, not C#. Though Expression includes a version of Cassini for local debugging, Cassini does not support classic ASP.

    Surely someone has solved this problem before. It can't be necessary to install IIS on each designer's machine (this is absolutely untenable). I need a way to have a common working folder on a dev webserver updated whenever someone saves a file locally, just like using FPSE. I'd rather not write an FPSE proxy that knows how to talk to TFS/SVN. Any suggestions? (I know I've asked this question in the past, but I haven't yet found a solution.)
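    One low-tech way to reproduce the FPSE-style "save and refresh" behavior - a sketch, assuming designers work from a local checkout and the dev server exposes its working folder as a share; the paths are hypothetical - is a continuous one-way mirror running on each designer's machine:

        rem Sketch: mirror the local working copy to the dev web server,
        rem re-running whenever at least one change is seen (/MON:1).
        rem /XD keeps source control metadata folders out of the mirror.
        robocopy "C:\work\site" "\\devserver\wwwroot\site" /MIR /MON:1 /XD .svn

    This keeps commits out of the preview loop entirely: the designer saves, robocopy pushes the file, and the browser refresh hits the dev server copy.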

  • Looking for an email/report templating engine with database backend - for end-users ...

    - by RizwanK
    We have a number of customers that we have to send monthly invoices to. Right now, I'm managing a codebase that runs SQL queries against our customer database and billing database, places that data into emails, and sends them. I grow weary of maintaining this every time we want to include a new promotion or change our customer service phone numbers. So, I'm looking for a replacement that moves more of this into the hands of those requesting the changes. In my ideal world, I need:

      • A WYSIWYG (man, does anyone even say that anymore?) email editor that generates templates based upon the output of a database query.
      • The ability to drag and drop various fields from the database query into the email template.
      • Display of sample email results with the database query.
      • Web application, preferably not requiring IIS.
      • Involves as little code as possible for the end user, but allows basic functionality (i.e. arrays/for loops).
      • Either comes with its own email delivery engine, or writes output in a way that I can easily write a Python script to deliver the email.
      • Support for generic database connectors (I need MSSQL and MySQL).
      • F/OSS

    So ... can anyone suggest a project like this, or some tools that'd be useful for rolling my own? (My current alternative idea is using something like ERB or Tenjin, having them write the code, but not having live preview for the editor would suck...)
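    For the roll-your-own route, here is a minimal sketch of the rendering-and-delivery half in Python, assuming Jinja2 for templating; the template, rows, and SMTP details are placeholders, and the WYSIWYG editor with live preview is the piece this does not solve:

        # Sketch: render one invoice email per row of a query result and send it.
        import smtplib
        from email.mime.text import MIMEText
        from jinja2 import Template

        template = Template(
            "Dear {{ name }},\n\n"
            "Your balance this month is ${{ '%.2f' % balance }}.\n"
            "Questions? Call us at {{ support_phone }}.\n"
        )

        # Stand-in for a database query (MSSQL via pyodbc, MySQL via MySQLdb, etc.)
        rows = [{"email": "a@example.com", "name": "Alice", "balance": 42.50}]

        smtp = smtplib.SMTP("mail.example.com")
        for row in rows:
            body = template.render(support_phone="555-0100", **row)
            msg = MIMEText(body)
            msg["Subject"] = "Your monthly invoice"
            msg["From"] = "billing@example.com"
            msg["To"] = row["email"]
            smtp.sendmail(msg["From"], [msg["To"]], msg.as_string())
        smtp.quit()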

  • LSI RAID monitor reports "Consistency Check inconsistency logging disabled"

    - by carlpett
    I have a server with an LSI MegaRAID 9261-8i controller. Recently I started getting alerts like this one:

        Controller ID: 1 Consistency Check inconsistency logging disabled, too many inconsistencies on VD: 0
        Generated on: Sat May 12 04:06:40 2012

        SYSTEM DETAILS---
        IP Address: 192.168.1.29
        OS Name: Windows 7 x64
        OS Version: 6.01
        Driver Name: megasas.sys
        Driver Version: 4.5.1.64

        IMAGE DETAILS---
        BIOS Version: 2.120.33-1197
        Firmware Package Version: 12.12.0-0045
        Firmware Version: 3.21.00_4.11.05.00_0x05000000

    VD 0 is a RAID mirror containing the system disk. I have searched and read, but cannot find any trace of how to actually do anything about this. I tried running a scandisk, but it did not find anything (as I expected, since scandisk reads the disks as exposed by the controller, right?). MegaRAID Storage Manager does not, as far as I can see, have any options for checking or fixing physical disks. The program claims the VD is "healthy", and both disks have an error count of 0. Also a bit strange are the system details in the message... The IP address is associated with the RAS (dial-in) interface, and the OS should be Windows Server 2011 SBS. Has anyone else experienced this before? What can be done?
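    If MegaRAID Storage Manager doesn't expose it, LSI's MegaCli command-line tool can usually start a consistency check directly - a sketch; adjust the VD/adapter numbers, and note the binary name varies by version (e.g. MegaCli64 on 64-bit Windows):

        rem Sketch: start a consistency check on VD 0 of adapter 0,
        rem then poll its progress.
        MegaCli -LDCC -Start -L0 -a0
        MegaCli -LDCC -ShowProg -L0 -a0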

  • LDOMs in Solaris 11 don't work

    - by Giovanni
    I just got a new Sun Fire T2000, installed Solaris 11, and was going to configure LDOMs. However, "ldmd" can't be started, and in turn ldm doesn't work:

        root@solaris11:~# ldm
        Failed to connect to logical domain manager: Connection refused
        root@solaris11:~# svcadm enable svc:/ldoms/ldmd:default
        root@solaris11:~# tail /var/svc/log/ldoms-ldmd\:default.log
        [ May 28 12:56:22 Enabled. ]
        [ May 28 12:56:22 Executing start method ("/opt/SUNWldm/bin/ldmd_start"). ]
        Disabling service because this domain is not a control domain
        [ May 28 12:56:22 Method "start" exited with status 0. ]
        [ May 28 12:56:22 Stopping because service disabled. ]
        [ May 28 12:56:22 Executing stop method (:kill). ]

    Why is this not a control domain? There is no other domain on the box (as far as I can tell). I have upgraded the firmware to the latest 6.7.12 and booted with reset_nvram; nothing helped...

        sc> showhost
        Sun-Fire-T2000 System Firmware 6.7.12 2011/07/06 20:03
        Host flash versions:
          OBP 4.30.4.d 2011/07/06 14:29
          Hypervisor 1.7.3.c 2010/07/09 15:14
          POST 4.30.4.b 2010/07/09 14:24

    What else should I do? Thanks!
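    One quick check worth running - a sketch; virtinfo ships with Solaris 11 - is to ask the OS what role it thinks this domain has, since ldmd will only run where the domain reports a control role:

        # Sketch: display the virtualization domain information as Solaris
        # sees it; on a working setup the reported role should include "control".
        virtinfo -a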

  • Reinstall Acer OEM Windows 8, Windows 8 Recovery for Acer Aspire V5 122p

    - by stwindr
    My Acer Aspire V5-122P-61456G50NSS, model MS2377, has crashed altogether. It came preloaded with Windows 8, and I upgraded to Windows 8.1 three or four days before the crash. Unfortunately, I did not make any recovery media beforehand. When I look up eRecovery on the Acer store with my PC's serial number, it says no RCD is available for it. I tried recovery by loading the recovery manager (Left Alt + F10). Various other advanced startup options (like holding the Shift key while turning on, or pressing F8) return nothing - no luck. However, I am able to enter the BIOS. After researching the above condition on various PC forums, here are my questions:

      • I read that a 'Windows Recovery Drive' can be made on any PC running Windows 8 and could be used to repair another PC. Does anybody in the SuperUser community have one (or a link to download the same from somewhere?), as I'm unable to find anybody running Windows 8 among my friends.

      • I downloaded a Windows 8 Pro ISO and made a bootable USB. I was able to get to the 'Repair Your Computer' option, and after going to the 'Reset your PC' option found that my recovery partition is gone/missing. I tried all available options, but no luck. Then I tried to install with that Windows 8 Pro ISO, but got the message: "The product key entered does not match any of the Windows images available for installation. Enter a different product key." Before this message I did not get any form asking for a product key! Does this mean that the installer was picking up the key from the BIOS (OEM key)? Maybe the installation did not succeed because the OEM Windows version was Windows 8 and I was trying to install Windows 8 Pro. If that is the case, could somebody please send me a link to download a plain Windows 8 ISO? I am helpless and couldn't find one anywhere on the internet (without having to pay for a new key, but I should not have to pay, as the installer will use the OEM key).
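    On the edition mismatch: Windows 8 setup reads the OEM key embedded in the firmware and auto-selects the matching edition, which would explain both the missing key prompt and the failure against Pro media. A commonly cited workaround - an illustration to verify against your media, not a guaranteed fix - is to add an ei.cfg file to the sources folder of the bootable USB so setup prompts for the edition instead of auto-selecting. For the base (non-Pro) edition it would look like:

        [EditionID]
        Core
        [Channel]
        Retail
        [VL]
        0

    Save that as \sources\ei.cfg on the USB stick; "Core" is the edition ID of plain Windows 8.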

  • virtualbox installing iso Error "could not find a valid v7 on sda"

    - by MountainX
    I decided to try this suggestion to install Android x86 on VirtualBox. I'm following those steps and I'm at #4. I used the android-x86-4.0-RC1-amd_brazos.iso (because I have a ThinkPad that this might be compatible with, and it seemed as good as any of the other choices...). I'm getting an endless series of errors:

        .[numbers] VFS: could not find a valid v7 on sda.
        .[numbers] VFS: could not find a valid v7 on sda.
        repeating...

    For storage, I made a VDI image sized at 4.0 GB (dynamically allocated), attached on SATA Port 0.

    Background and more details: I'm running Kubuntu 12.04. I installed VirtualBox 4.1.12 after adding "deb http://download.virtualbox.org/virtualbox/debian precise contrib" to my sources.

    EDIT: Now I'm wondering if I installed the right package. I verified that I'm running 4.1.12, but I installed it with "apt-get install virtualbox" instead of the recommended "apt-get install virtualbox-4.1". I checked just now and see this:

        apt-cache search virtualbox
        virtualbox - x86 virtualization solution - base binaries
        virtualbox-4.1 - Oracle VM VirtualBox

    But when I run VirtualBox, I get the Oracle VM VirtualBox Manager version 4.1.12, so I think I'm OK. I did see one minor issue (possibly related to this question) when installing VirtualBox, but in my case I don't think it is actually an error at all:

        * No suitable module for running kernel found [fail]
        invoke-rc.d: initscript virtualbox, action "restart" failed.

    VirtualBox installed and seems to be running fine. I just can't get the ISO image to install. The error is as shown above.
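    One change frequently suggested for Android-x86 guests of that era - a sketch; the VM and controller names are placeholders, and this assumes the guest kernel copes better with IDE than SATA - is to attach the disk to an IDE controller instead of SATA Port 0:

        # Sketch: detach the VDI from SATA and reattach it on an IDE controller.
        VBoxManage storageattach "AndroidVM" --storagectl "SATA" \
            --port 0 --device 0 --medium none
        VBoxManage storagectl "AndroidVM" --name "IDE" --add ide
        VBoxManage storageattach "AndroidVM" --storagectl "IDE" \
            --port 0 --device 0 --type hdd \
            --medium "$HOME/VirtualBox VMs/AndroidVM/AndroidVM.vdi"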

  • Problems with cross forest authentication in SQL Reporting

    - by chunkyb2002
    We're currently running an SQL 2008 R2 cluster with Reporting Services, all for use with System Center Operations Manager 2007 R2 (RU3). Our users are on a different domain to the SCOM and SQL servers (we have two domains, as we are in the process of a domain migration). We have no problems at all with users accessing reports via the SCOM console or the web interface if they are on the new domain, which runs at the 2008 R2 functional level. However, users on the old domain (which runs at the 2003 functional level) cannot access reports in SCOM or via the web interface (http://sqlserver/reports). The error we get is:

        An error occurred when invoking the authorization extension. (rsAuthorizationExtensionError)
        For more information about this error navigate to the report server on the local server machine, or enable remote errors

    Taking the error's advice, we logged on to the SQL server as a user on the old domain (which works fine!) and then tried to authenticate with Reporting Services via the web interface, which produces this most useful of errors:

        An error occurred when invoking the authorization extension. (rsAuthorizationExtensionError)
        The creator of this fault did not specify a Reason.

    Things we've tried:

      • Recreating the trust between domains
      • Ensuring the SQL Reporting service account was a member of the Windows Authorization Access Group on the 2003 domain
      • Adding users on the 2003 domain explicitly to the Reporting Users group on the SQL server

    Has anyone come across this issue before, perhaps in a different scenario? If so, how was it resolved? Thanks in advance for any help.

  • TSM Backup Fails with VSS

    - by user176320
    Since we rebuilt our servers to Windows Server 2008 x64 R2, we have had perpetual problems with the scheduled backup (shadow copy). In the Application event log, the following events are logged on both nodes:

        The description for Event ID 4112 from source TsmVssPlugin cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
        If the event originated on another computer, the display information had to be saved with the event.
        The following information was included with the event:
        VSS processing encountered error 'VSS_E_UNEXPECTED_PROVIDER_ERROR' in the Volume Shadow Copy API 'AddToSnapshotSet'. For more information, see the IBM Tivoli Storage Manager client error log.

    Reinstalling TSM didn't change anything; it still logs the same events. The TSM error log file shows the following:

        ANS1327W The snapshot operation for 'C:' failed with error code: -1.
        ANS5250E An unexpected error was encountered.
        TSM function name : BaStartSnapshot
        TSM function : initializeSnapshotSet() failed, check Microsoft Application event log for VSS errors
        TSM return code : -1
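    Since the failure comes from the Volume Shadow Copy layer rather than TSM itself, a first diagnostic step - a sketch using the built-in vssadmin tool - is to check the state of the writers and providers on each node:

        rem Sketch: list VSS writers (look for any in a failed state),
        rem the installed providers, and existing shadow storage allocations.
        vssadmin list writers
        vssadmin list providers
        vssadmin list shadowstorage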

  • SD Card reader not working on Sony Vaio

    - by TessellatingHeckler
    This laptop (Sony Vaio VGN-Z31MN/B PCG-6z2m) has been installed with Windows 7 64-bit, all the drivers from Sony's VAIO site are installed, and everything in Device Manager both (a) has a driver and (b) shows as working - no exclamation marks or warnings. "Hide empty drives" in Folder Options is disabled, so the card reader appears, but it will not read the card ("please insert a disk in drive O:"). Previously, when the laptop had Windows XP on it, it could read the same card. Also, Windows Update's suggested driver ("SD Card Reader") doesn't work, Ricoh's own drivers install properly but show the same behaviour, and other third-party driver suggestions from forums (Acer and Texas Instruments FlashMedia) do not seem to install properly. I would post the PCI ID if I had it, but it was just showing up as rimsptsk\diskricohmemorystickstorage (while it had the Ricoh driver installed).

    Edit: If there are any lower-level diagnostic utilities which might shed more light on it, I'd welcome hearing of them - anything which might get it to put troubleshooting logs in the event log, or identify chipsets, or whatever...

    Update: Device details are:

        SD\VID_03&OID_5344&PID_SD04G&REV_8.0\5&4617BC3&0&0 : SD Memory Card
        PCI\VEN_8086&DEV_2934&SUBSYS_9025104D&REV_03\3&21436425&0&E8 : Intel(R) ICH9 Family USB Universal Host Controller - 2934
        PCI\VEN_1180&DEV_0476&SUBSYS_9025104D&REV_BA\4&1BD7BFCD&0&20F0 : Ricoh R/RL/5C476(II) or Compatible CardBus Controller
        RIMSPTSK\DISK&VEN_RICOH&PROD_MEMORYSTICKSTORAGE&REV_1.00\MS0001 : SD Storage Card
        PCI\VEN_1180&DEV_0592&SUBSYS_9025104D&REV_11\4&1BD7BFCD&0&24F0 : Ricoh Memory Stick Host Controller
        WPDBUSENUMROOT\UMB\2&37C186B&1&STORAGE#VOLUME#_??_RIMSPTSK#DISK&VEN_RICOH&PROD_MEMORYSTICKSTORAGE&REV_1.00#MS0001# : O:\
        STORAGE\VOLUME\{C82A81B8-5A4F-11E0-AACC-806E6F6E6963}#0000000000100000 : Generic volume
        PCI\VEN_1180&DEV_0822&SUBSYS_9025104D&REV_21\4&1BD7BFCD&0&22F0 : SDA Standard Compliant SD Host Controller
        ROOT\LEGACY_FVEVOL\0000 : Bitlocker Drive Encryption Filter Driver
        PCI\VEN_1180&DEV_0832&SUBSYS_9025104D&REV_04\4&1BD7BFCD&0&21F0 : Ricoh 1394 OHCI Compliant Host Controller

    Now going to search for drivers for that.

  • Cannot reactivate RAID-5 volume: The size of the plex member is invalid

    - by Ian Boyd
    We had a 3-drive Windows Server 2008 R2 RAID-5 fail (operating in redundancy mode):

      • WDC 1 TB
      • WDC 1 TB
      • WDC 1 TB

    We removed the failed hard drive and put a WDC 1 TB drive (that we had standing by) into the machine. When launched, Disk Manager asked permission to "initialize" the disk as either:

      • Master Boot Record (MBR)
      • GUID Partition Table (GPT)

    We initialized the disk as GPT, converted it to dynamic, and tried to use the Repair Volume command - except it was greyed out (which is a terrifying thing on a failed production server hosting 3 virtual servers). I tried from the diskpart command-line tool. First we look for our RAID-5 volume that is in Failed Rd mode:

        DISKPART> list volume

          Volume ###  Ltr  Label        Fs     Type       Size     Status     Info
          ----------  ---  -----------  -----  ---------  -------  ---------  ------
          Volume 0     E   VMs (Raid5)  NTFS   RAID-5     1863 GB  Failed Rd
          Volume 1     D                       DVD-ROM       0 B   No Media
          Volume 2         System Rese  NTFS   Partition   100 MB  Healthy    System
          Volume 3     C                NTFS   Partition  1862 GB  Healthy    Boot

    There, Volume 0. Make that our active context:

        DISKPART> select volume 0

        Volume 0 is the selected volume.

    Now we need to find the disk we will be repairing the volume with:

        DISKPART> list disk

          Disk ###  Status   Size     Free     Dyn  Gpt
          --------  -------  -------  -------  ---  ---
          Disk 0    Online    931 GB      0 B   *
          Disk 1    Online    931 GB   931 GB   *
          Disk 2    Online   1863 GB      0 B
          Disk 3    Online    931 GB      0 B   *
          Disk M0   Missing     0 B      0 B   *

    The disk with 931 GB free: Disk 1. Now we just need to repair the volume:

        DISKPART> repair disk=1

        Virtual Disk Service error:
        The size of the plex member is invalid.
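    A hedged observation: this VDS error is often associated with the replacement disk offering slightly less usable space than the surviving plex members ("1 TB" drives can differ by a few megabytes between models, and GPT initialization reserves space differently than MBR). A quick sketch for comparing the exact capacities diskpart sees:

        rem Sketch: compare the replacement disk against a surviving member;
        rem "list disk" rounds sizes, "detail disk" is more precise.
        DISKPART> select disk 1
        DISKPART> detail disk
        DISKPART> select disk 0
        DISKPART> detail disk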

  • Oracle VM repository creation seems contradictory to its server pool?

    - by Michael
    I found something contradictory in Oracle VM. Clustered server pool creation in Oracle VM formats my FC LUN as OCFS2 and starts the o2cb and ocfs2 services to build the cluster environment. After that, when I wanted to create a repository on the server pool, unexpectedly, it told me that the physical disk I chose - which is also my FC LUN - already contains a file system. What a contradiction! So what, delete the file system in the server pool? If so, why was it created before?!

        OVM> list physicaldisk
        Command: list physicaldisk
        Status: Success
        Time: 2012-09-10 06:44:42.660
        Data:
          id:0004fb00001800007765e62381895f61  name:OVM_HDS

        OVM> create serverpool clusterenable=true virtualip=10.84.21.123 physicaldisk=OVM_HDS name=ovmserverpool

    Server pool creation took quite a long time since my FC LUN was big. When the creation completed, my FC LUN was formatted as OCFS2, and the o2cb and ocfs2 services were started on my OVM servers successfully. But then repository creation threw me a big surprise...

        OVM> create repository serverpool=ovmserverpool physicaldisk=OVM_HDS name=ovmrepo
        Command: create repository serverpool=ovmserverpool physicaldisk=OVM_HDS name=ovmrepo
        Status: Failure
        Time: 2012-09-10 06:23:44.656
        Error Msg: com.oracle.ovm.mgr.api.exception.RuleException: OVMRU_002026E Cannot use or delete physical disk: OVM_HDS, it already contains a file system: [Pool filesystem for ovmserverpool] Mon Sep 10 06:23:44 CST 2012

    What should I do now? Delete the filesystem using the dd command? That would destroy the server pool, right? I'm really confused. My OVM Manager version is 3.1.1.399, which is the latest. Any tips are appreciated. Thanks.

  • Dell VRTX - slow cluster shared storage

    - by NorbyTheGeek
    I have a brand new Dell VRTX box set up as a failover cluster running HA Hyper-V virtual machines. This is my first time setting up clustering, and my first time with one of these boxes, so I'm sure I've missed something. The virtual machines are experiencing high disk latency and bad performance when accessing their VHD(X) files located on a Cluster Shared Volume.

    The VRTX has 10 x 900 GB 10K SAS drives in a RAID 6 configuration, and the VRTX has the redundant Shared PERC 8 controllers. Both blades have full access to the virtual disks. There are two M520 blades installed, each with 128 GB RAM. MPIO is configured for the PERC 8 controllers. The operating system on the blades is Server 2012 (NOT R2). The RAID 6 array is split into a small (8 GB) volume for the cluster quorum witness and a large (6.5 TB) volume for a Cluster Shared Volume (mounted on the nodes as C:\ClusterStorage\Volume1).

    An example of slow disk access: logging into a Server 2012 VM and having Server Manager come up automatically. Disk access goes to 100%, with write speeds at 20 MB/s or so, read speeds of 500 KB/s or so, and an average response time of over 1000 ms, sometimes spiking to 4000-5000 ms. It's the latency that really worries me. Is there something specific I should look at in my configuration? It doesn't seem to matter whether I use VHD or VHDX, dynamic or static.
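    One configuration item worth ruling out - a sketch using the failover clustering cmdlets that ship with Server 2012; the CSV name is a placeholder for whatever the cluster resource is called: if the CSV is running in redirected access mode, all I/O from the non-coordinator blade is funneled over the cluster network, which produces exactly this kind of latency:

        # Sketch: check whether the CSV is in Direct or Redirected access
        # mode from each node's point of view.
        Import-Module FailoverClusters
        Get-ClusterSharedVolumeState -Name "Volume1" |
            Format-Table Node, StateInfo, FileSystemRedirectedIOReason -AutoSize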

  • SCOM, Server 2008 and SQL Server 2008

    - by Jacques
    Hi there, I'm trying to set up SCOM (System Center Operations Manager 2007 - platform monitoring) on my Server 2008 machine, using SQL Server 2008 running on the same machine. When I check the prerequisites, I get problems on the SQL and Active Directory components (I'm running SQL Server 2008 and Server 2008 with Active Directory not installed). The errors:

      • Microsoft SQL Server 2005 Service Pack 1 required. Details: SQL Server 2005 SP1 is the next version of SQL Server. SQL Server 2005 Enterprise Edition is a complete data and analysis platform for large mission-critical business applications. The link provided in the resolution column is a trial version of the product and is not supported by the Microsoft SQL Server team.

      • In order to install, Active Directory needs to be present. Details: Setup failed to verify the presence of Active Directory for this server.

    I've got a couple of questions I need answered; hope someone can help:

      • Do I need to install Active Directory for SCOM to work?
      • Can I run SCOM with a SQL 2008 database?
      • How do I get past these problems?

  • Are VMWare ESXi 5 patches cumulative?

    - by ewwhite
    It seems basic, but there's confusion about the patching strategy needed to manually update standalone VMware ESXi hosts. The VMware vSphere blog attempts to explain this, but it's still not clear. From the blog:

        Say Patch01 includes updates for the following VIBs: "esxi-base", "driver10" and "driver44". And then later Patch02 comes out with updates to "esxi-base", "driver20" and "driver44". Patch02 is cumulative in that the "esxi-base" and "driver44" VIBs will include the updates in Patch01. However, it's important to note that Patch02 does not include the "driver10" VIB, as that module was not updated.

    Many of my ESXi installations are standalone and do not make use of Update Manager. It is possible to update an individual host using the patches made available through the VMware patch download portal, and the process is quite simple - that part makes sense. The bigger issue is determining what to actually download and install. In my case, I have a good number of HP-specific ESXi builds that incorporate sensors and management for HP ProLiant hardware. Let's say that those servers start at ESXi build #474610 from 9/2011. Looking at the patch portal, there is a patch for ESXi Update 1, build #623860. There are also patches for builds #653509 and #702118. Coming from the old version of ESXi, what is the proper approach to bring the system fully up to date? Which patches are cumulative and which need to be applied sequentially? Perhaps the download size is the confusing factor, but is installing the newest build the right approach, or do I need to step back and patch incrementally?
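    For reference, the manual patch workflow on a standalone 5.x host is a one-liner per downloaded bundle - a sketch; the datastore path and bundle file name are placeholders, and the host should be in maintenance mode first:

        # Sketch: apply an offline patch bundle from the ESXi shell.
        # "update" installs only VIBs that are newer than what's on the host.
        esxcli software vib update --depot=/vmfs/volumes/datastore1/ESXi500-201209001.zip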

  • WmiPrvSE memory leak on Windows 2008 *R2*

    - by MichaelGG
    I've seen references to WmiPrvSE leaks on Windows 2008, but nothing about Windows 2008 R2. We're running R2 on top of Hyper-V (2008). We are also running NSClient++ for monitoring from Opsview. Over time, WmiPrvSE.exe starts to use a lot of memory, causing memory alert issues (less than 10% free). The VM has 2 GB; WmiPrvSE consumes up to 500-600 MB before I kill it. Killing the process doesn't seem to have any negative effect; it starts up again, and I haven't noticed any problems. But after a day or two, it's back in the same situation. Any ideas on what to do? Resource Monitor doesn't show any disk or network I/O by WmiPrvSE.exe - just slowly climbing private memory...

    Edited to add: We aren't running clustering or Windows System Resource Manager. The only regular WMI user I can guess at is NSClient++, but we don't seem to have this problem on other servers.
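    As a stopgap while hunting for the leaking WMI provider, a scheduled task can recycle the process before the memory alert trips - a sketch; the 400 MB threshold is arbitrary, and this simply automates the manual kill that has already proven harmless in this environment:

        # Sketch: kill any WmiPrvSE worker whose private memory exceeds
        # 400 MB; WMI restarts the provider host on demand.
        Get-Process WmiPrvSE -ErrorAction SilentlyContinue |
            Where-Object { $_.PrivateMemorySize64 -gt 400MB } |
            Stop-Process -Force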

  • How do you import CA certificates onto an Android phone?

    - by f50driver
    Hi all, I want to connect to my university's wireless using my Nexus One. When I go to "Add Wi-Fi network" in Wireless Settings, I fill in the network SSID, select 802.1x Enterprise for the security, and fill everything out. The problem is that our university's wireless uses the Thawte Premium Server CA certificate for certification. When I click the drop-down list for CA certificate, I get nothing in the list (just N/A). I have the certificate (Thawte Premium Server CA.pem) and have moved it to my SD card, but it doesn't look like Android automatically detects it. Where should I put the certificate so that the Android wireless manager recognizes it? In other words, how can I import a CA certificate so that Android recognizes that it is on the phone and displays it in the CA certificate drop-down list? Thanks for any help, Tomek. P.S. My phone is not rooted.

    EDIT: After doing some research, it looks like you are able to install certificates by going to your phone's Settings > Location & Security > Install from SD card. Unfortunately, it looks like the only accepted file extension is .p12, and there doesn't seem to be a way to import .cer or .pem files (which are the only two files that come with the Thawte certificates) at the moment. It does look like you can use a converter to convert your .cer or .pem files to .p12, however a key file is needed: https://www.sslshopper.com/ssl-converter.html I do not know where to get this key file for the Thawte certificates.
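    On the conversion step: for a CA certificate you shouldn't need a private key at all - that requirement applies to converting your own server certificate. OpenSSL can build a PKCS#12 bundle from just the CA cert (a sketch; the file names match the .pem described above, and the command will prompt for an export password, which the phone asks for on import):

        # Sketch: wrap a CA-only .pem into a .p12 without any private key.
        openssl pkcs12 -export -nokeys \
            -in "Thawte Premium Server CA.pem" \
            -out thawte_ca.p12 \
            -name "Thawte Premium Server CA"

    Whether the Android 2.x importer accepts a key-less bundle is worth testing; this only removes the "key file is needed" obstacle from the converter.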

  • System Expandable-String Environment Variables Can’t Reference User Environment Variables

    - by Synetech inc.
    Hi, I've run into a bit of a situation with Windows environment variables. I've narrowed it down to something that may or may not make sense and/or possibly be by design. It seems that expandable-string environment variables of the local machine cannot reference environment variables of the current user. For example, if you've got the following environment variables:

        [HKCU\Environment]
        "CU"="CU"
        "CU->LM"="%LM%"

        [HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment]
        "LM"="LM"
        "LM->CU"="%CU%"

    Then you get the following results:

        > set CU
        CU=CU
        CU->LM=LM

        > set LM
        LM=LM
        LM->CU=%CU%

    It seems that user variables can expand system variable references, but system variables cannot expand (access?) user variable references. I suppose it makes sense if you think about it just right (e.g. like how user vars override/hide system vars of the same name), but it also doesn't make sense if you think about it in even more ways. So what's going on? Is there a way to get this to work as expected? Thanks.

  • Windows Server Backup - Recover only shows the latest backup

    - by Steffen
    We're having quite some trouble at work using Windows Server Backup. We have a Hyper-V server (Win 2008) running 8 virtual web servers on a variety of OSes: Win 2003, Win 2008, and a lone Debian. Each virtual server has a separate partition on the physical Hyper-V server, so e.g. E: is virtual server #1, F: is #2, and so forth.

    For backup we use Windows Server Backup, or more exactly the command-line tool wbadmin.exe. We need to make the backups without stopping the virtual servers, as we cannot afford the downtime (we've got users online both day and night), and Windows Server Backup offers to use the shadow copy provider to achieve this. We run wbadmin like this:

        wbadmin start backup -backuptarget:\\remotebackuplocation\somefolder -include:E: -quiet

    We run it once per partition, because we've got a script wrapped around that command for sending us an email about how it went. Each time wbadmin runs, it deletes the "Backup xxxx" folder it created during the last backup and creates a new one. To prevent this, we rename the "Backup xxxx" folder after each backup run, before starting the next one. I realize we must rename it back to its original name prior to recovering, and we obviously do this.

    Now the issue is as follows: even though we have all the backed-up files, and rename whichever backup we want to use back to its original name, we can only see the latest backup in the Windows Server Backup GUI when we select "Recover". This means we can only recover the last partition we backed up, so e.g. E: can never be recovered. In other words, we're screwed :-(

    My question is: does anyone know how to use Windows Server Backup for a scenario like this? Or is it simply not possible due to the simplicity of Windows Server Backup? If it's not possible, could you recommend some backup software for this purpose? We've already looked at MS' System Center Data Protection Manager; however, it's quite expensive and the boss doesn't like that :-/
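    Before giving up on the GUI, the command-line side can enumerate and restore from a specific backup target directly - a sketch; the version identifier shown is a placeholder for whatever "wbadmin get versions" prints for the renamed-back backup:

        rem Sketch: list every recoverable backup at the target, then restore
        rem volume E: from a specific version by its identifier.
        wbadmin get versions -backupTarget:\\remotebackuplocation\somefolder
        wbadmin start recovery -version:01/01/2010-09:00 -itemType:Volume ^
            -items:E: -backupTarget:\\remotebackuplocation\somefolder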

  • WinXP: Error 1167 -- Device (LPT1) not connected

    - by Thomas Matthews
    I am writing a program that opens LPT1 and writes a value to it. The WriteFile function is returning an error code of 1167, "The device is not connected". Device Manager shows that LPT1 is present. I have a cable connected between a development board and the PC; the cable converts JTAG pin signals to signals on the parallel port. Power is applied, the cable is connected between the development board and the PC, and the development board is powered on. I am using Windows XP and MS Visual Studio 2008 (C language, console application, debug environment). Here are the relevant code fragments:

        HANDLE parallel_port_handle;

        void initializePort(void)
        {
            TCHAR * port_name = TEXT("LPT1:");
            parallel_port_handle = CreateFile(
                port_name,
                GENERIC_READ | GENERIC_WRITE,
                0,              // must be opened with exclusive-access
                NULL,           // default security attributes
                OPEN_EXISTING,  // must use OPEN_EXISTING
                0,              // not overlapped I/O
                NULL            // hTemplate must be NULL for comm devices
                );
            if (parallel_port_handle == INVALID_HANDLE_VALUE)
            {
                // Handle the error.
                printf("CreateFile failed with error %d.\n", GetLastError());
                Pause();
                exit(1);
            }
            return;
        }

        void writePort(unsigned char a_ucPins, unsigned char a_ucValue)
        {
            DWORD dwResult;
            if (a_ucValue)
            {
                g_siIspPins = (unsigned char)(a_ucPins | g_siIspPins);
            }
            else
            {
                g_siIspPins = (unsigned char)(~a_ucPins & g_siIspPins);
            }

            /* This is a sample code for Windows/DOS without Windows Driver. */
            // _outp( g_usOutPort, g_siIspPins );

            //----------------------------------------------------------------
            // For Windows XP and later
            //----------------------------------------------------------------
            if (!WriteFile(parallel_port_handle, &g_siIspPins, 1, &dwResult, NULL))
            {
                printf("Could not write to LPT1 (error %d)\n", GetLastError());
                Pause();
                return;
            }
        }

    If you believe this should be posted on Stack Overflow, please migrate it over (thanks).

  • Why does my PowerShell script hang when called in PSEXEC via a batch (.cmd) file?

    - by Kev
    I'm trying to remotely execute a PowerShell script using PSEXEC. The PowerShell script is called via a .cmd batch file. The reason we do this is to change the execution policy, run the PowerShell script, then reset the execution policy again. On the remote server, do-tasks.cmd looks like:

        powershell -command "&{ set-executionpolicy unrestricted}"
        powershell DoTasks.ps1
        powershell -command "&{ set-executionpolicy restricted}"

    The PowerShell script DoTasks.ps1 just does this for now:

        Write-Output "Hello World!"

    Both of these scripts live in c:\windows\system32 (for now), just so they're on the PATH. On the originating server I do this:

        psexec \\web1928 -u administrator -p "adminpassword" do-tasks.cmd

    When this runs, I get the following response at the command line:

        c:\Windows\system32>powershell -command "&{ set-executionpolicy unrestricted}"

    and the script runs no further. I can't Ctrl-C to break the script - I just see ^C characters - though I can type input from the keyboard and the characters are echoed to the console. On the remote server, Task Manager's Processes tab shows that PowerShell.exe and cmd.exe are running. If I end these processes, control returns to the command line on the originating server. I have tried this with a simple .cmd batch file containing just "@echo hello world", and it works fine. Running do-tasks.cmd on the remote server via an RDP session works OK as well. Why does my remote batch file get stuck when executing via PSEXEC?
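    A commonly reported cause is powershell.exe blocking on an input handle that psexec never closes. A hedged variant of do-tasks.cmd that sidesteps both the hang and the policy juggling - assuming PowerShell 2.0, whose -ExecutionPolicy switch scopes the policy to the single invocation:

        rem Sketch: redirect stdin from NUL so powershell.exe does not sit
        rem waiting on an interactive console under psexec.
        powershell -NonInteractive -ExecutionPolicy Unrestricted -File DoTasks.ps1 < NUL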

  • Commvault Oracle RMAN Restore to new host

    - by Glenn Stauffer
    We use CommVault Simpana 8, and I have a situation where I have backups of an Oracle database on tape that were taken from Host A. Host A suffered a disk failure (lost its RAID configuration) and the sysadmins are trying to restore it; in the meantime, I'm working to bring the database back up on another host - Host B. I'm running into problems and am trying to sort out the parameters that need to be passed to the CommVault media agent to get this to work. Unfortunately, I do not have access to CommVault support, and the backup person is unavailable. Anyone have a clue? The backups are there, and the media agent reported a successful write when they ran last night. This is what fails:

        RMAN> run {
          allocate channel t1 device type sbt_tape
            parms='SBT_LIBRARY=/usr/local/galaxy/Base/libobk.so,BLKSIZE=262144,
              ENV=(CvClientName=dbsrv2,CvInstanceName=Instance001,
              CVOraRacDBName=BBDB,CVOraRACDBClientName=BBDB)';
          restore spfile to pfile '/tmp/bbdb.ora' from autobackup;
        }

        allocated channel: t1
        channel t1: sid=34 devtype=SBT_TAPE
        channel t1: CommVault Systems for Oracle: Version 7.0.0(Build76)

        Starting restore at 09-MAY-10
        channel t1: looking for autobackup on day: 20100509
        channel t1: autobackup found: c-3941155360-20100509-01
        released channel: t1
        RMAN-00571: ===========================================================
        RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
        RMAN-00571: ===========================================================
        RMAN-03002: failure of restore command at 05/09/2010 18:01:35
        ORA-19870: error reading backup piece c-3941155360-20100509-01
        ORA-19507: failed to retrieve sequential file, handle="c-3941155360-20100509-01", parms=""
        ORA-27029: skgfrtrv: sbtrestore returned error
        ORA-19511: Error received from media manager layer, error text:
        sbtrestore: Job[0] thread[26316]: InitializeCLRestore() failed.
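    A hedged observation about the channel parameters: CvClientName identifies the client whose backups the media agent looks up, and in the run block above it is set to dbsrv2. If dbsrv2 is Host B rather than the name the backups were written under, pointing it at Host A's client name is worth trying - a sketch; hostA-client-name is a placeholder for the original CommServe client name, and cross-client restores typically also need to be permitted on the CommCell side:

        run {
          allocate channel t1 device type sbt_tape
            parms='SBT_LIBRARY=/usr/local/galaxy/Base/libobk.so,BLKSIZE=262144,
              ENV=(CvClientName=hostA-client-name,CvInstanceName=Instance001,
              CVOraRacDBName=BBDB,CVOraRACDBClientName=BBDB)';
          restore spfile to pfile '/tmp/bbdb.ora' from autobackup;
        }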
