Search Results

Search found 18842 results on 754 pages for 'the machine'.


  • How can I configure Symantec Endpoint Protection Agent to allow access to windows shares?

    - by Peter Bernier
    I'm having some difficulty exposing a standard Windows file share on a Windows Embedded Standard 2009 device that is running Symantec Endpoint Protection Agent 5.1. I'm using simple file sharing to expose a particular directory. That share is visible locally on the machine, and externally visible when I disable the endpoint protection agent. I've added a rule (and moved it to the top to ensure priority) allowing all hosts access on TCP ports 137, 138, 139 and 445, and another rule allowing UDP access on ports 137, 138 and 139. When I try to connect, two endpoint protection dialogs pop up saying: Traffic has been blocked from this application: NWLINK2 IPX Protocol Driver (nwlnkipx.sys) Traffic has been blocked from this application: IPv6 driver (tcpip6.sys) I'm not using IPv6 anywhere. Interestingly, I discovered a workaround: I can white-list all traffic from the subnet the device is on, which meets my needs, but I'm still curious as to why my original approach wasn't successful. Can anyone suggest a reason why the above endpoint protection rules won't allow me to access Windows file shares on the device?

    Read the article

  • Outlook 2007 unable to connect to exchange server

    - by mattwarren
    I came into the office a week ago and Outlook has refused to connect ever since; it just says "Disconnected" in the bottom right-hand corner. I've tried restarting it, rebooting Windows, etc. I'm the only one in our office who is having this problem, so it's not a general problem with the server. Things I've tried:
    - Pinging the server via IP address and host name; both work fine
    - Connecting via OWA; this works from the same machine
    - Connecting to Exchange via HTTP ("Outlook Anywhere"); this doesn't work
    - None of the suggestions in this question helped: http://serverfault.com/questions/21755/can-ping-exchange-server-cant-connect-outlook-to-it
    - Disabling the Windows firewall on my laptop also has no effect
    There are no items in the event viewer that indicate anything is up, and no permissions have changed on the server since it last worked. What else can I do to diagnose this? Any suggestions?

    Read the article

  • Error while installing vmware tools v8.8.2 in Ubuntu 12.04 beta

    - by Dipen Patel
    I just upgraded to Ubuntu 12.04 from 11.10 using Update Manager. I use it as a virtual machine on VMware Player 4.x. As usual, I installed VMware Tools to enable full-screen mode and the shared folder functionality, but during the install I got an error while building the modules for the shared folder and fast networking utilities. The error is:
    ==============================================
    /tmp/vmware-root/modules/vmhgfs-only/fsutil.c: In function ‘HgfsChangeFileAttributes’:
    /tmp/vmware-root/modules/vmhgfs-only/fsutil.c:610:4: error: assignment of read-only member ‘i_nlink’
    make[2]: *** [/tmp/vmware-root/modules/vmhgfs-only/fsutil.o] Error 1
    make[2]: *** Waiting for unfinished jobs....
    /tmp/vmware-root/modules/vmhgfs-only/file.c:128:4: warning: initialization from incompatible pointer type [enabled by default]
    /tmp/vmware-root/modules/vmhgfs-only/file.c:128:4: warning: (near initialization for ‘HgfsFileFileOperations.fsync’) [enabled by default]
    /tmp/vmware-root/modules/vmhgfs-only/tcp.c:53:30: error: expected ‘)’ before numeric constant
    /tmp/vmware-root/modules/vmhgfs-only/tcp.c:56:25: error: expected ‘)’ before ‘int’
    /tmp/vmware-root/modules/vmhgfs-only/tcp.c:59:33: error: expected ‘)’ before ‘int’
    make[2]: *** [/tmp/vmware-root/modules/vmhgfs-only/tcp.o] Error 1
    make[1]: *** [_module_/tmp/vmware-root/modules/vmhgfs-only] Error 2
    make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-22-generic'
    make: *** [vmhgfs.ko] Error 2
    make: Leaving directory `/tmp/vmware-root/modules/vmhgfs-only'
    The filesystem driver (vmhgfs module) is used only for the shared folder feature. The rest of the software provided by VMware Tools is designed to work independently of this feature.
    Let me know if anyone has encountered and solved this problem. Regards, Dipen Patel

    Read the article

  • SSH main process ended

    - by Khaled
    I have a running Ubuntu Server 10.04.1 machine. When I tried to log in to the server via SSH, I could not; instead, I got a connection refused error. I tried to ping the machine and got a reply, so the obvious conclusion is that the SSH daemon had stopped. After a reboot I was able to log in to the server via SSH again. Some time later I looked at my logs in /var/log/syslog and found the following records:
    Jan 16 10:57:09 myserver init: ssh main process ended, respawning
    Jan 16 10:57:09 myserver init: ssh main process (2465) terminated with status 255
    Jan 16 10:57:09 myserver init: ssh main process ended, respawning
    Jan 16 10:57:09 myserver init: ssh main process (2469) terminated with status 255
    Jan 16 10:57:09 myserver init: ssh main process ended, respawning
    Jan 16 10:57:09 myserver init: ssh main process (2473) terminated with status 255
    Jan 16 10:57:09 myserver init: ssh main process ended, respawning
    Jan 16 10:57:09 myserver init: ssh main process (2477) terminated with status 255
    Jan 16 10:57:09 myserver init: ssh main process ended, respawning
    Jan 16 10:57:09 myserver init: ssh main process (2481) terminated with status 255
    Jan 16 10:57:09 myserver init: ssh main process ended, respawning
    Jan 16 10:57:09 myserver init: ssh main process (2485) terminated with status 255
    Jan 16 10:57:09 myserver init: ssh main process ended, respawning
    Jan 16 10:57:09 myserver init: ssh main process (2489) terminated with status 255
    Jan 16 10:57:09 myserver init: ssh main process ended, respawning
    Jan 16 10:57:09 myserver init: ssh main process (2493) terminated with status 255
    Jan 16 10:57:09 myserver init: ssh main process ended, respawning
    Jan 16 10:57:09 myserver init: ssh main process (2497) terminated with status 255
    Jan 16 10:57:09 myserver init: ssh main process ended, respawning
    Jan 16 10:57:09 myserver init: ssh main process (2501) terminated with status 255
    Jan 16 10:57:09 myserver init: ssh respawning too fast, stopped
    I searched for a similar problem/solution. Some people said that this is caused by the SSH daemon trying to start before networking, and they suggest changing ListenAddress in /etc/ssh/sshd_config to 0.0.0.0. I don't think that is the cause in my case, because my problem occurs after the system is up and running. Any idea what is causing this? This is a server and it needs to stay running and be accessible remotely over SSH.

    Read the article

  • Visual Studio 2010 Installation Screenshots, links to installation Guides, Forum

    Today I installed Visual Studio 2010 on my new Sony Vaio laptop. I have a habit of taking screenshots while setups are running; it helps me later when I want to see which items I installed for that software. Taking screenshots is not really required for software like Visual Studio, though, since it lets you add or remove items at any time. Below are the screenshots for readers who are new to the Visual Studio installation; it's pretty easy and self-explanatory if you follow the instructions in the installation wizard. I thought it would require several system restarts like earlier versions, but VS2010 did not restart the machine; it just reported that it had installed successfully. You might want to refer to this link for further assistance. You can also ask your queries in this forum, and you can also find the installation guide there. Happy coding with Visual Studio 2010 :-) You might also want to read these other articles: 27 New Features of .NET Framework 4.0, New Features of IIS 7.0, 22 New Features of Visual Studio 2008 for .NET Professionals

    Read the article

  • Windows Backup to network share (Server 2008)

    - by Joe
    I'm trying to set up Windows Backup on a Server 2008 machine to back up to a network share. When I run the wizard to set up the backup I get the error message "The user name being used for accessing the remote share folder is not recognized by the local computer". I have no idea what this means. Help? The server with the network share is a domain controller (also Server 2008). The server I am trying to back up is not, and is not part of the domain.

    Read the article

  • How do you force Outlook 2007 to re-index its search on Windows XP SP 3?

    - by Aaron K
    So I have a Windows XP SP 3 machine which is running Outlook 2007. When I search in Outlook for an email that exists using a basic keyword, like say "MySQL", I get no results. However, Outlook gives me the following message: Search results may be incomplete because items are still being indexed. Click here for more details. When I click, I get the following: Outlook is currently indexing your items. Search results may be incomplete because items are still being indexed. 8783 items remaining in "Mailbox - USER" 8812 items remaining across all open mailboxes. The thing is, these are the numbers it has been reporting for several days, and Outlook is open for 8 hours a day. It does not seem like the index is working. As best I can tell, the index seemed to stop about 3 weeks ago. How can I force Outlook 2007 to re-index everything and start working properly again?

    Read the article

  • Azure - Part 4 - Table Storage Service in Windows Azure

    - by Shaun
    In the Windows Azure platform there are three storage types we can use to save our data in the cloud: Table, Blob and Queue. Before the Chinese New Year Microsoft announced that Azure SDK 1.1 had been released, and it supports a new type of storage, Drive, which allows us to operate on NTFS files in the cloud. I will cover it in the coming few posts, but for now I would like to talk a bit about Table Storage.

    Concept of the Table Storage Service

    The most common development scenario is to retrieve, create, update and remove data from data storage; normally we communicate with a database. When we attempt to move our application over to the cloud, the most common requirement is to have a storage service. Windows Azure provides a built-in service that allows us to store structured data, called the Windows Azure Table Storage Service. The data stored in the table service is a collection of entities, and the entities are similar to rows or records in a traditional database. An entity has a partition key, a row key, a timestamp and a set of properties. You can treat the partition key as a group name, the row key as a primary key and the timestamp as the identifier used to solve concurrency problems. Unlike a table in a database, the table service does not enforce a schema for its tables, which means you can have two entities in the same table with different property sets. The partition key is used for load balancing by the Azure OS and for entity group transactions. As you know, in the cloud you never know which machine is hosting your application and your data; it could be moved based on the transaction weight and the number of requests. If the Azure OS finds that there are many requests connecting to your Book entities with the partition key "Novel", it will move them to another idle machine to increase performance. So when choosing the partition key for your entities you need to make sure it indicates the category or group information, so that the Azure OS can perform the load balancing as you wish.

    Consuming the Table

    Although the table service looks like a database, you cannot access it the way you are used to, through either ADO.NET or ODBC. The table service exposes itself through the ADO.NET Data Services protocol, which lets you consume it in a RESTful style over HTTP requests. The Azure SDK provides a set of classes for us to connect to it. There are two classes we will need: TableServiceContext and TableServiceEntity. TableServiceContext inherits from DataServiceContext, which represents the runtime context of an ADO.NET data service. It provides four methods we mainly use: CreateQuery (creates an IQueryable instance for a given entity type), AddObject (adds the specified entity to the table service), UpdateObject (updates an existing entity in the table service) and DeleteObject (deletes an entity from the table service). Before you operate on the table service you need to provide valid account information. It's something like the connection string of a database, but with the account name and account key you received when you created the storage service on the Windows Azure Developer Portal. After getting the CloudStorageAccount you can create a CloudTableClient instance, which provides a set of methods for using the table service. A very useful one is CreateTableIfNotExist: it will create the table container for you if it does not already exist.
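
    A minimal sketch of that sequence against the SDK 1.1 StorageClient classes (development storage here, and the "Account" table name is simply the one used later in this post):

        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        public static class TableSetup
        {
            // Connect to the table service and make sure the "Account" container exists.
            public static CloudTableClient GetTableClient()
            {
                // Development storage for now; in the cloud this would be built from
                // the account name and key configured on the portal.
                CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;

                CloudTableClient tableClient = account.CreateCloudTableClient();

                // Creates the table container only if it is not already there.
                tableClient.CreateTableIfNotExist("Account");

                return tableClient;
            }
        }
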
    And then you can operate on the entities in that table through the methods I mentioned above. Let me explain a bit more through an example; we always like code rather than sentences.

    Straightforward Access to the Table

    Here I would like to build a WCF service on the Windows Azure platform with, for now, just one requirement: it should allow the client to create an account entity in the table service. The WCF service has a method named Register which accepts an instance of the account the client wants to create. After performing some validation it adds the entity to the table service. So the first thing I do is create a Cloud Application in Visual Studio 2010 RC. (The Azure SDK 1.1 only supports VS2008 and VS2010 RC.) The solution looks like the one below. Then I added a configuration item for the storage account through the Settings section under the cloud project. (Double-click the service file under the Roles folder and navigate to the Settings section.) This setting is used to retrieve my storage account information. Since for now I am just in the development phase I will select "UseDevelopmentStorage=true". Then I navigated to the WebRole.cs file under my WCF project. If you have read my previous posts you will know that this file defines what happens when the application starts and terminates in the cloud. What I need to do is, when the application starts, set the configuration setting publisher to load my configuration with the setting name I specified; the code looks like the snippet below. I removed the original service and contract created by the VS template and added my IAccountService contract and its implementation class, AccountService. I added the service method Register with the parameters email and password; it returns a boolean value to indicate the result, which is very simple. At this moment, if I press F5 the application is deployed to my local development fabric and I can see that my service runs well through the browser.

    Let's implement the service method Register, which adds a new entity to the table service. As I said before, the entities you want to store in the table service must have three system properties: partition key, row key and timestamp. You could create a class with these three properties yourself, but the Azure SDK provides a base class for that, named TableServiceEntity, in the Microsoft.WindowsAzure.StorageClient namespace. So what we need to do is simpler: create a class named Account, let it derive from TableServiceEntity, and add my own properties: Email, Password, DateCreated and DateDeleted. DateDeleted is a nullable date/time value to indicate whether this entity has been deleted and when. Do you notice that I missed something here? Yes, it's the partition key and row key I didn't assign. The TableServiceEntity base class defines two constructors: a parameter-less constructor, which is used to fill values into the properties when retrieving data from the table service, and one with two parameters, partition key and row key. As I said above, the partition key may affect load balancing and the row key must be unique, so here I use the email as the partition key and the email plus a Guid as the row key. OK, now we have finished the entity class we need to store in the table service.
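
    The Account entity just described might look roughly like this (a sketch; the partition and row key choice follows the description above):

        using System;
        using Microsoft.WindowsAzure.StorageClient;

        public class Account : TableServiceEntity
        {
            // Parameter-less constructor, used by the runtime when materializing
            // entities that come back from the table service.
            public Account()
            {
            }

            // Email as the partition key, email plus a Guid as the row key.
            public Account(string email)
                : base(email, string.Format("{0}_{1}", email, Guid.NewGuid()))
            {
                Email = email;
                DateCreated = DateTime.UtcNow;
            }

            public string Email { get; set; }
            public string Password { get; set; }
            public DateTime DateCreated { get; set; }
            public DateTime? DateDeleted { get; set; }   // null until the entity is deleted
        }
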
    The next step is to create a data access class to add it with. The Azure SDK gives us a base class for this, named TableServiceContext, as I mentioned above, so let's create a class for operating on the Account entities. The TableServiceContext needs the storage account information for its constructor: the combination of the storage service URI that we create on the Windows Azure platform and the relevant account name and key. The TableServiceContext uses this information to find the related address and to verify the account that operates on the storage entities. Hence in my AccountDataContext class I need to override this constructor and pass the storage account into it. All entities are saved in table storage in one or more tables, which we call "table containers". Before we operate on an entity we need to make sure that the table container has been created in the storage; there's a method we can use for that: CloudTableClient.CreateTableIfNotExist. So in the constructor I call it first, to make sure every method is invoked only after the table has been created. Notice that I pass the storage account endpoint URI and the credentials to specify where my storage is located and who I am. Another piece of advice: make your entity class name the same as the table name when you create the table. It will increase performance when you operate on it in the cloud, especially when querying. Since the Register WCF method adds a new account to the table service, here I create a corresponding method to add the account entity. Before implementing it, I should add a reference to System.Data.Services.Client to the project. This assembly provides some common methods of the ADO.NET Data Services client that can be used with the Windows Azure Table Service; I will use its AddObject method to create my account entity. Since the table service does not fully implement ADO.NET Data Services, there are some methods in System.Data.Services.Client that TableServiceContext doesn't support, such as AddLinks, etc. Then I implemented the service method to add the account entity through the AccountDataContext. You can see that in the service implementation I load the storage account information through my configuration file and create the account table entity from the parameters. Then I create the AccountDataContext; if it's my first time invoking this method, the constructor of the AccountDataContext will create the table container for me. Then I use the Add method to add the account entity to the table.
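
    Pulling those pieces together, the data context and the Register method could be sketched like this (the setting name "DataConnectionString" is an assumption; AddObject and SaveChanges are inherited from DataServiceContext in System.Data.Services.Client):

        using System;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        public class AccountDataContext : TableServiceContext
        {
            private const string TableName = "Account";

            public AccountDataContext(CloudStorageAccount account)
                : base(account.TableEndpoint.AbsoluteUri, account.Credentials)
            {
                // Make sure the table container exists before any operation runs.
                account.CreateCloudTableClient().CreateTableIfNotExist(TableName);
            }

            public void AddAccount(Account account)
            {
                this.AddObject(TableName, account);
                this.SaveChanges();
            }
        }

        public interface IAccountService
        {
            bool Register(string email, string password);
        }

        public class AccountService : IAccountService
        {
            public bool Register(string email, string password)
            {
                // "DataConnectionString" is the setting added under the cloud project;
                // its publisher is set up in WebRole.cs as described above.
                var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");

                var context = new AccountDataContext(storageAccount);
                context.AddAccount(new Account(email) { Password = password });
                return true;
            }
        }
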
    Next, let's create a fairly simple client application to test this service. I created a Windows console application and added a service reference to my WCF service. (The metadata of the WCF service cannot be retrieved once it's deployed on Windows Azure, even though <serviceMetadata httpGetEnabled="true"/> has been set. If we need its metadata we can deploy it on the local development fabric first and then change the endpoint to the address in the cloud.) In the client-side app.config file I specified the endpoint as the local development fabric address, and then implemented the client to let me input an email and a password and invoke the WCF service to add my account. Let's run the application and see the result. Of course, it returns TRUE to me, and in the local SQL Express instance I can see that the data has been saved in the table.

    Summary

    In this post I explained more about the Windows Azure Table Storage Service. I also created a small application to demonstrate how to connect to it and consume it through the ADO.NET Data Services managed library provided with the Azure SDK. I only showed how to create an entity in the storage service. In the next post I would like to explain how to query the entities with conditions through LINQ. I also would like to refactor my AccountDataContext class to make it work dynamically for any kind of entity.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Prevent a partition on a USB drive auto-mounting in Linux

    - by nomount
    On Linux (Gnome desktop), how do you prevent one of the partitions on an external USB drive from auto-mounting when it is attached to the machine? I don't just want to prevent the Nautilus window from popping up -- I want that partition not to mount. Fiddling with /etc/fstab is not acceptable, as this is a removable drive that is attached to different machines. I seem to remember that you can create a hidden file in the root of the file system, but I can't remember what it's called. Something like: touch /media/usbdisk/.no-mount How do you actually make this work?

    Read the article

  • Making Sense of ASP.NET Paths

    - by Rick Strahl
    ASP.NET includes quite a plethora of properties for retrieving path information about the current request, control and application. There's a ton of information available about paths on the Request object, some of it appearing to overlap and some of it buried several levels down, and it can be confusing to find just the right path that you are looking for. To keep things straight I thought it a good idea to summarize the path options along with descriptions and example paths. I wrote a post about this a long time ago in 2004 and I find myself frequently going back to that page to quickly figure out which path I'm looking for when processing the current URL. Apparently a lot of people must be doing the same, because the original post is still the second most visited on this blog, to the tune of nearly 500 hits per day. So I decided to update and expand a bit on the original post with a little more information and clarification based on the original comments.

    Request Object Paths Available

    Here's a list of the path-related properties on the Request object (and the Page object). Assume a path like http://www.west-wind.com/webstore/admin/paths.aspx for the examples below, where webstore is the name of the virtual directory.

    ApplicationPath: Returns the web root-relative logical path to the virtual root of this app. Example: /webstore/
    PhysicalApplicationPath: Returns the local file system path of the virtual root for this app. Example: c:\inetpub\wwwroot\webstore
    PhysicalPath: Returns the local file system path to the current script or path. Example: c:\inetpub\wwwroot\webstore\admin\paths.aspx
    Path, FilePath, CurrentExecutionFilePath: All of these return the full root-relative logical path to the script page, including path and script name. CurrentExecutionFilePath returns the 'current' request path after a Transfer/Execute call, while FilePath always returns the original request's path. Example: /webstore/admin/paths.aspx
    AppRelativeCurrentExecutionFilePath: Returns an ASP.NET root-relative virtual path to the script or path for the current request. If in a Transfer/Execute call, the transferred path is returned. Example: ~/admin/paths.aspx
    PathInfo: Returns any extra path following the script name, or string.Empty if no PathInfo is available. Example: the /ExtraPathInfo portion of /webstore/admin/paths.aspx/ExtraPathInfo
    RawUrl: Returns the full root-relative URL, including query string and extra path, as a string. Example: /webstore/admin/paths.aspx?sku=wwhelp40
    Url: Returns a fully qualified URL including query string and extra path. Note this is a Uri instance rather than a string. Example: http://www.west-wind.com/webstore/admin/paths.aspx?sku=wwhelp40
    UrlReferrer: The fully qualified URL of the page that sent the request. This is also a Uri instance, and the value is null if the page was accessed directly by typing into the address bar or by an HTTP client that doesn't send a Referer header. Example: http://www.west-wind.com/webstore/default.aspx?Info
    Control.TemplateSourceDirectory: Returns the logical path to the folder of the page, master or user control on which it is called. This is useful if you need to know the path to a page or control from within that control. For non-file controls this returns the page path. Example: /webstore/admin/
    As you can see, there's a ton of information available for each of the three common path formats: a physical path is an OS-style path that points to a path or file on disk; a logical path is a web path that is relative to the web server's root and includes the virtual directory plus the application-relative path; and a ~/ (root-relative) path is an ASP.NET-specific path that uses ~/ to indicate the virtual root web path. ASP.NET can convert virtual paths into either logical paths using Control.ResolveUrl(), or physical paths using Server.MapPath(). Root-relative paths are useful for specifying portable URLs that don't rely on relative directory structures, and they are very useful from within control or component code. You should be able to get any necessary format from ASP.NET from just about any path or script using these mechanisms.

    ~/ Root-Relative Paths and ResolveUrl() and ResolveClientUrl()

    ASP.NET supports root-relative virtual path syntax in most of its URL properties in Web Forms, so you can easily specify a root-relative path in a control rather than a location-relative path: <asp:Image runat="server" ID="imgHelp" ImageUrl="~/images/help.gif" /> ASP.NET internally resolves this URL by using ResolveUrl("~/images/help.gif") to arrive at the root-relative URL of /webstore/images/help.gif, using Request.ApplicationPath as the base path to replace the ~. By convention, any custom web controls should also call ResolveUrl() on URL properties to provide the same functionality. In your own code you can use Page.ResolveUrl() or Control.ResolveUrl() to accomplish the same thing: string imgPath = this.ResolveUrl("~/images/help.gif"); imgHelp.ImageUrl = imgPath; Unfortunately ResolveUrl() is limited to Web Forms pages, so if you're in an HttpHandler or Module it's not available. ASP.NET MVC also has its own more generic version of ResolveUrl in Url.Content: <script src="<%= Url.Content("~/scripts/new.js") %>" type="text/javascript"></script> which is part of the UrlHelper class. In ASP.NET MVC this sort of syntax is actually even more crucial than in Web Forms, because views do not reference specific pages but rather are often path based, which can lead to variations in how a particular view is referenced. In Module or Handler code Control.ResolveUrl() unfortunately is not available, which in retrospect seems like an odd design choice – URL resolution really should happen on a per-request basis, not as part of the Page framework. Luckily you can also rely on the static VirtualPathUtility class: string path = VirtualPathUtility.ToAbsolute("~/admin/paths.aspx"); VirtualPathUtility also has many other quite useful methods for dealing with paths and converting between the various kinds of paths supported. One thing to watch out for is that ToAbsolute() will throw an exception if a query string is provided, and it doesn't work on fully qualified URLs. I wrote about this topic with a custom solution that works with fully qualified URLs and query strings here (check the comments for some interesting discussions too). Similar to ResolveUrl() is ResolveClientUrl(), which creates a fully qualified HTTP path that includes the protocol and domain name. It's rare that this full resolution is needed, but it can be useful in some scenarios.
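
    For reference, a few of those VirtualPathUtility helpers side by side (a small sketch; it has to run inside the web application so that ~ resolves against the /webstore application root used in the examples above):

        using System.Web;

        public static class VirtualPathExamples
        {
            public static void Show()
            {
                // ~/ style to application-absolute: "/webstore/admin/paths.aspx"
                string absolute = VirtualPathUtility.ToAbsolute("~/admin/paths.aspx");

                // Application-absolute back to ~/ style: "~/admin/paths.aspx"
                string appRelative = VirtualPathUtility.ToAppRelative("/webstore/admin/paths.aspx");

                // Directory and file name pieces: "/webstore/admin/" and "paths.aspx"
                string directory = VirtualPathUtility.GetDirectory("/webstore/admin/paths.aspx");
                string fileName = VirtualPathUtility.GetFileName("/webstore/admin/paths.aspx");

                // Combine a base virtual path with a relative path: "/webstore/scripts/jquery.js"
                string combined = VirtualPathUtility.Combine("/webstore/admin/paths.aspx", "../scripts/jquery.js");
            }
        }
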
    Mapping Virtual Paths to Physical Paths with Server.MapPath()

    If you need to map root-relative or current-folder-relative URLs to physical paths, you can use HttpContext.Current.Server.MapPath(). Inside of a Page you can do the following: string physicalPath = Server.MapPath("~/scripts/ww.jquery.js"); MapPath is pretty flexible and it understands both ASP.NET-style virtual paths and plain relative paths, so the following also works: string physicalPath = Server.MapPath("scripts/silverlight.js"); as does dot-relative syntax: string physicalPath = Server.MapPath("../scripts/jquery.js"); Once you have the physical path you can perform standard System.IO Path and File operations on the file. Remember that with physical paths and IO or copy operations you need to make sure you have permissions to access the files and folders under the web server account that is active (NETWORK SERVICE or ASPNET, typically). Note that Server.MapPath will not map above the virtual root of the application, for security reasons.

    Server and Host Information

    Between these settings you can get all the information you may need to figure out where you are and to build a new URL if necessary. If you need to build a URL completely from scratch, you can get access to information about the server you are accessing:

    SERVER_NAME: The name of the domain or the IP address. Example: www.west-wind.com or 127.0.0.1
    SERVER_PORT: The port that the request runs under. Example: 80
    SERVER_PORT_SECURE: Determines whether https: was used. Example: 0 or 1
    APPL_MD_PATH: The ADSI DirectoryServices path to the virtual root directory. Note that LM typically doesn't work for ADSI access, so you should replace it with LOCALHOST or the machine's NetBIOS name. Example: /LM/W3SVC/1/ROOT/webstore

    Request.Url and Uri Parsing

    If you still need more control over the current request URL, or you need to create new URLs from an existing one, the current Request.Url Uri property offers a lot of control. Using the Uri class and UriBuilder makes it easy to retrieve parts of a URL and create new URLs based on an existing one. The UriBuilder class is the preferred way to create URLs – much preferable to creating URIs via string concatenation.

    Scheme: The URL scheme or protocol prefix. Example: http or https
    Port: The port, if specifically specified.
    DnsSafeHost: The domain name or local host NetBIOS machine name. Example: www.west-wind.com or rasnote
    LocalPath: The full path of the URL including script name and extra PathInfo. Example: /webstore/admin/paths.aspx
    Query: The query string, if any. Example: ?id=1

    The Uri class itself is great for retrieving Uri parts, but most of the properties are read-only; if you need to modify a URL you can use the UriBuilder class to load up an existing URL and modify it to create a new one. Here are a few common operations I've needed to get specific URLs.

    Convert the request URL to an SSL/HTTPS link. For example, taking the current request URL and converting it to a secure URL can be done like this:
        UriBuilder build = new UriBuilder(Request.Url);
        build.Scheme = "https";
        build.Port = -1;   // don't inject the port
        Uri newUri = build.Uri;
        string newUrl = build.ToString();

    Retrieve the fully qualified URL without a query string. AFAIK, there's no native routine to retrieve the current request URL without the query string; it's easy to do with UriBuilder however:
        UriBuilder builder = new UriBuilder(Request.Url);
        builder.Query = "";
        string logicalPathWithoutQuery = builder.ToString();
    What else? I took a look through the old post's comments and addressed as many of the questions and comments that came up there as I could. With a few small and silly exceptions this update post handles most of them. But I'm sure there are a few more things that could go in here. What else would be useful to put into this post so it serves as a nice all-in-one place to go for path references? If you think of something, leave a comment and I'll try to update the post with it in the future. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET

    Read the article

  • Upgrading Sharepoint MOSS 2007 Farm to Sharepoint 2010 "waiting to get a lock to upgrade the farm"

    - by Wes Weeks
    My first in-place upgrade of a MOSS 2007 farm to SharePoint 2010 went pretty smoothly. I read the pre-upgrade documentation and was comfortable with the steps. Since it was a fairly new installation of MOSS, changes were minimal and I wasn't anticipating too many problems. The one issue I hit came after installing the software on all of the farm servers. I went to the first machine, which ran SharePoint 2010 Central Administration, and ran the SharePoint 2010 Products Configuration Wizard. I received the message that I would need to run the configuration on each server in the farm. Fair enough, I expected as much. The wizard completed without issue on the first server, but when I tried to run it on the others it hung with a "waiting to get a lock to upgrade the farm" message. It hung for about 10 minutes and then the wizard failed. I did a few searches on Google and Bing and got 0 results for that message. None, nothing, zilch. I'm on my own... For grins, I hit the help button on the configuration wizard and it seemed to indicate that the configuration wizard needed to be run on all farm servers simultaneously. I started it again on the first server up to the point where I got the message about needing to run it on all servers in the farm, then started the wizard on the other servers and ran it to that point as well. I then clicked OK on the first server and then on the subsequent servers. It took a while, and it did hang on the lock message for some time, but then it kicked off and completed successfully on all of them. Yeah! Hope this helps someone else! Now there should be at least one post with this error message in it!

    Read the article

  • OpenNebula: [HostPoolInfo] User couldn't be authenticated, aborting call

    - by ulf
    I installed OpenNebula 3.2.1 following the guide found at http://opennebula.org/documentation:rel3.2:ignc on a Debian 6.0.4 machine. Everything seemed fine until I tried to execute the command onevm list. Then I always get this: oneadmin@opennebula-master:~$ onevm list [VirtualMachinePoolInfo] User couldn't be authenticated, aborting call. The file one_auth exists. I even gave the oneadmin user a password, although it doesn't seem to be required according to the guide. I copied the password hash from /etc/shadow to the one_auth file. Still no success. Any ideas are appreciated.

    Read the article

  • OpenVPN DNS: VPN DNS stomping local VPN

    - by Eddie Parker
    I've finally noodled with OpenVPN enough to get it working. Even better, I can mount Samba drives, ping network machines through the TUN device, etc. - it's all great. However, I'm noticing that if I have the directive: push "dhcp-option DNS 10.0.1.1" # Push our local DNS to clients Then some of the machines that are normally visible to the client on the client's own side (i.e., not through the VPN) get masked by some other server out on the Internet. Is there any way to avoid this, besides hacking the 'hosts' file on the client machine? Ideally I'd like to use my VPN's DNS only for machines within that domain.

    Read the article

  • SQL SERVER – Resolving SQL Server Connection Errors – SQL in Sixty Seconds #030 – Video

    - by pinaldave
    One of the most famous errors related to SQL Server is about connecting to SQL Server itself. Here is how it goes: most of the time developers have worked with SQL Server and know pretty much every error they face during development. However, they hardly ever install a fresh SQL Server. As the installation of SQL Server is a rare occasion – unless you are a DBA who is responsible for such instances – the errors faced during installation are pretty rare as well. I have earlier written an article which describes how to resolve the errors related to the SQL Server connection. Even though the step-by-step directions are pretty simple, there are many first-time IT professionals who are not able to figure out how to resolve this error. I have quickly built a video which covers most of the solutions related to resolving the connection error. In the Fix SQL Server Connection Error article the following workarounds are described: SQL Server Services, TCP/IP Settings, Firewall Settings, Enable Remote Connection, Browser Services, a firewall exception for sqlbrowser.exe, and Recreating the Alias. Related tips in SQL in Sixty Seconds:
    - SQL SERVER – FIX : ERROR : (provider: Named Pipes Provider, error: 40 – Could not open a connection to SQL Server) (Microsoft SQL Server, Error: )
    - SQL SERVER – Could not connect to TCP error code 10061: No connection could be made because the target machine actively refused it
    - SQL SERVER – Connecting to Server Using Windows Authentication by SQLCMD
    - SQL SERVER – Fix : Error: 15372 Failed to generate a user instance of SQL Server due to a failure in starting the process for the user instance. The connection will be closed
    - SQL SERVER – Dedicated Access Control for SQL Server Express Edition – An error occurred while obtaining the dedicated administrator connection (DAC) port.
    - SQL SERVER – Fix : Error: 4064 – Cannot open user default database. Login failed. Login failed for user
    What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel
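
    Once the services, protocols and firewall items above are checked, a quick way to confirm connectivity from a client is a tiny test program like the sketch below (the server name and authentication settings are placeholders):

        using System;
        using System.Data.SqlClient;

        class ConnectionTest
        {
            static void Main()
            {
                // Placeholder instance name; swap in the server you are troubleshooting.
                var builder = new SqlConnectionStringBuilder
                {
                    DataSource = @"MYSERVER\SQLEXPRESS",
                    InitialCatalog = "master",
                    IntegratedSecurity = true,
                    ConnectTimeout = 5
                };

                try
                {
                    using (var connection = new SqlConnection(builder.ConnectionString))
                    {
                        connection.Open();
                        Console.WriteLine("Connected. Server version: " + connection.ServerVersion);
                    }
                }
                catch (SqlException ex)
                {
                    // Failures here usually map back to the service, protocol,
                    // firewall or browser items listed in the article.
                    Console.WriteLine("Connection failed ({0}): {1}", ex.Number, ex.Message);
                }
            }
        }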

    Read the article

  • AWS RDS (SQL Server): SSL Connection - The target principal name is incorrect

    - by AX1
    I have an Amazon Web Services (AWS) Relational Database Service (RDS) instance running SQL Server 2012 Express. I've installed Amazon's certificate from aws.amazon.com/rds in the client machine's Trusted Root Certification Authorities store. However, when I connect to the RDS instance (using SQL Server Management Studio 2012) and check "Encrypt Connection", I get the following error: A connection was successfully established with the server, but then an error occurred during the login process. (provider: SSL Provider, error: 0 - The target principal name is incorrect.) (Microsoft SQL Server) What does this mean, and how can I fix it? Thanks!
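
    For reference, a hedged sketch of the equivalent ADO.NET connection (the endpoint, login and database are placeholders); the Data Source should be the instance's RDS endpoint DNS name, since that error generally means the host name used for the connection does not match the name on the server certificate:

        using System;
        using System.Data.SqlClient;

        class RdsSslTest
        {
            static void Main()
            {
                var builder = new SqlConnectionStringBuilder
                {
                    // Placeholder endpoint; use the instance's RDS endpoint DNS name.
                    DataSource = "myinstance.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com,1433",
                    InitialCatalog = "master",
                    UserID = "admin",
                    Password = "secret",
                    Encrypt = true,                  // same as "Encrypt Connection" in SSMS
                    TrustServerCertificate = false   // validate the chain and the host name
                };

                using (var connection = new SqlConnection(builder.ConnectionString))
                {
                    connection.Open();
                    Console.WriteLine("Encrypted connection opened.");
                }
            }
        }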

    Read the article

  • Microsoft launches IE9 preview – No support for XP

    - by samsudeen
    Microsoft launched the developer preview version of Internet Explorer 9 (IE9) at the MIX 10 web conference yesterday. This release is aimed at getting feedback from website designers, developers and the wider community to make IE9 better than its previous versions. Microsoft will update the developer preview every eight weeks, and the next update is expected in mid-March. So what is new and interesting about IE9? Chakra: Chakra (the new scripting engine of IE9) renders JavaScript much faster than IE8 and other browsers, improving performance significantly. According to Microsoft, Chakra processes JavaScript in the background on a separate thread, parallel to the main engine, which is a completely new approach compared to current browser technologies. Standards: Microsoft is keen (surprisingly!!!) to make IE9 compliant with web standards by supporting open standards such as accelerated HTML5 video and new web technologies such as CSS3 and SVG2. ACID3 test: IE9 scores 55/100 in the latest ACID3 test, which is much better than the IE8 score (22/100) but nowhere near its rivals Chrome, Opera and Safari, which score 100/100. I am a little disappointed that I am not able to download the developer preview on my XP machine. The early comments look very positive for IE9. If you want to explore IE9, check the Microsoft test drive site at Microsoft IE9 Test-drive. You can also download the IE9 developer preview at Download Preview.

    Read the article

  • MSTSC crash on connect

    - by rotard
    We are supporting a client with primarily Windows XP machines. The users need to use Remote Desktop to connect to a terminal server. Unfortunately, after upgrading to SP3 on some machines, MSTSC.exe crashes when they try to connect to the terminal server (a Windows 2008 machine). The resolution I have found has been to revert to an older version of MSTSC, as described here: http://it.tmod.pl/Blog/EntryId/115/Remote-Desktop-Connection-crashes.aspx . Another tech at my company independently arrived at a similar solution. Unfortunately, now some of the users' printers are missing (when connected to the terminal server). Has anyone else seen this issue? How did you resolve it?

    Read the article

  • Upgrade Intel Xeon Prestonia to a 64-bit processor

    - by IDisposable
    In theory, could I upgrade an mPGA604-socket motherboard with a Prestonia processor to some Intel Xeon processor with 64-bit support? I've got a Dell PowerEdge 1750 with dual 2.8GHz Xeon processors running my Windows Home Server machine. I want to upgrade to the upcoming Vail release, but it is 64-bit only. The processors are Prestonia-core, which is pre-64-bit, but I was wondering if it is possible to swap in some pin-compatible later-generation processor. According to Wikipedia, the mPGA604 socket continued to be used for several later generations that have the same pinout. So, IN THEORY, could I swap in a 64-bit part, like a Nocona-core?

    Read the article

  • PHP 5.3.1 Undefined Symbol: OnUpdateLong error on Apache Startup

    - by docgnome
    I'm running Ubuntu 8.04 on this server. I had PHP 5.2 installed via the package manager. I removed it to install PHP 5.3.1 by hand. I built the packages like so:
    ./configure --prefix=/opt/php --with-mysql --with-curl=/usr/bin --with-apxs2=/usr/bin/apxs2
    make
    make install
    This installed PHP 5.3.1 in /opt/php/
    $ php -v
    PHP 5.3.1 (cli) (built: Dec 7 2009 10:51:14)
    Copyright (c) 1997-2009 The PHP Group
    Zend Engine v2.3.0, Copyright (c) 1998-2009 Zend Technologies
    However, when I try to start Apache I get this:
    # /etc/init.d/apache2 restart
    * Restarting web server apache2
    apache2: Syntax error on line 185 of /etc/apache2/apache2.conf: Syntax error on line 1 of /etc/apache2/mods-enabled/php5.load: Cannot load /usr/lib/apache2/modules/libphp5.so into server: /usr/lib/apache2/modules/libphp5.so: undefined symbol: OnUpdateLong
    [fail]
    Any ideas what's causing this error? All the references I can see have to do with building php5 packages for php4 or the like. PHP4 has never been installed on this machine.

    Read the article

  • Can't Remote Desktop to server after rebooting via Remote Desktop

    - by sh-beta
    When I reboot a Windows 2003 or Windows 2008 server via a Remote Desktop connection, the server comes back up and will not accept any RDP connections: the RDP client errors out with "Connection Refused." The Terminal Services service is running on the server and restarting it has no effect. No errors are logged on the server. The only way I've found to fix this is to login at the console or via the DRAC and reboot the machine again, which is an ugly solution for obvious reasons. Has anyone run into this before?

    Read the article

  • How to Install Mac OS X Lion on Your HP ProBook (or Compatible Laptop)

    - by Usman
    There’s nothing more satisfying than building a hackintosh, i.e. installing Mac OS X on a non-Apple machine. Although it isn’t as easy as it sounds, the end result is worth the effort. Building a PC with specific components and installing Mac OS X on it can save you thousands of dollars you might spend on a real Mac. And now, it’s time to step into the portable world. Today we will show how you can turn an HP ProBook (or any compatible Sandy Bridge laptop) into a 95% MacBook Pro! Why should (or shouldn’t) you do it? Let’s clarify whether or not it should be done. Firstly, we all know Apple makes awesome laptops. The design, build quality and aesthetics (not to mention the glowing Apple) would make you crave one. Secondly, all these Apple laptops are bundled with Mac OS X, which (for some people) is the most user-friendly and annoyance-free operating system. Digital artists, musicians and video editors all prefer Mac for a reason. So the verdict is: if hardware design is what you really look for, you should get a real Mac, and we are not at all stopping you from doing so. But if you’re only concerned with the OS (and saving a few bucks in your pocket), you may consider giving this a shot. But remember, it may not perform as well as a real Mac does. The results vary, so hope for the best, and proceed with caution. Why HP ProBook?

    Read the article

  • Hard disk with bad clusters

    - by Dan
    I have been trying to back up some files to DVD recently, and the burn process failed, saying the CRC check failed for certain files. I then tried to browse to these files in Windows Explorer, and my whole machine locks up and I have to reboot. I ran chkdsk without the '/F /R' arguments and it told me I had bad sectors. So I re-ran it with the arguments, and chkdsk fails during the 'Chkdsk is verifying Usn Journal' stage with this error: Insufficient disk space to fix the usn journal $j data stream. The hard disk is a 300GB partition on a 400GB disk, and there is 160GB of free space on the partition. My OS (Windows 7) is installed on the other partition and is running fine. Any idea how I fix this, or repair it enough to copy my files off it?

    Read the article

  • Messed up installation of mysql-server - cannot complete installation or deinstallation

    - by Christian Engel
    apt-get got stuck while installing mysql-server. I don't know why, but it just stopped working and never continued. I had to reboot the machine in the middle of the setup process. Now, if I try to install or purge the mysql-server package, apt-get tries to configure mysql-server first (it tells me it's not installed before that) and cancels with an error message: Sub-process /usr/bin/dpkg returned an error code (1). apt-get also tells me that two packages have not been successfully installed or removed. This is the complete console output:
    christian@devbox:~$ sudo apt-get install mysql-server
    [sudo] password for christian:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    mysql-server is already the newest version.
    0 upgraded, 0 newly installed, 0 to remove and 17 not upgraded.
    2 not fully installed or removed.
    After this operation, 0 B of additional disk space will be used.
    Do you want to continue [Y/n]? y
    Setting up mysql-server-5.5 (5.5.32-0ubuntu7) ...
    start: Job failed to start
    invoke-rc.d: initscript mysql, action "start" failed.
    dpkg: error processing mysql-server-5.5 (--configure):
    subprocess installed post-installation script returned error exit status 1
    dpkg: dependency problems prevent configuration of mysql-server:
    mysql-server depends on mysql-server-5.5; however:
    Package mysql-server-5.5 is not configured yet.
    dpkg: error processing mysql-server (--configure):
    dependency problems - leaving unconfigured
    No apport report written because the error message indicates its a followup error from a previous failure.
    Errors were encountered while processing:
    mysql-server-5.5
    mysql-server
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    christian@devbox:~$

    Read the article

  • Vmconnect.exe on Windows7

    - by Xiuhtecuhtli
    I'm running Windows Server 2008 with Hyper-V. The network works properly and I can access the VMs from a Hyper-V enabled 2008 machine. No problems there. BUT... I want to be able to connect to my guest machines from my Windows 7 Pro laptop. I was hoping it would be as simple as copying VMCONNECT.EXE into my System32 folder, but it failed, saying I did not have the proper feature installed. My question to the gurus of SuperUser: is there a way to run VMCONNECT.EXE from a Windows 7 client?

    Read the article

  • HP D530 Startup Error: 512 - Chassis Fan Not Detected

    - by lyrikles
    I'm using the HP D530 motherboard/CPU that I installed in a new case with a 600W PSU. There was a problem with the onboard chassis fan connector (3-wire) not supplying sufficient power to the chassis fan, indicated by the fan spinning very slowly, but I never experienced the "512 Error" at boot. The same fan works perfectly when connected directly to the PSU, so I disconnected it, since I already have plenty of fans connected via the PSU directly. Since then, on startup, I get the error "512 - Chassis Fan Not Detected" and am asked to "Press F1 to continue". This gets quite annoying since I use this machine remotely (with FreeNAS). What could be causing the onboard fan connector to not supply enough power? If this can't be corrected, how can I make the BIOS think there's a chassis fan plugged in without actually plugging a fan into the onboard connector? Would it be possible to jumper the pins without damaging the motherboard or PSU? Thanks, Erik

    Read the article
