Search Results

Search found 101466 results on 4059 pages for 'windows net'.


  • Windows Phone 7 development: Using isolated storage

    - by DigiMortal
    In my previous posting about Windows Phone 7 development I showed how to use the WebBrowser control in Windows Phone 7. In this posting I make some other improvements to my blog reader application and I will show you how to use isolated storage to store information on the phone.

    Why isolated storage? Isolated storage is the place where your application can save its data and settings. The image on the right (that I stole from the MSDN library) shows you how the application data store is organized. You have no other options to keep your files besides isolated storage, because Windows Phone 7 does not allow you to save data directly to other file system locations. From MSDN: “Isolated storage enables managed applications to create and maintain local storage. The mobile architecture is similar to the Silverlight-based applications on Windows. All I/O operations are restricted to isolated storage and do not have direct access to the underlying operating system file system. Ultimately, this helps to provide security and prevents unauthorized access and data corruption.”

    Saving files from web to isolated storage: I updated my RSS reader so it reads the RSS from the web only if there is no local file with the RSS. The user can update the RSS file by clicking a button. The file is also created when the application starts and there is no RSS file yet. Why am I doing this? I want my application to be able to work offline as well. As my code needs some more refactoring, I will provide it in some of my next postings about Windows Phone 7. If you want it sooner, please leave me a comment here. Here is the code for my RSS downloader that downloads the RSS feed and saves it to an isolated storage file called rss.xml.

    public class RssDownloader
    {
        private string _url;
        private string _fileName;

        public delegate void DownloadCompleteDelegate();
        public event DownloadCompleteDelegate DownloadComplete;

        public RssDownloader(string url, string fileName)
        {
            _url = url;
            _fileName = fileName;
        }

        public void Download()
        {
            // Start the asynchronous request for the feed.
            var request = (HttpWebRequest)WebRequest.Create(_url);
            request.BeginGetResponse(ResponseCallback, request);
        }

        private void ResponseCallback(IAsyncResult result)
        {
            var request = (HttpWebRequest)result.AsyncState;
            var response = request.EndGetResponse(result);

            // Copy the response stream to a file in isolated storage
            // (using the file name given to the constructor).
            using (var stream = response.GetResponseStream())
            using (var reader = new StreamReader(stream))
            using (var appStorage = IsolatedStorageFile.GetUserStoreForApplication())
            using (var file = appStorage.OpenFile(_fileName, FileMode.OpenOrCreate))
            using (var writer = new StreamWriter(file))
            {
                writer.Write(reader.ReadToEnd());
            }

            if (DownloadComplete != null)
                DownloadComplete();
        }
    }

    Of course, I modified the RSS source for my application to use the rss.xml file from isolated storage. As isolated storage files are also based on streams, we can use them everywhere streams are expected.

    Reading isolated storage files: As isolated storage files are opened as streams, you can read them like usual files in your usual applications. The next code fragment shows you how to open a file from isolated storage and how to read it using XmlReader. Previously I used the response stream in the same place.
    using (var appStorage = IsolatedStorageFile.GetUserStoreForApplication())
    using (var file = appStorage.OpenFile("rss.xml", FileMode.Open))
    {
        var reader = XmlReader.Create(file);

        // more code
    }

    As you can see, there is nothing complex here. If you have worked with the System.IO namespace objects, then you will find the isolated storage classes and methods to be very similar to them. Also note that the application store and isolated storage files must be disposed of when you are no longer using them.
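    To tie the two fragments together, here is a minimal sketch of the "work offline" check described above. The feed URL and the callback body are illustrative assumptions of mine, not code from the original application:

    // Read the cached feed if it exists; otherwise download it first.
    // Assumes the RssDownloader class shown above; URL is a placeholder.
    using (var appStorage = IsolatedStorageFile.GetUserStoreForApplication())
    {
        if (!appStorage.FileExists("rss.xml"))
        {
            var downloader = new RssDownloader("http://example.com/feed.xml", "rss.xml");
            downloader.DownloadComplete += () =>
            {
                // The file is now in isolated storage; re-read it and rebind the UI
                // (on the phone, marshal back via Deployment.Current.Dispatcher).
            };
            downloader.Download();
        }
        else
        {
            // Cached copy available - read it directly.
            using (var file = appStorage.OpenFile("rss.xml", FileMode.Open))
            {
                var reader = XmlReader.Create(file);
                // Parse the cached feed as usual.
            }
        }
    }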

    Read the article

  • How to fix: Handler “PageHandlerFactory-Integrated” has a bad module “ManagedPipelineHandler” in its module list

    - by ybbest
    Issue: Recently I had issues deploying an ASP.NET MVC 4 application to Windows Server 2008 R2. After adding the necessary roles and features, I set up an application in IIS. However, I received the following error message: Handler “PageHandlerFactory-Integrated” has a bad module “ManagedPipelineHandler” in its module list.

    Solution: It turns out that this is because ASP.NET was not completely installed with IIS, even though I checked that box in the “Add Features” dialog. To fix this, I simply ran the following command at the command prompt:

    %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -i

    If I had been on a 32-bit system, it would have looked like the following:

    %windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis.exe -i

    References: http://stackoverflow.com/questions/6846544/how-to-fix-handler-pagehandlerfactory-integrated-has-a-bad-module-managedpip

    Read the article

  • Integrate BING API for Search inside ASP.Net web application

    - by sreejukg
    As you might already know, Bing is the Microsoft search engine and is getting popular day by day. Bing offers APIs that can be integrated into your website to increase your website's functionality. At this moment, there are two important APIs available:

    - Bing Search API
    - Bing Maps API

    The Search API enables you to build applications that utilize Bing’s technology. The API allows you to search multiple source types such as web, images, and video, and supports various output protocols such as JSON, XML, and SOAP. You will also be able to customize the search results as you wish for your public facing website. The Bing Maps API allows you to build robust applications that use Bing Maps. In this article I am going to describe how you can integrate Bing search into your website. In order to start using Bing, first you need to sign in to http://www.bing.com/toolbox/bingdeveloper/ using your Windows Live credentials. Click on the Sign in button and you will be asked to enter your Windows Live credentials. Once signed in, you will be redirected to the Developer page. Here you can create applications and get an AppID for each application. Since I am a first time user, I don’t have any applications added. Click on the Add button to add a new application. You will be asked to enter certain details about your application. The fields are straightforward; the only thing you need to note is the website field, where you need to enter the website address from which you are going to use this application (this field is optional). Of course you need to agree to the terms and conditions and then click Save. Once you click Save, the application will be created and an application ID will be available for your use. Now we have the AppID. Basically, Bing supports three protocols: JSON, XML and SOAP. JSON is useful if you want to call the search requests directly from the browser and use JavaScript to parse the results; thus JSON is the favorite choice for AJAX applications. XML is the alternative for applications that do not support SOAP, e.g. Flash, Silverlight, etc. SOAP is ideal for strongly typed languages and gives a request/response object model. In this article I am going to demonstrate how to query the Bing API using the SOAP protocol from an ASP.NET application. For the purpose of this demonstration, I am going to create an ASP.NET project and implement the search functionality in an aspx page. Open Visual Studio, navigate to File -> New Project, and select ASP.NET Empty Web Application; I named the project “BingSearchSample”. Add a Search.aspx page to the project; once added, the solution explorer will look similar to the following. Now you need to add a web reference to the SOAP service available from Bing. To do this, from the solution explorer, right-click your project and select Add Service Reference. The new service reference dialog will appear. In the bottom left of the dialog you can find the Advanced button; click on it. Now the service reference settings dialog will appear. In the bottom left you can find the Add Web Reference button; click on it. The Add Web Reference dialog will appear now. Enter the URL as http://api.bing.net/search.wsdl?AppID=<YourAppIDHere>&version=2.2 (replace <YourAppIDHere> with the AppID you have generated previously) and click on the button next to it. This will find the web service methods available. You can change the namespace suggested by Bing, but for the purpose of this demonstration I have accepted all the default settings.
    Click on the Add Reference button once you are done. Now the web reference to the Search service will be added to your project. You can find this in the solution explorer of your project. Now in the Search.aspx page that you previously created, place a textbox, a button, and a grid view. For the purpose of this demonstration, I have given them the identifiers (IDs) txtSearch, btnSearch, and gvSearch respectively. The idea is to search the text entered in the text box using the Bing service and show the results in the grid view. In the design view, the search.aspx looks as follows. In the search.aspx.cs page, add a using statement that points to net.bing.api. I have added the following code for the button click event handler (the handler itself is not reproduced in this excerpt; a sketch appears at the end of this post). The code is very straightforward. It just calls the service with your AppID, a query to search, and a source for searching. Let us run this page and see the output when I enter Microsoft in my textbox. If you want to search a specific site, you can include the site name in the query parameter. For example, the following query will search for the word Microsoft on the www.microsoft.com website. searchRequest.Query = "site:www.microsoft.com Microsoft"; The output of this query is as follows. Integrating the Bing Search API into your website is easy and there is no limit to the customization of the interface you can do. No Bing branding is required, so I believe this is a great option for web developers when they plan for site search.
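    As promised above, here is a minimal sketch of what the button click handler might look like against the generated web reference. The proxy names (BingService, SearchRequest, SourceType) follow the Bing SOAP API of that era, but treat the details as my assumptions rather than the author's exact code:

    // Sketch of the search button handler; control names match the ones above.
    protected void btnSearch_Click(object sender, EventArgs e)
    {
        // Proxy class generated from search.wsdl.
        var service = new BingService();

        var searchRequest = new SearchRequest();
        searchRequest.AppId = "<YourAppIDHere>";          // the AppID generated earlier
        searchRequest.Query = txtSearch.Text;             // text entered by the user
        searchRequest.Sources = new[] { SourceType.Web }; // search the web source

        SearchResponse response = service.Search(searchRequest);

        // Bind the web results (title, description, URL) to the grid view.
        gvSearch.DataSource = response.Web.Results;
        gvSearch.DataBind();
    }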

    Read the article

  • Unobtrusive Client Side Validation with Dynamic Contents in ASP.NET MVC 3

    - by imran_ku07
    Introduction: A while ago, I blogged about how to perform client side validation for dynamic contents in ASP.NET MVC 2 here. Using the approach given in that blog, you can easily validate your dynamic ajax contents at the client side. ASP.NET MVC 3 also supports unobtrusive client side validation, in addition to ASP.NET MVC 2 client side validation for backward compatibility. I feel it is worthwhile to rewrite that blog post for ASP.NET MVC 3 unobtrusive client side validation. In this article I will show you how to do this.

    Description: I am going to use the same example presented here. Create a new ASP.NET MVC 3 application. Then just open HomeController.cs and add the following code,

    public ActionResult CreateUser() { return View(); } [HttpPost] public ActionResult CreateUserPrevious(UserInformation u) { return View("CreateUserInformation", u); } [HttpPost] public ActionResult CreateUserInformation(UserInformation u) { if(ModelState.IsValid) return View("CreateUserCompanyInformation"); return View("CreateUserInformation"); } [HttpPost] public ActionResult CreateUserCompanyInformation(UserCompanyInformation uc, UserInformation ui) { if (ModelState.IsValid) return Content("Thank you for submitting your information"); return View("CreateUserCompanyInformation"); }

    Next create a CreateUser view and add the following lines,

    <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<UnobtrusiveValidationWithDynamicContents.Models.UserInformation>" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> CreateUser </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <div id="dynamicData"> <%Html.RenderPartial("CreateUserInformation"); %> </div> </asp:Content>

    Next create a CreateUserInformation partial view and add the following lines,

    <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<UnobtrusiveValidationWithDynamicContents.Models.UserInformation>" %> <% Html.EnableClientValidation(); %> <%using (Html.BeginForm("CreateUserInformation", "Home")) { %> <table id="table1"> <tr style="background-color:#E8EEF4;font-weight:bold"> <td colspan="3" align="center"> User Information </td> </tr> <tr> <td> First Name </td> <td> <%=Html.TextBoxFor(a => a.FirstName)%> </td> <td> <%=Html.ValidationMessageFor(a => a.FirstName)%> </td> </tr> <tr> <td> Last Name </td> <td> <%=Html.TextBoxFor(a => a.LastName)%> </td> <td> <%=Html.ValidationMessageFor(a => a.LastName)%> </td> </tr> <tr> <td> Email </td> <td> <%=Html.TextBoxFor(a => a.Email)%> </td> <td> <%=Html.ValidationMessageFor(a => a.Email)%> </td> </tr> <tr> <td colspan="3" align="center"> <input type="submit" name="userInformation" value="Next"/> </td> </tr> </table> <%} %> <script type="text/javascript"> $("form").submit(function (e) { if ($(this).valid()) { $.post('<%= Url.Action("CreateUserInformation")%>', $(this).serialize(), function (data) { $("#dynamicData").html(data); $.validator.unobtrusive.parse($("#dynamicData")); }); } e.preventDefault(); }); </script>

    Next create a CreateUserCompanyInformation partial view and add the following lines,

    <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<UnobtrusiveValidationWithDynamicContents.Models.UserCompanyInformation>" %> <% Html.EnableClientValidation(); %> <%using (Html.BeginForm("CreateUserCompanyInformation", "Home")) { %> <table id="table1"> <tr style="background-color:#E8EEF4;font-weight:bold"> <td colspan="3" align="center"> User Company Information </td> </tr> <tr> <td> Company Name </td> <td> <%=Html.TextBoxFor(a => a.CompanyName)%> </td> <td> <%=Html.ValidationMessageFor(a => a.CompanyName)%> </td> </tr> <tr> <td> Company Address </td> <td> <%=Html.TextBoxFor(a => a.CompanyAddress)%> </td> <td> <%=Html.ValidationMessageFor(a => a.CompanyAddress)%> </td> </tr> <tr> <td> Designation </td> <td> <%=Html.TextBoxFor(a => a.Designation)%> </td> <td> <%=Html.ValidationMessageFor(a => a.Designation)%> </td> </tr> <tr> <td colspan="3" align="center"> <input type="button" id="prevButton" value="Previous"/>   <input type="submit" name="userCompanyInformation" value="Next"/> <%=Html.Hidden("FirstName")%> <%=Html.Hidden("LastName")%> <%=Html.Hidden("Email")%> </td> </tr> </table> <%} %> <script type="text/javascript"> $("#prevButton").click(function () { $.post('<%= Url.Action("CreateUserPrevious")%>', $($("form")[0]).serialize(), function (data) { $("#dynamicData").html(data); $.validator.unobtrusive.parse($("#dynamicData")); }); }); $("form").submit(function (e) { if ($(this).valid()) { $.post('<%= Url.Action("CreateUserCompanyInformation")%>', $(this).serialize(), function (data) { $("#dynamicData").html(data); $.validator.unobtrusive.parse($("#dynamicData")); }); } e.preventDefault(); }); </script>

    Next create a new class file UserInformation.cs inside the Models folder and add the following code,

    public class UserInformation { public int Id { get; set; } [Required(ErrorMessage = "First Name is required")] [StringLength(10, ErrorMessage = "First Name max length is 10")] public string FirstName { get; set; } [Required(ErrorMessage = "Last Name is required")] [StringLength(10, ErrorMessage = "Last Name max length is 10")] public string LastName { get; set; } [Required(ErrorMessage = "Email is required")] [RegularExpression(@"^\w+([-+.']\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*$", ErrorMessage = "Email Format is wrong")] public string Email { get; set; } }

    Next create a new class file UserCompanyInformation.cs inside the Models folder and add the following code,

    public class UserCompanyInformation { public int UserId { get; set; } [Required(ErrorMessage = "Company Name is required")] [StringLength(10, ErrorMessage = "Company Name max length is 10")] public string CompanyName { get; set; } [Required(ErrorMessage = "CompanyAddress is required")] [StringLength(50, ErrorMessage = "Company Address max length is 50")] public string CompanyAddress { get; set; } [Required(ErrorMessage = "Designation is required")] [StringLength(50, ErrorMessage = "Designation max length is 10")] public string Designation { get; set; } }

    Next, add the necessary script files in Site.Master,

    <script src="<%= Url.Content("~/Scripts/jquery-1.4.4.min.js")%>" type="text/javascript"></script> <script src="<%= Url.Content("~/Scripts/jquery.validate.min.js")%>" type="text/javascript"></script> <script src="<%= Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")%>" type="text/javascript"></script>

    Now run this application. You will get the same behavior as described in this article. The key feature to note here is the $.validator.unobtrusive.parse method, which is used by ASP.NET MVC 3 unobtrusive client side validation to initialize the jQuery validation plug-in and start the client side validation process. Another important method to note here is the jQuery valid method, which returns true if the form is valid and false if it is not.

    Summary: There may be several occasions when you need to load your HTML contents dynamically. These dynamic HTML contents may also include some input elements, and you need to perform some client side validation for these input elements before posting their values to the server. In this article I showed you how you can enable client side validation for dynamic input elements in ASP.NET MVC 3. I am also attaching a sample application. Hopefully you will enjoy this article too.

    Read the article

  • How to Sync Any Folder With SkyDrive on Windows 8.1

    - by Chris Hoffman
    Before Windows 8.1, it was possible to sync any folder on your computer with SkyDrive using symbolic links. This method no longer works now that SkyDrive is baked into Windows 8.1, but there are other tricks you can use. Creating a symbolic link or directory junction inside your SkyDrive folder will give you an empty folder in your SkyDrive cloud storage. Confusingly, the files will appear inside the SkyDrive Modern app as if they were being synced, but they aren’t.

    The Solution: With SkyDrive refusing to understand and accept symbolic links in its own folder, the best option is probably to use symbolic links anyway — but in reverse. For example, let’s say you have a program that automatically saves important data to a folder anywhere on your hard drive — whether it’s C:\Users\USER\Documents\, C:\ProgramData, or anywhere else. Rather than trying to trick SkyDrive into understanding a symbolic link, we could instead move the actual folder itself to SkyDrive and then use a symbolic link at the folder’s original location to trick the original program. This may not work for every single program out there, but it will likely work for most programs, which use standard Windows API calls to access folders and save files. We’re just flipping the old solution here — we can’t trick SkyDrive anymore, so let’s try to trick other programs instead.

    Moving a Folder and Creating a Symbolic Link: First, ensure no program is using the external folder. For example, if it’s a program data or settings folder, close the program that’s using the folder. Next, simply move the folder to your SkyDrive folder. Right-click the external folder, select Cut, go to the SkyDrive folder, right-click and select Paste. The folder will now be located in the SkyDrive folder itself, so it will sync normally. Next, open a Command Prompt window as Administrator. Right-click the Start button on the taskbar or press Windows Key + X and select Command Prompt (Administrator) to open it. Run the following command to create a symbolic link at the original location of the folder:

    mklink /d "C:\Original\Folder\Location" "C:\Users\NAME\SkyDrive\FOLDERNAME\"

    Enter the correct paths for the exact location of the original folder and the current location of the folder in your SkyDrive. Windows will then create a symbolic link at the folder’s original location. Most programs should hopefully be tricked by this symbolic link, saving their files directly to SkyDrive. You can test this yourself. Put a file into the folder at its original location. It will be saved to SkyDrive and sync normally, appearing in your SkyDrive storage online. One downside here is that you won’t be able to save a file onto SkyDrive without it taking up space on the same hard drive SkyDrive is on. You won’t be able to scatter folders across multiple hard drives and sync them all. However, you could always change the location of the SkyDrive folder on Windows 8.1 and put it on a drive with a larger amount of free space. To do this, right-click the SkyDrive folder in File Explorer, select Properties, and use the options on the Location tab. You could even use Storage Spaces to combine the drives into one larger drive.

    Automatically Copy the Original Files to SkyDrive: Another option would be to run a program that automatically copies files from another folder on your computer to your SkyDrive folder. For example, let’s say you want to sync copies of important log files that a program creates in a specific folder.
    You could use a program that allows you to schedule automatic folder mirroring, configuring the program to regularly copy the contents of your log folder to your SkyDrive folder. This may be a useful alternative for some use cases, although it isn’t the same as standard syncing. You’ll end up with two copies of the files taking up space on your system, which won’t be ideal for large files. The files also won’t be instantly uploaded to your SkyDrive storage after they’re created, but only after the scheduled task runs. There are many options for this, including Microsoft’s own SyncToy, which continues to work on Windows 8. If you were using the symbolic link trick to automatically sync copies of PC game save files with SkyDrive, you could just install GameSave Manager. It can be configured to automatically create backup copies of your computer’s PC game save files on a schedule, saving them to SkyDrive where they’ll be synced and backed up online. SkyDrive support was completely rewritten for Windows 8.1, and the ability to use symbolic links was never officially supported in previous versions of SkyDrive, so it’s not surprising to see the trick break after a rewrite. None of the methods above are as convenient and quick as the old symbolic link method, but they’re the best we can do with the SkyDrive integration Microsoft has given us in Windows 8.1. It’s still possible to use symbolic links to easily sync other folders with competing cloud storage services like Dropbox and Google Drive, so you may want to consider switching away from SkyDrive if this feature is critical to you.

    Read the article

  • Use Drive Mirroring for Instant Backup in Windows 7

    - by Trevor Bekolay
    Even with the best backup solution, a hard drive crash means you’ll lose a few hours of work. By enabling drive mirroring in Windows 7, you’ll always have an up-to-date copy of your data. Windows 7’s mirroring – which is only available in the Professional, Enterprise, and Ultimate editions – is a software implementation of RAID 1, which means that two or more disks are holding the exact same data. The files are constantly kept in sync, so that if one of the disks fails, you won’t lose any data. Note that mirroring is not technically a backup solution, because if you accidentally delete a file, it’s gone from both hard disks (though you may be able to recover the file). As an additional caveat, having mirrored disks requires changing them to “dynamic disks,” which can only be read within modern versions of Windows (you may have problems working with a dynamic disk in other operating systems or in older versions of Windows). See this Wikipedia page for more information. You will need at least one empty disk to set up disk mirroring. We’ll show you how to mirror an existing disk (of equal or lesser size) without losing any data on the mirrored drive, and how to set up two empty disks as mirrored copies from the get-go.

    Mirroring an Existing Drive: Click on the Start button and type partitions in the search box. Click on the Create and format hard disk partitions entry that shows up. Alternatively, if you’ve disabled the search box, press Win+R to open the Run window and type in: diskmgmt.msc The Disk Management window will appear. We’ve got a small disk, labeled OldData, that we want to mirror to a second disk of the same size. Note: The disk that you will use to mirror the existing disk must be unallocated. If it is not, then right-click on it and select Delete Volume… to mark it as unallocated. This will destroy any data on that drive. Right-click on the existing disk that you want to mirror. Select Add Mirror…. Select the disk that you want to use to mirror the existing disk’s data and press Add Mirror. You will be warned that this process will change the existing disk from basic to dynamic. Note that this process will not delete any data on the disk! The new disk will be marked as a mirror, and it will start copying data from the existing drive to the new one. Eventually the drives will be synced up (it can take a while), and any data added to the E: drive will exist on both physical hard drives.

    Setting Up Two New Drives as Mirrored: If you have two new equal-sized drives, you can format them to be mirrored copies of each other from the get-go. Open the Disk Management window as described above. Make sure that the drives are unallocated. If they’re not, and you don’t need the data on either of them, right-click and select Delete volume…. Right-click on one of the unallocated drives and select New Mirrored Volume…. A wizard will pop up. Click Next. Click on the drives you want to hold the mirrored data and click Add. Note that you can add any number of drives. Click Next. Assign it a drive letter that makes sense, and then click Next. You’re limited to using the NTFS file system for mirrored drives, so enter a volume label, enable compression if you want, and then click Next. Click Finish to start formatting the drives. You will be warned that the new drives will be converted to dynamic disks. And that’s it! You now have two mirrored drives. Any files added to E: will reside on both physical disks, in case something happens to one of them.
    Conclusion: While the switch from basic to dynamic disks can be a problem for people who dual-boot into another operating system, setting up drive mirroring is an easy way to make sure that your data can be recovered in case of a hard drive crash. Of course, even with drive mirroring, we advocate regular backups to external drives or online backup services.

    Read the article

  • Windows not booting in dual boot with Ubuntu 12.04 and Windows 7

    - by Rupa
    I have a dual boot of Ubuntu 12.04 and Windows 7 installed on my PC. I installed Ubuntu after Windows, and I have issues with GRUB. After installing Ubuntu, there was no boot loader at start up, just an error message about a missing OS. I tried Boot Repair; I can see the GRUB loader now and can access Ubuntu, but I am not able to access Windows, even though I can see it in the GRUB loader. I tried to fix the Windows start up with my Windows Live CD, but that removed GRUB. What should I do in this case?

    Read the article

  • Installing Ubuntu 12.04 alongside Windows XP and Windows 7

    - by Anand A J
    I have Windows XP installed on the C drive and Windows 7 installed on the F drive. I want to install Ubuntu 12.04 alongside Windows (keeping both XP and 7) in the G drive, without losing any data stored on the computer. I have a hard disk of 500 GB with drives C (14.8 GB left), D, E, F, and G (15.7 GB left). I tried to install Ubuntu 12.04 from DVD and got stuck at the point of selecting partitions. How do I select the device for boot loader installation? Will the installation of Ubuntu into the G drive affect the data stored on the hard disk, especially in the G drive? After installing Ubuntu, can I still use Windows XP and Windows 7? This is my first attempt to use Ubuntu. Can anybody help me, please?

    Read the article

  • Table sorting & pagination with jQuery and Razor in ASP.NET MVC

    - by hajan
    Introduction: jQuery enjoys living inside pages which are built on top of the ASP.NET MVC Framework. ASP.NET MVC is a place where things are organized very well and it is quite hard to make them dirty, especially because the pattern pushes you toward purity (you can still make it dirty if you want to ;) ). We all know how easy it is to build an HTML table with a header row, footer row and table rows showing some data. With ASP.NET MVC we can do this pretty easily, but the result will be a pure HTML table which only shows data and does not include sorting, pagination or other advanced features that we were used to having in the ASP.NET WebForms GridView. Ok, there is the WebGrid MVC Helper, but what if we want to make something from a pure table in our own clean style? In one of my recent projects, I’ve been using the jQuery tablesorter and tablesorter.pager plugins that go along with each other. You don’t need to know jQuery to make this work… You need to know a little CSS to create a nice design for your table, but of course you can use mine from the demo… So, what you will see in this blog is how to attach these plugins to your pure HTML table and a div for pagination, giving your table advanced sorting and pagination features.

    Demo Project Resources: The resources I’m using for this demo project are shown in the following solution explorer window print screen:

    - Content/images – folder that contains all the up/down arrow images, pagination buttons, etc. You can freely replace them with your own, but keep the names the same if you don’t want to change anything in the CSS we will build later.
    - Content/Site.css – The main CSS theme, where we will add the theme for our table too
    - Controllers/HomeController.cs – The controller I’m using for this project
    - Models/Person.cs – For this demo, I’m using a Person.cs class
    - Scripts – jquery-1.4.4.min.js, jquery.tablesorter.js, jquery.tablesorter.pager.js – the required scripts to make the magic happen
    - Views/Home/Index.cshtml – Index view (Razor view engine)

    The other items are not important for the demo.

    ASP.NET MVC

    1. Model: In this demo I use only one Person class, which defines a Person entity with several properties. You can use your own model, maybe one which will access data from a database or any other resource.

    Person.cs

    public class Person
    {
        public string Name { get; set; }
        public string Surname { get; set; }
        public string Email { get; set; }
        public int? Phone { get; set; }
        public DateTime? DateAdded { get; set; }
        public int? Age { get; set; }

        public Person(string name, string surname, string email,
            int? phone, DateTime? dateadded, int? age)
        {
            Name = name;
            Surname = surname;
            Email = email;
            Phone = phone;
            DateAdded = dateadded;
            Age = age;
        }
    }

    2. View: In our example, we have only one Index.cshtml page where the Razor view engine is used. The Razor view engine is my favorite for ASP.NET MVC because it’s very intuitive, fluid and keeps your code clean.

    3. Controller: Since this is a simple example with one page, we use one HomeController.cs where we have two methods: one of ActionResult type (Index) and another, GetPeople(), used to create and return a list of people.
    HomeController.cs

    public class HomeController : Controller
    {
        //
        // GET: /Home/
        public ActionResult Index()
        {
            ViewBag.People = GetPeople();
            return View();
        }

        public List<Person> GetPeople()
        {
            List<Person> listPeople = new List<Person>();
            listPeople.Add(new Person("Hajan", "Selmani", "[email protected]", 070070070, DateTime.Now, 25));
            listPeople.Add(new Person("Straight", "Dean", "[email protected]", 123456789, DateTime.Now.AddDays(-5), 35));
            listPeople.Add(new Person("Karsen", "Livia", "[email protected]", 46874651, DateTime.Now.AddDays(-2), 31));
            listPeople.Add(new Person("Ringer", "Anne", "[email protected]", null, DateTime.Now, null));
            listPeople.Add(new Person("O'Leary", "Michael", "[email protected]", 32424344, DateTime.Now, 44));
            listPeople.Add(new Person("Gringlesby", "Anne", "[email protected]", null, DateTime.Now.AddDays(-9), 18));
            listPeople.Add(new Person("Locksley", "Stearns", "[email protected]", 2135345, DateTime.Now, null));
            listPeople.Add(new Person("DeFrance", "Michel", "[email protected]", 235325352, DateTime.Now.AddDays(-18), null));
            listPeople.Add(new Person("White", "Johnson", null, null, DateTime.Now.AddDays(-22), 55));
            listPeople.Add(new Person("Panteley", "Sylvia", null, 23233223, DateTime.Now.AddDays(-1), 32));
            listPeople.Add(new Person("Blotchet-Halls", "Reginald", null, 323243423, DateTime.Now, 26));
            listPeople.Add(new Person("Merr", "South", "[email protected]", 3232442, DateTime.Now.AddDays(-5), 85));
            listPeople.Add(new Person("MacFeather", "Stearns", "[email protected]", null, DateTime.Now, null));
            return listPeople;
        }
    }

    TABLE CSS/HTML DESIGN

    Now, when everything is ready and we have the data, let's start with the implementation. First of all, let's create the table structure and the main CSS.

    1. HTML Structure

    @{     Layout = null;     } <!DOCTYPE html> <html> <head>     <title>ASP.NET & jQuery</title>     <!-- referencing styles, scripts and writing custom js scripts will go here --> </head> <body>     <div>         <table class="tablesorter">             <thead>                 <tr>                     <th> value </th>                 </tr>             </thead>             <tbody>                 <tr>                     <td>value</td>                 </tr>             </tbody>             <tfoot>                 <tr>                     <th> value </th>                 </tr>             </tfoot>         </table>         <div id="pager">                      </div>     </div> </body> </html>

    So, this is the main structure you need to create for each of the tables where you want to apply the functionality we will create. Of course, the scripts are referenced only once ;). As you see, our table has the class tablesorter and we also have a div with the id pager. In the next steps we will use both of these to create the needed functionalities.
    The complete Index.cshtml, coded to get the data from the controller and display it in the page, is:

    <body>     <div>         <table class="tablesorter">             <thead>                 <tr>                     <th>Name</th>                     <th>Surname</th>                     <th>Email</th>                     <th>Phone</th>                     <th>Date Added</th>                 </tr>             </thead>             <tbody>                 @{                     foreach (var p in ViewBag.People)                     {                                 <tr>                         <td>@p.Name</td>                         <td>@p.Surname</td>                         <td>@p.Email</td>                         <td>@p.Phone</td>                         <td>@p.DateAdded</td>                     </tr>                     }                 }             </tbody>             <tfoot>                 <tr>                     <th>Name</th>                     <th>Surname</th>                     <th>Email</th>                     <th>Phone</th>                     <th>Date Added</th>                 </tr>             </tfoot>         </table>         <div id="pager" style="position: none;">             <form>             <img src="@Url.Content("~/Content/images/first.png")" class="first" />             <img src="@Url.Content("~/Content/images/prev.png")" class="prev" />             <input type="text" class="pagedisplay" />             <img src="@Url.Content("~/Content/images/next.png")" class="next" />             <img src="@Url.Content("~/Content/images/last.png")" class="last" />             <select class="pagesize">                 <option selected="selected" value="5">5</option>                 <option value="10">10</option>                 <option value="20">20</option>                 <option value="30">30</option>                 <option value="40">40</option>             </select>             </form>         </div>     </div> </body>

    So, mainly the structure is the same. I have added Razor code to create the table with data retrieved from ViewBag.People, which was filled with data in the home controller.

    2. CSS Design

    The CSS code I’ve created is:

    /* DEMO TABLE */ body {     font-size: 75%;     font-family: Verdana, Tahoma, Arial, "Helvetica Neue", Helvetica, Sans-Serif;     color: #232323;     background-color: #fff; } table { border-spacing:0; border:1px solid gray;} table.tablesorter thead tr .header {     background-image: url(images/bg.png);     background-repeat: no-repeat;     background-position: center right;     cursor: pointer; } table.tablesorter tbody td {     color: #3D3D3D;     padding: 4px;     background-color: #FFF;     vertical-align: top; } table.tablesorter tbody tr.odd td {     background-color:#F0F0F6; } table.tablesorter thead tr .headerSortUp {     background-image: url(images/asc.png); } table.tablesorter thead tr .headerSortDown {     background-image: url(images/desc.png); } table th { width:150px;            border:1px outset gray;            background-color:#3C78B5;            color:White;            cursor:pointer; } table thead th:hover { background-color:Yellow; color:Black;} table td { width:150px; border:1px solid gray;}

    PAGINATION AND SORTING

    Now, when everything is ready and we have the data, let's make the pagination and sorting functionalities.
    1. jQuery Scripts referencing

    <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" /> <script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.tablesorter.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.tablesorter.pager.js")" type="text/javascript"></script>

    2. jQuery Sorting and Pagination script

    <script type="text/javascript">     $(function () {         $("table.tablesorter").tablesorter({ widthFixed: true, sortList: [[0, 0]] })         .tablesorterPager({ container: $("#pager"), size: $(".pagesize option:selected").val() });     }); </script>

    So, with only two lines of code, I’m using both the tablesorter and tablesorterPager plugins, giving some options to both of them. Options added:

    - tablesorter – widthFixed: true – gives a fixed width to the columns
    - tablesorter – sortList: [[0,0]] – An array of instructions for per-column sorting and direction in the format [[columnIndex, sortDirection], ...], where columnIndex is a zero-based index for your columns left-to-right and sortDirection is 0 for ascending and 1 for descending. A valid argument that sorts ascending first by column 1 and then column 2 looks like: [[0,0],[1,0]] (source: http://tablesorter.com/docs/)
    - tablesorterPager – container: $("#pager") – tells the pager its container, the div with the id pager in our case.
    - tablesorterPager – size: the default size of each page; I take the currently selected value, so if you set selected on another option in your select list, that number of rows will be the default per page for the table too.

    END RESULTS

    1. Table once the page is loaded (the default is 5 results per page, automatically sorted by the first column, as sortList is specified)
    2. Sorted by Phone descending
    3. Changed pagination to 10 items per page
    4. Sorted by Phone and Name (use SHIFT to sort on multiple columns)
    5. Sorted by Date Added
    6. Page 3, 5 items per page

    ADDITIONAL ENHANCEMENTS

    We can make additional enhancements to the table. We can add a search for each column. I will cover this in one of my next blogs. Stay tuned.

    DEMO PROJECT

    You can download the demo project source code from HERE.

    CONCLUSION

    Once you finish with the demo, run your page and open the source code. You will be amazed at the purity of your code. Working with pagination on the client side can be very useful. One of the benefits is performance, but if you have thousands of rows in your tables, you will get the opposite result when talking about performance. Hence, sometimes it is a nice idea to do the pagination on the back-end, and the best compromise between both approaches is to combine them. I use at most up to 500 rows on the client side, and once the user reaches the last page, we can trigger an ajax postback which gets the next 500 rows using server-side pagination of the same data. I would like to recommend the following blog post: http://weblogs.asp.net/gunnarpeipman/archive/2010/09/14/returning-paged-results-from-repositories-using-pagedresult-lt-t-gt.aspx, which will help you understand how to return paged results from repositories. I hope this was a helpful post for you. Wait for my next posts ;). Please do let me know your feedback. Best Regards, Hajan

    Read the article

  • Process.Start() and ShellExecute() fails with URLs on Windows 8

    - by Rick Strahl
    Since I installed Windows 8 I've noticed that a number of my applications appear to have problems opening URLs. That is, when I click on a link inside of a Windows application, either nothing happens or there's an error that occurs. It's happening both to my own applications and a host of Windows applications I'm running. At first I thought this was an issue with my default browser (Chrome), but after switching the default browser to a few others and experimenting a bit I noticed that the errors occur - oddly enough - only when I run an application as an Administrator. I also tried switching to FireFox and Opera as my default browser and saw exactly the same behavior.

    The scenario for this is a bit bizarre:

    - Running on Windows 8
    - Call Process.Start() (or ShellExecute() in the Win32 API) with a URL or an HTML file
    - Run 'As Administrator' (works fine under a non-elevated user account!) or with UAC off
    - A browser other than Internet Explorer is set as your default Web Browser

    Talk about a weird scenario: something that doesn't work when you run as an Administrator, which is supposed to have rights to everything on the system! Running under an Admin account - either elevated with a User Account Control prompt or even as a full Administrator - fails. It appears that this problem does not occur for everyone, but when I looked for a solution to this, I saw quite a few posts in relation to this with no clear resolutions. I have three Windows 8 machines running here in the office and all three of them showed this behavior. Lest you think this is just a programmer's problem - this can affect any software running on your system that needs to run under administrative rights.

    Try it out: Now, in order for this next example to fail, any browser but Internet Explorer has to be your default browser, and even then it may not fail depending on how you installed your browser. To see if this is a problem, create a small Console application and call Process.Start() with a URL in it:

    namespace Win8ShellBugConsole
    {
        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("Launching Url...");
                Process.Start("http://microsoft.com");

                Console.Write("Press any key to continue...");
                Console.ReadKey();

                Console.WriteLine("\r\n\r\nLaunching image...");
                Process.Start(Path.GetFullPath(@"..\..\sailbig.jpg"));

                Console.Write("Press any key to continue...");
                Console.ReadKey();
            }
        }
    }

    Compile this code. Then execute the code from Explorer (not from Visual Studio, because that may change the permissions). If you simply run the EXE and you're not running as an administrator, you'll see the Web page pop up in the browser as well as the image loading. Now run the same thing with Run As Administrator: now when you run it, you get a nice error when Process.Start() is fired. The same happens if you are running with User Account Control off altogether - ie. you are running as a full admin account. Now if you comment out the URL in the code above and just fire the image display - that works just fine in any user mode. As does opening any other local file type or even starting a new EXE locally (ie. Process.Start(@"c:\windows\notepad.exe")). All that works, EXCEPT for URLs. The code above uses Process.Start() in .NET, but the same happens in Win32 applications that use the ShellExecute API. In some of my older Fox apps, ShellExecute returns an error code of 31 - which is No Shell Association found (see the P/Invoke sketch further below).

    What's the Deal? It turns out the problem has to do with the way browsers are registering themselves on Windows.
    Internet Explorer - being a built-in application in Windows 8 - apparently does this correctly, but other browsers possibly don't, or at least didn't at the time I installed them. Even though Chrome, which continually updates itself, has a recent version that apparently has this registration issue fixed, I was unable to simply set IE as my default browser and then use Chrome's 'Set as Default Browser' option. It still didn't work. Neither did using the Set Program Associations dialog, which lets you assign what extensions are mapped to by a given application. Each application provides a set of extension/moniker mappings that it supports, and this dialog lets you associate them on a system-wide basis. This also did not work for Chrome or any of the other browsers at first. However, after repeated retries I eventually did manage to get FireFox to work, but not any of the others.

    What Works? Reinstall the Browser: In the end I decided on the hard-core, pull-the-plug solution: totally uninstall and re-install Chrome in this case. And lo and behold, after the reinstall everything was working fine. Now even removing the association for Chrome, switching to IE as the default browser and then back to Chrome works. Even though the version of Chrome I was running before uninstalling and reinstalling is the same as I'm running now after the reinstall, now it works. Of course I had to find out the hard way, before Richard commented with a note regarding what the issue is with Chrome at least: http://code.google.com/p/chromium/issues/detail?id=156400 As expected, the issue is a registration issue - with keys not being registered at the machine level. Reading this I'm still not sure why this should be a problem - an elevated account still runs under the same user account (ie. I'm still rickstrahl even if I Run As Administrator), so why shouldn't an app be able to read my Current User registry hive? And that also doesn't quite explain why it fails even if I register the extensions with Chrome running as Administrator when using Set as Default Browser. But in the end it works…

    Not so fast: It's now a couple of days later and still there are some oddball problems, although this time they appear to be purely Chrome issues. After the reinstall, Chrome seems to pop up properly with ShellExecute() calls both in regular user and Admin mode. However, it now looks like Chrome is actually running two completely separate user profiles for the two modes. For example, when I run Visual Studio in Admin mode and go to View in Browser, Chrome complains that it was installed in Admin mode and can't launch (WTF?). Then you retry a few times later and it ends up working. When launched that way, some of the plug-ins installed don't show up, with the effect that sometimes they're visible, sometimes they're not. Also, Chrome seems to lose my configuration and Google sign-in between sessions now, presumably when switching user modes. Add-ins installed in admin mode don't show up in user mode and vice versa. Ah, this is lovely. Did I mention that I freaking hate UAC precisely because of this kind of bullshit? You can never tell exactly what account your app is running under, and apparently apps also have a hard time trying to put data into the right place that works for both scenarios. And as my recent post on using Windows Live accounts shows, it's yet another level of abstraction on top of the underlying system identity that can cause all sorts of small side-effect headaches like this.
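    Incidentally, the same failure can be reproduced from .NET with a direct P/Invoke of the ShellExecute API, which is what surfaces the error code 31 (SE_ERR_NOASSOC) mentioned earlier. This is an illustrative test snippet of mine, not code from the original post:

    using System;
    using System.Runtime.InteropServices;

    class ShellExecuteTest
    {
        // Win32 ShellExecute: returns a value greater than 32 on success,
        // or one of the SE_ERR_* codes otherwise.
        [DllImport("shell32.dll", CharSet = CharSet.Auto)]
        static extern IntPtr ShellExecute(IntPtr hwnd, string lpOperation,
            string lpFile, string lpParameters, string lpDirectory, int nShowCmd);

        const int SW_SHOWNORMAL = 1;
        const int SE_ERR_NOASSOC = 31; // "No Shell Association found"

        static void Main()
        {
            var result = (int)ShellExecute(IntPtr.Zero, "open",
                "http://microsoft.com", null, null, SW_SHOWNORMAL);

            if (result == SE_ERR_NOASSOC)
                Console.WriteLine("No shell association found for http URLs.");
            else if (result <= 32)
                Console.WriteLine("ShellExecute failed with code {0}.", result);
            else
                Console.WriteLine("URL launched successfully.");
        }
    }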
    Hopefully, most of you are skirting this issue altogether - having installed more recent versions of your favorite browsers. If not, hopefully this post will take you straight to reinstallation to fix this annoying issue.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Windows, .NET.

    Read the article

  • Faceted search with Solr on Windows

    - by Dr.NETjes
    With over 10 million hits a day, funda.nl is probably the largest ASP.NET website which uses Solr on a Windows platform. While all our data (i.e. real estate properties) is stored in SQL Server, we're using Solr 1.4.1 to return the faceted search results as fast as we can. And yes, Solr is very fast. We did do some heavy stress testing on our Solr service, which allowed us to do over 1,000 req/sec on a single 64-bit Solr instance; and that's including converting search URLs to Solr HTTP queries and deserializing Solr's result XML back to .NET objects! Let me tell you about faceted search and how to integrate Solr in a .NET/Windows environment. I'll bet it's easier than you think :-)

    What is faceted search? Faceted search is the clustering of search results into categories, allowing users to drill into search results. By showing the number of hits for each facet category, users can easily see how many results match that category. If you're still a bit confused, this example from CNET explains it all:

    The SQL solution for faceted search: Our ("pre-Solr") solution for faceted search was done by adding a lot of redundant columns to our SQL tables and doing a COUNT(...) for each of those columns. So if a user was searching for real estate properties in the city 'Amsterdam', our facet query would be something like:

    SELECT COUNT(hasGarden), COUNT(has2Bathrooms), COUNT(has3Bathrooms), COUNT(etc...)
    FROM Houses WHERE city = 'Amsterdam'

    While this solution worked fine for a couple of years, it wasn't very easy for developers to add new facets. Also, performing COUNTs on all matched rows only performs well if you have a limited number of rows in a table (i.e. less than a million).

    Enter Solr: "Solr is an open source enterprise search server based on the Lucene Java search library, with XML/HTTP and JSON APIs, hit highlighting, faceted search, caching, replication, and a web administration interface." (quoted from Wikipedia's page on Solr) Solr isn't a database; it's more like a big index. Every time you upload data to Solr, it will analyze the data and create an inverted index from it (like the index pages of a book). This way Solr can look up data very quickly. Explaining the inner workings of Solr is beyond the scope of this post, but if you want to learn more, please visit the Solr Wiki pages. Getting faceted search results from Solr is very easy; first let me show you how to send an HTTP query to Solr:

    http://localhost:8983/solr/select?q=city:Amsterdam

    This will return an XML document containing the search results (in this example only three houses in the city of Amsterdam):

    <response>     <result name="response" numFound="3" start="0">         <doc>            <long name="id">3203</long>            <str name="city">Amsterdam</str>            <str name="steet">Keizersgracht</str>            <int name="numberOfBathrooms">2</int>        </doc>         <doc>             <long name="id">3205</long>             <str name="city">Amsterdam</str>             <str name="steet">Vondelstraat</str>             <int name="numberOfBathrooms">3</int>          </doc>          <doc>             <long name="id">4293</long>             <str name="city">Amsterdam</str>             <str name="steet">Wibautstraat</str>             <int name="numberOfBathrooms">2</int>          </doc>       </result>   </response>

    By adding a facet query part for the field "numberOfBathrooms", Solr will return the facets for this particular field.
    We will see that there's one house in Amsterdam with three bathrooms and two houses with two bathrooms.

    http://localhost:8983/solr/select?q=city:Amsterdam&facet=true&facet.field=numberOfBathrooms

    The complete XML response from Solr now looks like:

    <response>      <result name="response" numFound="3" start="0">         <doc>            <long name="id">3203</long>            <str name="city">Amsterdam</str>            <str name="steet">Keizersgracht</str>            <int name="numberOfBathrooms">2</int>         </doc>         <doc>            <long name="id">3205</long>            <str name="city">Amsterdam</str>            <str name="steet">Vondelstraat</str>            <int name="numberOfBathrooms">3</int>         </doc>         <doc>            <long name="id">4293</long>            <str name="city">Amsterdam</str>            <str name="steet">Wibautstraat</str>            <int name="numberOfBathrooms">2</int>         </doc>      </result>      <lst name="facet_fields">         <lst name="numberOfBathrooms">            <int name="2">2</int>            <int name="3">1</int>         </lst>      </lst>   </response>

    Trying Solr for yourself: To run Solr on your local machine and experiment with it, you should read the Solr tutorial. This tutorial really only takes 1 hour, in which you install Solr, upload sample data and get some query results. And yes, it works on Windows without a problem. Note that in the Solr tutorial, you're using Jetty as a Java Servlet Container (that's why you must start it using "java -jar start.jar"). In our environment we prefer to use Apache Tomcat to host Solr, which installs as a Windows service and works more like .NET developers expect. See the SolrTomcat page.

    Some best practices for running Solr on Windows:

    - Use the 64-bit version of Tomcat. In our tests, this doubled the req/sec we were able to handle!
    - Use a .NET XmlReader to convert Solr's XML output stream to .NET objects. Don't use XPath; it won't scale well. (A rough sketch of this approach follows below.)
    - Use filter queries (the "fq" parameter) instead of the normal "q" parameter where possible. Filter queries are cached by Solr and will speed up Solr's response time (see FilterQueryGuidance).

    In my next post I’ll talk about how to keep Solr's indexed data in sync with the data in your SQL tables. Timestamps / rowversions will help you out here!
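    Following the XmlReader advice above, here is a rough sketch of pulling the facet counts out of the facet_fields section shown earlier. The class and method names are my own illustration, not funda.nl's actual code:

    using System.Collections.Generic;
    using System.Xml;

    static class SolrFacetParser
    {
        // Reads the <lst name="facet_fields"> section of a Solr XML response
        // and returns facetField -> (value -> count).
        // Simplified: assumes facet_fields is the last lst section in the response.
        public static Dictionary<string, Dictionary<string, int>> ParseFacets(string url)
        {
            var facets = new Dictionary<string, Dictionary<string, int>>();
            using (var reader = XmlReader.Create(url))
            {
                Dictionary<string, int> current = null;
                bool inFacetFields = false;

                while (!reader.EOF)
                {
                    if (reader.NodeType == XmlNodeType.Element && reader.Name == "lst")
                    {
                        string name = reader.GetAttribute("name");
                        if (name == "facet_fields")
                            inFacetFields = true;
                        else if (inFacetFields)
                            facets[name] = current = new Dictionary<string, int>();
                    }

                    if (inFacetFields && current != null &&
                        reader.NodeType == XmlNodeType.Element && reader.Name == "int")
                    {
                        string value = reader.GetAttribute("name");
                        // ReadElementContentAsInt consumes the element and
                        // advances the reader, so don't call Read() again here.
                        current[value] = reader.ReadElementContentAsInt();
                    }
                    else if (!reader.Read())
                    {
                        break;
                    }
                }
            }
            return facets;
        }
    }

    Running this against the facet query URL above would yield a "numberOfBathrooms" entry mapping "2" to 2 and "3" to 1, mirroring the XML shown earlier.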

    Read the article

  • Free .NET Training at DevCares in Dallas...

    - by [email protected]
    Come take an early look at the debugging experience in VS 2010 this Friday (3/25/2010) at TekFocus in Dallas, at the InfoMart, at 9 AM. In this session, we’ll:

    - Dive deep into the new IntelliTrace (formerly historical debugging) feature, which enables you to step back in time within your debugging session and inspect or re-execute code, without having to restart your application
    - See how to manage large numbers of breakpoints with labeling, searching and filtering
    - Extend “data tips” by adding comments and notes and strategically “pinning” these resources to maintain their visibility throughout your session
    - Demonstrate “collaborative debugging,” by debugging a portion of an application and then exporting breakpoints and labeled data tips, so that others can leverage your effort without having to start over
    - Leverage these new debugging features in applications built in earlier versions of the .NET Framework through the MultiTargeting features available in VS 2010

    You’ll walk away with a clear understanding of how you can use this upcoming technology to vastly increase your productivity and build better software. Register to attend ==> http://www.dallasdevcares.com/upcoming-sessions/

    DevCares is a monthly series of FREE half-day events sponsored by TekFocus and Microsoft. Targeted specifically at developers, the content is presented by experts on a variety of .NET topics. These briefings include expert testimonials, working demos and sample code designed to help you get the most out of application development with .NET. Events are held on the last Friday of each month at the TekFocus offices in the Infomart near downtown Dallas. TekFocus is a full-service technology training provider with a core business delivering Microsoft-certified technical training and product skills enhancements to customers worldwide.


  • New Release: ImageGlue 7.0 .NET

    When it comes to manipulating images dynamically there are few toolkits that can compete with ImageGlue 6 in terms of versatility and performance. With extensive support for a huge range of graphic formats including JPEG2000, Very Large TIFF Support™, and fully multi-threaded processing, ImageGlue has proved a popular choice for use in ASP and ASP.NET server environments. Now ImageGlue 7 has arrived, introducing support for 64-bit systems, improved PostScript handling, and many other enhancements. We've also used the opportunity to revise the API, to make it more friendly and familiar to .NET coders. But don't worry about rewriting legacy code - you'll find the 'string parameter' interface is still available through the WebSupergoo.ImageGlue6 namespace.

    So what's new in ImageGlue 7.0?

    - Support for 64-bit systems.
    - ImageGlue now incorporates the PostScript rendering engine as used by ABCpdf, our PDF component, which has proven to be fast, robust and accurate. This greatly improves support for importing and exporting PS, EPS, and PDF files, and also enables you to make use of powerful PostScript drawing operations for drawing to canvas. Leveraging ABCpdf's powerful vector graphics import and export functionality also makes it possible to interoperate with XPS and MS Office documents.
    - An improved API with new classes, methods and properties, more in keeping with normal .NET development.
    - Plus of course the usual range of bug fixes and minor enhancements.


  • Networking DOS within Windows 7 XP Mode, with a Windows XP/7 Networked Share

    - by theonlylos
    For a while now, one of my clients has been stuck with Corel Paradox 4.0 (it used to be the biggest database system in the DOS days, until Microsoft released Access in the early 90's). I managed to keep it on life support on Windows XP for a few years, but since switching to Windows 7 x64 I've had to resort to using XP Mode as the sandbox to keep it up and running. While I am able to run Paradox as usual in XP Mode, I'm having a serious issue: if I try connecting the install to the network share (which is located on the Windows 7 portion of the system), Paradox keeps exiting because it says the serial number is invalid. Now, I know for a fact that this is an issue with the virtual loopback adapter and also having the VM linked to the physical ethernet adapter -- and while I have solved this issue before, most of my fixes have been bandages, since after a few weeks the issue pops up again. Long story short, I wanted to ask if there is a permanent way to link a DOS program to a network share address. For example, when I try using \\tsclient\paradox (the Windows 7 address) I keep getting an error saying I need a valid network address. I've tried mapping that folder to various drive letters such as P:\Paradox -- but for some reason that mapping keeps failing over time. For what it's worth, Paradox uses a .SOM file to store the network settings; it isn't editable in Notepad but rather is controlled by a wizard in Paradox. If that extension rings any bells, I'd welcome any insights.
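    One way to make such a mapping survive reboots (a sketch from us, not from the original question; the share and folder names are illustrative - XP Mode exposes the host's drives as \\tsclient\C, \\tsclient\D and so on) is a small batch file run at logon inside the VM:

       REM Remove any stale mapping, then remap the host drive persistently
       net use P: /delete
       net use P: \\tsclient\C /persistent:yes

    The DOS application can then be pointed at a stable path such as P:\Paradox instead of a UNC address it cannot parse.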


  • Windows 7 Can't Connect to Network Drive on Windows XP

    - by Alex Yan
    I have a Windows XP desktop and a Windows 7 laptop, both connected to a TrendNET TEW-432BRP router, which is connected to the Internet. They both have static IPs. The desktop has an external hard drive connected to it. The laptop is wireless and the desktop is wired. I enabled sharing on the external hard drive about two years ago when I bought it, and mapped it as a network drive on the laptop. I think it was yesterday that the laptop just stopped recognizing any of the computers on my network (when I open Network, my laptop is the only one on it). I also get an error message when I try to connect to the network drive: "An error occurred while connecting A: to \\CERTIFIED-DATA\Expansion. Microsoft Windows Network: The network path was not found. The connection has not been restored." Both computers run Avast, and there haven't been any problems with it. This has happened before, but I never figured out why or how to fix it; it's usually fixed when I reinstall the OS of the affected system.

    Edit:

    - I can't navigate to the computer using \\CERTIFIED-DATA. I get a message saying "Windows cannot access \\CERTIFIED-DATA. Check the spelling of the name. Otherwise, there might be a problem with your network."
    - I clicked Diagnose on the message and it failed to find anything wrong.
    - I clicked Diagnose on my wireless connection, and it just keeps trying to check if something is wrong with the connection.
    - I can ping it successfully.
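    A few commands that can narrow this kind of failure down (an aside from us, not part of the original question; run them from the laptop):

       ping CERTIFIED-DATA
       nbtstat -a CERTIFIED-DATA
       net view \\CERTIFIED-DATA

    If ping by name succeeds but "net view" fails, the problem usually sits at the SMB/NetBIOS layer (a firewall, or the Server service on the desktop) rather than basic connectivity.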


  • Windows Error Reporting and IIS7 on Windows Server 2008

    - by graffic
    On a Windows web server I'm trying to get a memory dump of a failing IIS 7 worker process (w3wp.exe), to no avail. In the Event Viewer I get the following:

    Faulting application name: w3wp.exe, version: 7.5.7600.16385, time stamp: 0x4a5bd0eb
    Faulting module name: clr.dll, version: 4.0.30319.1, time stamp: 0x4ba21eeb
    Exception code: 0xc00000fd
    Fault offset: 0x0000000000005c22
    Faulting process id: 0x1cac
    Faulting application start time: 0x01cc23419da54772
    Faulting application path: c:\windows\system32\inetsrv\w3wp.exe
    Faulting module path: C:\Windows\Microsoft.NET\Framework64\v4.0.30319\clr.dll
    Report Id: b54ec4f8-8fa4-11e0-ab62-005056810035

    This happens even though I've configured LocalDumps for WER, and specifically for w3wp.exe, in the registry. I get another event telling me that there is a report here:

    C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_w3wp.exe_cdb8af6deb381574fe9fb0dc9aa3edaad59acd5f_cab_4fbf9b53

    It contains the following files:

    - WERD931.tmp.appcompat.txt
    - WERDFE9.tmp.WERInternalMetadata.xml
    - WER99EF.tmp.WERDataCollectionFailure.txt

    The "depressing one" is WERDataCollectionFailure, which says:

    Heap dump generation failed: 0x8007012b
    Mini dump generation failed: 0x8007001f

    After many tries, lots of MSDN documentation and many failed Google searches, I'm out of ideas on how to get a dump here. Does anyone have a suggestion on how to make WER work? Thank you in advance for your time reading this :)
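    For reference, a per-process LocalDumps configuration along the lines the poster describes would look something like this (the dump folder and counts are illustrative values, not taken from the question):

       reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\w3wp.exe" /v DumpFolder /t REG_EXPAND_SZ /d "C:\Dumps" /f
       reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\w3wp.exe" /v DumpCount /t REG_DWORD /d 10 /f
       reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\w3wp.exe" /v DumpType /t REG_DWORD /d 2 /f

    DumpType 2 requests a full dump, and the account the WER service runs under needs write access to the dump folder.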


  • Windows Live Messenger for Mac?

    - by studiohack23
    I have a friend who uses a Mac and was wondering: is there a version of Windows Live Messenger for Mac, or something comparable that uses/takes advantage of the Windows Live ID? I'm interested in recommendations. Thanks!


  • Windows Vista and 7 crossrealm authentication MIT Kerberos

    - by fox8
    I'm using Windows Server 2008 and Windows Vista and 7 clients for cross-realm authentication against MIT Kerberos 1.6, but when I try to log in with a user the KDC answers (Wireshark output):

    error_code: KRB5KDC_ERR_ETYPE_NOSUPP (14)
    ...
    e-text: BAD_ENCRYPTION_TYPE

    I want to know how I can change the encryption type to be compatible with the KDC (I tried an XP client and it worked fine). Many thanks!
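    As a hedged sketch of where such enctype mismatches are usually negotiated (our aside, not from the question): on the MIT side, krb5.conf can be made to advertise encryption types both sides speak, for example

       [libdefaults]
           default_tkt_enctypes = aes256-cts-hmac-sha1-96 rc4-hmac des-cbc-md5
           default_tgs_enctypes = aes256-cts-hmac-sha1-96 rc4-hmac des-cbc-md5
           permitted_enctypes   = aes256-cts-hmac-sha1-96 rc4-hmac des-cbc-md5

    while on a Windows 7 client the per-realm encryption types can be set with, for example, "ksetup /SetEncTypeAttr MYREALM.EXAMPLE RC4-HMAC-MD5" (the realm name is a placeholder). Which enctype is the right common denominator depends on which types the KDC's principal keys were actually generated with.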


  • Removing Windows 2003 Server and Domain from Network

    - by Mike Ruford
    I have a Windows Server 2003 machine with five computers (Windows XP) accessing it through a domain. I would like to remove the server and the domain and go straight to peer-to-peer (file sharing via a NAS). We are moving our email to an Exchange hosting provider. We no longer need people to log in from different computers and have their desktops available from any computer on the network (however, I do want to get those files off of this setup). Any suggestions?


  • Customise date-time format in Windows.

    - by infant programmer
    Is it possible to customize the date (or date-time) format in Windows? [I am using Windows XP.] The current format the OS follows [to show Date Modified, etc.] is MM/DD/YYYY or M/D/YYYY, whereas I have always been comfortable with the DD/MM/YYYY or D/M/YYYY format. I am finding it hard to read the Date Modified column [which I use often] of files and folders.
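    For reference, the short-date pattern Explorer uses is the one set under Regional and Language Options, and it can also be changed directly in the registry. A sketch (applied per user; log off and back on for it to take effect everywhere):

       reg add "HKCU\Control Panel\International" /v sShortDate /t REG_SZ /d "dd/MM/yyyy" /f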


  • Windows SteadyState - system's security log is full

    - by Matt
    Quick version: new computer, attached to a Windows domain, with SteadyState and Disk Protection turned on; I cannot log on as a domain user because Windows states the "system security log is full".

    Troubleshooting performed:

    - disabled all 'restrictions' listed in SteadyState
    - cleared the system security log
    - changed the security log settings to overwrite entries when the log becomes full
    - restarted the computer to commit the changes, and verified the changes were committed - still cannot log on as a domain user
    - moved the Documents and Settings folder to another partition - still cannot log on as a domain user

    Let me know if you need a more detailed description of any steps performed. I appreciate any help you can give me.
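    An aside from us, not from the original post: the "security log is full" logon lockout is tied to the LSA CrashOnAuditFail setting, which, once tripped, blocks non-administrator logons until the log is cleared and the value is reset. A sketch of checking and resetting it (reboot after changing it):

       reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v crashonauditfail
       reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v crashonauditfail /t REG_DWORD /d 0 /f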


  • How to search inside files in Windows 7?

    - by Revolter
    In Windows XP we can search for files which contain a given keyword (inside all file types). Windows 7 can also look inside files for keywords, okay, but only for known text-like types (*.doc, *.txt, *.inf, ...), not arbitrary extensions (*.conf, *.dat, *.*, ...). The Microsoft search filters don't contain any filter I can use for this. Any ideas?
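    For what it's worth, one common workaround (our sketch, not from the question; back up the registry before trying it) is to register the stock plain-text filter as the persistent handler for an extension, so the indexer treats those files as text. For .conf files, for example:

       reg add "HKCR\.conf\PersistentHandler" /ve /t REG_SZ /d "{5e941d80-bf96-11cd-b579-08002b30bfeb}" /f

    The GUID is the built-in plain-text handler; the affected locations then need re-indexing before content search picks the files up.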


  • Alternatives to Remote Storage Service under Windows Server 2008 R2

    - by ObligatoryMoniker
    I am working on setting up a new Windows Server 2008 R2 file server for our organization. The functionality offered by the Remote Storage Service in previous versions of Windows would meet our needs for segmenting our data, so that we can have different backup schedules for different tiers of data based on how frequently that data is used and updated. What software provides the same or similar functionality for Server 2008 R2?

