Search Results

Search found 14297 results on 572 pages for 'self tracking entities'.


  • CodePlex Daily Summary for Thursday, November 01, 2012

    Popular Releases:

    - Access 2010 Application Platform - Build Your Own Database: Application Platform - 0.0.1: Initial release. This is the first version of the database. At the moment it is all contained in one file to make development easier, but the obvious idea would be to split it into a front end and back end for a production version of the tool. The features it contains at the moment are the "Core" features.
    - Python Tools for Visual Studio: Python Tools for Visual Studio 1.5: We're pleased to announce the release of Python Tools for Visual Studio 1.5 RTM. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, HPC, and IPython support. For a quick overview of the general IDE experience, please watch this video. There are a number of exciting improvements in this release comp...
    - Devpad: 4.25: What's new for Devpad 4.25: new Theme support, new Export to Wordpress, minor bug fixes, improvements and speed-ups.
    - AssaultCube Reloaded: 2.5.5: Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or another operating system, please wait while we try to package for those OSes, or try to compile it yourself; if that fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compile your own for Linux (source included). Changelog: fixed potential bot bugs: map change, OpenAL...
    - Edi: Edi 1.0 with DarkExpression: Added DarkExpression theme (dialogs and message boxes are not completely themed yet).
    - MCEBuddy 2.x: MCEBuddy 2.3.6: Changelog for 2.3.6 (32-bit and 64-bit): 1. Fixed a bug where multichannel audio conversion failed; AAC does not support 6-channel audio, so MCEBuddy now checks for it and forces the output to 2 channels if the AAC codec is specified. 2. Fixed a bug in Original Broadcast Date and Time: it is reported in the UTC timezone in WTV metadata, while TVDB and MovieDB dates are reported in the network timezone. It is assumed the video is recorded and converted on the same machine, i.e. local timezone...
    - MVVM Light Toolkit: MVVM Light Toolkit V4.1 for Visual Studio 2012: This version only supports Visual Studio 2012 (and all Express editions too). If you use Visual Studio 2010, please stay tuned; we will publish an update in a few days with support for VS10. V4.1 supports: Windows Phone 8, Windows 8 (Windows RT), Silverlight 5, Silverlight 4, WPF 4.5, WPF 4, WPF 3.5. And the following development environments: Visual Studio 2012 (Pro, Premium, Ultimate), Visual Studio 2012 Express for Windows 8, Visual Studio 2012 Express for Windows Phone 8, Visual...
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.73: Fix issue in Discussion #401101 (an unreferenced var in a for-in statement was getting removed). Add the grouping operator to the parsed output so that unminified parsed code is closer to the original (unneeded parentheses are still stripped later, if minifying). More cleaning of references as they are minified out of the code.
    - Liberty: v3.4.0.1, released 28th October 2012: Change log: (H4) fixed the save verification screen showing incorrect mission and difficulty information for some saves; (H4) hopefully fixed the issue where progress did not save between missions and saves would not revert correctly; (H3) fixed crashes that occurred when trying to load player information; proper exception dialogs will now show in place of crashes.
    - Player Framework by Microsoft: Player Framework for Windows 8 (Preview 7): This release is compatible with the version of the Smooth Streaming SDK released today (10/26). Release 1 of the player framework is expected to be available next week. Improvements and fixes (IMPORTANT: see the list of breaking changes from Preview 6): support for the latest Smooth Streaming SDK; XAML only: support for moving any of the UI elements outside the MediaPlayer (e.g. into the app bar), with equivalent changes to the JS version due in the coming week; support for localizing all text used in t...
    - Send multiple SMS via Way2SMS C#: SMS 1.1: Added support for 160by2.
    - Quick Launch: Quick Launch 1.0: A lightweight and fast way to manage and launch thousands of tools and applications. Press Win+Q and start to search and run. http://www.codeplex.com/Download?ProjectName=quicklaunch&DownloadId=523536
    - Orchard Project: Orchard 1.6: Please read our release notes for Orchard 1.6: http://docs.orchardproject.net/Documentation/Orchard-1-6-Release-Notes Please do not post questions as reviews. Questions should be posted in the Discussions tab, where they will usually get promptly responded to. If you post a question as a review, you will pollute the rating, and you won't get an answer.
    - Media Companion: Media Companion 3.507b: Once again, it has been some time since our last release, and there have been a number of changes since then. It is hoped that these changes will address some of the issues users have been experiencing, and of course, work continues! New features: added support for adding home movies; option to sort movies by votes; added a 'selectedBrowser' preference used when opening links in an external browser; added an option to fall back to getting the runtime from the movie file if not available on IMDB. Added new Big...
    - MSBuild Extension Pack: October 2012: Release blog post. The MSBuild Extension Pack October 2012 release provides a collection of over 475 MSBuild tasks. A high-level summary of what the tasks currently cover includes the following. System items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound. Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GUI...
    - NAudio: NAudio 1.6: Release notes at http://mark-dot-net.blogspot.co.uk/2012/10/naudio-16-release-notes-10th.html
    - PowerShell Community Extensions: 2.1 Production: PowerShell Community Extensions 2.1 release notes, Oct 25, 2012. This version of PSCX supports both Windows PowerShell 2.0 and 3.0. See the ReleaseNotes.txt download above for more information.
    - Umbraco CMS: Umbraco 4.9.1: Umbraco 4.9.1 is a bugfix release fixing major issues in 4.9.0. The full list of fixes can be found in the issue tracker's filtered results. A summary: split buttons work again, and you can now also scroll more easily when the list is too long for the screen; media and content pickers show the full path of the picked item; fixed: publish status may not be accurate on nodes with large doctypes; fixed: two media folders and recycle bins after upgrade to 4.9. The template/code ...
    - AcDown Downloader Framework: AcDown v4.2.2: (the original description is in Chinese and was largely lost in transcription; the surviving fragments indicate:) a downloader supporting sites including Acfun, Bilibili, YouTube and SF, with AcPlay integration; written in C# and requiring .NET Framework 2.0; runs on 32/64-bit Windows XP/Vista/7/8 and on 32/64-bit Linux via Mono.
    - Rawr: Rawr 5.0.2: This is the downloadable WPF version of Rawr! For the web-based version see http://elitistjerks.com/rawr.php You can find the version notes at http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP): We now have an official Rawr addon for in-game exporting and importing of character data, hosted on Curse. The addon does not perform calculations like Rawr; it simply shows your exported Rawr data in WoW tooltips and lets you export your character to Rawr (including ba...

    New Projects:

    - Asbesto: Group project "2ravas" for the Computer Lab IV course of the TSP program at UTN-FRGP. Built in C# and ASP.NET with Visual Studio 2012 and SQL Server 2012.
    - C++ Diamond printer on console without if statement: Prints a diamond of stars (*). The one notable and fun feature is that it uses no 'if' statement. Written in C++.
    - CloudTest: Test cloud computing project.
    - Enumeration of objects Class Library C#: At the core level in Java, I like the extension of the traditional enum type permitting class instances as the elements. The project implements the same idea in C# with some more advanced features, such as a set type with boolean operators extending the idea of FlagsAttribute.
    - Equation Inversion: Visual Studio 2008 add-in for equation inversions.
    - ESENT Serialization Class Library: Built on top of Managed ESENT; it allows you to store your objects in the underlying Extensible Storage Engine database.
    - ForceHttps: Very simple HttpModule to ensure that all HTTP requests are HTTPS. 1. Drop the DLL into your website's bin directory. 2. Update the httpModules section of your web.config to reference the DLL. 3. Make sure your web server can handle SSL requests for the URL. All HTTP requests will be redirected as HTTPS.
    - forrajeriaPAV2: tp pav2.
    - Google Analytics for WinRT: A Windows Store (WinRT) class library for tracking Google Analytics page views, events, and custom variables for your Metro-style desktop and tablet apps.
    - huanglitest10312: MVC project.
    - iMan: iMan will be your own port manager, helping to provide information about all the ports open on your PC. It's completely developed in .NET C#.
    - InjectionCop: Facilitates secure coding through static code analysis and contract annotations in the Common Language Infrastructure.
    - Master of Olympus: Remake of Master of Olympus: Zeus, a game developed by Impressions Games in 2000.
    - MediaScribe: MediaScribe is a note app for your media.
    - MouseNestDotNet: A small personal website application.
    - Project Estimator: (non-English description lost in transcription)
    - Projekt grupowy 1: Project created only for school purposes.
    - SecondHandApp: Inventory management (Warenwirtschaft) program for second-hand shops.
    - Simple .NET XML Serialization: .NET XML serialization using Linq XObject classes (e.g. XElement) in an IXmlSerializable base class.
    - Strip Converter: A very handy converter for 2D game developers. Developer website: http://www.bluebulk.info
    - Switch JS Control (Client Side): Switch JS control is a very light client-side library for creating switches in web or mobile applications.
    - VisaSys: Visa system.


  • ASP.NET MVC 3 Hosting :: How to Deploy Web Apps Using ASP.NET MVC 3, Razor and EF Code First - Part II

    - by mbridge
    In the previous post, I discussed how to work with ASP.NET MVC 3 and EF Code First for developing web apps. In this post, I will demonstrate working with a domain entity with a deeper object graph, a Service Layer, and View Models, and will also complete the rest of the demo application. In the previous post we performed CRUD operations against the Category entity; this post will focus on the Expense entity, which has an association with Category.

    Domain Model

    Category entity:

        public class Category
        {
            public int CategoryId { get; set; }

            [Required(ErrorMessage = "Name Required")]
            [StringLength(25, ErrorMessage = "Must be less than 25 characters")]
            public string Name { get; set; }

            public string Description { get; set; }
            public virtual ICollection<Expense> Expenses { get; set; }
        }

    Expense entity:

        public class Expense
        {
            public int ExpenseId { get; set; }
            public string Transaction { get; set; }
            public DateTime Date { get; set; }
            public double Amount { get; set; }
            public int CategoryId { get; set; }
            public virtual Category Category { get; set; }
        }

    We have two domain entities: Category and Expense. A single category contains a list of expense transactions, and every expense transaction should have a Category.

    Repository class for Expense transactions

    Let's create a repository class for handling CRUD operations for the Expense entity:

        public class ExpenseRepository : RepositoryBase<Expense>, IExpenseRepository
        {
            public ExpenseRepository(IDatabaseFactory databaseFactory)
                : base(databaseFactory)
            {
            }
        }

        public interface IExpenseRepository : IRepository<Expense>
        {
        }

    Service Layer

    If you are new to the Service Layer pattern, check out Martin Fowler's article on it. According to Fowler, a Service Layer defines an application's boundary and its set of available operations from the perspective of interfacing client layers. It encapsulates the application's business logic, controlling transactions and coordinating responses in the implementation of its operations. Controller classes should be lightweight, without much business logic in them; we can use the service layer as the business logic layer and encapsulate the rules of the application there.
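    The repository and unit-of-work abstractions that ExpenseRepository and the service below depend on come from Part I and are not repeated in this post. As a reminder, a minimal sketch of those contracts might look like the following; the exact members are assumptions inferred from how they are used in the snippets here, not the article's verbatim code:

        using System;
        using System.Collections.Generic;
        using System.Data.Entity;
        using System.Linq;
        using System.Linq.Expressions;

        public interface IRepository<T> where T : class
        {
            void Add(T entity);
            void Delete(T entity);
            T GetById(int id);
            IEnumerable<T> GetMany(Expression<Func<T, bool>> where);
        }

        public interface IUnitOfWork
        {
            void Commit(); // flushes pending changes, e.g. DbContext.SaveChanges()
        }

        public interface IDatabaseFactory
        {
            DbContext Get(); // returns the shared EF Code First context
        }

        public abstract class RepositoryBase<T> : IRepository<T> where T : class
        {
            private readonly DbContext dataContext;
            private readonly DbSet<T> dbset;

            protected RepositoryBase(IDatabaseFactory databaseFactory)
            {
                dataContext = databaseFactory.Get();
                dbset = dataContext.Set<T>();
            }

            public void Add(T entity) { dbset.Add(entity); }
            public void Delete(T entity) { dbset.Remove(entity); }
            public T GetById(int id) { return dbset.Find(id); }

            public IEnumerable<T> GetMany(Expression<Func<T, bool>> where)
            {
                return dbset.Where(where).ToList();
            }
        }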
    Let's create a service class that coordinates the transactions for Expense:

        public interface IExpenseService
        {
            IEnumerable<Expense> GetExpenses(DateTime startDate, DateTime endDate);
            Expense GetExpense(int id);
            void CreateExpense(Expense expense);
            void DeleteExpense(int id);
            void SaveExpense();
        }

        public class ExpenseService : IExpenseService
        {
            private readonly IExpenseRepository expenseRepository;
            private readonly IUnitOfWork unitOfWork;

            public ExpenseService(IExpenseRepository expenseRepository, IUnitOfWork unitOfWork)
            {
                this.expenseRepository = expenseRepository;
                this.unitOfWork = unitOfWork;
            }

            public IEnumerable<Expense> GetExpenses(DateTime startDate, DateTime endDate)
            {
                var expenses = expenseRepository.GetMany(exp => exp.Date >= startDate && exp.Date <= endDate);
                return expenses;
            }

            public void CreateExpense(Expense expense)
            {
                expenseRepository.Add(expense);
                unitOfWork.Commit();
            }

            public Expense GetExpense(int id)
            {
                var expense = expenseRepository.GetById(id);
                return expense;
            }

            public void DeleteExpense(int id)
            {
                var expense = expenseRepository.GetById(id);
                expenseRepository.Delete(expense);
                unitOfWork.Commit();
            }

            public void SaveExpense()
            {
                unitOfWork.Commit();
            }
        }

    View Models for Expense transactions

    In real-world ASP.NET MVC applications, we need to design model objects especially for our views. Our domain objects are designed mainly for the needs of the domain model and represent the domain of our application; View Model objects, on the other hand, are designed for the needs of our views. We have an Expense domain entity that has an association with Category. While creating a new Expense, we have to specify which Category the new Expense transaction belongs to. The user interface for an Expense transaction will have form fields representing the Expense entity plus a CategoryId representing the Category. So let's create a view model representing the needs of Expense transactions:

        public class ExpenseViewModel
        {
            public int ExpenseId { get; set; }

            [Required(ErrorMessage = "Category Required")]
            public int CategoryId { get; set; }

            [Required(ErrorMessage = "Transaction Required")]
            public string Transaction { get; set; }

            [Required(ErrorMessage = "Date Required")]
            public DateTime Date { get; set; }

            [Required(ErrorMessage = "Amount Required")]
            public double Amount { get; set; }

            public IEnumerable<SelectListItem> Category { get; set; }
        }

    The ExpenseViewModel is designed for the purposes of the view template and contains all the validation rules. It has properties that map to the Expense entity, plus a Category property for binding the list of Category values to a drop-down.
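    The action methods in the next section live in ExpenseController, whose constructor the post never shows. Assuming the dependency-injection wiring from Part I, it presumably looks something like this sketch (ICategoryService is inferred from the categoryService.GetCategories() calls below):

        using System.Web.Mvc;

        public class ExpenseController : Controller
        {
            private readonly IExpenseService expenseService;
            private readonly ICategoryService categoryService;

            // Dependencies are supplied by the IoC container configured in Part I.
            public ExpenseController(IExpenseService expenseService, ICategoryService categoryService)
            {
                this.expenseService = expenseService;
                this.categoryService = categoryService;
            }
        }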
    Create Expense transaction

    Let's create action methods in the ExpenseController for creating expense transactions:

        public ActionResult Create()
        {
            var expenseModel = new ExpenseViewModel();
            var categories = categoryService.GetCategories();
            expenseModel.Category = categories.ToSelectListItems(-1);
            expenseModel.Date = DateTime.Today;
            return View(expenseModel);
        }

        [HttpPost]
        public ActionResult Create(ExpenseViewModel expenseViewModel)
        {
            if (!ModelState.IsValid)
            {
                var categories = categoryService.GetCategories();
                expenseViewModel.Category = categories.ToSelectListItems(expenseViewModel.CategoryId);
                return View("Save", expenseViewModel);
            }
            Expense expense = new Expense();
            ModelCopier.CopyModel(expenseViewModel, expense);
            expenseService.CreateExpense(expense);
            return RedirectToAction("Index");
        }

    In the Create action method for the HttpGet request, we create an instance of our view model, ExpenseViewModel, populate its Category information for the drop-down list, and pass the model object to the view template. The extension method ToSelectListItems is shown below:

        public static IEnumerable<SelectListItem> ToSelectListItems(
            this IEnumerable<Category> categories, int selectedId)
        {
            return categories.OrderBy(category => category.Name)
                .Select(category =>
                    new SelectListItem
                    {
                        Selected = (category.CategoryId == selectedId),
                        Text = category.Name,
                        Value = category.CategoryId.ToString()
                    });
        }

    In the Create action method for HttpPost, our view model object ExpenseViewModel is bound to the posted form input values. We need an instance of Expense for persistence, so we have to copy values from the ExpenseViewModel object to an Expense object. The ASP.NET MVC futures assembly provides a static class, ModelCopier, that can be used for copying values between model objects. ModelCopier has two static methods: CopyCollection, which copies values between two collection objects, and CopyModel, which copies values between two model objects. We use CopyModel to copy values from the expenseViewModel object to the expense object, and finally call the CreateExpense method of ExpenseService to persist the new expense transaction.

    List Expense transactions

    We want to list expense transactions for a date range, so let's create an action method that filters expense transactions by a specified date range:

        public ActionResult Index(DateTime? startDate, DateTime? endDate)
        {
            // If no date is passed, take the current month's first and last dates
            DateTime dtNow = DateTime.Today;
            if (!startDate.HasValue)
            {
                startDate = new DateTime(dtNow.Year, dtNow.Month, 1);
                endDate = startDate.Value.AddMonths(1).AddDays(-1);
            }
            // Take the last date of the start date's month if no end date is passed
            if (startDate.HasValue && !endDate.HasValue)
            {
                endDate = (new DateTime(startDate.Value.Year, startDate.Value.Month, 1)).AddMonths(1).AddDays(-1);
            }
            var expenses = expenseService.GetExpenses(startDate.Value, endDate.Value);
            // If the request is Ajax, return a partial view
            if (Request.IsAjaxRequest())
            {
                return PartialView("ExpenseList", expenses);
            }
            // Set the start and end dates in the ViewBag dictionary
            ViewBag.StartDate = startDate.Value.ToShortDateString();
            ViewBag.EndDate = endDate.Value.ToShortDateString();
            // The request is not Ajax
            return View(expenses);
        }

    We use the Index action method above for both Ajax requests and normal requests. For an Ajax request, we return the partial view ExpenseList.

    Razor views for listing Expense information

    Let's create view templates in Razor for showing the list of Expense information.

    ExpenseList.cshtml:

        @model IEnumerable<MyFinance.Domain.Expense>
        <table>
            <tr>
                <th>Actions</th>
                <th>Category</th>
                <th>Transaction</th>
                <th>Date</th>
                <th>Amount</th>
            </tr>
            @foreach (var item in Model)
            {
                <tr>
                    <td>
                        @Html.ActionLink("Edit", "Edit", new { id = item.ExpenseId })
                        @Ajax.ActionLink("Delete", "Delete", new { id = item.ExpenseId }, new AjaxOptions { Confirm = "Delete Expense?", HttpMethod = "Post", UpdateTargetId = "divExpenseList" })
                    </td>
                    <td>@item.Category.Name</td>
                    <td>@item.Transaction</td>
                    <td>@String.Format("{0:d}", item.Date)</td>
                    <td>@String.Format("{0:F}", item.Amount)</td>
                </tr>
            }
        </table>
        <p>
            @Html.ActionLink("Create New Expense", "Create") |
            @Html.ActionLink("Create New Category", "Create", "Category")
        </p>

    Index.cshtml:

        @using MyFinance.Helpers;
        @model IEnumerable<MyFinance.Domain.Expense>
        @{
            ViewBag.Title = "Index";
        }
        <h2>Expense List</h2>
        <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script>
        <script src="@Url.Content("~/Scripts/jquery-ui.js")" type="text/javascript"></script>
        <script src="@Url.Content("~/Scripts/jquery.ui.datepicker.js")" type="text/javascript"></script>
        <link href="@Url.Content("~/Content/jquery-ui-1.8.6.custom.css")" rel="stylesheet" type="text/css" />
        @using (Ajax.BeginForm(new AjaxOptions { UpdateTargetId = "divExpenseList", HttpMethod = "Get" }))
        {
            <table>
                <tr>
                    <td>
                        <div>
                            Start Date: @Html.TextBox("StartDate", (string)ViewData["StartDate"], new { @class = "ui-datepicker" })
                        </div>
                    </td>
                    <td>
                        <div>
                            End Date: @Html.TextBox("EndDate", (string)ViewData["EndDate"], new { @class = "ui-datepicker" })
                        </div>
                    </td>
                    <td>
                        <input type="submit" value="Search By Transaction Date" />
                    </td>
                </tr>
            </table>
        }
        <div id="divExpenseList">
            @Html.Partial("ExpenseList", Model)
        </div>
        <script type="text/javascript">
            $(function () {
                $('.ui-datepicker').datepicker({
                    dateFormat: 'mm/dd/yy',
                    buttonImage: '@Url.Content("~/Content/calendar.gif")',
                    buttonImageOnly: true,
                    showOn: "button"
                });
            });
        </script>

    Ajax search functionality using Ajax.BeginForm

    The search functionality of the Index view uses Ajax via Ajax.BeginForm. The Ajax.BeginForm() method writes an opening <form> tag to the response. You can use this method in a using block; in that case, the method renders the closing </form> tag at the end of the block, and the form is submitted asynchronously using JavaScript. The search calls the Index action method, which returns the partial view ExpenseList to update the search results. We want the response to the Ajax request to update the divExpenseList element, so we specified UpdateTargetId as "divExpenseList" in the Ajax.BeginForm method.

    Add a jQuery datepicker

    Our search functionality uses a date range, so we provide two date pickers using the jQuery datepicker. You need to reference the following JavaScript files to work with the jQuery datepicker: jquery-ui.js and jquery.ui.datepicker.js. For theme support, a customized CSS file can be used; in our example we use "jquery-ui-1.8.6.custom.css". For more details about the datepicker component, visit the jQuery UI website at http://jqueryui.com/demos/datepicker. In the jQuery ready event, the JavaScript block at the end of Index.cshtml initializes the UI elements to show the date picker.

    Summary

    In this two-part series, we have created a simple web application using ASP.NET MVC 3 RTM, Razor and EF Code First CTP 5. I have demonstrated patterns and practices such as Dependency Injection, the Repository pattern, Unit of Work, View Models, and a Service Layer. My primary objective was to demonstrate different practices and options for developing web apps using ASP.NET MVC 3 and EF Code First. You can adapt these approaches in your own way when building web apps with ASP.NET MVC 3. I will refactor this demo app at a later time.


  • How to use Bonjour?

    - by Roman
    First, what exactly does Bonjour do (please read my guesses below)? Here I found out that Bonjour enables automatic discovery of computers, devices, and services on IP networks. But I thought it not only "discovers devices on an IP network", it also creates an IP network by assigning IP addresses to devices where Bonjour is running. Am I right?

    And I still miss the essence. Does it work in the following way? First I connect devices (for example laptops) physically so that they can potentially communicate with each other. Then, say, on some laptops I have Bonjour running, and as a consequence these laptops assign IP addresses to themselves automatically, so the laptops where Bonjour is running form an IP network. Does it work that way? Or maybe a computer running Bonjour is not considered a service, and it does not advertise itself just because Bonjour is running on it. I mean that the applications running on the computers need to use Bonjour to advertise themselves. So it is applications that advertise themselves (not computers), and it is not done automatically (an application needs to advertise itself explicitly). Is that right?

    How exactly can my application advertise itself? Can I use the command line to register a service (so that all applications using Bonjour know that a new service has appeared)?

    Further, I would like to have an application which uses the IP network created by Bonjour. For that, my application needs to know which devices/services are present in the network. In more detail, my application needs a list of services; each service in the list should have a name, the IP address where it is running, and the port used by the application. Can Bonjour provide this information in some way, and if so, how exactly does that work? How can my program get this information from Bonjour? Can my program read some file created by Bonjour containing the above-mentioned information? Can I use commands in the command line to retrieve this information? I have a special interest in accessing the information about services from files, environment variables or command-line commands. These options seem the simplest to me, since then I would not need any additional libraries to communicate with Bonjour from a particular programming language.

    P.S. Please ask questions if something is not clear in my question; I will try to formulate it more clearly. P.P.S. I use Windows 7.

    ADDED: I plan to write my applications in PHP. Every computer should run an Apache web server, and I want to use Bonjour to help the computers discover each other (the computers are working in a local network).
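    For what it's worth, the Bonjour package for Windows ships a command-line tool, dns-sd, that covers both sides of this (registering and browsing). A quick sketch of how it might be used; the service name, type and port here are made-up examples, not anything from the question:

        rem Register (advertise) a service named "My Web App" of type _http._tcp on port 80
        dns-sd -R "My Web App" _http._tcp local 80

        rem Browse for all services of type _http._tcp on the local network
        dns-sd -B _http._tcp

        rem Resolve a discovered service to its host name and port
        dns-sd -L "My Web App" _http._tcp local

    Note that dns-sd prints results to the console as they arrive rather than writing a file, so a program would typically use a Bonjour client library (or parse dns-sd's output) rather than read service information from files or environment variables.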


  • Apache logs 200 instead of 404

    - by elle
    I've been getting the following in my Apache access log:

        "GET /work//?module=www&section=working=../../../../../../../../../../../../../../../../../../../../../../../..//proc/self/environ%0000 HTTP/1.1" 200 5187 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12\",\"Mozilla/5.0 (Windows; U; Windows NT 5.1; pl-PL; rv:1.8.1.24pre) Gecko/20100228 K-Meleon/1.5.4\",\"Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/540.0 (KHTML,like Gecko) Chrome/9.1.0.0 Safari/540.0\",\"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Comodo_Dragon/4.1.1.11 Chrome/4.1.249.1042 Safari/532.5\",\"Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.9.0.16) Gecko/2009122206 Firefox/3.0.16 Flock/2.5.6\",\"Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/533.1 (KHTML, like Gecko) Maxthon/3.0.8.2 Safari/533.1\",\"Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.8pre) Gecko/20070928 Firefox/2.0.0.7 Navigator/9.0RC1\",\"Opera/9.99 (Windows NT 5.1; U; pl) Presto/9.9.9\",\"Mozilla/5.0 (Windows; U; Windows NT 6.1; zh-HK) AppleWebKit/533.18.1 (KHTML, like Gecko) Version/5.0.2 Safari/533.18.5\",\"Seamonkey-1.1.13-1(X11; U; GNU Fedora fc 10) Gecko/20081112\",\"Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; Zune 4.0; Tablet PC 2.0; InfoPath.3; .NET4.0C; .NET4.0E)\",\"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; MS-RTC LM 8; .NET4.0C; .NET4.0E; InfoPath.3)"

    If I try the URL myself, I get a 404 instead of the 200 that the above request received. Is there a way I can confirm that the 200 was real and not spoofed? And where is the long info on the client coming from?
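    One thing worth checking, as a suggestion (your LogFormat isn't shown, so this is an assumption about it): Apache's stock common/combined formats log the final status with %>s, so a 200 in the log normally means the server really did answer 200 at the time, perhaps because the URL was handled differently (e.g. by a rewrite rule or script) than it is now. You can log both the original and the final status side by side to compare:

        # Hypothetical LogFormat tweak: %s is the status of the original request,
        # %>s is the final status after any internal redirects
        LogFormat "%h %l %u %t \"%r\" %s %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" statuscheck
        CustomLog logs/access_log statuscheck

    As for the long client info: that whole string is simply the User-Agent request header, which the client controls and can set to anything, including a list of many user-agent strings as seen here.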


  • Users logging in to 3Com switches authenticated by RADIUS not getting admin privileges, and no access available

    - by 3D1L
    Hi. Following the setup that I have for my Cisco devices, I got some basic level of functionality authenticating users who log in to 3Com switches against a RADIUS server. The problem is that I cannot get the user to obtain admin privileges. I'm using Microsoft's IAS service. According to the 3Com documentation, when configuring the access policy on IAS, the value 010600000003 has to be used to specify the admin access level. That value has to be entered in the dial-in profile section:

        010600000003 - indicates admin privileges
        010600000002 - manager
        010600000001 - monitor
        010600000000 - visitor

    Here is the configuration on the switch:

        radius scheme system
         server-type standard
         primary authentication XXX.XXX.XXX.XXX
         accounting optional
         key authentication XXXXXX
         key accounting XXXXXX
        domain system
         scheme radius-scheme system
        local-user admin
         service-type ssh telnet terminal
         level 3
        local-user manager
         service-type ssh telnet terminal
         level 2
        local-user monitor
         service-type ssh telnet terminal
         level 1

    The configuration is working with the IAS server, because I can see user login events with the Event Viewer tool. Here is the output of the DISPLAY RADIUS command on the switch:

        [4500]disp radius
        SchemeName =system Index=0 Type=standard
        Primary Auth IP =XXX.XXX.XXX.XXX Port=1645 State=active
        Primary Acct IP =127.0.0.1 Port=1646 State=active
        Second Auth IP =0.0.0.0 Port=1812 State=block
        Second Acct IP =0.0.0.0 Port=1813 State=block
        Auth Server Encryption Key= XXXXXX
        Acct Server Encryption Key= XXXXXX
        Accounting method = optional
        TimeOutValue(in second)=3 RetryTimes=3 RealtimeACCT(in minute)=12
        Permitted send realtime PKT failed counts =5
        Retry sending times of noresponse acct-stop-PKT =500
        Quiet-interval(min) =5
        Username format =without-domain
        Data flow unit =Byte
        Packet unit =1
        Total 1 RADIUS scheme(s). 1 listed

    Here is the output of the DISPLAY DOMAIN and DISPLAY CONNECTION commands after users log in to the switch:

        [4500]display domain
        0 Domain = system
          State = Active
          RADIUS Scheme = system
          Access-limit = Disable
          Domain User Template:
          Idle-cut = Disable
          Self-service = Disable
          Messenger Time = Disable
        Default Domain Name: system
        Total 1 domain(s). 1 listed.

        [4500]display connection
        Index=0 ,Username=admin@system
          IP=0.0.0.0
        Index=2 ,Username=user@system
          IP=xxx.xxx.xxx.xxx
        On Unit 1:Total 2 connections matched, 2 listed.
        Total 2 connections matched, 2 listed.
        [4500]

    And here is DISP RADIUS STATISTICS:

        [4500] %Apr 2 00:23:39:957 2000 4500 SHELL/5/LOGIN:- 1 - ecajigas(xxx.xxx.xxx.xxx) in unit1 login
        disp radius stat
        state statistic(total=1048):
        DEAD=1046 AuthProc=0 AuthSucc=0
        AcctStart=0 RLTSend=0 RLTWait=2
        AcctStop=0 OnLine=2 Stop=0 StateErr=0
        Received and Sent packets statistic:
        Unit 1........................................
        Sent PKT total :4 Received PKT total:1
        Resend Times Resend total
        1 1
        2 1
        Total 2
        RADIUS received packets statistic:
        Code= 2,Num=1 ,Err=0
        Code= 3,Num=0 ,Err=0
        Code= 5,Num=0 ,Err=0
        Code=11,Num=0 ,Err=0
        Running statistic:
        RADIUS received messages statistic:
        Normal auth request , Num=1 , Err=0 , Succ=1
        EAP auth request , Num=0 , Err=0 , Succ=0
        Account request , Num=1 , Err=0 , Succ=1
        Account off request , Num=0 , Err=0 , Succ=0
        PKT auth timeout , Num=0 , Err=0 , Succ=0
        PKT acct_timeout , Num=3 , Err=1 , Succ=2
        Realtime Account timer , Num=0 , Err=0 , Succ=0
        PKT response , Num=1 , Err=0 , Succ=1
        EAP reauth_request , Num=0 , Err=0 , Succ=0
        PORTAL access , Num=0 , Err=0 , Succ=0
        Update ack , Num=0 , Err=0 , Succ=0
        PORTAL access ack , Num=0 , Err=0 , Succ=0
        Session ctrl pkt , Num=0 , Err=0 , Succ=0
        RADIUS sent messages statistic:
        Auth accept , Num=0
        Auth reject , Num=0
        EAP auth replying , Num=0
        Account success , Num=0
        Account failure , Num=0
        Cut req , Num=0
        RecError_MSG_sum:0 SndMSG_Fail_sum :0
        Timer_Err :0 Alloc_Mem_Err :0
        State Mismatch :0 Other_Error :0
        No-response-acct-stop packet =0
        Discarded No-response-acct-stop packet for buffer overflow =0

    The other problem is that when the RADIUS server is not available, I cannot log in to the switch at all. The switch has three local accounts, but none of them works. How can I tell the switch to use the local accounts in case the RADIUS service is not available?
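    On Comware-based 3Com switches, the usual answer to that last question (offered as a sketch, not verified against this exact 4500 firmware) is to list local as a fallback method after the RADIUS scheme in the domain:

        [4500]domain system
        [4500-isp-system]scheme radius-scheme system local

    With that in place, the local-user accounts are consulted only when the RADIUS server does not respond; an explicit reject from RADIUS still denies the login.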


  • Making sense of S.M.A.R.T

    - by James
    First of all, I think everyone knows that hard drives fail a lot more often than the manufacturers would like to admit. Google did a study indicating that certain raw data attributes reported in a drive's S.M.A.R.T. status can have a strong correlation with the future failure of the drive:

        We find, for example, that after their first scan error, drives are 39 times more likely to fail within 60 days than drives with no such errors. First errors in reallocations, offline reallocations, and probational counts are also strongly correlated to higher failure probabilities. Despite those strong correlations, we find that failure prediction models based on SMART parameters alone are likely to be severely limited in their prediction accuracy, given that a large fraction of our failed drives have shown no SMART error signals whatsoever.

    Seagate seems to be trying to obscure this information about their drives by claiming that only their software can accurately determine the status of their drives; and by the way, their software will not tell you the raw data values for the S.M.A.R.T. attributes. Western Digital has made no such claim to my knowledge, but their status-reporting tool does not appear to report raw data values either.

    I've been using HDTune and smartctl from smartmontools to gather the raw data values for each attribute, and I've found that I am indeed comparing apples to oranges for certain attributes. For example, most Seagate drives report many millions of read errors, while Western Digital drives show 0 read errors 99% of the time. I've also found that Seagate reports many millions of seek errors, while Western Digital always seems to report 0.

    Now for my question: how do I normalize this data? Is Seagate producing millions of errors while Western Digital produces none? Wikipedia's article on S.M.A.R.T. status says that manufacturers have different ways of reporting this data. Here is my hypothesis: I think I found a way to normalize (is that the right term?) the data. Seagate drives have an additional attribute that Western Digital drives do not have (Hardware ECC Recovered). When you subtract the ECC Recovered count from the read error count, you'll probably end up with 0; this seems to be equivalent to Western Digital's reported read error count. That would mean Western Digital only reports read errors that it cannot correct, while Seagate counts up all read errors and also tells you how many of those it was able to fix. I had a Seagate drive where the ECC Recovered count was less than the read error count, and I noticed that many of my files were becoming corrupt; this is how I came up with my hypothesis. The millions of seek errors that Seagate produces are still a mystery to me. Please confirm or correct my hypothesis if you have additional information.
    Here is the SMART status of my Western Digital drive, just so you can see what I'm talking about:

        james@ubuntu:~$ sudo smartctl -a /dev/sda
        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        Home page is http://smartmontools.sourceforge.net/

        === START OF INFORMATION SECTION ===
        Device Model:     WDC WD1001FALS-00E3A0
        Serial Number:    WD-WCATR0258512
        Firmware Version: 05.01D05
        User Capacity:    1,000,204,886,016 bytes
        Device is:        Not in smartctl database [for details use: -P showall]
        ATA Version is:   8
        ATA Standard is:  Exact ATA specification draft version not indicated
        Local Time is:    Thu Jun 10 19:52:28 2010 PDT
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        === START OF READ SMART DATA SECTION ===
        SMART overall-health self-assessment test result: PASSED

        SMART Attributes Data Structure revision number: 16
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate     0x002f 200   200   051    Pre-fail Always  -           0
          3 Spin_Up_Time            0x0027 179   175   021    Pre-fail Always  -           4033
          4 Start_Stop_Count        0x0032 100   100   000    Old_age  Always  -           270
          5 Reallocated_Sector_Ct   0x0033 200   200   140    Pre-fail Always  -           0
          7 Seek_Error_Rate         0x002e 200   200   000    Old_age  Always  -           0
          9 Power_On_Hours          0x0032 098   098   000    Old_age  Always  -           1468
         10 Spin_Retry_Count        0x0032 100   100   000    Old_age  Always  -           0
         11 Calibration_Retry_Count 0x0032 100   100   000    Old_age  Always  -           0
         12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           262
        192 Power-Off_Retract_Count 0x0032 200   200   000    Old_age  Always  -           46
        193 Load_Cycle_Count        0x0032 200   200   000    Old_age  Always  -           223
        194 Temperature_Celsius     0x0022 105   102   000    Old_age  Always  -           42
        196 Reallocated_Event_Count 0x0032 200   200   000    Old_age  Always  -           0
        197 Current_Pending_Sector  0x0032 200   200   000    Old_age  Always  -           0
        198 Offline_Uncorrectable   0x0030 200   200   000    Old_age  Offline -           0
        199 UDMA_CRC_Error_Count    0x0032 200   200   000    Old_age  Always  -           0
        200 Multi_Zone_Error_Rate   0x0008 200   200   000    Old_age  Offline -           0
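    To make the hypothesis concrete, here is a small sketch (mine, not part of the question) that parses saved smartctl -A output and computes read errors minus Hardware ECC Recovered. The attribute names are the ones smartctl prints; a missing ECC attribute (as on the WD drive above) is treated as zero. Note the caveat that Seagate's Raw_Read_Error_Rate raw value may not be a plain error count, which is exactly what the hypothesis is trying to pin down:

        using System;
        using System.IO;
        using System.Linq;

        class SmartNormalizer
        {
            // Pull the RAW_VALUE column (last field) for a named attribute; 0 if absent.
            static long RawValue(string[] lines, string attribute)
            {
                var line = lines.FirstOrDefault(l => l.Contains(attribute));
                if (line == null) return 0;
                var fields = line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
                return long.Parse(fields[fields.Length - 1]);
            }

            static void Main(string[] args)
            {
                // Expects the output of "smartctl -A /dev/sdX" saved to a file.
                var lines = File.ReadAllLines(args[0]);
                long readErrors   = RawValue(lines, "Raw_Read_Error_Rate");
                long eccRecovered = RawValue(lines, "Hardware_ECC_Recovered");
                // Hypothesis: uncorrected errors = raw read errors - ECC-recovered errors,
                // which should roughly match WD's always-zero read error count on healthy drives.
                Console.WriteLine("Uncorrected read errors: {0}", readErrors - eccRecovered);
            }
        }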


  • How to transfer data between two networks efficiently

    - by Tono Nam
    I would like to transfer files between two places over the internet. Right now I have a VPN and I am able to browse, download and transfer files. So my question is not really how to transfer the files; instead, I would like to use the most efficient approach, because the two places constantly share a lot of data. The reason why I want to get rid of the VPN is that it is too slow, and having a high upload speed is very expensive/impossible at residential locations, so I would like to use a different approach.

    I was thinking about using a program such as http://www.dropbox.com. The problem with Dropbox is that it only gives you 2 GB of storage for free. I think the deals they offer are OK, and I might be willing to pay to get that increase in speed, but I am concerned with the speed of transferring data: Dropbox uploads the file to their server and then sends it from the server to the other location, and I would like it even faster.

    Anyway, I was thinking: why not create a program myself? This is the algorithm I have in mind; let me know if it sounds too crazy (remember, my goal is to transfer files as fast as possible). Things I will use in this algorithm:

    - Server S on the internet (fast download and upload speed; I pay to host a website and some services there, and I want to take advantage of that)
    - Client A at location 1
    - Client B at location 2

    So let's say at location 1, 20 large files are created and need to be transferred to location 2. Client A compresses the files with the highest compression ratio possible. Client A then starts sending data via UDP to client B; because I am using UDP, I will include a sequence number in each packet. Server S helps speed things up: for example, every time a packet is lost, server S can inform client A that it needs to resend that packet.

    Anyway, I think this approach will increase the transfer rate. I do not know whether it is possible to start sending data while it is still being compressed, or to start decompressing data before all of it has been received. Maybe it would be faster to start sending the files right away without compressing; if I knew I would always be sending large text files, I would obviously use compression, but I need this as a general algorithm.

    So I guess my questions are: could using UDP over TCP increase performance, with an extra server keeping track of lost packets? And how should I compress files before sending? Compressing a 1 GB file with the highest compression ratio takes about an hour, and I would like to take advantage of that time by sending data while it is being compressed.
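    For the sequence-number idea, a minimal sketch of the sender side in C# might look like the following. The chunk size, port and address are made-up values; a real implementation would also need the receiver, the retransmission path through server S, and some form of pacing or congestion control, which is a large part of what TCP already provides:

        using System;
        using System.IO;
        using System.Net;
        using System.Net.Sockets;

        class UdpFileSender
        {
            const int ChunkSize = 1400; // stay under a typical MTU (assumed value)

            static void Main()
            {
                using (var udp = new UdpClient())
                {
                    // Hypothetical endpoint for client B
                    var receiver = new IPEndPoint(IPAddress.Parse("203.0.113.10"), 9000);
                    byte[] file = File.ReadAllBytes("bigfile.bin");
                    uint sequence = 0;

                    for (int offset = 0; offset < file.Length; offset += ChunkSize)
                    {
                        int size = Math.Min(ChunkSize, file.Length - offset);
                        // Packet layout: 4-byte big-endian sequence number, then payload.
                        byte[] packet = new byte[4 + size];
                        packet[0] = (byte)(sequence >> 24);
                        packet[1] = (byte)(sequence >> 16);
                        packet[2] = (byte)(sequence >> 8);
                        packet[3] = (byte)sequence;
                        Buffer.BlockCopy(file, offset, packet, 4, size);
                        udp.Send(packet, packet.Length, receiver);
                        sequence++;
                    }
                }
            }
        }

    The hard part, as the question anticipates, is the lost-packet reporting loop via server S: without it, and without pacing, this sender will simply flood the link, which is one reason a hand-rolled UDP protocol rarely beats a well-tuned TCP transfer in practice.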


  • How to set up a centralized backup server with lots of offsite workstations, intermittent internet connectivity, and stubborn users?

    - by Zac B
    This might be an impossible question. Context: we have around 1000 users across a bunch of computers. We have a centralized office where 900 of the users work most of the time. Most of the computers are laptops; they are very frequently coming on and off the network for hours at a time, and users often take their computers home and do lots of work from home. In addition, there are a handful of users who work elsewhere in the country and are offline (no internet connection whatsoever) for more than half of the time they use their machines. All of the machines run Windows 7/XP.

    Problem: people are always losing data. One day someone accidentally deletes a bunch of files; the next day someone else installs a bad driver or tries to mess with something in system32 and needs a personal data backup and a reinstall of Windows. Because so many of our business operations are done without an internet connection, and because computers come on- and offline so frequently, it's unfeasible to make users use network storage for all of their data. We tried giving them Dropboxes, and they stored their files elsewhere. We bought and deployed Altiris, and they uninstalled it and blamed us when they couldn't get back files they had accidentally deleted while offline, not having taken a backup in months. We tried teaching them backup best practices and using scheduled sync tools to upload things to the network drives, and they turned them off because they "looked like viruses". It doesn't help that many of these users are pretty high up in the business and are not amenable to any sort of "you need to do something regularly because we say so" solution.

    Question: other than finding another job where IT is treated differently and users are willing to follow best practices, how would people recommend I implement a file backup solution that supports the following?

    - Backs up to a centralized server over LAN or WAN whenever a network link becomes available, or on a schedule.
    - Supports interrupted/resumed backups (and hopefully file-delta-only backups), since connections to the network (WAN or LAN) are often slow and only open for half an hour or so.
    - Supports relatively rapid, "I accidentally deleted the TPS reports! Oh no!" single-file recovery, ideally administered from the central backup server rather than the client PC.
    - Supports local-to-local file-delta backup on a schedule, so that users without a network connection for a few days can still retrieve accidental deletions or whatnot. Ideally, the locally stored backups would be pushed up to the server whenever a network link is available.
    - Isn't configurable on the clients without certain credentials, because the CFOs (who won't give up their admin rights on the domain) will disable it if they can.
    - Backs up the entire hard drive. There are people who are self-righteous about storing things in C:\, or in the recycle bin, or in the C:\Windows dir (yes, I know).

    I'm fine integrating multiple products/solutions, or scripting different programs together myself (I'm a somewhat competent programmer), but I've been drawing a blank on where to start. Dropbox is folder-specific, Altiris doesn't cope with LAN outages or interrupted/resumed backups, and Volume Shadow Copy is awesome for a local-to-local solution, but I don't know how to push days of stored shadow copies up to a server in a two-hour window of network access. The company is fine with spending decent money on this: thousands (USD) on a server, and hundreds on clients, if necessary.
I want to emphasize that this isn't a shopping list request. While I wish there was a program out there that did what I want, I've looked pretty hard, and not found anything that fits the bill. Instead, I'm hoping for ideas on where to start hacking things together from scratch/from different technologies to make something stable that works. Cheers!


  • Can I get advice on my nginx configuration (as a proxy in front of Jira and Confluence)?

    - by Nate
    I was wondering if I could get some advice on my nginx configuration. The config seems to be working, but I'm unsure if I'm doing everything properly. The basic idea is to have a Jira and a Confluence server (in separate Tomcat instances) running on the same machine, with nginx in front to handle SSL for both. I want only SSL connections to be made to Jira/Confluence. Jira is running on 127.0.0.1:9090 and Confluence on 127.0.0.1:8080. Here is my nginx.conf; any advice or tips would be greatly appreciated.

        user nginx;
        worker_processes 1;

        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            log_format main '$remote_addr - $remote_user [$time_local] $request '
                            '"$status" $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            access_log /var/log/nginx/access.log main;

            sendfile on;
            #tcp_nopush on;

            #keepalive_timeout 0;
            keepalive_timeout 65;

            #gzip on;

            # Load config files from the /etc/nginx/conf.d directory
            include /etc/nginx/conf.d/*.conf;

            # Our self-signed cert
            ssl_certificate /etc/ssl/certs/fissl.crt;
            ssl_certificate_key /etc/ssl/private/fissl.key;

            # redirect non-ssl Confluence to ssl
            server {
                listen 80;
                server_name confluence.example.com;
                rewrite ^(.*) https://confluence.example.com$1 permanent;
            }

            # redirect non-ssl Jira to ssl
            server {
                listen 80;
                server_name jira.example.com;
                rewrite ^(.*) https://jira.example.com$1 permanent;
            }

            #
            # The Confluence server
            #
            server {
                listen 443;
                server_name confluence.example.com;
                ssl on;

                access_log /var/log/nginx/confluence.access.log main;
                error_log /var/log/nginx/confluence.error.log;

                location / {
                    proxy_pass http://127.0.0.1:8080;
                    proxy_set_header X-Forwarded-Proto https;
                    proxy_set_header Host $http_host;
                }

                error_page 404 /404.html;
                location = /404.html {
                    root /usr/share/nginx/html;
                }

                # redirect server error pages to the static page /50x.html
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root /usr/share/nginx/html;
                }
            }

            #
            # The Jira server
            #
            server {
                listen 443;
                server_name jira.example.com;
                ssl on;

                access_log /var/log/nginx/jira.access.log main;
                error_log /var/log/nginx/jira.error.log;

                location / {
                    proxy_pass http://127.0.0.1:9090/;
                    proxy_set_header X-Forwarded-Proto https;
                    proxy_set_header Host $http_host;
                }

                error_page 404 /404.html;
                location = /404.html {
                    root /usr/share/nginx/html;
                }

                # redirect server error pages to the static page /50x.html
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root /usr/share/nginx/html;
                }
            }
        }
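    One common addition worth considering (a suggestion, not something from the question): with the config above, Jira and Confluence only ever see 127.0.0.1 as the peer address, so their logs and "last login from" displays lose the real client IP. The usual directives to pass it through look like this, sketched for the Confluence location block:

        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header Host $http_host;
            # Forward the real client address so the Tomcat apps can log it
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    Note that Tomcat only honors these headers if it is configured to (for example with RemoteIpValve), so this is half of the change.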


  • Performance issues when using SSD for a developer notebook (WAMP/LAMP stack)?

    - by András Szepesházi
    I'm a web application developer using my notebook as a standalone development environment (WAMP stack). I just switched from a Core 2 Duo Vista 32-bit notebook with 2 GB RAM and a SATA HDD to an i5-2520M Win7 64-bit notebook with 4 GB RAM and a 128 GB SSD (Corsair P3 128). My initial experience was what I expected: fast boot, quick loading of all the applications (Eclipse now takes 5 seconds as opposed to 30 s on my old notebook), an overall great experience.

    Then I started to build up my development stack, both as LAMP (using VirtualBox with a Debian guest) and WAMP (Windows-native Apache + MySQL + PHP), because I wanted to compare the two. This still all worked great, until I started to pull my projects into these stacks. And here came the nasty surprise: one of those projects produced much worse response times than on my old notebook (that was true for both the VirtualBox and WAMP stacks). Apache, PHP and MySQL configurations were practically identical in all environments. I started to do a lot of benchmarking and profiling, and here is what I've found:

    1. All general benchmarks (Performance Test 7.0, HDTune Pro, wPrime2 and some more) gave a big advantage to the new notebook. Nothing surprising here.

    2. Disk-specific tests showed that read/write operations peaked around 380 MB/s and 160 MB/s for the SSD, and all the different-sized block operations also performed very well.

    3. Apache performance benchmarking with Apache Benchmark for a small static HTML file (10 concurrent threads, 500 iterations):

        Old notebook: min 47 ms, median 111 ms, max 156 ms
        New WAMP stack: min 71 ms, median 135 ms, max 296 ms
        New LAMP stack (in VirtualBox): min 6 ms, median 46 ms, max 175 ms

    Right here I don't get why the native WAMP stack performed so badly, but at least the LAMP environment delivered the expected speed.

    4. Apache performance measurement for non-cached PHP content. The PHP script runs a loop of 1000 iterations, generating sha1(uniqid()) inside. Again, 10 concurrent threads and 500 iterations were used for the benchmark:

        Old notebook: min 0 ms, median 39 ms, max 218 ms
        New WAMP stack: min 20 ms, median 61 ms, max 186 ms
        New LAMP stack (in VirtualBox): min 124 ms, median 704 ms, max 2463 ms

    What the hell? The new LAMP stack performed miserably, and even the new native WAMP stack was outperformed by the old notebook.

    5. PHP + MySQL test. The test consists of connecting to a database and reading a single record from a table using an INNER JOIN on 3 more (indexed) tables, repeated 100 times within a loop. The databases were identical. 10 concurrent threads and 100 iterations were used for the benchmark:

        Old notebook: min 1201 ms, median 1734 ms, max 3728 ms
        New WAMP stack: min 367 ms, median 675 ms, max 1893 ms
        New LAMP stack (in VirtualBox): min 1410 ms, median 3659 ms, max 5045 ms

    And the same test with concurrency set to 1 (instead of 10):

        Old notebook: min 1201 ms, median 1261 ms, max 1357 ms
        New WAMP stack: min 399 ms, median 483 ms, max 539 ms
        New LAMP stack (in VirtualBox): min 285 ms, median 348 ms, max 444 ms

    Strictly for my purposes, as I'm using a self-contained development environment (= low concurrency), I could be satisfied with the second test's results, though I have no idea why the VirtualBox environment performed so badly at higher concurrency.

    6. Finally, I performed a test of including many PHP files. The application I mentioned at the beginning, the one that was performing so badly, has a heavy bootstrap: it loads hundreds of small library and configuration files while initializing. So this test does nothing but include about 100 files.
    Concurrency set to 1, 100 iterations:

        Old notebook: min 140 ms, median 168 ms, max 406 ms
        New WAMP stack: min 434 ms, median 488 ms, max 604 ms
        New LAMP stack (in VirtualBox): min 413 ms, median 1040 ms, max 1921 ms

    Even if I take into account that VirtualBox reached those files via shared folders, which slows things down a bit, I still don't see how the old notebook could so heavily outperform both new configurations. And I think this is the real root of the slow performance, as the application uses even more includes, and the whole bootstrap occurs several times within a page request (for each ajax call, for example).

    To sum it up, here I am with a brand-new high-performance notebook that loads the same page in 20 seconds that my old notebook loads in 5-7 seconds. Needless to say, I'm not a very happy person right now. Why do you think I'm seeing these poor performance values? What are my options to remedy this situation?
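    For anyone wanting to reproduce numbers like these: Apache Benchmark runs of the kind described above would look roughly as follows (the URLs and file names are placeholders; -c sets the concurrency and -n the total number of requests):

        # Static-file test: 10 concurrent clients, 500 requests total
        ab -c 10 -n 500 http://localhost/small.html

        # Non-cached PHP test against the sha1(uniqid()) loop script
        ab -c 10 -n 500 http://localhost/bench.php

    ab reports min/median/max latencies in its "Connection Times" table, which appears to be where the figures quoted in the question come from.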


  • Is it possible to write C# code as below and send email using a network in a different country?

    - by kedar karthik
    Is it possible to write C# code as below and send email using a network in a different country? MSExchangeWebServiceURL = mail.something.com/ews/exchange.asmx; it's a web service URL (sorry, correcting myself). This works great when I run the same code from my home network, my friends' home networks, anywhere around here, but when I run it from my client's location in Colombia, it fails. I have a valid user name and password on that Exchange server. Is there any configuration that I can set to achieve this? BTW, the code below works when I run it within the office network and within any home network; I have tried at least 5 friends' networks in Plano, Texas. I want this code to work when run from any network in another country. My client in Colombia can connect to the web service using a browser, with the same user name and password, but when I run the code, it is not able to connect to our web service.

        String cMSExchangeWebServiceURL = (String)System.Configuration.ConfigurationSettings.AppSettings["MSExchangeWebServiceURL"];
        String cEmail = (String)System.Configuration.ConfigurationSettings.AppSettings["Cemail"];
        String cPassword = (String)System.Configuration.ConfigurationSettings.AppSettings["Cpassword"];
        String cTo = (String)System.Configuration.ConfigurationSettings.AppSettings["CTo"];

        ExchangeServiceBinding esb = new ExchangeServiceBinding();
        esb.Timeout = 1800000;
        esb.AllowAutoRedirect = true;
        esb.UseDefaultCredentials = false;
        esb.Credentials = new NetworkCredential(cEmail, cPassword);
        esb.Url = cMSExchangeWebServiceURL;

        // Accept any server certificate
        ServicePointManager.ServerCertificateValidationCallback +=
            delegate(object sender1, X509Certificate certificate, X509Chain chain, SslPolicyErrors sslPolicyErrors)
            {
                return true;
            };

        // Create a CreateItem request object
        CreateItemType request = new CreateItemType();

        // Set up the request:
        // indicate that we only want to send the message; no copy will be saved.
        request.MessageDisposition = MessageDispositionType.SendOnly;
        request.MessageDispositionSpecified = true;

        // Create a message object and set its properties
        MessageType message = new MessageType();
        message.Subject = subject;
        message.Body = new TestOutgoingEmailServer.com.cogniti.mail1.BodyType();
        message.Body.BodyType1 = BodyTypeType.HTML;
        message.Body.Value = body;

        message.ToRecipients = new EmailAddressType[3];
        message.ToRecipients[0] = new EmailAddressType();
        //message.ToRecipients[1] = new EmailAddressType();
        //message.ToRecipients[2] = new EmailAddressType();
        message.ToRecipients[0].EmailAddress = "[email protected]";
        message.ToRecipients[0].RoutingType = "SMTP";

        //message.CcRecipients = new EmailAddressType[1];
        //message.CcRecipients[0] = new EmailAddressType();
        //message.CcRecipients[0].EmailAddress = toEmailAddress.ElementAt(1).ToString();
        //message.CcRecipients[0].RoutingType = "SMTP";

        //There are some more properties in the MessageType object;
        //you can set them all according to your requirements.

        // Construct the array of items to send
        request.Items = new NonEmptyArrayOfAllItemsType();
        request.Items.Items = new ItemType[1];
        request.Items.Items[0] = message;

        // Call the CreateItem EWS method.
        CreateItemResponseType response = esb.CreateItem(request);


  • How to set up stunnel so that Gmail can use my own SMTP server to send messages

    - by igorhvr
    I am trying to set up Gmail to send messages using my own SMTP server. I am doing this by running stunnel in front of a non-SSL-enabled server. I can connect to my server just fine with my own SSL-enabled SMTP client. Unfortunately, however, Gmail seems to be unable to connect through my stunnel port. Gmail seems to simply close the connection right after it is established - I get a "SSL socket closed on SSL_read" in my server logs. On the Gmail side, I get a "We are having trouble authenticating with your other mail service. Please try changing your SSL settings. If you continue to experience difficulties, please contact your other email provider for further instructions." message. Any help / tips on figuring this out will be appreciated. My certificate is self-signed - could this perhaps be related to the problem I am experiencing? I pasted the entire SSL session (logs from my server) below.

    2011.01.02 16:56:20 LOG7[20897:3082491584]: Service ssmtp accepted FD=0 from 209.85.210.171:46858
    2011.01.02 16:56:20 LOG7[20897:3082267504]: Service ssmtp started
    2011.01.02 16:56:20 LOG7[20897:3082267504]: FD=0 in non-blocking mode
    2011.01.02 16:56:20 LOG7[20897:3082267504]: Option TCP_NODELAY set on local socket
    2011.01.02 16:56:20 LOG7[20897:3082267504]: Waiting for a libwrap process
    2011.01.02 16:56:20 LOG7[20897:3082267504]: Acquired libwrap process #0
    2011.01.02 16:56:20 LOG7[20897:3082267504]: Releasing libwrap process #0
    2011.01.02 16:56:20 LOG7[20897:3082267504]: Released libwrap process #0
    2011.01.02 16:56:20 LOG7[20897:3082267504]: Service ssmtp permitted by libwrap from 209.85.210.171:46858
    2011.01.02 16:56:20 LOG5[20897:3082267504]: Service ssmtp accepted connection from 209.85.210.171:46858
    2011.01.02 16:56:20 LOG7[20897:3082267504]: FD=1 in non-blocking mode
    2011.01.02 16:56:20 LOG6[20897:3082267504]: connect_blocking: connecting 127.0.0.1:25
    2011.01.02 16:56:20 LOG7[20897:3082267504]: connect_blocking: s_poll_wait 127.0.0.1:25: waiting 10 seconds
    2011.01.02 16:56:20 LOG5[20897:3082267504]: connect_blocking: connected 127.0.0.1:25
    2011.01.02 16:56:20 LOG5[20897:3082267504]: Service ssmtp connected remote server from 127.0.0.1:3701
    2011.01.02 16:56:20 LOG7[20897:3082267504]: Remote FD=1 initialized
    2011.01.02 16:56:20 LOG7[20897:3082267504]: Option TCP_NODELAY set on remote socket
    2011.01.02 16:56:20 LOG5[20897:3082267504]: Negotiations for smtp (server side) started
    2011.01.02 16:56:20 LOG7[20897:3082267504]: RFC 2487 not detected
    2011.01.02 16:56:20 LOG5[20897:3082267504]: Protocol negotiations succeeded
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): before/accept initialization
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 read client hello A
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 write server hello A
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 write certificate A
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 write certificate request A
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 flush data
    2011.01.02 16:56:20 LOG5[20897:3082267504]: CRL: verification passed
    2011.01.02 16:56:20 LOG5[20897:3082267504]: VERIFY OK: depth=2, /C=US/O=Equifax/OU=Equifax Secure Certificate Authority
    2011.01.02 16:56:20 LOG5[20897:3082267504]: CRL: verification passed
    2011.01.02 16:56:20 LOG5[20897:3082267504]: VERIFY OK: depth=1, /C=US/O=Google Inc/CN=Google Internet Authority
    2011.01.02 16:56:20 LOG5[20897:3082267504]: CRL: verification passed
    2011.01.02 16:56:20 LOG5[20897:3082267504]: VERIFY OK: depth=0, /C=US/ST=California/L=Mountain View/O=Google Inc/CN=smtp.gmail.com
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 read client certificate A
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 read client key exchange A
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 read certificate verify A
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 read finished A
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 write change cipher spec A
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 write finished A
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 flush data
    2011.01.02 16:56:20 LOG7[20897:3082267504]: 1 items in the session cache
    2011.01.02 16:56:20 LOG7[20897:3082267504]: 0 client connects (SSL_connect())
    2011.01.02 16:56:20 LOG7[20897:3082267504]: 0 client connects that finished
    2011.01.02 16:56:20 LOG7[20897:3082267504]: 0 client renegotiations requested
    2011.01.02 16:56:20 LOG7[20897:3082267504]: 1 server connects (SSL_accept())
    2011.01.02 16:56:20 LOG7[20897:3082267504]: 1 server connects that finished
    2011.01.02 16:56:20 LOG7[20897:3082267504]: 0 server renegotiations requested
    2011.01.02 16:56:20 LOG7[20897:3082267504]: 0 session cache hits
    2011.01.02 16:56:20 LOG7[20897:3082267504]: 0 external session cache hits
    2011.01.02 16:56:20 LOG7[20897:3082267504]: 0 session cache misses
    2011.01.02 16:56:20 LOG7[20897:3082267504]: 0 session cache timeouts
    2011.01.02 16:56:20 LOG6[20897:3082267504]: SSL accepted: new session negotiated
    2011.01.02 16:56:20 LOG6[20897:3082267504]: Negotiated ciphers: RC4-MD5 SSLv3 Kx=RSA Au=RSA Enc=RC4(128) Mac=MD5
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL socket closed on SSL_read
    2011.01.02 16:56:20 LOG7[20897:3082267504]: Socket write shutdown
    2011.01.02 16:56:20 LOG5[20897:3082267504]: Connection closed: 167 bytes sent to SSL, 37 bytes sent to socket
    2011.01.02 16:56:20 LOG7[20897:3082267504]: Service ssmtp finished (0 left)
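    (For reference, the stunnel service definition implied by these logs would look roughly like the sketch below. This is an assumption reconstructed from the log lines - the service name ssmtp and the backend 127.0.0.1:25 come from the logs, while the accept port and certificate path are hypothetical.)

        ; stunnel.conf (hypothetical path and port)
        cert = /etc/stunnel/mail.pem      ; the self-signed certificate mentioned above

        [ssmtp]
        accept = 465                      ; SSL port Gmail connects to (assumed)
        connect = 127.0.0.1:25            ; plain SMTP backend, matches "connecting 127.0.0.1:25" in the logs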

    Read the article

  • What NAS setup for two-way syncing over the internet?

    - by Jamse
    I have family living a few hours away and have a lot of files that I would like to share - especially lots of folders of digital photos, but also documents etc. - partially so they can see them, partially so I can have access when I visit them and partially for backup / redundancy purposes. My current hard drives on my main machine are getting pretty full anyway, and I have a MythTV box where my music is currently stored, so I was thinking of getting a NAS anyway. And at the other end my family have a few computers, so they would probably benefit from a NAS too. My general idea (though I'm willing to shift on this if there are any bright ideas about other ways of achieving my objectives) is to get a matching pair of NASs and have them sync over the internet. (To cut down on bandwidth use I would get them in sync locally to start with.) Having read around as best I can, it seems that syncing over the internet is generally only a feature on quite high-end units. However, I have seen that QNAP seem to feature this on their TS-110 and TS-210 units, which might work (they call it "remote replication"). They seem pretty reasonably priced for what they are, but of course with buying 2 of them and then adding the drives (say 1TB or 2TB each) I'd be looking at about £400 total. So, I'm looking for recommendations really. I don't want to spend more than the QNAPs would cost me, but any other ideas would be most appreciated. I am comfortable with technology and tinkering around, but I don't have as much time for that as I would like, so I guess I would favour solutions that require less tinkering rather than more (even though that's less fun!). Any thoughts would be welcome, as would any comments from people who have used the QNAP boxes for this. Thanks in advance. Some specifications:

    - Two-way syncing. Changes made at either end should be synced to the other. There shouldn't be one unit that is effectively a read-only mirror of the other.
    - Not real time. The syncing doesn't need to be real time - if it updated, say, daily overnight that would be fine.
    - Set and forget. I would prefer minimal user interaction once set up - it would be great if syncs were scheduled and automatic.
    - OS independence. I am running Windows XP plus an Ubuntu-based MythTV box. At the other end there are Windows 7 and Windows XP machines, plus a networked TV set top box which I think can play files off the network.
    - Machine independence. I would favour a system that is self-contained, i.e. not reliant on any particular PC being switched on. If the system had enough else going for it I could perhaps work around it at this end, where I only have one PC that's used as such, but it would be harder at the other end, where there are at least two PCs that might be accessing the files.
    - Notifications. I guess things like getting an email notification if the syncing fell over for any reason would be useful, though it's not a deal breaker.

    Update: I've been digging some more and it looks like QNAP's Remote Replication function is actually just rsync, so it is only really suitable for one-way syncing. I've posted on their forum to double-check, but I think that's the case. In which case, the focus of my question is now either: do any reasonably-priced NASs support bidirectional syncing over the internet?, or has anyone had any luck installing onto NASs for this purpose? (Also, updated question to clarify that I'm after two-way syncing.)

    Read the article

  • Windows 7, HTTPS WebDav: Asks for password twice and fails. Any workarounds?

    - by AutoDMC
    Howdy. I have a Dav server running with PHP SabreDav (code.google.com/p/sabredav/wiki/Windows) on Cherokee at an HTTPS secured URL. It's set to use https, and uses Digest Authentication. I can log in with multiple browsers and a few third party clients (BitKinex and Java AnyClient can connect and browse as well, caveats below). However, when attempting to log in with Windows 7 (surprise, surprise), it asks for my password twice, then tells me that my folder is invalid. I have verified that the server is using Digest authentication. I've verified multiple times that third party software can connect. I even went out and bought a GoDaddy SSL certificate so my SSL wouldn't be self signed anymore. I've applied the registry hacks here: support.microsoft.com/kb/943280 (Note that the article says the "fix" already exists for Windows 7, I just need magical registry hax to get it to work) I've applied the registry hacks here: support.microsoft.com/kb/941050 I've applied the registry hacks here: support.microsoft.com/kb/841215 (Supposedly allows Basic Auth, which shouldn't apply, but why not?) All to no avail; Windows continues to ask for my password twice, then state that "The folder you entered does not appear to be valid. Please choose another." Try the command line? Sure: I've attempted to access with NET USE "https://dav.example.com/" password /USER:me (System error 59) I've attempted to access with NET USE "https://dav.example.com/" (System error 1790) I've attempted to access with NET USE "https://dav.example.com/subdir/" password /USER:me (System error 59) I've attempted to access with NET USE "https://dav.example.com/subdir/" (System error 1790) For good luck: ping dav.example.com ... works. And again, web browsers can access the share just fine, so can third party tools. Best I can tell at this point is "HAHA, NO WEBDAV FOR YOU ON WINDOWS 7" which would be fine except everyone who will be using this application... uses Windows 7. And most are not as persistent or pugnacious as I am. I feel like I've burned through every random suggestion I've found anywhere in the first 10 pages of Google on every search term I can think of. Any ideas? I need it to be Webdav, I need it to be over HTTPS, and I really do need a method to access it from Windows 7. EXTRA DETAIL: However, the "third party" programs I've tried have either been buggy, incomplete, or have silly ... "glitches." For example, BitKinex seems to fixate on any http error codes sent, so if there's a glitch reading a directory, BAM, that directory is always listed empty. Long directory listings also show up as blank, even though the transfer panel shows that directory listing is still taking place. In any case, BitKinex is useless for development purposes for the reasons above. And besides, I'm building this for people other than myself, people who will want to get this dav share working "the regular way."
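    (For reference, the KB 841215 "Basic Auth" hack referenced above boils down to a single registry value on the WebClient service - this is a reconstruction of the documented setting, not something quoted in the question, so treat it as an assumption and apply at your own risk:)

        reg add HKLM\SYSTEM\CurrentControlSet\Services\WebClient\Parameters /v BasicAuthLevel /t REG_DWORD /d 2 /f

    (A value of 2 allows Basic authentication over both SSL and non-SSL connections; restart the WebClient service afterwards.)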

    Read the article

  • Rosewill RSV-S5 and its transfer speeds

    - by DoomStone
    I have just bought a Rosewill RSV-S5 and installed 5 x 1.5 TB Western Digital Green disks in it. After that I created a RAID 5 array across them all with the software that came with the hardware. Now the RAID itself works fine, but it is SLOW. I can only obtain a maximum of 25 MB/s, and if SABnzbd+ is downloading at 5 MB/s it has a hard time streaming a normal DivX (700 MB) movie. Is this normal or is there something wrong?

    Edit: It should be able to handle 3 Gbps = 384 megabytes / second.

    Edit 2: As you can see, I am only downloading at 3.76 MB/s and I'm trying to watch V s02e08 (720p), but it is completely unwatchable: I can see 30 sec, and then it buffers for 20 sec.

    Edit: Other information that might be required: I'm running Windows Server 2008 R2, optimized for program performance. Windows is installed on a 60 GB SSD. I have a 50 Mb/s internet connection and a 1 Gb/s LAN, all connected with Cat6 Ethernet cables. The MCE is using a Gigabyte EP35C-DS3R motherboard with 2 GB DDR2 RAM.

    Edit 3: I have used a chunk size of 128 KB.

    Edit 4: I found this on Newegg:

    Pros: Enclosure for 5x2TB hard drive is fine. This is basically a rebranded San Digital TR5M-B product. For support Rosewill tells you to contact San Digital. No direct support from Silicon Image for the computer raid card.

    Cons: Includes computer Silicon Image 3132 raid card, extremely slow raid 5 write (our tests ~10MB/s). Compare to regular internal local drive write 30-60MB/s. We basically dumped the Sil3132 card and replaced with High Point RocketRaid 622 card for extra $69.99. Note for RR622, turn off ECRC (end to end CRC check) for card to work on IBM xserver. What took 12hrs to copy now took 2-3hrs. San Digital realized the problem and has the newer model TR5M-BP TowerRaid Plus that comes with High Point RocketRaid 622 card. Rosewill should discontinue this product and go with TR5M-BP. Could not get Silicon Image raid management software to work with complicated 2008R2 server with 10 NICs, application doesn't know how to talk to localhost port with all those NICs. No updates from Silicon Image and support from San Digital ignored. Gave up on Sil3132 card. Save yourself from a lot of headaches, get the RR622 card too if you are going to buy this product.

    Other Thoughts: The newer model is TR5M-BP TowerRaid Plus, comes with High Point RocketRaid 622 raid card for the PC instead of Silicon Image Sil3132. According to San Digital, raid 5 performance for Sil3132 is read 80MB/s, write 19MB/s, and for RR622 read 154MB/s, write 149MB/s. Our RR622 tests gave (8TB raid 5) write ~80-110MB/s; copying a 40GB file took 8 mins.

    So I have now ordered a HighPoint RocketRAID 622 2P ext SATA III and hope that it will solve my problems.

    Read the article

  • Can't configure frame relay T1 on Cisco 1760

    - by sonar
    For the past few days I've been trying to configure a data T1 via frame relay. I've been pretty unsuccessful at it, and it's been a while since I've done this, so please bear with me. The ISP provided me the following information:

    1. IP address
    2. Gateway address
    3. Encapsulation: Frame Relay
    4. DLCI 100
    5. BZ8 ESF (I think the bz8 was supposed to be b8zs)
    6. Time slots (1 to 24)

    And what I have configured up until now is the following:

        interface Serial0/0
         ip address <ip address> 255.255.255.252
         encapsulation frame-relay
         service-module t1 timeslots 1-24
         frame-relay interface-dlci 100

    sh service-module s0/0 (outputs):

        Module type is T1/fractional
        Hardware revision is 0.128, Software revision is 0.2,
        Image checksum is 0x73D70058, Protocol revision is 0.1
        Receiver has no alarms.
        Framing is ESF, Line Code is B8ZS, Current clock source is line,
        Fraction has 24 timeslots (64 Kbits/sec each), Net bandwidth is 1536 Kbits/sec.
        Last module self-test (done at startup): Passed
        Last clearing of alarm counters 00:17:17
        loss of signal : 0, loss of frame : 0, AIS alarm : 0, Remote alarm : 2, last occurred 00:10:10
        Module access errors : 0,
        Total Data (last 1 15 minute intervals):
        0 Line Code Violations, 0 Path Code Violations
        0 Slip Secs, 0 Fr Loss Secs, 0 Line Err Secs, 0 Degraded Mins
        0 Errored Secs, 0 Bursty Err Secs, 0 Severely Err Secs, 0 Unavail Secs
        Data in current interval (138 seconds elapsed):
        0 Line Code Violations, 0 Path Code Violations
        0 Slip Secs, 0 Fr Loss Secs, 0 Line Err Secs, 0 Degraded Mins
        0 Errored Secs, 0 Bursty Err Secs, 0 Severely Err Secs, 0 Unavail Secs

    sh int:

        FastEthernet0/0 is up, line protocol is up
        Hardware is PQUICC_FEC, address is 000d.6516.e5aa (bia 000d.6516.e5aa)
        Internet address is 10.0.0.1/24
        MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec, reliability 255/255, txload 1/255, rxload 1/255
        Encapsulation ARPA, loopback not set
        Keepalive set (10 sec)
        Full-duplex, 100Mb/s, 100BaseTX/FX
        ARP type: ARPA, ARP Timeout 04:00:00
        Last input 00:20:00, output 00:00:00, output hang never
        Last clearing of "show interface" counters never
        Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
        Queueing strategy: fifo
        Output queue: 0/40 (size/max)
        5 minute input rate 0 bits/sec, 0 packets/sec
        5 minute output rate 0 bits/sec, 0 packets/sec
        0 packets input, 0 bytes
        Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
        0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
        0 watchdog
        0 input packets with dribble condition detected
        191 packets output, 20676 bytes, 0 underruns
        0 output errors, 0 collisions, 1 interface resets
        0 babbles, 0 late collision, 0 deferred
        0 lost carrier, 0 no carrier
        0 output buffer failures, 0 output buffers swapped out

        Serial0/0 is up, line protocol is down
        Hardware is PQUICC with Fractional T1 CSU/DSU
        MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec, reliability 255/255, txload 1/255, rxload 1/255
        Encapsulation FRAME-RELAY, loopback not set
        Keepalive set (10 sec)
        LMI enq sent 157, LMI stat recvd 0, LMI upd recvd 0, DTE LMI down
        LMI enq recvd 23, LMI stat sent 0, LMI upd sent 0
        LMI DLCI 1023  LMI type is CISCO  frame relay DTE
        FR SVC disabled, LAPF state down
        Broadcast queue 0/64, broadcasts sent/dropped 2/0, interface broadcasts 0
        Last input 00:24:51, output 00:00:05, output hang never
        Last clearing of "show interface" counters 00:27:20
        Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
        Queueing strategy: weighted fair
        Output queue: 0/1000/64/0 (size/max total/threshold/drops)
        Conversations 0/1/256 (active/max active/max total)
        Reserved Conversations 0/0 (allocated/max allocated)
        Available Bandwidth 1152 kilobits/sec
        5 minute input rate 0 bits/sec, 0 packets/sec
        5 minute output rate 0 bits/sec, 0 packets/sec
        23 packets input, 302 bytes, 0 no buffer
        Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
        1725 input errors, 595 CRC, 1099 frame, 0 overrun, 0 ignored, 30 abort
        246 packets output, 3974 bytes, 0 underruns
        0 output errors, 0 collisions, 48 interface resets
        0 output buffer failures, 0 output buffers swapped out
        4 carrier transitions
        DCD=up DSR=up DTR=up RTS=up CTS=up

        Serial0/0.1 is down, line protocol is down
        Hardware is PQUICC with Fractional T1 CSU/DSU
        MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec, reliability 255/255, txload 1/255, rxload 1/255
        Encapsulation FRAME-RELAY
        Last clearing of "show interface" counters never

        Serial0/0.100 is down, line protocol is down
        Hardware is PQUICC with Fractional T1 CSU/DSU
        Internet address is <ip address>/30
        MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec, reliability 255/255, txload 1/255, rxload 1/255
        Encapsulation FRAME-RELAY
        Last clearing of "show interface" counters never

    Everything seems accounted for to me, but apparently I'm missing something. My issue is that I'm stuck on interface up, line protocol down, so the T1 doesn't come up. Any ideas? Thank you,

    Read the article

  • How do I connect my Windows XP laptop to the internet?

    - by rubysiddhi
    Hello fellow super users,

    The Past: I have an Acer Travelmate 2300 laptop running Windows XP. 6 months ago I moved into a new apartment and got a new internet connection set up. After getting the internet connection installed in my apartment I reinstalled Windows XP and at the same time wiped my drive clean, losing all the original Acer software and drivers. Once XP was reinstalled I had to find all the drivers again to get the Travelmate laptop connected to the internet. So, using my Vista laptop, which was connected fine, I went to the Acer Travelmate Series drivers download page to download the necessary drivers. I transferred them to my Acer XP machine and installed them as best I could (there were no easy instructions, so I just had to find all the executables and run them). I eventually got connected to the internet, but not exactly in the way I had hoped for.

    The Present: To be connected to the internet I need an Ethernet cord connecting my computer (via the Ethernet port) to my router. This is a problem, since it defeats the purpose of having a wireless LAN card in my Acer laptop. One of the programs I downloaded from the Acer Travelmate Series page was the Acer Wireless LAN Configuration Utility. This program allows me to see the current network I am connected to and all the available networks I could potentially connect to. It reminds me of XP's Wireless Network Connection window/utility, where you can see all available wireless networks, refresh the network list and connect to one of the networks. I should mention that my ISP set up a security-enabled wireless network with WPA. This network requires a network key if you want to connect to it. I guess my Vista computer has the network key entered into it already. The problem is that I do not know what the network key is. Now obviously you would say to just contact my ISP to get the key. And I will, but there is just one extra weird issue: I am able to connect to another unsecured wireless network in the Wireless Network Connection window/utility, but I can only be on it as long as my Ethernet cable is plugged in. So this is not really wireless, is it? And this indicates that even if I do get that network key from my ISP, I will only solve one of my two problems: I will still only be able to get online while connected to my router via the Ethernet cable.

    The Main Questions: So how do I enable my Acer IPN2220 wireless LAN card so that I can use my Acer laptop from anywhere within my apartment? Or should I first get the network key from my ISP to access my security-enabled wireless network, and then deal with getting the Acer IPN2220 wireless LAN card working?

    Hard & Learned vs Easy & Stupid: Of course contacting the ISP would be easier - have 'em just come in here and do their thing. The problem with that is that they do not speak English (yeah, I'm in Poland) and it'd be a hell of a time trying to understand what they are doing (and uncomfortable looking over their shoulder). Also, I want to learn how to do this task myself so that I can fix the problem if it ever happens again - you know, be more self-sufficient.

    I look forward to helpful replies. Thanks, Xaviour

    Read the article

  • NoSQL with RavenDB and ASP.NET MVC - Part 1

    - by shiju
    A while back, I blogged NoSQL with MongoDB, NoRM and ASP.NET MVC Part 1 and Part 2 on how to use MongoDB with an ASP.NET MVC application. The NoSQL movement is getting big attention, and RavenDB is the latest addition to the NoSQL and document database world. RavenDB is an open source (with a commercial option) document database for the .NET/Windows platform developed by Ayende Rahien. Raven stores schema-less JSON documents, allows you to define indexes using LINQ queries, and focuses on low latency and high performance. RavenDB is a .NET-focused document database that comes with a fully functional .NET client API and supports LINQ. RavenDB has two components, a server and a client API. RavenDB is a REST-based system, so you can also write your own HTTP client API. As a .NET developer, RavenDB is becoming my favorite document database. Unlike some other document databases, RavenDB supports transactions using System.Transactions. It also supports both embedded and server modes of the database. You can access the RavenDB site at http://ravendb.net

    A demo app with ASP.NET MVC

    Let's create a simple demo app with RavenDB and ASP.NET MVC. To work with RavenDB, do the following steps:

    - Go to http://ravendb.net/download and download the latest build.
    - Unzip the downloaded file.
    - Go to the /Server directory and run RavenDB.exe. This will start the RavenDB server listening on localhost:8080. You can change the port of RavenDB by modifying the "Raven/Port" appSetting value in the RavenDB.exe.config file.
    - When RavenDB runs, it will automatically create a database in the /Data directory. You can change the data directory name by modifying the "Raven/DataDir" appSetting value in the RavenDB.exe.config file.
    - RavenDB provides a browser-based admin tool. While the Raven server is running, you can view and edit documents and indexes using your browser. The web admin tool is available at http://localhost:8080

    Working with ASP.NET MVC

    To work with RavenDB in our demo ASP.NET MVC application, do the following steps.

    Step 1 - Add a reference to the Raven Client API: In our ASP.NET MVC application, add a reference to Raven.Client.Lightweight.dll from the Client directory.

    Step 2 - Create the DocumentStore: The document store should be created once per application. Let's create a DocumentStore on application start-up in Global.asax.cs:

        documentStore = new DocumentStore { Url = "http://localhost:8080/" };
        documentStore.Initialise();

    The above code creates a RavenDB document store pointed at the server listening on localhost port 8080.

    Step 3 - Create a DocumentSession on BeginRequest: Let's create a DocumentSession in the BeginRequest event in Global.asax.cs. We use one document session per unit of work; in our demo app, every HTTP request is a single unit of work (UoW):
        BeginRequest += (sender, args) =>
            HttpContext.Current.Items[RavenSessionKey] = documentStore.OpenSession();

    Step 4 - Destroy the DocumentSession on EndRequest:

        EndRequest += (o, eventArgs) =>
        {
            var disposable = HttpContext.Current.Items[RavenSessionKey] as IDisposable;
            if (disposable != null)
                disposable.Dispose();
        };

    At the end of the HTTP request, we destroy the DocumentSession object. The code block below shows all the code in Global.asax.cs:

        private const string RavenSessionKey = "RavenMVC.Session";
        private static DocumentStore documentStore;

        protected void Application_Start()
        {
            // Create a DocumentStore in Application_Start.
            // The DocumentStore should be created once per application and stored as a singleton.
            documentStore = new DocumentStore { Url = "http://localhost:8080/" };
            documentStore.Initialise();
            AreaRegistration.RegisterAllAreas();
            RegisterRoutes(RouteTable.Routes);
            // DI using Unity 2.0
            ConfigureUnity();
        }

        public MvcApplication()
        {
            // Create a DocumentSession on BeginRequest -
            // one document session for every unit of work.
            BeginRequest += (sender, args) =>
                HttpContext.Current.Items[RavenSessionKey] = documentStore.OpenSession();
            // Destroy the DocumentSession on EndRequest.
            EndRequest += (o, eventArgs) =>
            {
                var disposable = HttpContext.Current.Items[RavenSessionKey] as IDisposable;
                if (disposable != null)
                    disposable.Dispose();
            };
        }

        // Getting the current DocumentSession
        public static IDocumentSession CurrentSession
        {
            get { return (IDocumentSession)HttpContext.Current.Items[RavenSessionKey]; }
        }

    We have now set up all the necessary code in Global.asax.cs for working with RavenDB. For our demo app, let's write a domain class:

        public class Category
        {
            public string Id { get; set; }

            [Required(ErrorMessage = "Name Required")]
            [StringLength(25, ErrorMessage = "Must be less than 25 characters")]
            public string Name { get; set; }

            public string Description { get; set; }
        }

    We have created a simple domain entity, Category. Let's create a repository class for performing CRUD operations against our domain entity Category.
        public interface ICategoryRepository
        {
            Category Load(string id);
            IEnumerable<Category> GetCategories();
            void Save(Category category);
            void Delete(string id);
        }

        public class CategoryRepository : ICategoryRepository
        {
            private IDocumentSession session;

            public CategoryRepository()
            {
                session = MvcApplication.CurrentSession;
            }

            // Load a category based on Id
            public Category Load(string id)
            {
                return session.Load<Category>(id);
            }

            // Get all categories
            public IEnumerable<Category> GetCategories()
            {
                var categories = session.LuceneQuery<Category>()
                    .WaitForNonStaleResults()
                    .ToArray();
                return categories;
            }

            // Insert/update a category
            public void Save(Category category)
            {
                if (string.IsNullOrEmpty(category.Id))
                {
                    // insert new record
                    session.Store(category);
                }
                else
                {
                    // edit record
                    var categoryToEdit = Load(category.Id);
                    categoryToEdit.Name = category.Name;
                    categoryToEdit.Description = category.Description;
                }
                // save the document session
                session.SaveChanges();
            }

            // Delete a category
            public void Delete(string id)
            {
                var category = Load(id);
                session.Delete<Category>(category);
                session.SaveChanges();
            }
        }

    For every CRUD operation, we take the current document session object from the HttpContext:

        session = MvcApplication.CurrentSession;

    This calls the static CurrentSession property from Global.asax.cs:

        public static IDocumentSession CurrentSession
        {
            get { return (IDocumentSession)HttpContext.Current.Items[RavenSessionKey]; }
        }

    Retrieve entities: The Load method gets a single Category object based on its Id. RavenDB works on REST principles, and an Id looks like categories/1. The Id is created automatically when a new object is inserted into the document store. The REST URI categories/1 represents a single category object with an Id representation of 1.

        public Category Load(string id)
        {
            return session.Load<Category>(id);
        }

    The GetCategories method returns all the categories by calling the session.LuceneQuery method. RavenDB uses a Lucene query syntax for querying; I will explain more details about querying and indexing in my future posts.

        public IEnumerable<Category> GetCategories()
        {
            var categories = session.LuceneQuery<Category>()
                .WaitForNonStaleResults()
                .ToArray();
            return categories;
        }

    Insert/update entity: To insert or update a Category entity, we created the Save method in the repository class. If the Id property of the Category is null, we call the Store method of the DocumentSession to insert a new record. For editing an existing record, we load the Category object and assign the new values to the loaded object. session.SaveChanges() then saves the changes to the document store.
        // Insert/update a category
        public void Save(Category category)
        {
            if (string.IsNullOrEmpty(category.Id))
            {
                // insert new record
                session.Store(category);
            }
            else
            {
                // edit record
                var categoryToEdit = Load(category.Id);
                categoryToEdit.Name = category.Name;
                categoryToEdit.Description = category.Description;
            }
            // save the document session
            session.SaveChanges();
        }

    Delete entity: In the Delete method, we call the document session's Delete method and then SaveChanges to reflect the change in the document store.

        public void Delete(string id)
        {
            var category = Load(id);
            session.Delete<Category>(category);
            session.SaveChanges();
        }

    Let's create an ASP.NET MVC controller with actions handling the CRUD operations for the domain class Category:

        public class CategoryController : Controller
        {
            private ICategoryRepository categoryRepository;

            // DI-enabled constructor
            public CategoryController(ICategoryRepository categoryRepository)
            {
                this.categoryRepository = categoryRepository;
            }

            public ActionResult Index()
            {
                var categories = categoryRepository.GetCategories();
                if (categories == null)
                    return RedirectToAction("Create");
                return View(categories);
            }

            [HttpGet]
            public ActionResult Edit(string id)
            {
                var category = categoryRepository.Load(id);
                return View("Save", category);
            }

            // GET: /Category/Create
            [HttpGet]
            public ActionResult Create()
            {
                var category = new Category();
                return View("Save", category);
            }

            [HttpPost]
            public ActionResult Save(Category category)
            {
                if (!ModelState.IsValid)
                {
                    return View("Save", category);
                }
                categoryRepository.Save(category);
                return RedirectToAction("Index");
            }

            [HttpPost]
            public ActionResult Delete(string id)
            {
                categoryRepository.Delete(id);
                var categories = categoryRepository.GetCategories();
                return PartialView("CategoryList", categories);
            }
        }

    RavenDB is an awesome document database, and I hope that it will be the winner in the .NET space of the document database world. The source code of the demo application is available at http://ravenmvc.codeplex.com/
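    (The post calls a ConfigureUnity() method in Application_Start but never shows it. As a rough sketch - an assumption on my part, not the author's actual code; the factory class name is hypothetical - wiring up Unity 2.0 for the DI-enabled controller above could look something like this:)

        // using Microsoft.Practices.Unity; using System.Web.Mvc; using System.Web.Routing;
        private static IUnityContainer container;

        private static void ConfigureUnity()
        {
            // Register the repository so Unity can build CategoryController (hypothetical setup).
            container = new UnityContainer();
            container.RegisterType<ICategoryRepository, CategoryRepository>();
            // Let MVC create controllers through Unity via a custom controller factory.
            ControllerBuilder.Current.SetControllerFactory(new UnityControllerFactory(container));
        }

        public class UnityControllerFactory : DefaultControllerFactory
        {
            private readonly IUnityContainer container;

            public UnityControllerFactory(IUnityContainer container)
            {
                this.container = container;
            }

            protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
            {
                // Fall back to the default behavior when no controller type was matched.
                return controllerType == null
                    ? base.GetControllerInstance(requestContext, controllerType)
                    : (IController)container.Resolve(controllerType);
            }
        }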

    Read the article

  • TFS 2010 Build Custom Activity for Merging Assemblies

    - by Jakob Ehn
    *** The sample build process template discussed in this post is available for download from here: http://cid-ee034c9f620cd58d.office.live.com/self.aspx/BlogSamples/ILMerge.xaml ***

    In my previous post I talked about library builds that we use to build and replicate dependencies between applications in TFS. This is typically used for common libraries and tools that several other applications need to reference. When the libraries grow in size over time, so does the number of assemblies. So all solutions that use the common library must reference all the necessary assemblies, and if we, for example, do a refactoring and extract some code into a new assembly, all the clients must update their references to reflect these changes, otherwise they won't compile. To improve on this, we use a tool from Microsoft Research called ILMerge (download from here). It can be used to merge several assemblies into one assembly that contains all the types. If you haven't used this tool before, you should check it out. Previously I implemented this in builds using a simple batch file that contains the full command, something like this:

        "%ProgramFiles(x86)%\microsoft\ilmerge\ilmerge.exe" /target:library /attr:ClassLibrary1.bl.dll /out:MyNewLibrary.dll ClassLibrary1.dll ClassLibrary2.dll ClassLibrary3.dll

    This merges 3 assemblies (ClassLibrary1, 2 and 3) into a new assembly called MyNewLibrary.dll. It will copy the attributes (file version, product version etc.) from ClassLibrary1.bl.dll, using the /attr switch. For more info on the ILMerge command line tool, see the above link.

    This approach works, but requires a little too much knowledge from the developers creating builds, therefore I have implemented a custom activity that wraps the use of ILMerge. This makes it much simpler to set up a new build definition and have the build automatically do the merging. The usage of the activity is implemented as part of the Library Build process template mentioned in the previous post. For this article I have just created a simple build process template that only performs the ILMerge operation.

    Below is the code for the custom activity. To make it compile, you need to reference the ILMerge.exe assembly.

        /// <summary>
        /// Activity for merging a list of assemblies into one, using ILMerge
        /// </summary>
        public sealed class ILMergeActivity : BaseCodeActivity
        {
            /// <summary>
            /// A list of file paths to the assemblies that should be merged
            /// </summary>
            [RequiredArgument]
            public InArgument<IEnumerable<string>> InputAssemblies { get; set; }

            /// <summary>
            /// Full path to the generated assembly
            /// </summary>
            [RequiredArgument]
            public InArgument<string> OutputFile { get; set; }

            /// <summary>
            /// Which input assembly the attributes for the generated assembly should be copied from.
            /// Optional. If not specified, the first input assembly will be used
            /// </summary>
            public InArgument<string> AttributeFile { get; set; }

            /// <summary>
            /// Kind of assembly to generate, dll or exe
            /// </summary>
            public InArgument<TargetKindEnum> TargetKind { get; set; }

            // If your activity returns a value, derive from CodeActivity<TResult>
            // and return the value from the Execute method.
            protected override void Execute(CodeActivityContext context)
            {
                string message = InputAssemblies.Get(context).Aggregate("", (current, assembly) => current + (assembly + " "));
                TrackMessage(context, "Merging " + message + " into " + OutputFile.Get(context));

                ILMerge m = new ILMerge();
                m.SetInputAssemblies(InputAssemblies.Get(context).ToArray());
                m.TargetKind = TargetKind.Get(context) == TargetKindEnum.Dll ? ILMerge.Kind.Dll : ILMerge.Kind.Exe;
                m.OutputFile = OutputFile.Get(context);
                m.AttributeFile = !String.IsNullOrEmpty(AttributeFile.Get(context))
                    ? AttributeFile.Get(context)
                    : InputAssemblies.Get(context).First();
                m.SetTargetPlatform(RuntimeEnvironment.GetSystemVersion().Substring(0, 2), RuntimeEnvironment.GetRuntimeDirectory());
                m.Merge();

                TrackMessage(context, "Generated " + m.OutputFile);
            }
        }

        [Browsable(true)]
        public enum TargetKindEnum
        {
            Dll,
            Exe
        }

    NB: The activity inherits from a BaseCodeActivity class, an internal helper class that contains some methods and properties useful for most custom activities. In this case, it uses the TrackMessage method for writing to the build log. You either need to remove the TrackMessage calls, or implement the method yourself, which is not very hard (a minimal sketch follows at the end of this post).

    The custom activity has the following input arguments:

    - InputAssemblies: A list with the (full) paths of the assemblies to merge
    - OutputFile: The name of the resulting merged assembly
    - AttributeFile: Which assembly to use as the template for the attributes of the merged assembly. This argument is optional; if left blank, the first assembly in the input list is used
    - TargetKind: Decides what type of assembly to create, either a dll or an exe

    Of course, there are more switches for ILMerge.exe, and these can be exposed as input arguments as well if you need them. To show how the custom activity can be used, I have attached a build process template (see the link at the top of this post) that merges the output of the projects being built (CommonLibrary.dll and CommonLibrary2.dll) into a merged assembly (NewLibrary.dll). The build process template has the following custom process parameters:

    The Assemblies To Merge argument is passed into a FindMatchingFiles activity to locate all assemblies in the BinariesDirectory folder after the compilation has been performed by Team Build. Here is the complete sequence of activities that performs the merge operation. It is located at the end of the Try, Compile, Test and Associate… sequence:

    It splits the AssembliesToMerge parameter, appends the full path (using the BinariesDirectory variable) and then enumerates the matching files using the FindMatchingFiles activity. When running the build, you can see that it merges two assemblies into a new one:

    And the merged assembly (and associated pdb file) is copied to the drop location together with the rest of the assemblies:
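    (As promised above, here is a minimal sketch of the BaseCodeActivity helper. This is an assumption on my part, not the author's actual class; it uses the TrackBuildMessage extension method that ships with the TFS 2010 build workflow assemblies.)

        using System.Activities;
        using Microsoft.TeamFoundation.Build.Client;
        using Microsoft.TeamFoundation.Build.Workflow.Activities;

        public abstract class BaseCodeActivity : CodeActivity
        {
            // Writes a message to the TFS 2010 build log.
            // TrackBuildMessage comes from Microsoft.TeamFoundation.Build.Workflow.Activities.
            protected void TrackMessage(CodeActivityContext context, string message)
            {
                context.TrackBuildMessage(message, BuildMessageImportance.High);
            }
        }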

    Read the article

  • Top things web developers should know about the Visual Studio 2013 release

    - by Jon Galloway
    ASP.NET and Web Tools for Visual Studio 2013 Release Notes

    Summary for lazy readers:

    - Visual Studio 2013 is now available for download on the Visual Studio site and on MSDN subscriber downloads
    - Visual Studio 2013 installs side by side with Visual Studio 2012 and supports round-tripping between Visual Studio versions, so you can try it out without committing to a switch
    - Visual Studio 2013 ships with the new version of ASP.NET, which includes ASP.NET MVC 5, ASP.NET Web API 2, Razor 3, Entity Framework 6 and SignalR 2.0
    - The new ASP.NET release focuses on One ASP.NET, so core features and web tools work the same across the platform (e.g. adding ASP.NET MVC controllers to a Web Forms application)
    - New core features include new templates based on Bootstrap, a new scaffolding system, and a new identity system
    - Visual Studio 2013 is an incredible editor for web files, including HTML, CSS, JavaScript, Markdown, LESS, CoffeeScript, Handlebars, Angular, Ember, Knockout, etc.

    Top links:

    - Visual Studio 2013 content on the ASP.NET site is in the standard new releases area: http://www.asp.net/vnext
    - ASP.NET and Web Tools for Visual Studio 2013 Release Notes
    - Short intro videos on the new Visual Studio web editor features from Scott Hanselman and Mads Kristensen
    - Announcing release of ASP.NET and Web Tools for Visual Studio 2013 post on the official .NET Web Development and Tools Blog
    - Scott Guthrie's post: Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework

    Okay, for those of you who are still with me, let's dig in a bit.

    Quick web dev notes on downloading and installing Visual Studio 2013

    I found Visual Studio 2013 to be a pretty fast install. According to Brian Harry's release post, installing over pre-release versions of Visual Studio is supported. I've installed the release version over pre-release versions, and it worked fine. If you're only going to be doing web development, you can speed up the install if you just select Web Developer tools. Of course, as a good Microsoft employee, I'll mention that you might also want to install some of those other features, like the Store apps for Windows 8 and the Windows Phone 8.0 SDK, but they do download and install a lot of other stuff (e.g. the Windows Phone SDK sets up Hyper-V and downloads several GBs of VMs). So if you're planning just to do web development for now, you can pick just the Web Developer Tools and install the other stuff later. If you've got a fast internet connection, I recommend using the web installer instead of downloading the ISO. The ISO includes all the features, whereas the web installer just downloads what you're installing.

    Visual Studio 2013 development settings and color theme

    When you start up Visual Studio, it'll prompt you to pick some defaults. These are totally up to you - whatever suits your development style - and you can change them later. As I said, these are completely up to you. I recommend either the Web Development or Web Development (Code Only) settings. The only real difference is that Code Only hides the toolbars, and you can switch between them using Tools / Import and Export Settings / Reset.

    Web Development settings
    Web Development (code only) settings

    Usually I've just gone with Web Development (code only) in the past because I just want to focus on the code, although the Standard toolbar does make it easier to switch default web browsers. More on that later.

    Color theme

    Sigh.
    Okay, everyone's got their favorite colors. I alternate between Light and Dark depending on my mood, and I personally like how the low contrast on the window chrome in those themes puts the emphasis on my code rather than the tabs and toolbars. I know some people got pretty worked up over that, though, and wanted the blue theme back. I personally don't like it - it reminds me of ancient versions of Visual Studio that I don't want to think about anymore. So here's the thing: if you install Visual Studio Ultimate, it defaults to Blue. The other versions default to Light. If you use Blue, I won't criticize you - out loud, that is. You can change themes really easily - either Tools / Options / Environment / General, or the smart way: Ctrl+Q for quick launch, then type Theme and hit Enter.

    Signing in

    During the first run, you'll be prompted to sign in. You don't have to - you can click the "Not now, maybe later" link at the bottom of that dialog. I recommend signing in, though. It's not hooked in with licensing or tracking the kind of code you write to sell you components. It is doing good things, like syncing your Visual Studio settings between computers. More about that here. So, you don't have to, but I sure do.

    Overview of shiny new things in ASP.NET land

    There are a lot of good new things in ASP.NET. I'll list some of my favorites here, but you can read more on the ASP.NET site.

    One ASP.NET

    You've heard us talk about this for a while. The idea is that options are good, but choice can be a burden. When you start a new ASP.NET project, why should you have to make a tough decision - with long-term consequences - about how your application will work? If you want to use ASP.NET Web Forms, but have the option of adding in ASP.NET MVC later, why should that be hard? It's all ASP.NET, right? Ideally, you'd just decide that you want to use ASP.NET to build sites and services, and you could use the appropriate tools (the green blocks below) as you needed them. So, here it is. When you create a new ASP.NET application, you just create an ASP.NET application. Next, you can pick from some templates to get you started... but these are different. They're not "painful decision" templates, they're just some starting pieces. And, most importantly, you can mix and match. I can pick a "mostly" Web Forms template, but include MVC and Web API folders and core references. If you've tried to mix and match in the past, you're probably aware that it was possible, but not pleasant. ASP.NET MVC project files contained special project type GUIDs, so you'd only get controller scaffolding support in a Web Forms project if you manually edited the csproj file. Features in one stack didn't work in others. Project templates were painful choices. That's no longer the case. Hooray! I just did a demo in a presentation last week where I created a new Web Forms + MVC + Web API site, built a model, scaffolded MVC and Web API controllers with EF Code First, added data in the MVC view, viewed it in Web API, then added a GridView to the Web Forms Default.aspx page and bound it to the model. In about 5 minutes. Sure, it's a simple example, but it's great to be able to share code and features across the whole ASP.NET family.

    Authentication

    In the past, authentication was built into the templates. So, for instance, there was an ASP.NET MVC 4 Intranet Project template which created a new ASP.NET MVC 4 application that was preconfigured for Windows Authentication.
    All of that authentication stuff was built into each template, so it varied between the stacks, and you couldn't reuse it. You didn't see a lot of changes to the authentication options, since they required big changes to a bunch of project templates. Now, the new project dialog includes a common authentication experience. When you hit the Change Authentication button, you get some common options that work the same way regardless of the template or reference settings you've made. These options work on all ASP.NET frameworks and all hosting environments (IIS, IIS Express, or OWIN for self-host). The default is Individual User Accounts: this is the standard "create a local account, using username / password or OAuth" thing; however, it's all built on the new Identity system. More on that in a second. The one setting that has some configuration to it is Organizational Accounts, which lets you configure authentication using Active Directory, Windows Azure Active Directory, or Office 365.

    Identity

    There's a new identity system. We've taken the best parts of the previous ASP.NET Membership and Simple Identity systems, rolled in a lot of feedback and made big enhancements to support important developer concerns like unit testing and extensibility. I've written long posts about ASP.NET Identity, and I'll do it again. Soon. This is not that post. The short version is that I think we've finally got just the right identity system. Some of my favorite features:

    - There are simple, sensible defaults that work well - you can File / New / Run / Register / Login, and everything works.
    - It supports standard username / password as well as external authentication (OAuth, etc.).
    - It's easy to customize without having to re-implement an entire provider.
    - It's built using pluggable pieces, rather than one large monolithic system.
    - It's built using interfaces like IUser and IRole that allow for unit testing, dependency injection, etc.
    - You can easily add user profile data (e.g. URL, twitter handle, birthday). You just add properties to your ApplicationUser model and they'll automatically be persisted.
    - Complete control over how the identity data is persisted. By default, everything works with Entity Framework Code First, but it's built to support changes from small (modify the schema) to big (use another ORM, store your data in a document database or in the cloud or in XML or in the EXIF data of your desktop background or whatever).
    - It's configured via OWIN. More on OWIN and Katana later, but the fact that it's built using OWIN means it's portable.

    You can find out more in the Authentication and Identity section of the ASP.NET site (and lots more content will be going up there soon).

    New Bootstrap based project templates

    The new project templates are built using Bootstrap 3. Bootstrap (formerly Twitter Bootstrap) is a front-end framework that brings a lot of nice benefits:

    - It's responsive, so your projects will automatically scale to device width using CSS media queries. For example, menus are full size on a desktop browser, but on narrower screens you automatically get a mobile-friendly menu.
    - The built-in Bootstrap styles make your standard page elements (headers, footers, buttons, form inputs, tables etc.) look nice and modern.
    - Bootstrap is themeable, so you can reskin your whole site by dropping in a new Bootstrap theme. Since Bootstrap is pretty popular across the web development community, this gives you a large and rapidly growing variety of templates (free and paid) to choose from.
    Bootstrap also includes a lot of very useful things: components (like progress bars and badges), useful glyphicons, and some jQuery plugins for tooltips, dropdowns, carousels, etc. Here's a look at how the responsive part works. When the page is full screen, the menu and header are optimized for a wide screen display. When I shrink the page down (this is all based on page width, not useragent sniffing), the menu turns into a nice mobile-friendly dropdown. For a quick example, I grabbed a new free theme off bootswatch.com. For simple themes, you just need to download the bootstrap.css file and replace the /content/bootstrap.css file in your project. Now when I refresh the page, I've got a new theme.

    Scaffolding

    The big change in scaffolding is that it's one system that works across ASP.NET. You can create a new Empty Web project or Web Forms project and you'll get the Scaffold context menus. For release, we've got MVC 5 and Web API 2 controllers. We had a preview of Web Forms scaffolding in the preview releases, but it wasn't fully baked for RTM. Look for it in a future update, expected pretty soon. This scaffolding system wasn't just changed to work across the ASP.NET frameworks, it's also built to enable future extensibility. That's not in this release, but should also hopefully be out soon.

    Project Readme page

    This is a small thing, but I really like it. When you create a new project, you get a Project_Readme.html page that's added to the root of your project and opens in the Visual Studio built-in browser. I love it. A long time ago, when you created a new project we just dumped it on you and left you scratching your head about what to do next. Not ideal. Then we started adding a bunch of Getting Started information to the new project templates. That told you what to do next, but you had to delete all of that stuff out of your website. It doesn't belong there. Not ideal. This is a simple HTML file that's not integrated into your project code at all. You can delete it if you want. But it shows a lot of helpful links that are current for the project you just created. In the future, if we add new wacky project types, they can create readme docs with specific information on how to do appropriately wacky things. Side note: I really like that they used the internal browser in Visual Studio to show this content rather than popping open an HTML page in the default browser. I hate that. It's annoying. If you're doing that, I hope you'll stop. What if some unnamed person has 40 or 90 tabs saved in their browser session? When you pop open your "Thanks for installing my Visual Studio extension!" page, all eleventy billion tabs start up and I wish I'd never installed your thing. Be like these guys and pop Visual Studio specific HTML docs in the Visual Studio browser.

    ASP.NET MVC 5

    The biggest change with ASP.NET MVC 5 is that it's no longer a separate project type. It integrates well with the rest of ASP.NET. In addition to that and the other common features we've already looked at (Bootstrap templates, Identity, authentication), here's what's new for ASP.NET MVC.

    Attribute routing

    ASP.NET MVC now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your routes by annotating your actions and controllers. This supports some pretty complex, customized routing scenarios, and it allows you to keep your route information right with your controller actions if you'd like.
Here's a controller with an action whose method name is Hiding, but which I've mapped via attribute routing to /spaghetti/with-nesting/where-is-waldo:

public class SampleController : Controller
{
    [Route("spaghetti/with-nesting/where-is-waldo")]
    public string Hiding()
    {
        return "You found me!";
    }
}

I enable attribute routes in my RouteConfig.cs, and I can use them in conjunction with my other MVC routes like this:

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        routes.MapMvcAttributeRoutes();

        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );
    }
}

You can read more about attribute routing in ASP.NET MVC 5 here.

Filter enhancements

There are two new additions to filters: authentication filters and filter overrides.

Authentication filters are a new kind of filter in ASP.NET MVC that run prior to authorization filters in the ASP.NET MVC pipeline and allow you to specify authentication logic per-action, per-controller, or globally for all controllers. Authentication filters process credentials in the request and provide a corresponding principal. They can also add authentication challenges in response to unauthorized requests.

Override filters let you change which filters apply to a given action method or controller. They specify a set of filter types that should not be run for a given scope (action or controller). This allows you to configure filters that apply globally but then exclude certain global filters from specific actions or controllers. (There's a short sketch of an override filter at the end of this article section.)

ASP.NET Web API 2

ASP.NET Web API 2 includes a lot of new features.

Attribute routing: ASP.NET Web API supports the same attribute routing system that's in ASP.NET MVC 5. You can read more about the attribute routing features in Web API in this article.

OAuth 2.0: ASP.NET Web API picks up OAuth 2.0 support, using security middleware running on OWIN (discussed below). This is great for scenarios like authenticated Single Page Applications.

OData improvements: ASP.NET Web API now has full OData support, which required adding some of the most powerful operators: $select, $expand, $batch and $value. You can read more about OData operator support in this article by Mike Wasson.

Lots more: there's a huge list of other features, including CORS (cross-origin resource sharing), IHttpActionResult, IHttpRequestContext, and more. I think the best overview is in the release notes.

OWIN and Katana

I've written about OWIN and Katana recently. I'm a big fan.

OWIN is the Open Web Interface for .NET. It's a spec, like HTML or HTTP, so you can't install OWIN. The benefit of OWIN is that it's a community specification, so anyone who implements it can plug into the ASP.NET stack, either as middleware or as a host.

Katana is the Microsoft implementation of OWIN. It leverages OWIN to wire up things like authentication, handlers, modules, IIS hosting, etc., so ASP.NET can host OWIN components and Katana components can run in someone else's OWIN implementation.

Howard Dierking just wrote a cool article in MSDN Magazine describing Katana in depth: Getting Started with the Katana Project. He had an interesting example showing an OWIN based pipeline which leveraged SignalR, ASP.NET Web API and NancyFx components in the same stack. If this kind of thing makes sense to you, that's great. If it doesn't, don't worry, but keep an eye on it.
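To make the middleware idea concrete, here's a minimal hedged sketch of a Katana startup class with one inline middleware component. It assumes the Microsoft.Owin and Owin packages from a standard VS 2013 template; the namespace and the /hello path are made up for illustration:

using System.Threading.Tasks;
using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(SampleApp.Startup))]

namespace SampleApp
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Inline OWIN middleware: handle one path ourselves,
            // otherwise pass the request down the pipeline.
            app.Use(async (context, next) =>
            {
                if (context.Request.Path.Value == "/hello")
                {
                    await context.Response.WriteAsync("Hello from OWIN middleware!");
                }
                else
                {
                    await next();
                }
            });

            // Other OWIN-friendly components (Web API, SignalR, identity
            // middleware, etc.) plug into this same pipeline.
        }
    }
}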
You're going to see some cool things happen as a result of ASP.NET becoming more and more pluggable.

Visual Studio Web Tools

Okay, this stuff's just crazy. Visual Studio has been adding some nice web dev features over the past few years, but they've really cranked it up for this release. Visual Studio is by far my favorite code editor for all web files: CSS, HTML, JavaScript, and lots of popular libraries. Stop thinking of Visual Studio as a big editor that you only use to write back-end code. Stop editing HTML and CSS in Notepad (or Sublime, Notepad++, etc.). Visual Studio starts up in under 2 seconds on a modern computer with an SSD. Misspelling HTML attributes, CSS classes, or jQuery or Angular syntax doesn't make you a better developer; it just wastes your time.

Browser Link

Browser Link is a real-time, two-way connection between Visual Studio and all connected browsers. It's only attached when you're running locally, in debug, but it applies to any and all connected browsers, including emulators. You may have seen demos that showed the browsers refreshing based on changes in the editor, and I'll agree that's pretty cool. But it's really just the start. It's a two-way connection, and it's built for extensibility. That means you can write extensions that push information from your running application (in IE, Chrome, a mobile emulator, etc.) back to Visual Studio. Mads and team have shown demonstrations where they enabled edit mode in the browser, with the edits updating the source HTML back in Visual Studio. It's also possible to look at how the rendered HTML performs, check for compatibility issues, watch for unused CSS classes - the sky's the limit.

New HTML editor

The previous HTML editor had a lot of old code that didn't allow for improvements. The team rewrote the HTML editor to take advantage of the new(ish) extensibility features in Visual Studio, which then allowed them to add all kinds of features - things like CSS class and ID IntelliSense (so you type style="" and get a list of the classes and IDs in your project), smart indent based on how your document is formatted, JavaScript reference auto-sync, etc. Here's a 3 minute tour from Mads Kristensen.

Lots more Visual Studio web dev features

That's just a sampling - there's a ton of great features for JavaScript editing, CSS editing, publishing, and Page Inspector (which shows real-time rendering of your page inside Visual Studio). Here are some more short videos showing those features.

Lots, lots more

Okay, that's just a summary, and it's still quite a bit. Head on over to http://asp.net/vnext for more information, and download Visual Studio 2013 now to get started!
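Circling back to the filter overrides described in the MVC 5 section above, here's a minimal hedged sketch of how an override filter can exclude a controller-level authorization filter from a single action. The controller, action, and role names are hypothetical:

using System.Web.Mvc;
using System.Web.Mvc.Filters;

[Authorize(Roles = "Admin")]      // applies to every action in the controller
public class ReportsController : Controller
{
    public ActionResult Index()
    {
        // Only users in the Admin role reach this action.
        return View();
    }

    [OverrideAuthorization]       // the controller-level Authorize filter no longer applies here
    [Authorize]                   // ...and is replaced: any authenticated user will do
    public ActionResult Summary()
    {
        return View();
    }
}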

    Read the article

  • Create a Bootable Ubuntu 9.10 USB Flash Drive

    - by Trevor Bekolay
The Ubuntu Live CD isn't just useful for trying out Ubuntu before you install it - you can also use it to maintain and repair your Windows PC. Even if you have no intention of installing Linux, every Windows user should have a bootable Ubuntu USB drive on hand in case something goes wrong in Windows. Creating a bootable USB flash drive is surprisingly easy with a small self-contained application called UNetbootin. It will even download Ubuntu for you!

Note: Ubuntu will take up approximately 700 MB on your flash drive, so choose a flash drive with at least 1 GB of free space, formatted as FAT32. This process should not remove any existing files on the flash drive, but to be safe you should back up the files on your flash drive.

Put Ubuntu on your flash drive

UNetbootin doesn't require installation; just download the application and run it. Select Ubuntu from the Distribution drop-down box, then 9.10_Live from the Version drop-down box. If you have a 64-bit machine, select 9.10_Live_x64 for the Version instead. At the bottom of the screen, select the drive letter that corresponds to the USB drive you want to put Ubuntu on. If you select USB Drive in the Type drop-down box, the only drive letters available will be USB flash drives. Click OK and UNetbootin will start doing its thing: first it downloads the Ubuntu Live CD, then it copies the files from the Ubuntu Live CD to your flash drive. The amount of time this takes will vary depending on your Internet speed, and when it's done, click Exit. You're not planning on installing Ubuntu right now, so there's no need to reboot.

If you look at the USB drive now, you should see a bunch of new files and folders. If you had files on the drive before, they should still be present. You're now ready to boot your computer into Ubuntu 9.10!

How to boot into Ubuntu

When the time comes that you have to boot into Ubuntu, or if you just want to test that your flash drive works properly, you will have to set your computer to boot off the flash drive. The steps to do this vary depending on your BIOS, which varies depending on your motherboard. To get detailed instructions on changing how your computer boots, search for your motherboard's manual (or your laptop's manual for a laptop). For general instructions, which will suffice for 99% of you, read on.

Find the important keyboard keys

When your computer boots up, a bunch of words and numbers flash across the screen, usually to be ignored. This time, you need to scan the boot-up screen for a few key words with some associated keys: Boot Menu and Setup. Typically, these show up at the bottom of the screen. If your BIOS has a Boot Menu, read on; otherwise, skip to the "Hard: Using Setup" section.

Easy: Using the Boot Menu

If your BIOS offers a Boot Menu, then during the boot-up process, press the key associated with the Boot Menu. In our case, this is ESC. Our example Boot Menu doesn't have the ability to boot from USB, but your Boot Menu should have some options, such as USB-CDROM, USB-HDD, USB-FLOPPY, and others. Try the options that start with USB until you find one that works. Don't worry if it doesn't work - you can just restart and try again. Using the Boot Menu does not change the normal boot order on your system, so the next time you start your computer it will boot from the hard drive as normal.

Hard: Using Setup

If your BIOS doesn't offer a Boot Menu, then you will have to change the boot order in Setup.
Note: There are some options in BIOS Setup that can affect the stability of your machine. Take care to change only the boot order options.

Press the key associated with Setup. In our case, this is F2. If your BIOS Setup has a Boot tab, switch to it and change the order so that one of the USB options comes first. There may be several USB options, such as USB-CDROM, USB-HDD, USB-FLOPPY, and others; try them out to see which one works for you. If your BIOS does not have a Boot tab, boot order is commonly found in Advanced CMOS Options.

Note that this change to the boot order stays in effect until you change it back. If you only plug in a bootable flash drive when you want to boot from it, you could leave the boot order as it is, but you may find it easier to switch the order back to its previous state after you reboot from Ubuntu.

Booting into Ubuntu

If you set the right boot option, you should be greeted with the UNetbootin screen. Press Enter to start Ubuntu with the default options, or wait 10 seconds for this to happen automatically. Ubuntu will start loading and should go straight to the desktop with no need for a username or password. And that's it! From this live desktop session, you can try out Ubuntu and even install software that is not included in the live CD. Installed software will only last for the duration of your session - the next time you start up the live CD it will be back to its original state.

Download UNetbootin from sourceforge.net

    Read the article

  • CodePlex Daily Summary for Friday, February 19, 2010

CodePlex Daily Summary for Friday, February 19, 2010

New Projects

Application Management Library: Application Management makes your application life easier. It will automatic do memory management, handle and log unhandled exceptions, profiling y...
Audio Service - Play Wave Files From Windows Service: This is a windows service that Check a registry key, when the key is updated with a new wave file path the service plays the wave file.
Aviamodels: 3d drawing Aviamodels
Control of payment proofs program for Greek citizens: This is a program that is used for Greek citizens who want to keep track of their payment proofs.
Cover Creator: Cover Creator gives you the possibility to create and print CD covers. Content of CD is taken from http://www.freedb.org/ or can be added/modyfied ...
DevBoard: DevBoard is a webbased scrum tool that helps developers/team get a clear overview of the project progress. It's developed in C# and silverlight.
Flex AdventureWorks: The is mostly a skunk-works application to help me get acclimated to CodePlex. The long term goal is to integrate a Flex UI with the AdventureWor...
GRE Wordlist: An intuitive and customizable word list for GRE aspirants. Developed in Java using a word list similar to Barron's.
Indexer: A desktop file Index and Search tool which allows you to choose a list of folders to index, and then search on later. It is based on Lucene.net an...
Project Management Office (PMO) for SharePoint: Sample web part for the Code Mastery event in Boston, February 11, 2010.
Restart SQL Audit Policy and Job: Resolve SQL 2008 Audit Network Connectivity Issue.
Rounded Corners / DIV Container: The RoundedDiv round corners container is a skin-able, CSS compliant UI control. Select which corners should be rounded, collapse and expand the c...
Silverlight Google Search Application: The Silverlight Google Search Application uses Google Search API and behaves like Internet Search Application with option to preview desired page i...
Weather Forecast Control: MyWeather forecast control pulls up to date weather forecast information from The Weather Channel for your website.

New Releases

Application Management Library: ApplicationManagement v1.0: First Release
Audio Service - Play Wave Files From Windows Service: Audio Service v1.0: This is a working version of the Audio Service. Please use as you need to.
AutoMapper: 1.0.1 for Silverlight 3.0 Alpha: AutoMapper for Silverlight 3.0. Features not supported: IDataReader mapping IListSource mapping All other features are supported.
Buzz Dot Net: Buzz Dot Net v.1.10219: Buzz Dot Net Library (Parser & Objects) + WPF Example (using MVVM & Threading)
Canvas VSDOC Intellisense: V 1.0.0.0a: This release contains two JavaScript files: canvas-utils.js (can be referenced in both runtime and development environment) canvas-vsdoc.js (must ...
Control of payment proofs program for Greek citizens: Payment Proofs: source code
Courier: Beta 2: Added Rx Framework support and re-factored how message registration and un-registration works Blog post explaining the updates and re-factoring c...
Cover Creator: Initial release: This is initial stable release. For now only in Polish language.
Employee Scheduler: Employee Scheduler 2.2: Small Bug found. Small total hour calculation bug. See http://employeescheduler.codeplex.com/WorkItem/View.aspx?WorkItemId=6059 Extract the files...
EnhSim: Release v1.9.7.1: Implemented Dislodged Foreign Object trinket Whispering Fanged Skull now also procs off Flame shock dots You can toggle bloodlust o...
Extend SmallBasic: Teaching Extensions v.007: added SimpleSquareTest added Tortoise.Approve() for virtual proctor how to use virtual proctor: change the path in the proctor.txt file (located i...
FolderSize: FolderSize.Win32.1.0.1.0: FolderSize.Win32.1.0.1.0 A simple utility intended to be used to scan harddrives for the folders that take most place and display this to the user...
GLB Virtual Player Builder: v0.4.0 Beta: Allows for user to import and use archetypes for building players. The archetypes are contained in the file "archetypes.xml". This file is editab...
Google Map WebPart from SharePoint List: GMap Stable Release: GMap Stable Release
Henge3D Physics Library for XNA: Henge3D Source (2010-02 R2): Fixed a build error related to an assembly attribute in XBOX 360 builds. Tweaked the controls in the sample when targeting the 360. Reduced the...
Indexer: Beta Release 1: Just the initial/rough cut.
NukeCS: NukeCS 5.2.3 Source Code: update version to 5.2.3
ODOS: ODOS STABLE 1.5.0: Thank you for your patience while we develop this version. Not that much has been added, though. Just doing some sub-conscious stuff to make life...
PoshBoard: PoshBoard 3.0 Beta 1: Welcome to the first beta release of PoshBoard 3.0! IMPORTANT WARNING: this release is absolutly not feature complete and is error-prone. Okay, ...
Restart SQL Audit Policy and Job: Restart SQL 2008 Audit Policy and Job: This folder contains three pieces of source code: Server Audit Status (Started).xml - Import this on-schedule policy into your server's Policy-Ba...
SAL- Self Artificial Learning: Artificial Learning 2AQV Working Proof Of Concept: This is the Simulation proof of concept version that comes after the 1aq version. AQ stands for Anwering Questions.
SharePoint 2010 Word Automation: SP 2010 Word Automation - Workflow Actions 1.1: This release includes two new custom workflow activities for SharePoint designer Convert Folder Convert Library More information about these new...
SharePoint Outlook Connector: Version 1.0.1.1: Exception Logging has been improved.
Sharpy: Sharpy 1.2 Alpha: This is the third Sharpy release. A change has been made to allow overriding the master page from the controller. The release contains the single ...
Silverlight Google Search Application: SL Google Search App Alpha: This is just a first alpha version of the application, as it looks like when I uploaded it to CodePlex. The application works, requires Silverlight...
Starter Kit Mytrip.Mvc.Entity: Mytrip.Mvc.Entity 1.0 RC: EF Membership UserManager FileManager Localization Captcha ClientValidation Theme CrossBrowser VS 2010 RC MVC 2 RC db MSSQL2008
thinktecture WSCF.blue: WSCF.blue V1 Update (1.0.6): This release is an update for WSCF.blue V1. Below are the bug fixes made since the V1.0.5 release: The data contract type filter was not including...
TS3QueryLib.Net: TS3QueryLib.Net Version 0.18.13.0: Changelog Added overloads to all methods of QueryRunenr class handling permission tasks to allow passing of permission name instead of permissionid...
Umbraco CMS: Umbraco 4.1 Beta 2: This is the second beta of Umbraco 4.1. Umbraco 4.1 is more advanced than ever, yet faster, lighter and simpler to use than ever. We, on behalf of...
VCC: Latest build, v2.1.30218.0: Automatic drop of latest build
Zack's Fiasco - Code Generated DAL: v1.2.4: Enhancements: SQL Server CRUD Stored Procedures added option for USE <db> added option to create or not create INSERT sprocs added option to cr...

Most Popular Projects

Rawr
WBFS Manager
AJAX Control Toolkit
Microsoft SQL Server Product Samples: Database
Silverlight Toolkit
Windows Presentation Foundation (WPF)
Image Resizer Powertoy Clone for Windows
ASP.NET
Microsoft SQL Server Community & Samples
DotNetNuke® Community Edition

Most Active Projects

Rawr
Sharpy
DinnerNow.net
BlogEngine.NET
jQuery Library for SharePoint Web Services
NB_Store - Free DotNetNuke Ecommerce Catalog Module
patterns & practices – Enterprise Library
PHPExcel
Facebook Developer Toolkit
Fluent Ribbon Control Suite

    Read the article

  • Parallelism in .NET – Part 15, Making Tasks Run: The TaskScheduler

    - by Reed
In my introduction to the Task class, I specifically mentioned that the Task class does not directly provide its own execution. In addition, I made a strong point that the Task class itself is not directly related to threads or multithreading. Rather, the Task class is used to implement our decomposition of tasks. Once we've implemented our tasks, we need to execute them. In the Task Parallel Library, the execution of Tasks is handled via an instance of the TaskScheduler class.

The TaskScheduler class is an abstract class which provides a single function: it schedules tasks and executes them within an appropriate context. This is the class which actually runs individual Task instances. The .NET Framework provides two (internal) implementations of the TaskScheduler class.

Since a Task, based on our decomposition, should be a self-contained piece of code, parallel execution makes sense when executing tasks. The default implementation of the TaskScheduler class, and the one most often used, is based on the ThreadPool. It can be retrieved via the TaskScheduler.Default property, and is, by default, what is used when we just start a Task instance with Task.Start().

Normally, when a Task is started by the default TaskScheduler, the task is treated as a single work item and run on a ThreadPool thread. This pools tasks, and gives Task instances all of the advantages of the ThreadPool, including thread pooling for reduced resource usage and an upper cap on the number of work items. In addition, .NET 4 brings us a much improved thread pool, providing work stealing and reduced locking within the thread pool queues. By using the default TaskScheduler, our Tasks are run asynchronously on the ThreadPool.

There is one notable exception to the above when using the default TaskScheduler. If a Task is created with TaskCreationOptions set to TaskCreationOptions.LongRunning, the default TaskScheduler will generate a new thread for that Task, at least in the current implementation. This is useful for Tasks which will persist for most of the lifetime of your application, since it prevents your Task from starving the ThreadPool of one of its worker threads.

The Task Parallel Library provides one other implementation of the TaskScheduler class. In addition to providing a way to schedule tasks on the ThreadPool, the framework allows you to create a TaskScheduler which works within a specified SynchronizationContext. This scheduler can be retrieved, within a thread that provides a valid SynchronizationContext, by calling the TaskScheduler.FromCurrentSynchronizationContext() method.

This implementation of TaskScheduler is intended for use in user interface development. Windows Forms and Windows Presentation Foundation both require that any access to user interface controls occur on the same thread that created the control. For example, if you want to set the text within a Windows Forms TextBox while working on a background thread, that UI call must be marshaled back onto the UI thread. The most common way this is handled depends on the framework being used: in Windows Forms, Control.Invoke or Control.BeginInvoke is most often used; in WPF, the equivalent calls are Dispatcher.Invoke or Dispatcher.BeginInvoke.

As an example, say we're working on a background thread and we want to update a TextBlock in our user interface with a status label. The code would typically look something like:

// Within background thread work...
string status = GetUpdatedStatus();
Dispatcher.BeginInvoke(DispatcherPriority.Normal, new Action(() =>
{
    statusLabel.Text = status;
}));
// Continue on in background method

This works fine, but it forces your method to take a dependency on WPF or Windows Forms. There is an alternative option, however. Both Windows Forms and WPF, when initialized, set up a SynchronizationContext in their thread, which is available on the UI thread via the SynchronizationContext.Current property. This context is used by classes such as BackgroundWorker to marshal calls back onto the UI thread in a framework-agnostic manner.

The Task Parallel Library provides the same functionality via the TaskScheduler.FromCurrentSynchronizationContext() method. When setting up our Tasks, as long as we're working on the UI thread, we can construct a TaskScheduler via:

TaskScheduler uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();

We can then use this scheduler on any thread to marshal data back onto the UI thread. For example, our code above can be rewritten as:

string status = GetUpdatedStatus();
(new Task(() =>
{
    statusLabel.Text = status;
})).Start(uiScheduler);
// Continue on in background method

This is nice since it allows us to write code that isn't tied to Windows Forms or WPF, but is still fully functional with those technologies. I'll discuss even more uses for the SynchronizationContext-based TaskScheduler when I demonstrate task continuations, but even without continuations, this is a very useful construct.

In addition to the two implementations provided by the Task Parallel Library, it is possible to implement your own TaskScheduler. The ParallelExtensionsExtras project within the Samples for Parallel Programming provides nine sample TaskScheduler implementations. These include schedulers which restrict the maximum number of concurrent tasks, run tasks on a single threaded apartment thread, use a new thread per task, and more.
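To give a feel for what implementing one looks like, here's a minimal hedged sketch of a concurrency-limiting scheduler, loosely modeled on the LimitedConcurrencyLevelTaskScheduler sample from ParallelExtensionsExtras. This is a simplified illustration, not the actual sample code (for instance, it declines to inline previously queued tasks):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public class LimitedConcurrencyTaskScheduler : TaskScheduler
{
    private readonly LinkedList<Task> _tasks = new LinkedList<Task>(); // guarded by lock(_tasks)
    private readonly int _maxDegreeOfParallelism;
    private int _workersQueuedOrRunning;

    public LimitedConcurrencyTaskScheduler(int maxDegreeOfParallelism)
    {
        if (maxDegreeOfParallelism < 1) throw new ArgumentOutOfRangeException("maxDegreeOfParallelism");
        _maxDegreeOfParallelism = maxDegreeOfParallelism;
    }

    public override int MaximumConcurrencyLevel
    {
        get { return _maxDegreeOfParallelism; }
    }

    protected override void QueueTask(Task task)
    {
        // Queue the task; spin up another worker only if we're under the limit.
        lock (_tasks)
        {
            _tasks.AddLast(task);
            if (_workersQueuedOrRunning < _maxDegreeOfParallelism)
            {
                ++_workersQueuedOrRunning;
                ThreadPool.QueueUserWorkItem(ProcessQueuedTasks, null);
            }
        }
    }

    private void ProcessQueuedTasks(object state)
    {
        // Drain the queue, executing one task at a time on this ThreadPool thread.
        while (true)
        {
            Task item;
            lock (_tasks)
            {
                if (_tasks.Count == 0)
                {
                    --_workersQueuedOrRunning;
                    return;
                }
                item = _tasks.First.Value;
                _tasks.RemoveFirst();
            }
            TryExecuteTask(item);
        }
    }

    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
    {
        // Keep the sketch simple: only inline tasks that were never queued.
        return !taskWasPreviouslyQueued && TryExecuteTask(task);
    }

    protected override IEnumerable<Task> GetScheduledTasks()
    {
        lock (_tasks) return _tasks.ToArray();
    }
}

Using it is just a matter of handing an instance to a TaskFactory - for example, var factory = new TaskFactory(new LimitedConcurrencyTaskScheduler(2)); - after which tasks started via factory.StartNew(...) run at most two at a time.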

    Read the article

  • Why can't I reinstall MySQL?

    - by Johannes Nielsen
I've been looking all around the Internet for an answer but didn't find anything. I hope you can help me now. I have a server with MySQL. From one day to the next, MySQL stopped letting me in with my root password (Access denied for user 'root'@'localhost' (using password: YES)). So I tried two ways to reset the password.

No. 1: I typed

shell> /etc/init.d/mysqld stop

to stop MySQL. Then I restarted it, skipping the grant tables:

shell> mysqld_safe --skip-grant-tables

So I was able to log in as root and change the password using:

mysql> UPDATE mysql.user SET Password = PASSWORD('MyNewPassword') WHERE User = 'root';
mysql> FLUSH PRIVILEGES;

I restarted MySQL and tried to log in as root with my new password - didn't work. So I tried the solution that's described here: http://dev.mysql.com/doc/refman/5.0/en/resetting-permissions.html (I don't want to post it here because this post is already pretty long). That didn't work either. Actually it made things worse, because since that day, every time I try to connect to MySQL, it doesn't even ask me for my password; instead I get:

shell> ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)

I've looked up what that means and found that my mysqld.sock is missing. I tried to create it using touch, but MySQL can't start with that socket. Now I'm trying to reinstall MySQL, but every time I type

shell> apt-get --purge remove mysql-server mysql-common mysql-client

in that or any other order, or with any one of those three alone, I get:

shell> Reading package lists... Done
shell> Building dependency tree
shell> Reading state information... Done
shell> Package mysql-client is not installed, so not removed
shell> Package mysql-server is not installed, so not removed
shell> You might want to run 'apt-get -f install' to correct these:
shell> The following packages have unmet dependencies:
shell> libmysqlclient18 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
shell> libmysqlclient18:i386 : Depends: mysql-common:i386 (>= 5.5.28-0ubuntu0.12.04.2)
shell> mysql-client-5.5 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
shell> mysql-server-5.5 : PreDepends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
shell> psa-firewall : Depends: plesk-core (>= 11.0.9) but it is not installable
shell> Depends: mysql-server but it is not going to be installed
shell> psa-spamassassin : Depends: plesk-core (>= 11.0.9) but it is not installable
shell> psa-vpn : Depends: plesk-core (>= 11.0.9) but it is not installable
shell> Depends: plesk-base (>= 11.0.9) but it is not installable
shell> Depends: mysql-server but it is not going to be installed
shell> E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

So I said to myself, "let's just remove those packages with dependencies, too" (the psa-stuff, since Plesk is virtual and can't be uninstalled)... Guess what happened:

shell> Reading package lists... Done
shell> Building dependency tree
shell> Reading state information... Done
shell> Package mysql-client is not installed, so not removed
shell> Package mysql-server is not installed, so not removed
shell> You might want to run 'apt-get -f install' to correct these:
shell> The following packages have unmet dependencies:
shell> libmysqlclient18 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
shell> libmysqlclient18:i386 : Depends: mysql-common:i386 (>= 5.5.28-0ubuntu0.12.04.2)
shell> mysql-client-5.5 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
shell> mysql-server-5.5 : PreDepends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
shell> E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

Of course I tried apt-get -f install, too - many times, even. What am I doing wrong? No matter which other packages I include in apt-get --purge remove, I always get new dependencies. Do I have to delete every MySQL-related directory and file manually? I hope there's someone out there who can help me! Cheers!

EDIT: After trying

apt-get purge mysql-server mysql-common mysql-client libmysqlclient18 libmysqlclient18:i386 mysql-client-5.5 mysql-server-5.5 psa-firewall psa-spamassassin psa-vpn

I got:

Reading package lists... Done
Building dependency tree
Reading state information... Done
Package mysql-client is not installed, so not removed
Package mysql-server is not installed, so not removed
You might want to run 'apt-get -f install' to correct these:
The following packages have unmet dependencies:
libdbd-mysql-perl : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed
libmyodbc : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed
libqt4-sql-mysql:i386 : Depends: libmysqlclient18:i386 (>= 5.5.13-1) but it is not going to be installed
php5-mysql : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed
ruby-mysql : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

So I tried to remove all of those as well and got:

Building dependency tree
Reading state information... Done
Package mysql-client is not installed, so not removed
Package mysql-server is not installed, so not removed
You might want to run 'apt-get -f install' to correct these:
The following packages have unmet dependencies:
libmysql-ruby1.8 : Depends: ruby-mysql but it is not going to be installed
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

And actually, I think removing that file, too, solved my problem :-S Next time I'll try everything before asking :D Thank you Eric for keeping me encouraged to just go on removing :D

    Read the article

< Previous Page | 553 554 555 556 557 558 559 560 561 562 563 564  | Next Page >