Search Results

Search found 22177 results on 888 pages for 'dell studio xps 16'.


  • jQuery + Perl CGI to vb.net transition

    - by user1257458
    I've been developing Oracle database-heavy "web applications" forever by building my HTML by hand, adding some jQuery to handle ajax requests (HTML inserts for forms processing etc.), and always doing my server-side stuff in Perl CGI. I really love how easy it is to read some form input, execute some select statements through DBI (SO EASY), and generate HTML to be inserted by the jQuery request. That's a web application to me. However, my new boss builds everything in Visual Studio 2010, VB.NET, usually WebForms. So, for work reasons, I now need to start developing in VB.NET so it can be collectively maintained, and I'm just seeking advice on where to start learning and how to approach this. I know I could at least learn ASP.NET and VB.NET, create a webform, have it read parameters, return HTML, etc., which would allow me to use my previously written HTML and client-side scripts (jQuery). However, since we're moving heavily to mobile applications, I really need to reduce the client-side processing load. Is there any advantage to my boss's method? Thanks a ton.
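    (For comparison, here is a minimal ASP.NET MVC sketch of the same pattern the question describes: read a form value, run a query, and hand an HTML fragment back to jQuery. The controller, action and markup below are made up for illustration, not taken from any real project, and the data access is stubbed out.)

    using System.Web.Mvc;

    // Hypothetical controller: the ASP.NET MVC counterpart of a Perl CGI script
    // that reads form input and returns an HTML fragment for jQuery to insert.
    public class LookupController : Controller
    {
        [HttpPost]
        public ActionResult Rows(string customerName)
        {
            // Stand-in for the DBI-style select; swap in real data access here.
            string safeName = Server.HtmlEncode(customerName ?? string.Empty);
            string fragment = "<tr><td>" + safeName + "</td></tr>";

            // Return only the fragment; on the client, something like
            // $.post(url, data, function (html) { $("#rows").append(html); })
            // drops it into the existing page.
            return Content(fragment, "text/html");
        }
    }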

    Read the article

  • Using Sandcastle to build code contracts documentation

    - by DigiMortal
    In my last posting about code contracts I showed how code contracts are documented in XML documents. In this posting I will show you how to get code contracts documented with Sandcastle and Sandcastle Help File Builder. Before we start, let's download the Sandcastle tools we need: Sandcastle and Sandcastle Help File Builder. Install Sandcastle first and then Sandcastle Help File Builder. Because we are generating only HTML-based documentation that we upload to a server, we don't need any other tools. Of course, we need Cassini or IIS, but I expect one of those to already be on your machine. Open your project and turn on XML documentation for the project and for contracts. Now let's run Sandcastle Help File Builder. We have to create a new project and add our Visual Studio solution to this project. Now set the HelpFileFormat parameter value to Website and let the builder build the help. You have to wait about two or three minutes until the help is ready. Take a look at the documentation that Sandcastle generated – you will not see much information there about code contracts and their rules. Enabling code contracts documentation Now let's include code contracts in the documentation. Follow these steps: Open the Sandcastle folder and make a copy of the vs2005 folder. Open the CodeContracts folder (c:\program files\microsoft\contracts\) and unzip the archive from the sandcastle folder. Copy all unzipped files to the Sandcastle folder. Create (yes, create new) and build your Sandcastle Help File Builder documentation project again. Open the help. In my case I see something like this now. As you can see, contracts are documented pretty well. We can easily turn on code contracts XML documentation generation and all our contracts are documented automatically. To get the documentation to work we had to use the Sandcastle help file fixes that are installed with Code Contracts, and if we already had a Sandcastle Help File Builder project we had to create it again from scratch to get the new rules accepted. Once the documentation support for contracts works, we have to do nothing more to get contracts documented.
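    As an illustration (my own sketch, not code from the original post), here is the kind of member the generated help will describe once XML documentation and contracts are both turned on: a documented method with a precondition and a postcondition.

    using System;
    using System.Diagnostics.Contracts;

    public class Calculator
    {
        /// <summary>Divides one integer by another.</summary>
        /// <param name="dividend">The value to divide.</param>
        /// <param name="divisor">The divisor; must not be zero.</param>
        /// <returns>The integer quotient.</returns>
        public int Divide(int dividend, int divisor)
        {
            // Both the precondition and the postcondition end up in the contracts
            // XML file, and Sandcastle (with the fixes described above) renders
            // them in the generated topic for this method.
            Contract.Requires<ArgumentException>(divisor != 0, "Divisor must not be zero");
            Contract.Ensures(Contract.Result<int>() == dividend / divisor);

            return dividend / divisor;
        }
    }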

    Read the article

  • Automate #include refactoring in C++ [on hold]

    - by Mikhail
    I have a big project with hundreds of files, and as often happens to C++ projects, the #include directives are messed up. I want to refactor them to increase clarity, decrease compilation time and simplify analysis. For each .h file I want to make sure that: it has #include directives only for the types it is using, but only forward declarations for types that are used as T* or T&. For each .cpp file I want to make sure that: it has #include directives only for the types it is using and that are not already included by other headers (no indirect includes where possible). I'm looking for a tool which will help me automate this refactoring. For now I only know of tools that help remove redundant includes, and there are many: PC-lint, include-what-you-use, cppclean, ProFactor IncludeManager. But I know of no tools to help me move necessary includes into .h files or replace includes with forward declarations. Any ideas? Tools for Windows and Visual Studio are preferred. Update. Considered to be off-topic. Please follow the link on Software Recommendations: http://softwarerecs.stackexchange.com/q/4461/3331

    Read the article

  • How do i rotate a 2D picturebox according to my mouse

    - by Gwenda.T
    I want to rotate my picturebox, which contains an image. The image should just spin around to follow the mouse, while the position of the image stays fixed. Any idea how this should be done? By the way, I'm using Visual Studio 2012, C#, and a Windows Phone application project for Windows Phone 8. I've done a little research on Google, but the code samples I found were for WinForms in VS2012, and since my situation is different I'm not able to use their code. So I was hoping I could find some answers here! Currently I have this: private void arrowHead_Tap(object sender, System.Windows.Input.GestureEventArgs e) { Duration Time_duration = new Duration(TimeSpan.FromSeconds(0.5)); Storyboard MyStory = new Storyboard(); MyStory.Duration = Time_duration; DoubleAnimation My_Double = new DoubleAnimation(); My_Double.Duration = Time_duration; MyStory.Children.Add(My_Double); RotateTransform MyTransform = new RotateTransform(); Storyboard.SetTarget(My_Double, MyTransform); Storyboard.SetTargetProperty(My_Double, new PropertyPath("Angle")); My_Double.To = 15; arrowHead.RenderTransform = MyTransform; arrowHead.RenderTransformOrigin = new Point(0.5, 0.5); //stackPanel1.Children.Add(image1); MyStory.Begin(); } This is my only way of tilting the image, however now I want it to follow my mouse! Thanks!
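    One way to approach it (a rough sketch only, assuming arrowHead sits in a Canvas and that the Tap or ManipulationDelta handler can give you the touch position relative to that Canvas, e.g. via e.GetPosition(...)) is to compute the angle with Math.Atan2 and feed it into the RotateTransform instead of the hard-coded 15 degrees:

    // Sketch: turn arrowHead so it points at the given touch position.
    // Assumes arrowHead is positioned with Canvas.Left/Canvas.Top and that
    // touchPoint is relative to the same Canvas.
    private void RotateTowards(Point touchPoint)
    {
        double centerX = Canvas.GetLeft(arrowHead) + arrowHead.ActualWidth / 2;
        double centerY = Canvas.GetTop(arrowHead) + arrowHead.ActualHeight / 2;

        // Angle from the image centre to the touch point, in degrees.
        double angle = Math.Atan2(touchPoint.Y - centerY, touchPoint.X - centerX) * 180 / Math.PI;

        arrowHead.RenderTransformOrigin = new Point(0.5, 0.5);
        arrowHead.RenderTransform = new RotateTransform { Angle = angle };
    }

    The existing Storyboard can stay if the turn should be animated rather than instant; just set My_Double.To to the computed angle instead of 15.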

    Read the article

  • Blend for Visual Studio 2013 Prototyping Applications with SketchFlow

    - by T
    Originally posted on: http://geekswithblogs.net/tburger/archive/2014/08/10/blend-for-visual-studio-2013-prototyping-applications-with-sketchflow.aspx SketchFlow enables rapid creation of dynamic interface mockups. The SketchFlow workspace is the same as the standard Blend workspace with the inclusion of three panels: the SketchFlow Feedback panel, the SketchFlow Animation panel and the SketchFlow Map panel. By using SketchFlow to prototype, you can get feedback early in the process. It helps to surface possible issues, lower development iterations, and increase stakeholder buy-in. SketchFlow prototypes not only provide an initial look but also provide a way to add additional ideas and input and make sure the team is on track prior to investing in complete development. When you have completed the prototyping, you can discard the prototype and just use the lessons learned to design the application, or extract individual elements from your prototype and include them in the application. I don't recommend trying to transition the entire project into a development project. Objects that you add with the SketchFlow style have a hand-sketched look. The sketch style is used to remind stakeholders that this is a prototype. This encourages them to focus on the flow and functionality without getting distracted by design details. The SketchFlow assets are under SketchFlow in the Assets panel and are identifiable by the suffix "-Sketch", for example "Button-Sketch". You can mix sketch and standard controls in your interface if required. Be creative: if a control is missing, or your interface has a different look and feel than the out-of-the-box one, reuse other sketch controls to mimic the functionality or look and feel. Only use standard controls if it doesn't distract from the idea that this is a prototype and not a standard application. The SketchFlow Map panel provides information about the structure of your application. To create a new screen in your prototype: right-click the map surface and choose "Create a Connected Screen". Name the screens with names that are meaningful to the stakeholders. The start screen is the one that has the green arrow. To change the start screen, right-click any other screen and set it as the start screen. Only one screen can be the start screen at a time. Rounded screens are component screens that mimic reusable custom controls that will be built into the final application. You can change the colors of all of the boxes and should use colors to create functional groupings. The groupings can be identified in the SketchFlow Project Settings. To add connections between screens in the SketchFlow Map panel, move the mouse over a screen in the map and a menu will appear at the bottom of the screen node. In the menu, click Connect to an existing screen, then drag the arrow to another screen on the map. You add navigation to your prototype by adding connections on the SketchFlow Map or by adding navigation directly to items on your interface. To add navigation from objects on the artboard, right-click the item and, from the menu, choose "Navigate to". This will expose a sub-menu with the available screens, backward, or forward. When the map has connected screens, the SketchFlow Player displays the connected screens on the Navigate sidebar. All screens show in the SketchFlow Player map. To see the SketchFlow Player, run your SketchFlow prototype. The Navigation sidebar is meant to show the desired user workflow.
The map can be used to view the different screens regardless of the suggested navigation in the navigation bar. The map can be hidden and shown. As mentioned, a component screen is a shared screen that is used in more than one screen and generally represents what will be a custom object in the application. To create a component screen, you can create a screen, right-click on it in the SketchFlow Map and choose "Make into component screen". You can also mouse over a screen and, from the menu that appears underneath, choose Create and insert component screen. To use an existing screen, select it from the Assets panel under SketchFlow, Components. You can use Storyboards and Visual State animations in your SketchFlow project. However, SketchFlow also offers its own animation technique that is simpler and better suited for prototyping. The SketchFlow Animation panel is above your artboard by default. In SketchFlow animation, you create frames and then position the elements on your interface for each frame. You then specify the elapsed time and any effects you want to apply to the transition. The + at the top is what creates new frames. Once you have a new frame, select it and change the property you want to animate. In the example above, I changed the Text of the result box. You can adjust the time between frames in the lower area between the frames. The easing and effects functions are changed in the center between each frame. You edit the hold time for frames by clicking the clock icon in the lower left; the hold time will then appear on each frame and can be edited. The FluidLayout icon (also located in the lower left) will create smooth transitions. Next to the FluidLayout icon is the name of that animation. You can rename the animation by clicking on it and editing the name. The down-arrow chevrons next to the name allow you to view the list of all animations in this prototype and select them for editing. To add the animation to an interface object (such as a button that starts the animation), select the PlaySketchFlowAnimationAction from the SketchFlow behaviors in the Assets menu and drag it to an object on your interface. With the PlaySketchFlowAnimationAction that you just added selected in the Objects and Timeline, edit the properties to change the EventName to the event you want and choose the SketchFlowAnimation you want from the drop-down list. You may want to add additional information to your screens that isn't really part of the prototype but is relevant information or a request for clarification or feedback from the reviewer. You do this with annotations or notes. Both appear on the user interface; however, annotations can be switched on or off at design and review time. Notes cannot be switched off. To add an annotation, choose Create Annotation from the Tools menu. The annotation appears on the UI where you will add the notes. To display or hide annotations, click the annotation toggle at the bottom right of the artboard. After you toggle annotations on, the identifier of the person who created them appears on the artboard and you must click that to expand the notes. To add a note to the artboard, simply select Note-Sketch from Assets -> SketchFlow -> Styles -> Sketch Styles, then drag it onto the artboard and place it where you want it. When you are ready for users to review the prototype, you have a few options available. Click File -> Export and choose one of the options from the list: Publish to SharePoint, Package SketchFlowProject, Export to Microsoft Word, or Export as Images.
I suggest you play with as many of the options as you can to see what they do. Both the SharePoint and packaged SketchFlow project options allow you to collect feedback from one or more users, which you can then import into the project. The user can make notes on the UI and in the Feedback area in the bottom left corner of the player. When the user is done adding feedback, it is exported from the rightmost folder icon in the My Feedback panel. Feedback is imported on a panel named SketchFlow Feedback. To get that panel to show up, select Window -> SketchFlow Feedback. Once you have the panel showing, click the + in the upper right of the panel and find the notes you exported. When imported, they will show up in a list and on the artboard. To document your prototype, use the Export to Microsoft Word option from the File menu. That should get you started with prototyping.

    Read the article

  • The file STDOLE2.TLB cannot be found or contains a Visual Basic for Applications library that is not

    - by Jim Birchall
    In Microsoft Project 2007 Professional, if I select Tools > Macro > Security, the following message appears: The file "STDOLE.TLB" cannot be found or contains a Visual Basic for Applications library that is not valid. Verify that the file name is correct, and try again. If the Visual Basic for Applications library is invalid, reinstall Project. I am a software developer and I have developed an Add-In for MS Project using Visual Studio 2005, with all the problems that entails. As such, my machine is configured with both Project 2003 Professional and Project 2007 Professional. I only noticed this error when trying to debug my Add-In. The Add-In loads and draws the menu, but when I click on the menu option I receive this error (which is also generated by the Tools > Macro > Security option mentioned above). I have tried repairing the Office installations and uninstalling everything and then re-installing everything all over again, but after several hours I still get the same problem. Does anyone have any idea how to resolve this? Some method of finding out what type libraries are registered with STDOLE2.TLB might help if I can identify what is causing the problem. A way of manually unregistering the offending library might also be helpful. My machine is configured as follows: Windows 7 Ultimate x64, Project 2003 Professional, Office 2007 Ultimate, Project 2007 Professional, Visio 2007 Professional, Visual Studio 2005 Team Edition for Developers, Visual Studio 2005 Tools for Office Second Edition, Visual Studio 2008 Professional. I also installed the MS Project 2010 Beta to test my Add-In against, but have since uninstalled it. I have a suspicion that this may have caused the problem, but I cannot be sure.

    Read the article

  • Will this increase my Virtual private Server failing rate ?

    - by Spencer Lim
    Will it increase my virtual private server's failure rate if I install all of the following on one mini VPS: Microsoft Windows Server 2008 Enterprise, SQL Server 2008 Enterprise, IIS 7.5, ASP.NET MVC 2, Microsoft Exchange (should it live inside the Windows Server 2008 instance, or standalone?), and Team Foundation Server (inside the same instance, or standalone?)? The host is a shared DELL PowerEdge R710 with 16GB of DDR3 ECC RAM (1GB allocated to this VPS), a DELL PERC 6i RAID controller (that part alone costs about 1.5k-2k) and 15K RPM 146GB SAS drives (33GB allocated to this VPS); each drive is very fast, with over 300MB/s read/write possible with proper tuning. The motherboard is a DELL with twin redundant PSUs (870W, 85% efficiency), and it runs two Intel Xeon 5502 (quad core) CPUs, so about 8 physical cores, fairly shared. Is there any rule of thumb for how many of these services a single VPS should host, given that my resources are limited? If everything is installed this way it seems to defeat the purpose of a "VPS", so what will happen? Any guidelines on fully configuring Windows Server 2008 R2 for this would be appreciated. Thanks for any reply.

    Read the article

  • Code Contracts: Unit testing contracted code

    - by DigiMortal
    Code contracts and unit tests are not replacements for each other. They have different purposes and different natures. It does not matter whether you are using code contracts or not – you still have to write tests for your code. In this posting I will show you how to unit test code with contracts. In my previous posting about code contracts I showed how to avoid the ContractExceptions that are defined in the code contracts runtime and that are not accessible to us at design time. This was one step further toward making my randomizer testable. In this posting I will complete the mission. Problems with current code This is my current code. public class Randomizer {     public static int GetRandomFromRangeContracted(int min, int max)     {         Contract.Requires<ArgumentOutOfRangeException>(             min < max,             "Min must be less than max"         );         Contract.Ensures(             Contract.Result<int>() >= min &&             Contract.Result<int>() <= max,             "Return value is out of range"         );         var rnd = new Random();         return rnd.Next(min, max);     } } As you can see, this code has some problems: the Randomizer class is static and cannot be instantiated, so we cannot move this class between components if we need to; and GetRandomFromRangeContracted() is not fully testable because we cannot currently affect the random number generator's output, and therefore we cannot test the post-condition. Now let's solve these problems. Making randomizer testable First, I made Randomizer a class that must be instantiated. This is a simple thing to do. Now let's solve the problem with the Random class. To make Randomizer testable I defined an IRandomGenerator interface and a RandomGenerator class. The public constructor of Randomizer accepts an IRandomGenerator as an argument. public interface IRandomGenerator {     int Next(int min, int max); }   public class RandomGenerator : IRandomGenerator {     private Random _random = new Random();     public int Next(int min, int max)     {         return _random.Next(min, max);     } } And here is our Randomizer after a total make-over. public class Randomizer {     private IRandomGenerator _generator;     private Randomizer()     {         _generator = new RandomGenerator();     }     public Randomizer(IRandomGenerator generator)     {         _generator = generator;     }     public int GetRandomFromRangeContracted(int min, int max)     {         Contract.Requires<ArgumentOutOfRangeException>(             min < max,             "Min must be less than max"         );         Contract.Ensures(             Contract.Result<int>() >= min &&             Contract.Result<int>() <= max,             "Return value is out of range"         );         return _generator.Next(min, max);     } } It may seem inconvenient to instantiate Randomizer now, but you can always use DI/IoC containers and break compiled dependencies between the components of your system. Writing tests for randomizer IRandomGenerator solved the problem of testing the post-condition. Now it is time to write tests for the Randomizer class. Writing tests for contracted code is not easy. The main problem is still the ContractException that we are not able to access, yet it is the main exception we get as soon as contracts fail. Although pre-conditions are able to throw exceptions of the type we want, we cannot do much when post-conditions fail. We have to use the Contract.ContractFailed event, and this event is called for every contract failure.
This way we find ourselves in a situation where supporting the input interface well makes it impossible to support the output interface well, and vice versa. ContractFailed is a nasty hack and it works in a pretty weird way. Although the documentation says that ContractFailed is a good choice for testing contracts, it is still pretty painful. As a last resort I got the tests working almost normally when I wrapped them up. Can you remember a similar solution from the times of Visual Studio 2008 unit tests? I cannot understand how Microsoft was able to mess up testing again. [TestClass] public class RandomizerTest {     private Mock<IRandomGenerator> _randomMock;     private Randomizer _randomizer;     private string _lastContractError;     public TestContext TestContext { get; set; }     public RandomizerTest()     {         Contract.ContractFailed += (sender, e) =>         {             e.SetHandled();             e.SetUnwind();             throw new Exception(e.FailureKind + ": " + e.Message);         };     }     [TestInitialize()]     public void RandomizerTestInitialize()     {         _randomMock = new Mock<IRandomGenerator>();         _randomizer = new Randomizer(_randomMock.Object);         _lastContractError = string.Empty;     }     #region InputInterfaceTests     [TestMethod]     [ExpectedException(typeof(Exception))]     public void GetRandomFromRangeContracted_should_throw_exception_when_min_is_not_less_than_max()     {         try         {             _randomizer.GetRandomFromRangeContracted(100, 10);         }         catch (Exception ex)         {             throw new Exception(string.Empty, ex);         }     }     [TestMethod]     [ExpectedException(typeof(Exception))]     public void GetRandomFromRangeContracted_should_throw_exception_when_min_is_equal_to_max()     {         try         {             _randomizer.GetRandomFromRangeContracted(10, 10);         }         catch (Exception ex)         {             throw new Exception(string.Empty, ex);         }     }     [TestMethod]     public void GetRandomFromRangeContracted_should_work_when_min_is_less_than_max()     {         int minValue = 10;         int maxValue = 100;         int returnValue = 50;         _randomMock.Setup(r => r.Next(minValue, maxValue))             .Returns(returnValue)             .Verifiable();         var result = _randomizer.GetRandomFromRangeContracted(minValue, maxValue);         _randomMock.Verify();         Assert.AreEqual<int>(returnValue, result);     }     #endregion     #region OutputInterfaceTests     [TestMethod]     [ExpectedException(typeof(Exception))]     public void GetRandomFromRangeContracted_should_throw_exception_when_return_value_is_less_than_min()     {         int minValue = 10;         int maxValue = 100;         int returnValue = 7;         _randomMock.Setup(r => r.Next(10, 100))             .Returns(returnValue)             .Verifiable();         try         {             _randomizer.GetRandomFromRangeContracted(minValue, maxValue);         }         catch (Exception ex)         {             throw new Exception(string.Empty, ex);         }         _randomMock.Verify();     }     [TestMethod]     [ExpectedException(typeof(Exception))]     public void GetRandomFromRangeContracted_should_throw_exception_when_return_value_is_more_than_max()     {         int minValue = 10;         int maxValue = 100;         int returnValue = 102;         _randomMock.Setup(r => r.Next(10, 100))             .Returns(returnValue)             .Verifiable();         try         {
 _randomizer.GetRandomFromRangeContracted(minValue, maxValue);         }         catch (Exception ex)         {             throw new Exception(string.Empty, ex);         }         _randomMock.Verify();     }     #endregion } Although these tests are pretty awful and contain hacks, we are at least now able to make sure that our code works as expected. Here is the test list after running these tests. Conclusion Code contracts are very new stuff in the Visual Studio world, and as a young technology they have some problems – like all other new bits and bytes in the world. As you saw, making our contracted code testable is easy only as long as pre-conditions are considered. When we start dealing with post-conditions we end up with hacked tests. I hope that future versions of code contracts will solve the error handling issues so that testing contracted code becomes easier than it is right now.

    Read the article

  • How to create a new WCF/MVC/jQuery application from scratch

    - by pjohnson
    As a corporate developer by trade, I don't get much opportunity to create from-the-ground-up web sites; usually it's tweaks, fixes, and new functionality to existing sites. And with hobby sites, I often don't find the challenges I run into with enterprise systems; usually it's starting from Visual Studio's boilerplate project and adding whatever functionality I want to play around with, rarely deploying outside my own machine. So my experience creating a new enterprise-level site was a bit dated, and the technologies to do so have come a long way, and are much more ready to go out of the box. My intention with this post isn't so much to provide any groundbreaking insights, but to just tie together a lot of information in one place to make it easy to create a new site from scratch. Architecture One site I created earlier this year had an MVC 3 front end and a WCF 4-driven service layer. Using Visual Studio 2010, these project types are easy enough to add to a new solution. I created a third Class Library project to store common functionality the front end and services layers both needed to access, for example, the DataContract classes that the front end uses to call services in the service layer. By keeping DataContract classes in a separate project, I avoided the need for the front end to have an assembly/project reference directly to the services code, a bit cleaner and more flexible of an SOA implementation. Consuming the service Even by this point, VS has given you a lot. You have a working web site and a working service, neither of which do much but are great starting points. To wire up the front end and the services, I needed to create proxy classes and WCF client configuration information. I decided to use the SvcUtil.exe utility provided as part of the Windows SDK, which you should have installed if you installed VS. VS has also provided an Add Service Reference command since the .NET 1.x ASMX days, which I've never really liked; it creates several .cs/.disco/etc. files, some of which contained hardcoded URLs, adding duplicate files (*1.cs, *2.cs, etc.) without doing a good job of cleaning up after itself. I've found SvcUtil much cleaner, as it outputs one C# file (containing several proxy classes) and a config file with settings, and it's easier to use to regenerate the proxy classes when the service changes, and to then maintain all your configuration in one place (your Web.config, instead of the Service Reference files). I provided it a reference to a copy of my common assembly so it doesn't try to recreate the data contract classes, had it use the type List<T> for collections, and modified the output files' names and .NET namespace, ending up with a command like: svcutil.exe /l:cs /o:MyService.cs /config:MyService.config /r:MySite.Common.dll /ct:System.Collections.Generic.List`1 /n:*,MySite.Web.ServiceProxies http://localhost:59999/MyService.svc I took the generated MyService.cs file and dropped it in the web project, under a ServiceProxies folder, matching the namespace and keeping it separate from classes I coded manually. Integrating the config file took a little more work, but only needed to be done once as these settings didn't often change. A great thing Microsoft improved with WCF 4 is configuration; namely, you can use all the default settings and not have to specify them explicitly in your config file. Unfortunately, SvcUtil doesn't generate its config file this way.
If you just copy & paste MyService.config's contents into your front end's Web.config, you'll copy a lot of settings you don't need, plus this will get unwieldy if you add more services in the future, each with its own custom binding. Really, as the only mandatory settings are the endpoint's ABC's (address, binding, and contract) you can get away with just this: <system.serviceModel>  <client>    <endpoint address="http://localhost:59999/MyService.svc" binding="wsHttpBinding" contract="MySite.Web.ServiceProxies.IMyService" />  </client></system.serviceModel> By default, the services project uses basicHttpBinding. As you can see, I switched it to wsHttpBinding, a more modern standard. Using something like netTcpBinding would probably be faster and more efficient since the client & service are both written in .NET, but it requires additional server setup and open ports, whereas switching to wsHttpBinding is much simpler. From an MVC controller action method, I instantiated the client, and invoked the method for my operation. As with any object that implements IDisposable, I wrapped it in C#'s using() statement, a tidy construct that ensures Dispose gets called no matter what, even if an exception occurs. Unfortunately there are problems with that, as WCF's ClientBase<TChannel> class doesn't implement Dispose according to Microsoft's own usage guidelines. I took an approach similar to Technology Toolbox's fix, except using partial classes instead of a wrapper class to extend the SvcUtil-generated proxy, making the fix more seamless from the controller's perspective, and theoretically, less code I have to change if and when Microsoft fixes this behavior. User interface The MVC 3 project template includes jQuery and some other common JavaScript libraries by default. I updated the ones I used to the latest versions using NuGet, available in VS via the Tools > Library Package Manager > Manage NuGet Packages for Solution... > Updates. I also used this dialog to remove packages I wasn't using. Given that it's smart enough to know the difference between the .js and .min.js files, I was hoping it would be smart enough to know which to include during build and publish operations, but this doesn't seem to be the case. I ended up using Cassette to perform the minification and bundling of my JavaScript and CSS files; ASP.NET 4.5 includes this functionality out of the box. The web client to web server link via jQuery was easy enough. In my JavaScript function, unobtrusively wired up to a button's click event, I called $.ajax, corresponding to an action method that returns a JsonResult, accomplished by passing my model class to the Controller.Json() method, which jQuery helpfully translates from JSON to a JavaScript object.$.ajax calls weren't perfectly straightforward. I tried using the simpler $.post method instead, but ran into trouble without specifying the contentType parameter, which $.post doesn't have. The url parameter is simple enough, though for flexibility in how the site is deployed, I used MVC's Url.Action method to get the URL, then sent this to JavaScript in a JavaScript string variable. If the request needed input data, I used the JSON.stringify function to convert a JavaScript object with the parameters into a JSON string, which MVC then parses into strongly-typed C# parameters. I also specified "json" for dataType, and "application/json; charset=utf-8" for contentType. For success and error, I provided my success and error handling functions, though success is a bit hairier. 
"Success" in this context indicates whether the HTTP request succeeds, not whether what you wanted the AJAX call to do on the web server was successful. For example, if you make an AJAX call to retrieve a piece of data, the success handler will be invoked for any 200 OK response, and the error handler will be invoked for failed requests, e.g. a 404 Not Found (if the server rejected the URL you provided in the url parameter) or 500 Internal Server Error (e.g. if your C# code threw an exception that wasn't caught). If an exception was caught and handled, or if the data requested wasn't found, this would likely go through the success handler, which would need to do further examination to verify it did in fact get back the data for which it asked. I discuss this more in the next section. Logging and exception handling At this point, I had a working application. If I ran into any errors or unexpected behavior, debugging was easy enough, but of course that's not an option on public web servers. Microsoft Enterprise Library 5.0 filled this gap nicely, with its Logging and Exception Handling functionality. First I installed Enterprise Library; NuGet as outlined above is probably the best way to do so. I needed a total of three assembly references--Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, and Microsoft.Practices.EnterpriseLibrary.Logging. VS links with the handy Enterprise Library 5.0 Configuration Console, accessible by right-clicking your Web.config and choosing Edit Enterprise Library V5 Configuration. In this console, under Logging Settings, I set up a Rolling Flat File Trace Listener to write to log files but not let them get too large, using a Text Formatter with a simpler template than that provided by default. Logging to a different (or additional) destination is easy enough, but a flat file suited my needs. At this point, I verified it wrote as expected by calling the Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write method from my C# code. With those settings verified, I went on to wire up Exception Handling with Logging. Back in the EntLib Configuration Console, under Exception Handling, I used a LoggingExceptionHandler, setting its Logging Category to the category I already had configured in the Logging Settings. Then, from code (e.g. a controller's OnException method, or any action method's catch block), I called the Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.HandleException method, providing the exception and the exception policy name I had configured in the Exception Handling Settings. Before I got this configured correctly, when I tried it out, nothing was logged. In working with .NET, I'm used to seeing an exception if something doesn't work or isn't set up correctly, but instead working with these EntLib modules reminds me more of JavaScript (before the "use strict" v5 days)--it just does nothing and leaves you to figure out why, I presume due in part to the listener pattern Microsoft followed with the Enterprise Library. First, I verified logging worked on its own. Then, verifying/correcting where each piece wires up to the next resolved my problem. 
Your C# code calls into the Exception Handling module, referencing the policy you pass the HandleException method; that policy's configuration contains a LoggingExceptionHandler that references a logCategory; that logCategory should be added in the loggingConfiguration's categorySources section; that category references a listener; that listener should be added in the loggingConfiguration's listeners section, which specifies the name of the log file. One final note on error handling, as the proper way to handle WCF and MVC errors is a whole other very lengthy discussion. For AJAX calls to MVC action methods, depending on your configuration, an exception thrown here will result in ASP.NET'S Yellow Screen Of Death being sent back as a response, which is at best unnecessarily and uselessly verbose, and at worst a security risk as the internals of your application are exposed to potential hackers. I mitigated this by overriding my controller's OnException method, passing the exception off to the Exception Handling module as above. I created an ErrorModel class with as few properties as possible (e.g. an Error string), sending as little information to the client as possible, to both maximize bandwidth and mitigate risk. I then return an ErrorModel in JSON format for AJAX requests: if (filterContext.HttpContext.Request.IsAjaxRequest()){    filterContext.Result = Json(new ErrorModel(...));    filterContext.ExceptionHandled = true;} My $.ajax calls from the browser get a valid 200 OK response and go into the success handler. Before assuming everything is OK, I check if it's an ErrorModel or a model containing what I requested. If it's an ErrorModel, or null, I pass it to my error handler. If the client needs to handle different errors differently, ErrorModel can contain a flag, error code, string, etc. to differentiate, but again, sending as little information back as possible is ideal. Summary As any experienced ASP.NET developer knows, this is a far cry from where ASP.NET started when I began working with it 11 years ago. WCF services are far more powerful than ASMX ones, MVC is in many ways cleaner and certainly more unit test-friendly than Web Forms (if you don't consider the code/markup commingling you're doing again), the Enterprise Library makes error handling and logging almost entirely configuration-driven, AJAX makes a responsive UI more feasible, and jQuery makes JavaScript coding much less painful. It doesn't take much work to get a functional, maintainable, flexible application, though having it actually do something useful is a whole other matter.

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part III)

    The last post ended with us just getting started on stumbling into text template file customization, a task that required a Visual Studio extension (Tangible T4 Editor) to even have a chance at completing.  Despite the benefits of the Tangible T4 Editor, I still had a hard time putting together a solid text template that would be easy to explain.  This is mostly due to the way the files allow you to mix code (encapsulated in <# #>) with straight-up text to generate.  It is effective to be sure, but not very readable.  Nevertheless, I will try and explain what was accomplished in my custom tt file, though the details are not really the point of this article (my way of saying: don't criticize my crappy code, and certainly don't use it in any somewhat real application. You may become dumber just by looking at this code. You have been warned. Really, that's the footnote I should put at the end of all of my blog posts). To begin with, there were two basic requirements that I needed the code generator to satisfy: reading one to many Entity Framework files, and using the entities that were found to write one to many class files.  Thankfully, using the Entity Object Generator as a starting point gave us an example of how to do exactly that using the MetadataLoader and EntityFrameworkTemplateFileManager: you include references to these items and use them like so: // Instantiate an entity framework file reader and file writer MetadataLoader loader = new MetadataLoader(this); EntityFrameworkTemplateFileManager fileManager = EntityFrameworkTemplateFileManager.Create(this); // Load the entity model metadata workspace MetadataWorkspace metadataWorkspace = null; bool allMetadataLoaded = loader.TryLoadAllMetadata("MFL.tt", out metadataWorkspace); EdmItemCollection ItemCollection = (EdmItemCollection)metadataWorkspace.GetItemCollection(DataSpace.CSpace); // Create an IO class to contain the 'get' methods for all entities in the model fileManager.StartNewFile("MFL.IO.gen.cs"); Next, we want to be able to loop through all of the entities found in the model, and then each property for each entity, so we can generate classes and methods for each.  The code for that is blissfully simple: // Iterate through each entity in the model foreach (EntityType entity in ItemCollection.GetItems<EntityType>().OrderBy(e => e.Name)) {     // Iterate through each primitive property of the entity     foreach (EdmProperty edmProperty in entity.Properties.Where(p => p.TypeUsage.EdmType is PrimitiveType && p.DeclaringType == entity))     {         // TODO:  Create properties     }     // Iterate through each relationship of the entity     foreach (NavigationProperty navProperty in entity.NavigationProperties.Where(np => np.DeclaringType == entity))     {         // TODO:  Create associations     } } There really isn't anything more advanced than that going on in the text template; the only thing I had to blunder through was realizing that if you want the generator to interpret a line of code (such as our iterations above), you need to enclose the code in <# and #>, while if you want the generator to interpret the VALUE of code, such as putting the entity name into the class name, you need to enclose the code in <#= and #>, like so: public partial class <#=entity.Name#> To make a long story short, I did a lot of repetition of the above to come up with a text template that generates a class for each entity based on its properties, and a set of IO methods for each entity based on its relationships.
The two work together to provide lazy-loading for hierarchical data (such as getting Team.Players), so it should be pretty intuitive to use on a front-end.  This text template is available here; you can tweak the inputFiles array to load one or many different edmx models and generate the basic XML IO and class files, though it will probably only work correctly in the simplest of cases, like our MFL model described in the previous post.  Additionally, there is no validation, logging or error handling, which is something I want to handle later by stumbling through the Enterprise Library 5.0. The code that gets generated isn't anything special, though using the LINQ to XML feature was something very new and exciting for me. I had only worked with XML in the past using the DOM or XML Reader objects along with XPath, and the LINQ to XML model is just so much more elegant and supposedly more efficient (something to test later).  For example, the following code was generated to create a Player object for each Player node in the XML:         return from element in GetXmlData(_PlayerDataFile).Descendants("Player")             select new Player             {                 Id = int.Parse(element.Attribute("Id").Value)                 ,ParentName = element.Parent.Name.LocalName                 ,ParentId = long.Parse(element.Parent.Attribute("Id").Value)                 ,Name = element.Attribute("Name").Value                 ,PositionId = int.Parse(element.Attribute("PositionId").Value)             }; It is all done in one line of code, no looping needed.  Even though GetXmlData loads the entire XML file just like the old XML DOM approach would have, it is supposed to be much less resource intensive.  I will definitely put that to the test after we develop a user interface for getting at this data.  Speaking of the data, where IS the data?  We've put together a pretty model and a bunch of code around it, but we don't have any data to speak of.  We can certainly drop to our favorite XML editor and crank out some data, but if it doesn't totally match our model, it will not load correctly.  To help with this, I've built in a method to generate XML at any given layer in the hierarchy.  So for us to get the closest possible thing to real data, we'd need to invoke MFL.IO.GenerateTeamXML and save the results to a file.  Doing so should get us something that looks like this: <Team Id="0" Name="0">   <Player Id="0" Name="0" PositionId="0">     <Statistic Id="0" PassYards="0" RushYards="0" Year="0" />   </Player> </Team> Sadly, it is missing the Positions node (I haven't thought of a way to generate lookup XML yet) and the data itself isn't quite realistic (well, as realistic as MFL data can be anyway).  Let's manually remedy that for now to give us a decent starter set of data.
Note that this is TWO XML files, Lookups.xml and Teams.xml: <Lookups Id="0">   <Position Id="0" Name="Quarterback"/>   <Position Id="1" Name="Runningback"/> </Lookups> <Teams Id="0">   <Team Id="0" Name="Chicago">     <Player Id="0" Name="QB Bears" PositionId="0">       <Statistic Id="0" PassYards="4000" RushYards="120" Year="2008" />       <Statistic Id="1" PassYards="4200" RushYards="180" Year="2009" />     </Player>     <Player Id="1" Name="RB Bears" PositionId="1">       <Statistic Id="2" PassYards="0" RushYards="800" Year="2007" />       <Statistic Id="3" PassYards="0" RushYards="1200" Year="2008" />       <Statistic Id="4" PassYards="3" RushYards="1450" Year="2009" />     </Player>   </Team> </Teams> Ok, so we have some data, we have a way to read/write that data, and we have a friendly way of representing that data.  Now, what remains is the part that I have been looking forward to the most: presenting the data to the user and giving them the ability to add/update/delete, and doing so in a way that is very intuitive (easy) from a development standpoint.

    Read the article


  • ASP.NET MVC 2 Released

    - by ScottGu
    I’m happy to announce that the final release of ASP.NET MVC 2 is now available for VS 2008/Visual Web Developer 2008 Express with ASP.NET 3.5.  You can download and install it from the following locations: Download ASP.NET MVC 2 using the Microsoft Web Platform Installer Download ASP.NET MVC 2 from the Download Center The final release of VS 2010 and Visual Web Developer 2010 will have ASP.NET MVC 2 built-in – so you won’t need an additional install in order to use ASP.NET MVC 2 with them.  ASP.NET MVC 2 We shipped ASP.NET MVC 1 a little less than a year ago.  Since then, almost 1 million developers have downloaded and used the final release, and its popularity has steadily grown month over month. ASP.NET MVC 2 is the next significant update of ASP.NET MVC. It is a compatible update to ASP.NET MVC 1 – so all the knowledge, skills, code, and extensions you already have with ASP.NET MVC continue to work and apply going forward. Like the first release, we are also shipping the source code for ASP.NET MVC 2 under an OSI-compliant open-source license. ASP.NET MVC 2 can be installed side-by-side with ASP.NET MVC 1 (meaning you can have some apps built with V1 and others built with V2 on the same machine).  We have instructions on how to update your existing ASP.NET MVC 1 apps to use ASP.NET MVC 2 using VS 2008 here.  Note that VS 2010 has an automated upgrade wizard that can automatically migrate your existing ASP.NET MVC 1 applications to ASP.NET MVC 2 for you. ASP.NET MVC 2 Features ASP.NET MVC 2 adds a bunch of new capabilities and features.  I’ve started a blog series about some of the new features, and will be covering them in more depth in the weeks ahead.  Some of the new features and capabilities include: New Strongly Typed HTML Helpers Enhanced Model Validation support across both server and client Auto-Scaffold UI Helpers with Template Customization Support for splitting up large applications into “Areas” Asynchronous Controllers support that enables long running tasks in parallel Support for rendering sub-sections of a page/site using Html.RenderAction Lots of new helper functions, utilities, and API enhancements Improved Visual Studio tooling support You can learn more about these features in the “What’s New in ASP.NET MVC 2” document on the www.asp.net/mvc web-site.  We are going to be posting a lot of new tutorials and videos shortly on www.asp.net/mvc that cover all the features in ASP.NET MVC 2 release.  We will also post an updated end-to-end tutorial built entirely with ASP.NET MVC 2 (much like the NerdDinner tutorial that I wrote that covers ASP.NET MVC 1).  Summary The ASP.NET MVC team delivered regular V2 preview releases over the last year to get feedback on the feature set.  I’d like to say a big thank you to everyone who tried out the previews and sent us suggestions/feedback/bug reports.  We hope you like the final release! Scott

    Read the article

  • Solution 6 : Kill a Non-Clustered Process during Two-Node Cluster Failover

    - by StanleyGu
    Using Visual Studio 2008 and C#, I developed a windows service A and deployed it to two nodes of a Windows Server 2008 failover cluster. Service A is part of the failover cluster service, which means that when failover occurs at node 1, the cluster service will fail windows service A over from node 1 to node 2. One of the tasks implemented by windows service A is to start, monitor or kill a process B. Process B is installed on both nodes but is not part of the failover cluster service. When a failover occurs at node 1, the cluster service does not fail process B over from node 1 to node 2, and process B continues running at node 1. The requirement is: when failover occurs at node 1, we want the process B running at node 1 to get killed, but we do not want process B to be part of the failover cluster service. The first idea that pops up immediately is to put some code in an event handler triggered by the failover in windows service A. The failover's effect on windows service A is similar to using Task Manager to kill the process of windows service A, but there is no windows service event that can be triggered by killing the process of the windows service. The events related to terminating a windows service are OnStop and OnShutDown, but killing the process of windows service A triggers neither of them. The OnStop event can only be triggered by stopping the windows service using the Services Control Manager or the Services Management Console. Apparently, the first idea is not feasible. The second idea that emerges is to put code into the OnStart event handler of windows service A. When failover occurs at node 1, windows service A is killed at node 1 and started at node 2. During the start, the windows service A at node 2 kills the process B that is still running at node 1. It is a workaround and works very well. The C# implementation within the OnStart event handler is as follows: 1. Capture the server names of the two nodes from App.config. 2. Determine the server name of the remote node. 3. Kill the process B running on the remote node. Check here for sample code.
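    As a rough illustration of step 3 only (a sketch, not the author's sample code), the remote kill can be done through WMI; the node and process names below are placeholders, and the service account is assumed to have administrative rights on the remote node:

    using System.Management; // requires a reference to System.Management.dll

    public static class RemoteProcessKiller
    {
        // Terminates every instance of the named process on the remote node via WMI.
        public static void Kill(string remoteNode, string processName)
        {
            var scope = new ManagementScope(@"\\" + remoteNode + @"\root\cimv2");
            scope.Connect();

            var query = new ObjectQuery(
                "SELECT * FROM Win32_Process WHERE Name = '" + processName + "'");

            using (var searcher = new ManagementObjectSearcher(scope, query))
            {
                foreach (ManagementObject process in searcher.Get())
                {
                    process.InvokeMethod("Terminate", null);
                }
            }
        }
    }

    In the OnStart handler described above, you would read both node names from App.config, pick the one that is not Environment.MachineName, and pass it to Kill along with the process name.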

    Read the article

  • Designing a completely new database/GUI solution for my company

    - by user1277304
    I'm no expert when it comes to everything Visual Studio 2010 and utilizing SQL Server 2008. I'm sure some of my personal projects I've built for personal use would get laughed off the face of the planet, but SQLCe has been the solution I was looking for with those home-type projects. And they work, flawlessly. Now I feel it's time to step up to the big league. I want to develop a complete, unified and module-based solution for the company that I'm working for. We're still using stuff from the 80s, for goodness sake! I use Excel and query the ancient database on my own because I can't stand the GUI. Nothing against people of age, but the IDE our programmers are using is from the stone age, and they use APL of all things with it. I've yet to see a radio button control anywhere in the GUI where it would make sense. Anyway, I want to do this right from the ground up. I'm by no means a newbie when it comes to programming in .NET 2010; however, I want the entire solution to be professionally done. I want version control, test projects, project flow, SQL 2008 integration and all the bells and whistles that come with that. I know for a fact that if we had something like that running, not only would development costs and time be slashed fourfold, but the possibilities for expansion and performance would skyrocket. (Between the GUI and our DB engine, it can only use ONE CORE! ONE! It's 2012, for goodness sake!) Our business is growing and our current ancient solution just can't keep up, and I'd hate to see our business go down in flames because our programmer is stuck in the 80's and refuses to use anything current. So I ask you guys, the experts and know-it-alls, where do I start? Are there any gems of good books out there in the haystack of all "This for dummies" type of deals? I already have several people backing me in this endeavor, and while it may seem brash to just usurp the current programmers, I'm doing this for the company as a whole.

    Read the article

  • fatal error C1034: windows.h: no include path set

    - by nathan
    OS: Windows Vista Ultimate. I'm trying to run a program called minimal.c. When I type cl minimal.c at the command line (C:\Users\nathan\Desktop>cl minimal.c) I get: Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.762 for 80x86 Copyright (C) Microsoft Corporation. All rights reserved. minimal.c minimal.c(5) : fatal error C1034: windows.h: no include path set. I have set all the paths: C:\Users\nathan\Desktop>path PATH=C:\Program Files (x86)\Microsoft Visual Studio 8\VC\bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static;C:\Program Files\Intel\DMIX;c:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\;c:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn\;C:\Program Files (x86)\QuickTime\QTSystem\;C:\Program Files (x86)\Java\jdk1. .0_13\bin;C:\Program Files (x86)\Autodesk\Backburner\;C:\Program Files (x86)\Common Files\Autodesk Shared\;C:\Program Files (x86)\Microsoft DirectX SDK (March 2009)\Include;C:\Users\nathan\Desktop\glut-3.7.6-bin\glut-3.7.6-bin;C:\Program Files (x86)\Microsoft Visual Studio 8\Common7\IDE;C:\Program Files (x86)\Microsoft Visual Studio 8\VC\PlatformSDK\Include;C:\Program Files (x86)\Microsoft Visual Studio 8\VC\PlatformSDK\Include\gl I have gone and made sure windows.h is in the directory I'm setting the path to; it's in C:\Program Files (x86)\Microsoft Visual Studio 8\VC\PlatformSDK\Include. I have Visual Studio 2005. I have exhausted all possibilities - any ideas?

    Read the article

  • What does "single-bit ECC errors were detected on the RAID controller" mean?

    - by jsp
    I have a Dell T7600 with a PERC H710P RAID controller and 4 attached 3TB drives. Over the past few months the RAID controller has been intermittently reporting errors on boot: "no boot device found", "adapter at baseport is not responding", disks frequently reported as missing or failed. I have since replaced the RAID controller, the 4 hard drives, and finally the system's motherboard. After replacing the motherboard and rebooting a few times, I got the error Single bit ECC errors were detected on the RAID controller. Please contact technical support to resolve this issue. After rebooting about 20 more times, I haven't seen the ECC error. The system seems otherwise OK, except for the fact that the disk fans will sometimes start blowing at full blast when the system is sitting completely idle, and won't stop until I reboot. Are the ECC errors in memory on the RAID controller? Or does the RAID controller map into system memory, meaning the ECC errors are really in system memory? Or are the ECC errors in the 1GB cache that resides on the RAID controller?

    Read the article

  • Reinserted a RAID disk. Defined as foreign. Is import or clear the correct choice?

    - by Petrus
    I have re-inserted a RAID disk on a Dell server running Windows Server 2008. The drive-status indicator was alternating between a green and an amber light, and the monitor gave the following message: There are offline or missing virtual drives with preserved cache. Please check the cables and ensure that all drives are present. Press any key to enter the configuration utility. I pressed a key and the PERC 6/i Integrated BIOS Configuration Utility showed that the RAID status for that disk was Offline. After reinsertion of the disk the monitor is giving the following message: Foreign configuration(s) found on adapter. Press any key to continue or ‘C’ to load the configuration utility, or press ‘F’ to import foreign configuration(s) and continue. After checking around on the net I am uncertain whether I should choose import or clear. I cannot find out whether an import means importing information from the array/system to the now-foreign disk, or the other way around, i.e. importing information from the foreign disk to the array/system that was actually working fine. Also: is clear a necessary step ahead of a rebuild of that disk, or does clear mean clearing the system to somehow make it ready to import the information from the foreign disk to the array/system, which is not what I want? I imagine that making the wrong choice here might be fatal. Please help clear this up by telling me what to choose and why.

    Read the article

  • Moving Farm to co-location hosting - network settings requirements

    - by Saariko
    I am moving my farm (two Dell R620s) to a co-location hosting service and am trying to figure out a secure way to set up my network. The requirements are: VM1 is the working HOST and includes ESXi 5.1, vSphere, and 4 clients (all W2008R2). VM2 has ESXi 5.1 installed, plus a single machine with Veeam Backup and copy 6.5 - keeping a copy of the VM1 clients on VM2's internal storage (this solution is due to a very small budget - in case of failure on Host 1 I can redirect IPs). Only 2 VM clients require a network address and access from the WWAN - the ISP provides an IP range for them (with gateway and DNS). I need a connection to the iDRACs from my office (option to create a VPN-SSL tunnel), a connection to the vSphere appliances, and I want to be able to RDP to the VM clients. The current configuration is that each host has the dedicated iDRAC NIC connected, and another (NIC #1) connected with a static IP on 192.168.3.x. The iDRACs have a static IP from the same network range (192.168.3.x). It will look something like this: My thoughts: On NIC #2 of both hosts I will connect a crossover cable. I will give each VM client that needs internet access a secondary VM network with the assigned IP from the ISP, open only to the web - cannot access from the ... My questions: Should I give (external) IPs to the machines that DO NOT require WWAN access? I can't see a way to RDP to them directly if not. Should I use the crossover cable, or just plug NIC #2 into the switch? Will this setup even work? What do I need to verify? What virtual NICs and/or switches should I create on the hosts?

    Read the article

  • How to choose the most optimal RAID settings on PE2950

    - by javano
    I have some Dell PowerEdge 2950s with 4x 15k, 150GB Cheetah SAS drives in them. They are going to be VM hosts, CentOS running ESXi with Windows Server 2k8 guests. Some guests will be hosting IIS servers, and others MSSQL servers. I am trying to set the RAID virtual disk settings and can't decide which is more optimal given this situation. Read policy: out of Read-Ahead, No-Read-Ahead and Adaptive Read-Ahead, the default is Read-Ahead. I will be making large sequential writes initially, writing out blank images for virtual machine hard drives (let's say 30 GB from /dev/zero, for example), so Read-Ahead seems good at first. But within the virtual machines, reads could be random from anywhere within their file systems since they are IIS and MSSQL servers, so perhaps No-Read-Ahead is a better idea? Adaptive Read-Ahead would then seem a better compromise, but I don't know much about this option - how does it compare in performance to the others? Write policy: write-back caching or write-through caching, with write-back caching as the default. Write-through caching is safer than write-back caching, but at a performance expense. My thinking here is that in the event of power loss, for example, it seems more likely in my head (this is why I need some clarification!) that damage will occur to a guest VM with write-back caching enabled, so should I favour write-through? I have searched around and there is obviously no definitive answer, so I would like to find out what is best for my situation.

    Read the article

  • Why can't I physically access my machine after a remote session?

    - by Steve Crane
    I have a Dell Optiplex 960 desktop running Windows 7 64-bit at work. I typically leave it locked rather than logged off when I go home, so that I'm able to remote in from home and continue working if I wish. This is where the problem comes in. If I don't remote in there is no problem and I can simply unlock it the next morning. It's when I do remote in that I have a problem. Remote sessions work as expected, but when I get to work the next morning the machine appears to have gone into a sleep or hibernate state from which no amount of mouse moving or keyboard pounding will wake it. The machine is not hanging, as remote sessions to it are still possible; it seems that physical access from its own mouse and keyboard is lost. The only way to regain access is to press and hold the power switch for several seconds until the machine shuts down. Of course this means Windows does not shut down gracefully, and after powering up it takes several minutes for the machine to boot and reach the login prompt, presumably while it checks the disk. Has anyone else seen something like this?

    Read the article

  • Which type of RAM do our servers support?

    - by Mikunos
    I need to increase the RAM in our Dell servers, but with lshw I cannot see whether the installed RAM is UDIMM or RDIMM. Handle 0x1100, DMI type 17, 28 bytes Memory Device Array Handle: 0x1000 Error Information Handle: Not Provided Total Width: 72 bits Data Width: 64 bits Size: 2048 MB Form Factor: DIMM Set: 1 Locator: DIMM_A1 Bank Locator: Not Specified Type: <OUT OF SPEC> Type Detail: Synchronous Speed: 1333 MHz (0.8 ns) Manufacturer: 00CE00B380CE Serial Number: 8244850B Asset Tag: 02103961 Part Number: M393B5773CH0-CH9 Handle 0x1101, DMI type 17, 28 bytes Memory Device Array Handle: 0x1000 Error Information Handle: Not Provided Total Width: 72 bits Data Width: 64 bits Size: 2048 MB Form Factor: DIMM Set: 1 Locator: DIMM_A2 Bank Locator: Not Specified Type: <OUT OF SPEC> Type Detail: Synchronous Speed: 1333 MHz (0.8 ns) Manufacturer: 00CE00B380CE Serial Number: 8244855D Asset Tag: 02103961 Part Number: M393B5773CH0-CH9 Handle 0x1102, DMI type 17, 28 bytes Memory Device Array Handle: 0x1000 Error Information Handle: Not Provided Total Width: 72 bits Data Width: 64 bits Size: 2048 MB Form Factor: DIMM Set: 2 Locator: DIMM_A3 Bank Locator: Not Specified Type: <OUT OF SPEC> Type Detail: Synchronous Speed: 1333 MHz (0.8 ns) Manufacturer: 00CE00B380CE Serial Number: 8244853E Asset Tag: 02103961 Part Number: M393B5773CH0-CH9 How do we find out which is the right RAM to buy? Thanks.

    Read the article

  • PowerConnect 3548p SNTP and web interface not working

    - by Force Flow
    I have been unable to get SNTP and access to the web interface working properly on a Dell PowerConnect 3548p. In the logs, this message appears over and over again: 04-Jan-2000 20:19:29 :%MNGINF-W-ACL: Management ACL drop packet received on interface Vlan 172 from 172.17.0.3 to 172.18.0.10 protocol 17 service Snmp VLAN 172 is the management VLAN, 172.17.0.3 is the DNS server, and 172.18.0.10 is the switch's IP address. The DNS server and the switch are located on different subnets and separated by routers. I am unable to access the web interface of the switch from the 172.17.x.x subnet; I can only access the web interface if I am coming from the 172.18.x.x subnet. There is also a managed Linksys switch on the 172.18.x.x subnet on VLAN 172, which has no problem with SNTP, and I can access it from the 172.17.x.x network. So it stands to reason that this is not a firewall or routing issue, but an issue with the 3548p switch. I suspect the issue is with management permissions/ACLs on the 3548p switch, but that's about as much as I've been able to determine so far. Any ideas?

    Read the article

  • CD/DVD Drive not detecting locally burned CD/DVDs, but works fine with Genuine discs.

    - by Rahul
    I've been using a Dell Inspiron 1420 (32-bit, Windows Vista) for 2.5 years. I'm facing a strange problem with my CD/DVD drive: I cannot run/play CDs/DVDs that my friends have burned for me, but when I insert a genuine CD I'm able to play/run it, and when I try to install the Vista package that came with my notebook, the CD/DVD loads fine. If I insert a CD/DVD that I got from a friend, the disc doesn't load and the system hangs, yet all these CDs/DVDs work on other systems - I've tested them on many of my friends' PCs. So now I'm able to run only genuine CDs and a few genuine DVDs. My experience/experiments: I tried to install Windows Vista using a genuine DVD - it worked. I tried to install Ubuntu from a disc shipped by Canonical Ltd. - it worked. I tried to install an openSUSE .iso file burned to a DVD on my friend's PC - it didn't work for me (but works perfectly fine on my friends' PCs; tested on 4 other PCs). I tried to play a DVD containing movies, burned on my friend's PC - it didn't work for me (but works perfectly fine on my friends' PCs). Any help/suggestions would be appreciated. Thanks.

    Read the article

  • Windows 8 Fails to install with corrupted graphics

    - by Andy
    I am trying to install the Windows 8 Pro Upgrade via the download method on an older PC (2007-08). It is a Dell Dimension E521, but it has been upgraded with a 3.0 GHz dual-core AMD processor, a 120 GB SSD, and 4 GB of RAM. The Windows 8 Upgrade Assistant did not detect any issues or concerns with upgrading, other than that I don't have DVD software installed. The system installs Windows 8, but on the first boot corrupt graphics are present. Eventually the monitor goes into sleep mode and the machine then rolls back to Windows 7 Pro x64, which runs fine. I wouldn't be upset over not being able to install Windows 8, but I already paid for the software since I thought there would be no issues upgrading. The graphics card in the system is the GeForce 7300 LE, and it has the latest NVIDIA drivers for Windows 7 loaded. I saw this solution, which is similar to my problem: Corrupt graphics during Windows 8 installation. However, I have downloaded Windows 8 and I am not sure how to go about modifying the install that resides somewhere on the hard drive. Thanks in advance for any assistance.

    Read the article

< Previous Page | 166 167 168 169 170 171 172 173 174 175 176 177  | Next Page >