Search Results

Search found 13713 results on 549 pages for 'production environment'.


  • So You Want To Build a SPARC Cloud

    - by user12601629
    Did you ever wish you could get the industrial-strength power of UNIX/RISC with the flexibility of cloud computing? Well, now you can! With recent advances from Oracle it's possible to build an incredibly high-performance, flexible, available virtualized infrastructure based on Solaris and SPARC. Here's the recipe! Authored in collaboration across the Oracle Systems Group team, we now have a complete best practice guide for you. Click below to download it: Best Practices for Building a Virtualized SPARC Computing Environment. Inside you'll find recommendations for how and when to leverage technologies like: SPARC T4, the OVM for SPARC hypervisor (version 2.2 and newer), Solaris 11, Ops Center 12c, the ZFS Storage Appliance, and Oracle network switches. By following these best practices, you'll be able to construct a dynamic, virtualized infrastructure that allows for: easy, GUI-based provisioning of new VMs; automated HA failover in the event of physical server failures; automatic load balancing across a cluster of VM hosts; and complete end-to-end monitoring. You should download this paper and check it out. Even if you aren't planning on buying all new hardware, and instead want to transform some existing gear into a dynamic virtualized environment, this paper will give you concrete info on what to do and the trade-offs you'll make. Have fun getting started on your journey to build a SPARC cloud!

    Read the article

  • Windows Server or Linux for final project

    - by user1433490
    A few weeks ago I came up with an idea to develop a mobile app which will direct students in my university to the nearest available printer. The whole thing is part of my final project. The Android-based app will need to perform the following tasks: The user's location on campus is sent to the server (assume this part works just fine). The server sends an SNMP request to the printers in the user's vicinity; I'll probably use PHP or Python for that part. The data requested by SNMP is processed and sent back to the client. My question concerns the server. The university's IT manager offered me a designated server for development, which sounds great. Now I need to choose which OS I want installed on the server - Windows Server or Linux (I don't know which versions). I don't have any server programming/operating experience, but generally speaking I feel more comfortable in a Windows environment (just because that has always been my OS). I don't have much time for learning a new OS, but when does it make sense generally to host or develop server-side applications on a Windows environment versus a Linux environment?
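    For the SNMP step, here is a minimal sketch of what the server-side status check might look like in Python (one of the two languages you mention), using the third-party pysnmp library. The community string and the hrDeviceStatus index are assumptions about your printers, not known values:

        # Sketch only: assumes pysnmp is installed and the printers answer
        # SNMP v2c GET requests with the default 'public' community string.
        from pysnmp.hlapi import (getCmd, SnmpEngine, CommunityData,
                                  UdpTransportTarget, ContextData,
                                  ObjectType, ObjectIdentity)

        def printer_status(ip):
            # hrDeviceStatus from HOST-RESOURCES-MIB; the trailing .1 assumes
            # the printer is the first entry in the device table.
            error_indication, error_status, _, var_binds = next(getCmd(
                SnmpEngine(),
                CommunityData('public'),
                UdpTransportTarget((ip, 161), timeout=2, retries=1),
                ContextData(),
                ObjectType(ObjectIdentity('1.3.6.1.2.1.25.3.2.1.5.1'))))
            if error_indication or error_status:
                return None                  # printer unreachable or error
            return int(var_binds[0][1])      # 2=running, 3=warning, 5=down

    A script like this runs identically on Windows Server or Linux, so the OS choice is mostly about which environment you can administer comfortably in the time you have.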

    Read the article

  • CMake doesn't work in Windows XP

    - by Runner
    I'm new to CMake; I just installed it and am following this article: http://www.cs.swarthmore.edu/~adanner/tips/cmake.php

        D:\Works\c\cmake\build>cmake ..
        -- Building for: NMake Makefiles
        CMake Warning at CMakeLists.txt:2 (project):
          To use the NMake generator, cmake must be run from a shell that can use the compiler cl from the command line. This environment does not contain INCLUDE, LIB, or LIBPATH, and these must be set for the cl compiler to work.
        -- The C compiler identification is unknown
        -- The CXX compiler identification is unknown
        CMake Warning at D:/Tools/CMake 2.8/share/cmake-2.8/Modules/Platform/Windows-cl.cmake:32 (ENABLE_LANGUAGE):
          To use the NMake generator, cmake must be run from a shell that can use the compiler cl from the command line. This environment does not contain INCLUDE, LIB, or LIBPATH, and these must be set for the cl compiler to work.
        Call Stack (most recent call first):
          D:/Tools/CMake 2.8/share/cmake-2.8/Modules/CMakeCInformation.cmake:58 (INCLUDE)
          CMakeLists.txt:2 (project)
        CMake Error: your RC compiler: "CMAKE_RC_COMPILER-NOTFOUND" was not found. Please set CMAKE_RC_COMPILER to a valid compiler path or name.
        -- Check for CL compiler version
        -- Check for CL compiler version - failed
        -- Check if this is a free VC compiler
        -- Check if this is a free VC compiler - yes
        -- Using FREE VC TOOLS, NO DEBUG available
        -- Check for working C compiler: cl
        CMake Warning at CMakeLists.txt:2 (PROJECT):
          To use the NMake generator, cmake must be run from a shell that can use the compiler cl from the command line. This environment does not contain INCLUDE, LIB, or LIBPATH, and these must be set for the cl compiler to work.
        CMake Warning at D:/Tools/CMake 2.8/share/cmake-2.8/Modules/Platform/Windows-cl.cmake:32 (ENABLE_LANGUAGE):
          To use the NMake generator, cmake must be run from a shell that can use the compiler cl from the command line. This environment does not contain INCLUDE, LIB, or LIBPATH, and these must be set for the cl compiler to work.
        Call Stack (most recent call first):
          D:/Tools/CMake 2.8/share/cmake-2.8/Modules/CMakeCInformation.cmake:58 (INCLUDE)
          CMakeLists.txt:2 (PROJECT)
        CMake Error at D:/Tools/CMake 2.8/share/cmake-2.8/Modules/CMakeRCInformation.cmake:22 (GET_FILENAME_COMPONENT):
          get_filename_component called with incorrect number of arguments
        Call Stack (most recent call first):
          D:/Tools/CMake 2.8/share/cmake-2.8/Modules/Platform/Windows-cl.cmake:32 (ENABLE_LANGUAGE)
          D:/Tools/CMake 2.8/share/cmake-2.8/Modules/CMakeCInformation.cmake:58 (INCLUDE)
          CMakeLists.txt:2 (PROJECT)
        CMake Error: CMAKE_RC_COMPILER not set, after EnableLanguage
        CMake Error: your C compiler: "cl" was not found. Please set CMAKE_C_COMPILER to a valid compiler path or name.
        CMake Error: Internal CMake error, TryCompile configure of cmake failed
        -- Check for working C compiler: cl -- broken
        CMake Error at D:/Tools/CMake 2.8/share/cmake-2.8/Modules/CMakeTestCCompiler.cmake:52 (MESSAGE):
          The C compiler "cl" is not able to compile a simple test program. It fails with the following output: CMake will not be able to correctly generate this project.
        Call Stack (most recent call first):
          CMakeLists.txt:2 (project)
        CMake Error: your C compiler: "cl" was not found. Please set CMAKE_C_COMPILER to a valid compiler path or name.
        CMake Error: your CXX compiler: "cl" was not found. Please set CMAKE_CXX_COMPILER to a valid compiler path or name.
        -- Configuring incomplete, errors occurred!
    What are the complete requirements to use CMake successfully on Windows XP? I've already installed Visual Studio under D:\Tools\Microsoft Visual Studio 9.0.
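    The warnings in that output are the actual problem: cmake was started from a plain cmd window, so the Visual C++ variables (INCLUDE, LIB, LIBPATH) are not set and cl cannot be found. A minimal sketch of the usual fix, assuming the default folder layout under the VS 2008 path mentioned above (the "Visual Studio 2008 Command Prompt" Start-menu shortcut does the same thing):

        REM Set up the VC++ environment, then re-run cmake from the same window.
        "D:\Tools\Microsoft Visual Studio 9.0\VC\vcvarsall.bat" x86
        cd D:\Works\c\cmake\build
        cmake .. -G "NMake Makefiles"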

    Read the article

  • How do I connect to DB2 with DBI and mod_perl?

    - by Matthew
    I'm having issues with getting DBI's IBM DB2 driver to work with mod_perl. My test script is:

        #!/usr/bin/perl
        use strict;
        use CGI;
        use Data::Dumper;
        use DBI;
        {
            my $q;
            my $dsn;
            my $username;
            my $password;
            my $sth;
            my $dbc;
            my $row;
            $q = CGI->new;
            print $q->header;
            print $q->start_html();
            $dsn = "DBI:DB2:SAMPLE";
            $username = "username";
            $password = "password";
            print "<pre>" . $q->escapeHTML(Dumper(\%ENV)) . "</pre>";
            $dbc = DBI->connect($dsn, $username, $password);
            $sth = $dbc->prepare("SELECT * FROM SOME_TABLE WHERE FIELD='SOMETHING'");
            $sth->execute();
            $row = $sth->fetchrow_hashref();
            print "<pre>" . $q->escapeHTML(Dumper($row)) . "</pre>";
            print $q->end_html;
        }

    This script works as CGI but not under mod_perl. I get this error in Apache's error log:

        DBD::DB2::dr connect warning: [unixODBC][Driver Manager]Data source name not found, and no default driver specified at /usr/lib/perl5/site_perl/5.8.8/Apache/DBI.pm line 190.
        DBI connect('SAMPLE','username',...) failed: [unixODBC][Driver Manager]Data source name not found, and no default driver specified at /data/www/perl/test.pl line 15

    First of all, why is it using ODBC? The native DB2 driver is installed (hence it works as CGI). I'm running Apache 2.2.3 and mod_perl 2.0.4 under RHEL5. This guy had the same problem as me: http://www.mail-archive.com/[email protected]/msg22909.html but I have no idea how he fixed it. What does mod_php4 have to do with mod_perl? Any help would be greatly appreciated; I'm having no luck with Google. Update: As james2vegas pointed out, the problem has something to do with PHP: if I disable PHP altogether, I get a different error: "Total Environment allocation failure! Did you set up your DB2 client environment?" I believe this error is to do with environment variables not being set up correctly, namely DB2INSTANCE. However, I'm not able to turn off PHP to resolve this problem (I need it for some legacy applications). So I now have two questions: How can I fix the original issue without disabling PHP altogether? How can I fix the environment issue? I've set the DB2INSTANCE, DB2_PATH and SQLLIB variables correctly using SetEnv and PerlSetEnv in httpd.conf, but with no luck. Note: I've edited the code to determine if the problem was to do with Global Variable Persistence.
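    One thing worth trying (a sketch, not a confirmed fix): SetEnv and PerlSetEnv apply per-request, but the DB2 client reads DB2INSTANCE when its libraries are first loaded into the parent httpd process, so the variables may need to be in the environment httpd itself starts with. On RHEL the init script sources /etc/sysconfig/httpd, so something like the following could be added there, followed by a full restart. The instance name and paths are placeholders, not values from this system:

        # /etc/sysconfig/httpd -- sourced by the httpd init script on RHEL,
        # so these exist before mod_perl (or PHP) loads any DB2 libraries.
        export DB2INSTANCE=db2inst1                # placeholder instance name
        export DB2_PATH=/opt/ibm/db2/V9.1          # placeholder install path
        export SQLLIB=/home/db2inst1/sqllib        # placeholder sqllib path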

    Read the article

  • Termite colony simulator using Java

    - by ashii
    Hi everyone, I have to design a simulator that will maintain an environment, which consists of a collection of patches arranged in a rectangular grid of arbitrary size. Each patch contains zero or more wood chips. A patch may be occupied by one or more termites or predators, which are mobile entities that live within the world and behave according to simple rules. A TERMITE can pick up a wood chip from the patch that it is currently on, or drop a wood chip that it is carrying. Termites travel around the grid by moving randomly from their current patch to a neighbouring patch, in one of four possible directions. New termites may hatch from eggs, and this is simulated by the appearance of a new termite at a random patch within the environment. A PREDATOR moves in a similar way to termites, and if a predator moves onto a patch that is occupied by a termite, then the predator eats the termite. At initialization, the termites, predators, and wood chips are distributed randomly in the environment. Simulation then proceeds in a loop, and the new state of the environment is obtained at each iteration. I have designed the arena using JPanel, but I'm not able to randomly place wood, termites, and predators in that arena. Can anyone help me out? My code for the arena is as follows:

        import java.awt.*;
        import javax.swing.*;

        public class Arena extends JPanel
        {
            private static final int Rows = 8;
            private static final int Cols = 8;

            public void paint(Graphics g)
            {
                Dimension d = this.getSize();
                // don't draw both sets of squares, when you can draw one
                // fill in the entire thing with one color
                g.setColor(Color.WHITE);
                // make the background
                g.fillRect(0, 0, d.width, d.height);
                // draw only black
                g.setColor(Color.BLACK);
                // pick a square size based on the smallest dimension
                int sqsize = ((d.width < d.height) ? d.width / Cols : d.height / Rows);
                // loop for rows
                for (int row = 0; row < Rows; row++)
                {
                    int y = row * sqsize;       // y stays same for entire row, set here
                    int x = (row % 2) * sqsize; // x starts at 0 or one square in
                    for (int i = 0; i < Cols / 2; i++)
                    {
                        // you will only be drawing half the squares per row
                        // draw square
                        g.fillRect(x, y, sqsize, sqsize);
                        // move two square sizes over
                        x += sqsize * 2;
                    }
                }
            }

            public void update(Graphics g) { paint(g); }

            public static void main(String[] args)
            {
                JFrame frame = new JFrame("Arena");
                frame.setSize(600, 400);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setContentPane(new Arena());
                frame.setVisible(true);
            }
        }
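    One way to handle the random placement (a sketch under assumptions: the grid matches the 8x8 arena above, and the entity counts are arbitrary) is to keep occupancy arrays alongside the grid, fill them once with java.util.Random, and then have paint() draw a marker inside each occupied square using the same sqsize offsets it already computes.

        import java.util.Random;

        public class Populator
        {
            private static final int Rows = 8;
            private static final int Cols = 8;
            private final Random rng = new Random();

            final int[][] woodChips = new int[Rows][Cols];        // chips per patch
            final boolean[][] termites = new boolean[Rows][Cols];
            final boolean[][] predators = new boolean[Rows][Cols];

            // Drop each entity on a uniformly random patch. A patch can hold
            // several chips, so the chip array counts rather than flags.
            public void populate(int chips, int termiteCount, int predatorCount)
            {
                for (int i = 0; i < chips; i++)
                    woodChips[rng.nextInt(Rows)][rng.nextInt(Cols)]++;
                for (int i = 0; i < termiteCount; i++)
                    termites[rng.nextInt(Rows)][rng.nextInt(Cols)] = true;
                for (int i = 0; i < predatorCount; i++)
                    predators[rng.nextInt(Rows)][rng.nextInt(Cols)] = true;
            }
        }

    In paint(), after the checkerboard loop, you could iterate over these arrays and, for each occupied patch at (row, col), draw something like g.fillOval(col * sqsize + 2, row * sqsize + 2, sqsize - 4, sqsize - 4) in a per-entity color.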

    Read the article

  • An Introduction to Meteor

    - by Stephen.Walther
    The goal of this blog post is to give you a brief introduction to Meteor which is a framework for building Single Page Apps. In this blog entry, I provide a walkthrough of building a simple Movie database app. What is special about Meteor? Meteor has two jaw-dropping features: Live HTML – If you make any changes to the HTML, CSS, JavaScript, or data on the server then every client shows the changes automatically without a browser refresh. For example, if you change the background color of a page to yellow then every open browser will show the new yellow background color without a refresh. Or, if you add a new movie to a collection of movies, then every open browser will display the new movie automatically. With Live HTML, users no longer need a refresh button. Changes to an application happen everywhere automatically without any effort. The Meteor framework handles all of the messy details of keeping all of the clients in sync with the server for you. Latency Compensation – When you modify data on the client, these modifications appear as if they happened on the server without any delay. For example, if you create a new movie then the movie appears instantly. However, that is all an illusion. In the background, Meteor updates the database with the new movie. If, for whatever reason, the movie cannot be added to the database then Meteor removes the movie from the client automatically. Latency compensation is extremely important for creating a responsive web application. You want the user to be able to make instant modifications in the browser and the framework to handle the details of updating the database without slowing down the user. Installing Meteor Meteor is licensed under the open-source MIT license and you can start building production apps with the framework right now. Be warned that Meteor is still in the “early preview” stage. It has not reached a 1.0 release. According to the Meteor FAQ, Meteor will reach version 1.0 in “More than a month, less than a year.” Don’t be scared away by that. You should be aware that, unlike most open source projects, Meteor has financial backing. The Meteor project received an $11.2 million round of financing from Andreessen Horowitz. So, it would be a good bet that this project will reach the 1.0 mark. And, if it doesn’t, the framework as it exists right now is still very powerful. Meteor runs on top of Node.js. You write Meteor apps by writing JavaScript which runs both on the client and on the server. You can build Meteor apps on Windows, Mac, or Linux (Although the support for Windows is still officially unofficial). If you want to install Meteor on Windows then download the MSI from the following URL: http://win.meteor.com/ If you want to install Meteor on Mac/Linux then run the following CURL command from your terminal: curl https://install.meteor.com | /bin/sh Meteor will install all of its dependencies automatically including Node.js. However, I recommend that you install Node.js before installing Meteor by installing Node.js from the following address: http://nodejs.org/ If you let Meteor install Node.js then Meteor won’t install NPM which is the standard package manager for Node.js. If you install Node.js and then you install Meteor then you get NPM automatically. Creating a New Meteor App To get a sense of how Meteor works, I am going to walk through the steps required to create a simple Movie database app. Our app will display a list of movies and contain a form for creating a new movie. 
    The first thing that we need to do is create our new Meteor app. Open a command prompt/terminal window and execute the following command: meteor create MovieApp After you execute this command, you should see something like the following: Follow the instructions: execute cd MovieApp to change to your MovieApp directory, and run the meteor command. Executing the meteor command starts Meteor on port 3000. Open up your favorite web browser and navigate to http://localhost:3000 and you should see the default Meteor Hello World page: Open up your favorite development environment to see what the Meteor app looks like. Open the MovieApp folder which we just created. Here’s what the MovieApp looks like in Visual Studio 2012: Notice that our MovieApp contains three files named MovieApp.css, MovieApp.html, and MovieApp.js. In other words, it contains a Cascading Style Sheet file, an HTML file, and a JavaScript file. Just for fun, let’s see how the Live HTML feature works. Open up multiple browsers and point each browser at http://localhost:3000. Now, open the MovieApp.html page and modify the text “Hello World!” to “Hello Cruel World!” and save the change. The text in all of the browsers should update automatically without a browser refresh. Pretty amazing, right? Controlling Where JavaScript Executes You write a Meteor app using JavaScript. Some of the JavaScript executes on the client (the browser) and some of the JavaScript executes on the server and some of the JavaScript executes in both places. For a super simple app, you can use the Meteor.isServer and Meteor.isClient properties to control where your JavaScript code executes. For example, the following JavaScript contains a section of code which executes on the server and a section of code which executes in the browser: if (Meteor.isClient) { console.log("Hello Browser!"); } if (Meteor.isServer) { console.log("Hello Server!"); } console.log("Hello Browser and Server!"); When you run the app, the message “Hello Browser!” is written to the browser JavaScript console. The message “Hello Server!” is written to the command/terminal window where you ran Meteor. Finally, the message “Hello Browser and Server!” is executed on both the browser and server and the message appears in both places. For simple apps, using Meteor.isClient and Meteor.isServer to control where JavaScript executes is fine. For more complex apps, you should create separate folders for your server and client code. Here are the folders which you can use in a Meteor app: · client – This folder contains any JavaScript which executes only on the client. · server – This folder contains any JavaScript which executes only on the server. · common – This folder contains any JavaScript code which executes on both the client and server. · lib – This folder contains any JavaScript files which you want to execute before any other JavaScript files. · public – This folder contains static application assets such as images. For the Movie App, we need the client, server, and common folders. Delete the existing MovieApp.js, MovieApp.html, and MovieApp.css files. We will create new files in the right locations later in this walkthrough. Combining HTML, CSS, and JavaScript Files Meteor combines all of your JavaScript files, and all of your Cascading Style Sheet files, and all of your HTML files automatically. If you want to create one humongous JavaScript file which contains all of the code for your app then that is your business. 
    However, if you want to build a more maintainable application, then you should break your JavaScript files into many separate JavaScript files and let Meteor combine them for you. Meteor also combines all of your HTML files into a single file. HTML files are allowed to have the following top-level elements: <head> — All <head> files are combined into a single <head> and served with the initial page load. <body> — All <body> files are combined into a single <body> and served with the initial page load. <template> — All <template> files are compiled into JavaScript templates. Because you are creating a single page app, a Meteor app typically will contain a single HTML file for the <head> and <body> content. However, a Meteor app typically will contain several template files. In other words, all of the interesting stuff happens within the <template> files. Displaying a List of Movies Let me start building the Movie App by displaying a list of movies. In order to display a list of movies, we need to create the following four files: · client\movies.html – Contains the HTML for the <head> and <body> of the page for the Movie app. · client\moviesTemplate.html – Contains the HTML template for displaying the list of movies. · client\movies.js – Contains the JavaScript for supplying data to the moviesTemplate. · server\movies.js – Contains the JavaScript for seeding the database with movies. After you create these files, your folder structure should look like this: Here’s what the client\movies.html file looks like: <head> <title>My Movie App</title> </head> <body> <h1>Movies</h1> {{> moviesTemplate }} </body>   Notice that it contains <head> and <body> top-level elements. The <body> element includes the moviesTemplate with the syntax {{> moviesTemplate }}. The moviesTemplate is defined in the client/moviesTemplate.html file: <template name="moviesTemplate"> <ul> {{#each movies}} <li> {{title}} </li> {{/each}} </ul> </template> By default, Meteor uses the Handlebars templating library. In the moviesTemplate above, Handlebars is used to loop through each of the movies using {{#each}}…{{/each}} and display the title for each movie using {{title}}. The client\movies.js JavaScript file is used to bind the moviesTemplate to the Movies collection on the client. Here’s what this JavaScript file looks like: // Declare client Movies collection Movies = new Meteor.Collection("movies"); // Bind moviesTemplate to Movies collection Template.moviesTemplate.movies = function () { return Movies.find(); }; The Movies collection is a client-side proxy for the server-side Movies database collection. Whenever you want to interact with the collection of Movies stored in the database, you use the Movies collection instead of communicating back to the server. The moviesTemplate is bound to the Movies collection by assigning a function to the Template.moviesTemplate.movies property. The function simply returns all of the movies from the Movies collection. The final file which we need is the server-side server\movies.js file: // Declare server Movies collection Movies = new Meteor.Collection("movies"); // Seed the movie database with a few movies Meteor.startup(function () { if (Movies.find().count() == 0) { Movies.insert({ title: "Star Wars", director: "Lucas" }); Movies.insert({ title: "Memento", director: "Nolan" }); Movies.insert({ title: "King Kong", director: "Jackson" }); } }); The server\movies.js file does two things. First, it declares the server-side Meteor Movies collection. 
    When you declare a server-side Meteor collection, a collection is created in the MongoDB database associated with your Meteor app automatically (Meteor uses MongoDB as its database). Second, the server\movies.js file seeds the Movies collection (MongoDB collection) with three movies. Seeding the database gives us some movies to look at when we open the Movies app in a browser. Creating New Movies Let me modify the Movies Database App so that we can add new movies to the database of movies. First, I need to create a new template file – named client\movieForm.html – which contains an HTML form for creating a new movie: <template name="movieForm"> <fieldset> <legend>Add New Movie</legend> <form> <div> <label> Title: <input id="title" /> </label> </div> <div> <label> Director: <input id="director" /> </label> </div> <div> <input type="submit" value="Add Movie" /> </div> </form> </fieldset> </template> In order for the new form to show up, I need to modify the client\movies.html file to include the movieForm.html template. Notice that I added {{> movieForm }} to the client\movies.html file: <head> <title>My Movie App</title> </head> <body> <h1>Movies</h1> {{> moviesTemplate }} {{> movieForm }} </body> After I make these modifications, our Movie app will display the form: The next step is to handle the submit event for the movie form. Below, I’ve modified the client\movies.js file so that it contains a handler for the submit event raised when you submit the form contained in the movieForm.html template: // Declare client Movies collection Movies = new Meteor.Collection("movies"); // Bind moviesTemplate to Movies collection Template.moviesTemplate.movies = function () { return Movies.find(); }; // Handle movieForm events Template.movieForm.events = { 'submit': function (e, tmpl) { // Don't postback e.preventDefault(); // create the new movie var newMovie = { title: tmpl.find("#title").value, director: tmpl.find("#director").value }; // add the movie to the db Movies.insert(newMovie); } }; The Template.movieForm.events property contains an event map which maps event names to handlers. In this case, I am mapping the form submit event to an anonymous function which handles the event. In the event handler, I am first preventing a postback by calling e.preventDefault(). This is a single page app; no postbacks are allowed! Next, I am grabbing the new movie from the HTML form. I’m taking advantage of the template find() method to retrieve the form field values. Finally, I am calling Movies.insert() to insert the new movie into the Movies collection. Here, I am explicitly inserting the new movie into the client-side Movies collection. Meteor inserts the new movie into the server-side Movies collection behind the scenes. When Meteor inserts the movie into the server-side collection, the new movie is added to the MongoDB database associated with the Movies app automatically. If server-side insertion fails for whatever reason – for example, your internet connection is lost – then Meteor will remove the movie from the client-side Movies collection automatically. In other words, Meteor takes care of keeping the client Movies collection and the server Movies collection in sync. If you open multiple browsers and add movies, then you should notice that all of the movies appear in all of the open browsers automatically. You don’t need to refresh individual browsers to update the client-side Movies collection. Meteor keeps everything synchronized between the browsers and server for you. 
Removing the Insecure Module To make it easier to develop and debug a new Meteor app, by default, you can modify the database directly from the client. For example, you can delete all of the data in the database by opening up your browser console window and executing multiple Movies.remove() commands. Obviously, enabling anyone to modify your database from the browser is not a good idea in a production application. Before you make a Meteor app public, you should first run the meteor remove insecure command from a command/terminal window: Running meteor remove insecure removes the insecure package from the Movie app. Unfortunately, it also breaks our Movie app. We’ll get an “Access denied” error in our browser console whenever we try to insert a new movie. No worries. I’ll fix this issue in the next section. Creating Meteor Methods By taking advantage of Meteor Methods, you can create methods which can be invoked on both the client and the server. By taking advantage of Meteor Methods you can: 1. Perform form validation on both the client and the server. For example, even if an evil hacker bypasses your client code, you can still prevent the hacker from submitting an invalid value for a form field by enforcing validation on the server. 2. Simulate database operations on the client but actually perform the operations on the server. Let me show you how we can modify our Movie app so it uses Meteor Methods to insert a new movie. First, we need to create a new file named common\methods.js which contains the definition of our Meteor Methods: Meteor.methods({ addMovie: function (newMovie) { // Perform form validation if (newMovie.title == "") { throw new Meteor.Error(413, "Missing title!"); } if (newMovie.director == "") { throw new Meteor.Error(413, "Missing director!"); } // Insert movie (simulate on client, do it on server) return Movies.insert(newMovie); } }); The addMovie() method is called from both the client and the server. This method does two things. First, it performs some basic validation. If you don’t enter a title or you don’t enter a director then an error is thrown. Second, the addMovie() method inserts the new movie into the Movies collection. When called on the client, inserting the new movie into the Movies collection just updates the collection. When called on the server, inserting the new movie into the Movies collection causes the database (MongoDB) to be updated with the new movie. You must add the common\methods.js file to the common folder so it will get executed on both the client and the server. Our folder structure now looks like this: We actually call the addMovie() method within our client code in the client\movies.js file. Here’s what the updated file looks like: // Declare client Movies collection Movies = new Meteor.Collection("movies"); // Bind moviesTemplate to Movies collection Template.moviesTemplate.movies = function () { return Movies.find(); }; // Handle movieForm events Template.movieForm.events = { 'submit': function (e, tmpl) { // Don't postback e.preventDefault(); // create the new movie var newMovie = { title: tmpl.find("#title").value, director: tmpl.find("#director").value }; // add the movie to the db Meteor.call( "addMovie", newMovie, function (err, result) { if (err) { alert("Could not add movie " + err.reason); } } ); } }; The addMovie() method is called – on both the client and the server – by calling the Meteor.call() method. This method accepts the following parameters: · The string name of the method to call. 
    · The data to pass to the method (You can actually pass multiple params for the data if you like). · A callback function to invoke after the method completes. In the JavaScript code above, the addMovie() method is called with the new movie retrieved from the HTML form. The callback checks for an error. If there is an error then the error reason is displayed in an alert (please don’t use alerts for validation errors in a production app because they are ugly!). Summary The goal of this blog post was to provide you with a brief walkthrough of a simple Meteor app. I showed you how you can create a simple Movie Database app which enables you to display a list of movies and create new movies. I also explained why it is important to remove the Meteor insecure package from a production app. I showed you how to use Meteor Methods to insert data into the database instead of doing it directly from the client. I’m very impressed with the Meteor framework. Live HTML and Latency Compensation are required features for many real-world Single Page Apps, but implementing these features by hand is not easy. Meteor makes it easy.

    Read the article

  • ASP.NET AppDomain–What it is and why it’s important–Part 12 of 52 part series

    - by OWScott
    AppDomains are a silent, mysterious part of ASP.NET and IIS.  It’s important for the web administrator to be aware of this building block of ASP.NET so that we understand how changes to the system can affect production sites. While this series is targeted at the IIS and web administrator, this topic is useful for the ASP.NET programmer too. This is week 12 of a 52-week series on various web administration related tasks. Past and future videos can be found here.

    Read the article

  • Azure News at TechEd 2010

    - by guybarrette
    The Azure team made a few announcements today at TechEd US 2010: the June 2010 Azure Tools SDK with support for VS 2010 RTM and the .NET Framework 4, the production launch of the Azure CDN, and the OS auto-upgrade feature. Read all about it here.

    Read the article

  • Sun Ray 3 Plus Appliance Announced

    - by [email protected]
    Many of you out there were wondering whether Oracle was going to keep and add to the Sun Ray and Sun virtualized desktop product suite; there have been a number of affirmative statements over the last many months. However, none of them resound like this: the introduction of a new product pretty much proves the point. A couple of minutes before 3:00, local time yesterday, Oracle announced the release of a new Sun Ray appliance, the Sun Ray 3 Plus. This is the unit that will replace the SR 2 FS (which has been for sale now since the middle of last decade). Physically it is about the same size as the 2 FS, but there are some significant differences... As you can see, there is no smart card reader in the front - that has moved to the top to ensure only one hand is required to insert the card. There is also a larger surround on the card reader that lights up to show the user the card is being read properly. A new power on/off switch is on the front which essentially brings power consumption to ~0 watts, but there is also a new 'sleep' timer that looks for 30 minutes of inactivity and then drops the power consumption down to ~1 watt. There are also two USB 2.0 ports accessible on the front instead of one. The standard mic-in and headphone-out ports are there as well. There is even more interesting stuff on the back. From the top down there are two more USB 2.0 ports for a total of four, but then the Oracle "Peripheral Kit" keyboard includes a 3-port USB hub, too. There's a 10/100/1000 Ethernet port as well as a 1000 Mb SFP port, a standard DB-9 serial port and then two DVI ports. Then there is the really big news: two DVI ports driving 2560 x 1600 resolution each. Most PCs can't do that without adding an adapter card. Now, the images I have here are ones taken on a prototype a couple of months back. They are essentially the same as the production unit, but if you would like to see an image of the production Sun Ray 3 Plus unit you can see one here. There is a full data sheet available here. So this is the first Oracle Sun Ray desktop appliance. Proof that the product line lives on. A very good start!

    Read the article

  • Cursors 1 Sets 0

    - by GrumpyOldDBA
    I had an interesting experience with a database I essentially know nothing about. On the server is a database which stores session state; Microsoft provides the code/database with their .NET framework, so I'm told. Anyway, this database has sat happily on the production server for the past 4 years, I guess; we've finally made the upgrade to SQL 2008 and the ASPState database has also been upgraded. It seems most likely that the performance increase of our upgrade tipped the usage of this database into...(read more)

    Read the article

  • Crystal Reports for VS deployment to Web Application doesn't work (3 replies)

    I've followed the "documentation" for getting Crystal Reports to work on a production web site. I've migrated a VS 2003 web project to a VS 2008 web application. Everything works fine on my dev box. When I publish the site out to the server (Windows Server 2003, x86), the reports don't work; I get the infamous: ***** Error Type: System.IO.FileLoadException ***** Error Message: Could not load file or assembly 'CrystalDeci...

    Read the article

  • A few things I learned regarding Azure billing policies

    - by Vincent Grondin
    An hour of small compute time: $0.12. A gig of storage in the cloud: $0.15 per hour. 1 gig of relational database using Azure SQL: $9.99 per month. A Visual Studio Professional with MSDN Premium account: $2,500 per year. Winning an MSDN Professional account that comes preloaded with 750 free hours of Azure per month: PRICELESS!!! But was it really free? Hmmm… Let's see..... Here are a few things I learned regarding Azure billing policies when I attended a promotional training at Microsoft last week... 1) An instance deployed in the cloud really means whatever you upload in there... it doesn't matter if it's in STAGING OR PRODUCTION!!!! Your MSDN account comes with 750 free hours of small compute time per month, which should be enough for one instance of one application deployed in the cloud... So we're cool; the application you run in the cloud doesn't cost you a penny.... BUT the one that's in staging is still consuming time!!! So if you don't want to end up having to pay $42 at the end of the month on your credit card, as happened to a friend of mine, DELETE those staging applications once you've put them in production! This also applies to the instance count you can modify in the configuration file... So stop and think before you decide you want to spawn 50 of those hello world apps. 2) If you have an MSDN account, then you have the promotional 750 hours of Azure credits per month and can use the Azure credits to explore the cloud! But be aware: this promotion ends in 8 months (maybe more like 7 now), and then you will most likely go back to the standard 250 hours of Azure credits. If you do not delete your applications by then, you'll get billed for the extra hours, believe me... There is a switch that you can toggle which will STOP your automatic enrollment after the promotion and prevent you from renewing the Azure account automatically. Yes, the default setting is to automatically renew your account, and remember, you entered your credit card information in the registration process, so, yes, you WILL be billed... Go disable that ASAP. Log into your account, go to "Windows Azure Platform", then click the "Subscriptions" tab, and on the right side you'll see a drop down with different "Actions" in it... Choose "Opt out of auto renew" and NOW you're safe... Still, this is a great offer by Microsoft and I think everyone that has a chance should play a bit with Azure to get to know this technology a bit more... Happy Cloud Computing, all!

    Read the article

  • Adding an expression based image in a client report definition file (RDLC)

    - by rajbk
    In previous posts, I showed you how to create a report using Visual Studio 2010 and how to add a hyperlink to the report.  In this post, I show you how to add an expression-based image to each row of the report. This is similar to displaying a checkbox column for Boolean values.  A sample project is attached to the bottom of this post. To start off, download the project we created earlier from here.  The report we created had a “Discontinued” column of type Boolean. We are going to change it to display an “available” icon or “unavailable” icon based on the “Discontinued” row value.    Load the project and double click on Products.rdlc. With the report design surface active, you will see the “Report Data” tool window. Right click on the Images folder and select “Add Image..”.   Add the available_icon.png and discontinued_icon.png images (the sample project at the end of this post has the icon png files).    You can see the images we added in the “Report Data” tool window.   Drag and drop the available_icon into the “Discontinued” column row (not the header). We get a dialog box which allows us to set the image properties. We will add an expression that specifies the image to display based on the “Discontinued” value from the Product table. Click on the expression (fx) button.   Add the following expression: = IIf(Fields!Discontinued.Value = True, “discontinued_icon”, “available_icon”)   Save and exit all dialog boxes. In the report design surface, resize the column header and change the text from “Discontinued” to “In Production”.   (Optional) Right click on the image cell (not the header), go to “Image Properties..” and offset it by 5pt from the left. (Optional) Change the border color, since it is not set by default for image columns. We are done adding our image column! Compile the application and run it. You will see that the “In Production” column has red ‘x’ icons for discontinued products. Download the VS 2010 sample project NorthwindReportsImage.zip

    Read the article

  • Tricks and Optimizations for your Sitecore website

    - by amaniar
    When working with Sitecore there are some optimizations/configurations I usually repeat in order to make my app production-ready. Following is a small list I have compiled from experience, Sitecore documentation, and conversations with Sitecore engineers. This is not supposed to be technically complete and might not be fit for all environments.   Simple configurations that can make a difference: 1) Configure Sitecore Caches. This is the most straightforward and sure way of increasing the performance of your website. Data and item cache sizes (/databases/database/ [id=web] ) should be configured as needed. You may start with a smaller number and tune them as needed. <cacheSizes hint="setting"> <data>300MB</data> <items>300MB</items> <paths>5MB</paths> <standardValues>5MB</standardValues> </cacheSizes> Tune the html, registry, etc. cache sizes for your website.   <cacheSizes> <sites> <website> <html>300MB</html> <registry>1MB</registry> <viewState>10MB</viewState> <xsl>5MB</xsl> </website> </sites> </cacheSizes> Tune the prefetch cache settings under the App_Config/Prefetch/ folder. Sample /App_Config/Prefetch/Web.Config: <configuration> <cacheSize>300MB</cacheSize> <!--preload items that use this template--> <template desc="mytemplate">{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}</template> <!--preload this item--> <item desc="myitem">{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX }</item> <!--preload children of this item--> <children desc="childitems">{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}</children> </configuration> Break your page into sublayouts so you may cache most of them. Read the caching configuration reference: http://sdn.sitecore.net/upload/sitecore6/sc62keywords/cache_configuration_reference_a4.pdf   2) Disable Analytics for the Shell Site <site name="shell" virtualFolder="/sitecore/shell" physicalFolder="/sitecore/shell" rootPath="/sitecore/content" startItem="/home" language="en" database="core" domain="sitecore" loginPage="/sitecore/login" content="master" contentStartItem="/Home" enableWorkflow="true" enableAnalytics="false" xmlControlPage="/sitecore/shell/default.aspx" browserTitle="Sitecore" htmlCacheSize="2MB" registryCacheSize="3MB" viewStateCacheSize="200KB" xslCacheSize="5MB" />   3) Increase the Check Interval for the MemoryMonitorHook so it doesn’t run every 5 seconds (the default). <hook type="Sitecore.Diagnostics.MemoryMonitorHook, Sitecore.Kernel"> <param desc="Threshold">800MB</param> <param desc="Check interval">00:05:00</param> <param desc="Minimum time between log entries">00:01:00</param> <ClearCaches>false</ClearCaches> <GarbageCollect>false</GarbageCollect> <AdjustLoadFactor>false</AdjustLoadFactor> </hook>   4) Set Analytics.PerformLookup (Sitecore.Analytics.config) to false if your environment doesn’t have access to the internet or you don’t intend to use reverse DNS lookup. 
<setting name="Analytics.PerformLookup" value="false" />   5) Set the value of the “Media.MediaLinkPrefix” setting to “-/media”: <setting name="Media.MediaLinkPrefix" value="-/media" /> Add the following line to the customHandlers section: <customHandlers> <handler trigger="-/media/" handler="sitecore_media.ashx" /> <handler trigger="~/media/" handler="sitecore_media.ashx" /> <handler trigger="~/api/" handler="sitecore_api.ashx" /> <handler trigger="~/xaml/" handler="sitecore_xaml.ashx" /> <handler trigger="~/icon/" handler="sitecore_icon.ashx" /> <handler trigger="~/feed/" handler="sitecore_feed.ashx" /> </customHandlers> Link: http://squad.jpkeisala.com/2011/10/sitecore-media-library-performance-optimization-checklist/   6) Performance counters should be disabled in production if not being monitored <setting name="Counters.Enabled" value="false" />   7) Disable Item/Memory/Timing threshold warnings. Due to the nature of this component, it brings no value in production. <!--<processor type="Sitecore.Pipelines.HttpRequest.StartMeasurements, Sitecore.Kernel" />--> <!--<processor type="Sitecore.Pipelines.HttpRequest.StopMeasurements, Sitecore.Kernel"> <TimingThreshold desc="Milliseconds">1000</TimingThreshold> <ItemThreshold desc="Item count">1000</ItemThreshold> <MemoryThreshold desc="KB">10000</MemoryThreshold> </processor>—>   8) The ContentEditor.RenderCollapsedSections setting is a hidden setting in the web.config file, which by default is true. Setting it to false will improve client performance for authoring environments. <setting name="ContentEditor.RenderCollapsedSections" value="false" />   9) Add a machineKey section to your Web.Config file when using a web farm. Link: http://msdn.microsoft.com/en-us/library/ff649308.aspx   10) If you get errors in the log files similar to: WARN Could not create an instance of the counter 'XXX.XXX' (category: 'Sitecore.System') Exception: System.UnauthorizedAccessException Message: Access to the registry key 'Global' is denied. Make sure the ApplicationPool user is a member of the system “Performance Monitor Users” group on the server.   11) Disable WebDAV configurations on the CD Server if not being used. More: http://sitecoreblog.alexshyba.com/2011/04/disable-webdav-in-sitecore.html   12) Change Log4Net settings to only log Errors on content delivery environments to avoid unnecessary logging. <root> <priority value="ERROR" /> <appender-ref ref="LogFileAppender" /> </root>   13) Disable Analytics for any content item that doesn’t add value. For example a page that redirects to another page.   14) When using Web User Controls avoid registering them on the page the asp.net way: <%@ Register Src="~/layouts/UserControls/MyControl.ascx" TagName="MyControl" TagPrefix="uc2" %> Use Sublayout web control instead – This way Sitecore caching could be leveraged <sc:Sublayout ID="ID" Path="/layouts/UserControls/MyControl.ascx" Cacheable="true" runat="server" />   15) Avoid querying for all children recursively when all items are direct children. Sitecore.Context.Database.SelectItems("/sitecore/content/Home//*"); //Use: Sitecore.Context.Database.GetItem("/sitecore/content/Home");   16) On IIS — you enable static & dynamic content compression on CM and CD More: http://technet.microsoft.com/en-us/library/cc754668%28WS.10%29.aspx   17) Enable HTTP Keep-alive and content expiration in IIS.   18) Use GUID’s when accessing items and fields instead of names or paths. Its faster and wont break your code when things get moved or renamed. 
Context.Database.GetItem("{324DFD16-BD4F-4853-8FF1-D663F6422DFF}") Context.Item.Fields["{89D38A8F-394E-45B0-826B-1A826CF4046D}"]; //is better than Context.Database.GetItem("/Home/MyItem") Context.Item.Fields["FieldName"]   Hope this helps.

    Read the article

  • endpoint.tv - Troubleshooting with AppFabric

    - by The Official Microsoft IIS Site
    Troubleshooting applications in production is always a challenge. With AppFabric monitoring your workflows and services, you get great information about exactly what is happening, including notices about unhandled exceptions. In this episode, Michael McKeown will show you more about how you can use these features to troubleshoot problems with your applications. Be sure to check out the AppFabric Wiki for more great tips, and to share yours as well. (read more)

    Read the article

  • LLBLGen Pro feature highlights: grouping model elements

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system) When working with an entity model which has more than a few entities, it's often convenient to be able to group entities together if they belong to a semantic sub-model. For example, if your entity model has several entities which are about 'security', it would be practical to group them together under the 'security' moniker. This way, you could easily find them, yet they can be left inside the complete entity model altogether so their relationships with entities outside the group are kept. In other situations your domain consists of semi-separate entity models which all target tables/views located in the same database. It then might be convenient to have a single project to manage the complete target database, yet have the entity models separate from each other and have them result in separate code bases. LLBLGen Pro can do both for you. This blog post will illustrate both situations. The feature is called group usage and is controllable through the project settings. This setting is supported on all supported O/R mapper frameworks. Situation one: grouping entities in a single model. This situation is common for entity models which are dense, so many relationships exist between all sub-models: you can't split them up easily into separate models (nor do you likely want to), however it's convenient to have them grouped together into groups inside the entity model at the project level. A typical example for this is the AdventureWorks example database for SQL Server. This database, which is a single catalog, has a schema for each sub-group, however most of these schemas are tightly connected with each other: adding all schemas together will give a model with entities which indirectly are related to all other entities. LLBLGen Pro's default setting for group usage is AsVisualGroupingMechanism, which is what this situation is all about: we group the elements for visual purposes; it has no real meaning for the model nor the code generated. Let's reverse engineer AdventureWorks to an entity model. By default, LLBLGen Pro uses the target schema an element being reverse engineered is in as the group it will be in. This is convenient if you already have categorized tables/views in schemas, as is the case in AdventureWorks. Of course this can be switched off, or corrected on the fly. When reverse engineering, we'll walk through a wizard which will guide us with the selection of the elements for which relational model data should be retrieved, which we can later on use to reverse engineer to an entity model. The first step after specifying which database server to connect to is to select these elements. Below we can see the AdventureWorks catalog as well as the different schemas it contains. We'll include all of them. After the wizard completes, we have all relational model data nicely in our catalog data, with schemas. So let's reverse engineer entities from the tables in these schemas. We select in the catalog explorer the schemas 'HumanResources', 'Person', 'Production', 'Purchasing' and 'Sales', then right-click one of them and from the context menu, we select Reverse engineer Tables to Entity Definitions.... This will bring up the dialog below. We check all checkboxes in one go by checking the checkbox at the top to mark them all to be added to the project. 
As you can see LLBLGen Pro has already filled in the group name based on the schema name, as this is the default and we didn't change the setting. If you want, you can select multiple rows at once and set the group name to something else using the controls on the dialog. We're fine with the group names chosen so we'll simply click Add to Project. This gives the following result:   (I collapsed the other groups to keep the picture small ;)). As you can see, the entities are now grouped. Just to see how dense this model is, I've expanded the relationships of Employee: As you can see, it has relationships with entities from three other groups than HumanResources. It's not doable to cut up this project into sub-models without duplicating the Employee entity in all those groups, so this model is better suited to be used as a single model resulting in a single code base, however it benefits greatly from having its entities grouped into separate groups at the project level, to make work done on the model easier. Now let's look at another situation, namely where we work with a single database while we want to have multiple models and for each model a separate code base. Situation two: grouping entities in separate models within the same project. To get rid of the entities to see the second situation in action, simply undo the reverse engineering action in the project. We still have the AdventureWorks relational model data in the catalog. To switch LLBLGen Pro to see each group in the project as a separate project, open the Project Settings, navigate to General and set Group usage to AsSeparateProjects. In the catalog explorer, select Person and Production, right-click them and select again Reverse engineer Tables to Entities.... Again check the checkbox at the top to mark all entities to be added and click Add to Project. We get two groups, as expected, however this time the groups are seen as separate projects. This means that the validation logic inside LLBLGen Pro will see it as an error if there's e.g. a relationship or an inheritance edge linking two groups together, as that would lead to a cyclic reference in the code bases. To see this variant of the grouping feature, seeing the groups as separate projects, in action, we'll generate code from the project with the two groups we just created: select from the main menu: Project -> Generate Source-code... (or press F7 ;)). In the dialog popping up, select the target .NET framework you want to use, the template preset, fill in a destination folder and click Start Generator (normal). This will start the code generator process. As expected the code generator has simply generated two code bases, one for Person and one for Production: The group name is used inside the namespace for the different elements. This allows you to add both code bases to a single solution and use them together in a different project without problems. Below is a snippet from the code file of a generated entity class. //... using System.Xml.Serialization; using AdventureWorks.Person; using AdventureWorks.Person.HelperClasses; using AdventureWorks.Person.FactoryClasses; using AdventureWorks.Person.RelationClasses; using SD.LLBLGen.Pro.ORMSupportClasses; namespace AdventureWorks.Person.EntityClasses { //... /// <summary>Entity class which represents the entity 'Address'.<br/><br/></summary> [Serializable] public partial class AddressEntity : CommonEntityBase //... 
The advantage of this is that you can have two code bases and work with them separately, yet have a single target database and maintain everything in a single location. If you decide to move to a single code base, you can do so with a change of one setting. It's also useful if you want to keep the groups as separate models (and code bases) yet want to add relationships to elements from another group using a copy of the entity: you can simply reverse engineer the target table to a new entity into a different group, effectively making a copy of the entity. As there's a single target database, changes made to that database are reflected in both models which makes maintenance easier than when you'd have a separate project for each group, with its own relational model data. Conclusion LLBLGen Pro offers a flexible way to work with entities in sub-models and control how the sub-models end up in the generated code.

    Read the article

  • Exadata support for ACFS (and thus, 10gR2) now available!

    - by Robert Freeman
    Really? Exadata, ACFS and 10gR2? If you work with Exadata you are probably aware that ACFS has not been supported - until now! ACFS is now supported on Exadata if you are running Grid Infrastructure version 12.1.0.2 or later. This new support is described in MOS note 1326938.1. Exadata support for ACFS is also mentioned in MOS note 888828.1, which is the king of all Exadata notes on MOS. The upshot is that you can now run Oracle Database 10gR2 on Exadata using ACFS as the storage for the Oracle Database. Don't Over React and just Throw Everything on ACFS! First, let's be clear that ACFS is not an alternative to running your Exadata databases on ASM. If you are running any production or non-production performance-sensitive Oracle databases on 11.2 or 12.1, then you should be running them on ASM disks that are associated with the storage cells. The use case for ACFS is generally limited to the following: running any Oracle 10gR2 databases on Exadata, and running Oracle 11gR2 development or test databases that require rapid cloning and that do not require the performance benefits of the Exadata storage cells. If you are running Oracle Database 12c and you need snapshot/clone kinds of capabilities, then you should be using Oracle Multitenant and the features present in that option (remember, though, that Multitenant is a licensed option). The Fine Print: There are some requirements that you will need to meet if you are going to run ACFS on Exadata. You have to use Oracle Linux; you must use GI 12.1.0.2 or later; and if you wish to use HCC then you must apply the fix for bug 19136936 to your system. This bug and its associated patch do not appear on MOS (as of the time that I wrote this), so you will need to open an SR and get support to provide the patch for you. The Best Use Case for ACFS: Even though Oracle Database 10gR2 is at end of life, it remains in use in a large number of places. This has caused problems when choosing to implement Exadata as a consolidation platform, or when choosing it during a hardware refresh process. Now that ACFS is supported, Exadata has become even more flexible and affords customers greater flexibility when migrating to Exadata and Engineered Systems. While all of the features of Exadata might not be available to a 10.2.0.4 database, certainly the improved processing capabilities of Exadata - its fast-as-heck InfiniBand network fabric, additional memory, reduced power requirements and a whole host of other features - justify moving these databases to Exadata now. This will also make it easier to upgrade these databases when the time comes!

    Read the article

  • Programmation concurrente en Java by Brian Goetz, reviewed by Eric Reboisson

    I just read "Programmation concurrente en Java" and I highly recommend it. One thing struck me in particular: too few developers care about the correctness of their programs. Much as with code cleanliness (cf. Clean Code), many of them stop as soon as it works! Yet when it comes to concurrency, the boundary conditions will most often show up in production, not in development. I'm not saying you should systematically write multithreaded code...

    Read the article

  • PowerShell PowerPack Download

    - by BuckWoody
    I read Jeffery Hicks’ article in this month’s Redmond Magazine on a new add-in for Windows PowerShell 2.0. It’s called the PowerShell Pack, and it has some great new features that I plan to put into place on my production systems as soon as I’ve finished learning and testing them. You can download the pack here if you have PowerShell 2.0. I’m having a lot of fun with it, and I’ll blog about what I’m learning here in the near future, but you should check it out. The only issue I have with it right now is that you have to load a module and then use get-help to find out what it does, because I haven’t found a lot of other documentation so far.

    The most interesting modules for me are the ones that can run a command elevated (in PSUserTools), the task scheduling commands (in TaskScheduler) and the file system checks and tools (in FileSystem). There’s also a way to create simple Graphical User Interface panels (in ). I plan to string all these together to install a management set of tools on my SQL Server Express instances, giving the user “task buttons” to back up or restore a database, add or delete users and so on. Yes, I’ll be careful, and yes, I’ll make sure the user is allowed to do that. For now, I’m testing the download, but I thought I would share what I’m up to. If you have PowerShell 2.0 and you download the pack, let me know how you use it.

    Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It’s just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.
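
    As a quick illustration of the load-then-discover workflow described above, here is a minimal sketch, assuming the pack’s modules (the article names TaskScheduler, PSUserTools and FileSystem) end up in a folder on your PowerShell module path — swap in whichever module you want to explore:

    # Load one of the PowerShell Pack modules (assumed to be installed
    # somewhere on $env:PSModulePath).
    Import-Module TaskScheduler

    # List every command the module exposes with a one-line synopsis,
    # since get-help is the main source of documentation for the pack.
    Get-Command -Module TaskScheduler |
        Get-Help |
        Select-Object Name, Synopsis |
        Format-Table -AutoSize

    Running the same two commands against each module in the pack is a fast way to build an inventory of what it can do before wiring anything into a production system.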

    Read the article

  • Open the SQL Server Error Log with PowerShell

    - by BuckWoody
    Using the Server Management Objects (SMO) library, you don’t even need to have the SQL Server 2008 PowerShell Provider to read the SQL Server Error Logs – in fact, you can use regular old everyday PowerShell. Keep in mind you will need the SMO libraries – which can be installed separately or by installing the Client Tools from the SQL Server install media. You could search for errors, store a result as a variable, or act on the returned values in some other way. Replace the Machine Name with your server and Instance Name with your instance, but leave the quotes, to make this work on your system:

    [reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
    $machineName = "UNIVAC"
    $instanceName = "Production"
    $sqlServer = new-object ("Microsoft.SqlServer.Management.Smo.Server") "$machineName\$instanceName"
    $sqlServer.ReadErrorLog()

    Want to search for something specific, like the word “Error”? Replace the last line with this:

    $sqlServer.ReadErrorLog() | where {$_.Text -like "Error*"}

    Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It’s just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.
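
    And here is a sketch of the “store a result as a variable and act on it” idea from above. It assumes the same $sqlServer object created in the snippet, and that the rows returned by ReadErrorLog() carry LogDate, ProcessInfo and Text columns (verify the column names on your build before trusting it):

    # Store the current error log, keep only the last 24 hours of
    # entries, and export them to CSV for offline review.
    $log = $sqlServer.ReadErrorLog()
    $recent = $log | where {$_.LogDate -gt (Get-Date).AddDays(-1)}
    $recent | Select-Object LogDate, ProcessInfo, Text |
        Export-Csv -Path "C:\Temp\SqlErrorLog.csv" -NoTypeInformation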

    Read the article

  • Running Windows Phone Developers Tools CTP under VMWare Player - Yes you can! - But do you want to?

    - by Liam Westley
    This blog is the result of a quick investigation of running the Windows Phone Developer Tools CTP under VMWare Player. The release notes for the Windows Phone Developer Tools CTP mention that it is not supported under VirtualPC or Hyper-V. Some developers have policies where ‘no non-production code’ can be installed on their development workstation, and so the only way they can use a CTP like this is in a virtual machine. The dilemma here is that the emulator for Windows Phone itself is a virtual machine, and running a virtual machine within another virtual machine is normally frowned upon. Even worse, previous Windows Mobile emulators detected they were in a virtual machine and refused to run.

    Why VMWare?
    I selected VMWare as a possible solution as it is possible to run VMWare ESXi under VMWare Workstation by manually setting configuration options in the VMX configuration file so that it does not detect the presence of a virtual environment. I actually found that I could use VMWare Player (the free version, which can now create VM images) and that there was no need for any editing of the configuration file (I tried various switches, none of which made any difference to performance). So you can run the CTP under VMWare Player; that’s the good news. The bad news is that it is incredibly slow, bordering on unusable. However, if it’s the only way you can use the CTP, at least this is an option.

    VMWare Player configuration
    I used the latest VMWare Player, 3.0, running under Windows x64 on my HP 6910p laptop with an Intel T7500 Dual Core CPU running at 2.2GHz, 4Gb of memory and a separate drive for the virtual machines. I created a machine in VMWare Player with a single CPU and 1536 Mb of memory, and installed Windows 7 x64 from an ISO image. I then performed a Windows Update, installed VMWare Tools, and finally the Windows Phone Developer Tools CTP. After a few warnings about performance, I configured Windows 7 to run with the Windows 7 Basic theme rather than use Aero (which is available under VMWare Player as it has a WDDM driver).

    Timings
    As a test I first launched Microsoft Visual Studio 2010 Express for Windows Phone and created a default Windows Phone Application project. I then clicked the run button, which starts the emulator and then loads the default application onto the emulator. For the second test I left the emulator running, stopped the default application, added a single button to change the page title and redeployed to the already running emulator by clicking the run button.

                         Test 1 (1st run)   Test 2 (emulator already running)
    VMWare Player        10 minutes         1 minute
    Windows x64 native   1 minute           < 10 seconds

    Conclusion
    You can run the Windows Phone Developer Tools CTP under VMWare Player, but it’s really, really slow and you would have to have very good reasons to try this approach. If you need to keep a development system free of non-production code, and the two systems aren’t required to run simultaneously, then I’d consider a boot-from-VHD option. Then you can completely isolate the Windows Phone Developer Tools CTP and development environment into a single VHD, separate from your main development system.

    Read the article

  • GlassFish Back from Devoxx 2011 Mature Java EE 6 and EE 7 well on its way

    - by alexismp
    I'm back from my 8th (!) Devoxx conference (I don't think I've missed one since 2004) and this conference keeps delivering on the promise of a Java developer paradise week. GlassFish was covered in many different ways, and I was not involved in a good number of them, which can only be a good sign!

    Several folks asked me when my Java EE 6 session with Antonio Goncalves was scheduled (we've been covering this for the past two years in University sessions, hands-on labs and regular sessions). It turns out we didn't team up this year (Antonio was crazy busy preparing for Devoxx France) and I had a regular GlassFish session. Instead, this year, Bert Ertman and Paul Bakker covered the 3-hour Java EE 6 University session ("Duke’s Duct Tape Adventures") on the very first day (using GlassFish), with great success it seems. The Java EE 6 lab was also a hit, with a full room of folks covering a lot of technical ground in 2.5 hours (with GlassFish, of course).

    GlassFish was also mentioned during Cameron Purdy's keynote (pretty natural, even if that surprised a number of folks who had not been closely following GlassFish), and also in Stephan Janssen's keynote as the engine powering Parleys.com. In fact Stephan was a speaker in the GlassFish session, describing how they went from a single-instance Tomcat setup to a clustered GlassFish + MQ environment. Also in the session was Johan Vos (of Mollom fame, among other things). Both of these customer testimonials were made possible because GlassFish has been delivering full Java EE 6 implementations for almost two years now, which is plenty of time to see serious production deployments on it.

    The Java EE Gathering (BOF) was very well attended and very lively, with many spec leads participating and discussing progress and also pain points with folks in the room. Thanks to all those attending this session; a good number of RFEs and priority points came out of it. While this wasn't a GlassFish session by any means, it's great to have the current RESTful Admin and upcoming Java EE 7 planned features be a satisfactory answer to some of the requests from the attendance.

    Last but certainly not least, the GlassFish team is busy with Java EE 7 and version 4 of the product. This was discussed and shown during the Java EE keynote and in greater detail in Jerome Dochez's session. If the tweets on his demo (virtualization, provisioning, etc.) are any indication, the reception was very encouraging. Java EE 6 adoption is doing great, and GlassFish, being a production-quality reference implementation, is one of the first to benefit from this. And with GlassFish 4.0, we're looking at increasing product and community adoption by offering a pragmatic technical solution to Java EE PaaS deployments. Stay tuned! (The impatient among you are encouraged to grab a 4.0 build and provide feedback.)

    Read the article

  • Stackify featured in the KC Business Journal

    - by Matt Watson
    Very excited to be in the KC Business Journal today. Stackify is focused on giving developers limited production access to help them troubleshoot applications. We're about ready to launch our product, and we are looking for beta testers!

    Ex-VinSolutions exec pours sale proceeds into Stackify, other tech startups
    Read the entire article on their website:
    http://www.bizjournals.com/kansascity/print-edition/2012/06/01/ex-vinsolutions-exec-pours-sale.html

    Read the article
