Search Results

Search found 12058 results on 483 pages for 'abstract syntax tree'.


  • Trouble installing gnome-shell-extensions-user-theme, dependency/PPA conflict?

    - by Drex
    I installed gnome tweak tool, and am trying to set up custom themes and whatnot. So, trying to install gnome-shell-extensions-user-theme. me@computer:~$ sudo apt-get install gnome-shell-extensions-user-theme [sudo] password for me: Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: gnome-shell-extensions-user-theme : Depends: gnome-shell-extensions-common but it is not going to be installed E: Unable to correct problems, you have held broken packages. Not going to be installed? Okay, let's see about that... me@computer:~$ sudo apt-get install gnome-shell-extensions-common Reading package lists... Done Building dependency tree Reading state information... Done gnome-shell-extensions-common is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Wait, what? Broken packages? Ruh Roh! Seems to me it might be a PPA contradiction problem or something, but I'm tired of trashing my installs. Kinda lost here. Any ideas? Output of sudo apt-get install -f drex@U110:~$ sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    Read the article

  • How should I compress a file with multiple bytes that are the same with Huffman coding?

    - by Omega
    On my great quest for compressing/decompressing files with a Java implementation of Huffman coding (http://en.wikipedia.org/wiki/Huffman_coding) for a school assignment, I am now at the point of building a list of prefix codes. Such codes are used when decompressing a file. Basically, the code is made of zeroes and ones that are used to follow a path in a Huffman tree (left or right) for, ultimately, finding a byte. In this Wikipedia image, to reach the character m the prefix code would be 0111. The idea is that when you compress the file, you will basically convert all the bytes of the file into prefix codes instead (they tend to be smaller than 8 bits, so there's some gain). So every time the character m appears in a file (which in binary is actually 1101101), it will be replaced by 0111 (if we used the tree above). Therefore, 1101101110110111011011101101 becomes 0111011101110111 in the compressed file. I'm okay with that. But what if the following happens: in the file to be compressed there exists only one unique byte, say 1101101, and there are 1000 occurrences of it. Technically, the prefix code of such a byte would be... none, because there is no path to follow, right? I mean, there is only one unique byte anyway, so the tree has just one node. Therefore, if the prefix code is none, I would not be able to write the prefix code in the compressed file, because, well, there is nothing to write. Which brings this problem: how would I compress/decompress such a file if it is impossible to write a prefix code when compressing? (using Huffman coding, due to the school assignment's rules) This tutorial seems to explain a bit better about prefix codes: http://www.cprogramming.com/tutorial/computersciencetheory/huffman.html but doesn't seem to address this issue either.
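
    One common workaround, sketched below, is to special-case the degenerate tree: when the file contains a single distinct byte, the tree is one leaf with no path to follow, so by convention you assign that symbol a one-bit code such as "0" and write one bit per occurrence (or, equivalently, store only the byte and its count in the file header). A minimal sketch in C# (the poster's project is Java, and the Node type here is a hypothetical stand-in, not the poster's code):

        // Builds the prefix-code table, guarding the single-symbol case.
        using System.Collections.Generic;

        class Node
        {
            public byte? Symbol;        // set for leaves, null for internal nodes
            public Node Left, Right;
        }

        static class PrefixCodes
        {
            public static Dictionary<byte, string> Build(Node root)
            {
                var codes = new Dictionary<byte, string>();
                if (root.Symbol.HasValue)
                {
                    // Degenerate tree: the whole file is one repeated byte.
                    // Assign a one-bit code so each occurrence still writes
                    // something the decoder can count back.
                    codes[root.Symbol.Value] = "0";
                    return codes;
                }
                Walk(root, "", codes);
                return codes;
            }

            static void Walk(Node n, string prefix, Dictionary<byte, string> codes)
            {
                if (n.Symbol.HasValue) { codes[n.Symbol.Value] = prefix; return; }
                Walk(n.Left, prefix + "0", codes);   // left edge emits 0
                Walk(n.Right, prefix + "1", codes);  // right edge emits 1
            }
        }

    With 1000 occurrences of one byte this costs 1000 bits plus the header; storing just the count in the header is cheaper still, but the one-bit convention keeps the compressor and decompressor uniform.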

    Read the article

  • Seek first to understand, then to be understood

    - by BuckWoody
    One of the most important (and most difficult) lessons for a technical professional to learn is to not jump to the solution. Perhaps you’ve done this, or had it happen to you. As the person you’re “listening” to is speaking, your mind is performing a B-Tree lookup on possible solutions, and when the final node of the B-Tree in your mind is reached, you blurt out the “only” solution there is to the problem, whether they are done or not. There are two issues here – both of them fatal if you don’t factor them in. First, your B-Tree may not be complete, or correct. That of course leads to an incorrect response, which blows your credibility. People will not trust you if this happens often. The second danger is that the person may modify their entire problem with a single word or phrase. I once had a client explain a detailed problem to me – and I just KNEW the answer. Then they said at the end “well, that’s what it used to do, anyway. Now it doesn’t do that anymore.” Which of course negated my entire solution – happily I had kept my mouth shut until they finished. So practice listening, rather than waiting for your turn to speak. Let the person finish, let them get the concept out, give them your full attention. They’ll appreciate the courtesy, you’ll look more intelligent, and you both may find the right answer to the problem.

    Read the article

  • I need a decent alternative to c++ [closed]

    - by wxiiir
    I've learned PHP and C++. I will list the things I liked and didn't like about each of them, how I decided to learn them in the first place, and why I feel the need to learn a decent alternative to C++. I'm not a professional programmer and only do projects for myself. PHP - Decided to learn because I wanted to build a dynamic website, which I did and it turned out very good; I even coded a 'not so basic' search engine for it that would display the results 'Google style' and really fast, pretty cool stuff. PROS - Pretty consistent syntax for all stuff (minor caveats), great functionality, a joy for me to code in it (it seems to 'know' what I want it to do and just does it) CONS - Painfully slow for number crunching (which takes me to C++, which I only learned because I wanted to do some number crunching and it had to be screaming fast) C++ - Learned because number crunching was so slow in PHP and manipulating large amounts of data was very difficult. I thought: it's a popular programming language and all, tests show that it's fast, and the basic stuff resembles PHP, so it shouldn't be hard to pick up. PROS - It can be used for virtually anything, very very fast CONS - Although fun to code at the start, if I need to do something out of the ordinary, memory allocation routines, pointer stuff, stack sizes etc... will get me tired really quick; the syntax is a bit inconsistent sometimes (more caveats) I guess that from what I wrote you guys will understand what I'm looking for. There are thousands of languages out there, and it's likely that one of them will suit my needs. I've been looking around today, and a friend of mine who is a professional programmer tried OCaml and Fortran and said that both are fast for numerical stuff. I've been inclined to test Fortran, but I need some more input because I want to have some other good 'candidates' to choose from; for example, the Python syntax seemed great to me, but then I found out from some tests that it was a lot slower than C++ and I simply don't want to twiddle my thumbs all day.

    Read the article

  • TVirtualStringTree - resetting non-visual nodes and memory consumption

    - by Remy Lebeau - TeamB
    I have an app that loads records from a binary log file and displays them in a virtual TListView. There are potentially millions of records in a file, and the display can be filtered by the user, so I do not load all of the records in memory at one time, and the ListView item indexes are not a 1-to-1 relation with the file record offsets (List item 1 may be file record 100, for instance). I use the ListView's OnDataHint event to load records for just the items the ListView is actually interested in. As the user scrolls around, the range specified by OnDataHint changes, allowing me to free records that are not in the new range, and allocate new records as needed. This works fine, speed is tolerable, and the memory footprint is very low. I am currently evaluating TVirtualStringTree as a replacement for the TListView, mainly because I want to add the ability to expand/collapse records that span multiple lines (I can fudge it with the TListView by incrementing/decrementing the item count dynamically, but this is not as straightforward as using a real tree). For the most part, I have been able to port the TListView logic and have everything work as I need. I notice that TVirtualStringTree's virtual paradigm is vastly different, though. It does not have the same kind of OnDataHint functionality that TListView does (I can use the OnScroll event to fake it, which allows my memory buffer logic to continue working), and I can use the OnInitializeNode event to associate nodes with records that are allocated. However, once a tree node is initialized, it seems that it remains initialized for the lifetime of the tree. That is not good for me. As the user scrolls around and I remove records from memory, I need to reset those non-visual nodes without removing them from the tree completely, or losing their expand/collapse states. When the user scrolls them back into view, I can re-allocate the records and re-initialize the nodes. Basically, I want to make TVirtualStringTree act as much like TListView as possible, as far as its virtualization is concerned. I have seen that TVirtualStringTree has a ResetNode() method, but I encounter various errors whenever I try to use it. I must be using it wrong. I also thought of just storing a data pointer inside each node to my record buffers, and as I allocate and free memory, updating those pointers accordingly. The end effect does not work so well, either. Worse, my largest test log file has ~5 million records in it. If I initialize the TVirtualStringTree with that many nodes at one time (when the log display is unfiltered), the tree's internal overhead for its nodes takes up a whopping 260MB of memory (without any records being allocated yet). Whereas with the TListView, loading the same log file and all the memory logic behind it, I can get away with using just a few MBs. Any ideas?

    Read the article

  • What's the best way to do base36 arithmetic in perl?

    - by DVK
    What's the best way to do base36 arithmetic in Perl? To be more specific, I need to be able to do the following: Operate on positive N-digit numbers in base 36 (e.g. digits are 0-9 A-Z) N is finite, say 9 Provide basic arithmetic, at the very least the following 3: Addition (A+B) Subtraction (A-B) Whole division, e.g. floor(A/B). Strictly speaking, I don't really need a base10 conversion ability - the numbers will 100% of the time be in base36. So I'm quite OK if the solution does NOT implement conversion from base36 back to base10 and vice versa. I don't much care whether the solution is brute-force "convert to base 10 and back" or converting to binary, or some more elegant approach "natively" performing baseN operations (as stated above, to/from base10 conversion is not a requirement). My only 3 considerations are: It fits the minimum specifications above It's "standard". Currently we're using an old homegrown module based on base10 conversion done by hand that is buggy and sucks. I'd much rather replace that with some commonly used CPAN solution instead of re-writing my own bicycle from scratch, but I'm perfectly capable of building it if no better standard possibility exists. It must be fast-ish (though not lightning fast). Something that takes 1 second to sum up 2 9-digit base36 numbers is worse than anything I can roll on my own :) P.S. Just to provide some context in case people decide to solve my XY problem for me in addition to answering the technical question above :) We have a fairly large tree (stored in DB as a bunch of edges), and we need to superimpose order on a subset of that tree. The tree dimensions are big both depth- and breadth-wise. The tree is VERY actively updated (inserts and deletes and branch moves). This is currently done by having a second table with 3 columns: parent_vertex, child_vertex, local_order, where local_order is a 9-character string built of A-Z0-9 (e.g. a base 36 number). Additional considerations: It is required that the local order is unique per child (and obviously unique per parent), Any complete re-ordering of a parent is somewhat expensive, and thus the implementation is to try and assign - for a parent with X children - orders that are somewhat evenly distributed between 0 and 36**9-1, so that almost no tree inserts result in a full re-ordering.
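
    Since the poster explicitly allows a convert-and-convert-back approach, a minimal sketch of that route follows (in C#, matching the other code on this page, with a hypothetical Base36 helper class; on the Perl side, CPAN conversion modules such as Math::Base36 cover the string-to-integer half). It round-trips through an arbitrary-precision integer:

        // Base36 arithmetic on fixed-width digit strings via BigInteger.
        using System;
        using System.Linq;
        using System.Numerics;

        static class Base36
        {
            const string Digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";

            public static BigInteger Decode(string s) =>
                s.Aggregate(BigInteger.Zero,
                    (acc, c) => acc * 36 + Digits.IndexOf(char.ToUpperInvariant(c)));

            public static string Encode(BigInteger n, int width = 9)
            {
                var chars = new char[width];
                for (int i = width - 1; i >= 0; i--)
                {
                    chars[i] = Digits[(int)(n % 36)];  // least significant digit last
                    n /= 36;
                }
                return new string(chars);
            }

            // A+B, A-B, and floor(A/B) on 9-digit base36 strings.
            public static string Add(string a, string b) => Encode(Decode(a) + Decode(b));
            public static string Sub(string a, string b) => Encode(Decode(a) - Decode(b));
            public static string Div(string a, string b) => Encode(Decode(a) / Decode(b));
        }

    For example, Base36.Add("0000000ZZ", "000000001") yields "000000100". BigInteger division truncates toward zero, which equals floor() for the positive inputs specified, and millions of such operations per second are achievable - nowhere near the one-second-per-addition worry.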

    Read the article

  • How to generate and encode (for use in GA), random, strict, binary rooted trees with N leaves?

    - by Peter Simon
    First, I am an engineer, not a computer scientist, so I apologize in advance for any misuse of nomenclature and general ignorance of CS background. Here is the motivational background for my question: I am contemplating writing a genetic algorithm optimizer to aid in designing a power divider network (also called a beam forming network, or BFN for short). The BFN is intended to distribute power to each of N radiating elements in an array of antennas. The fraction of the total input power to be delivered to each radiating element has been specified. Topologically speaking, a BFN is a strictly binary, rooted tree. Each of the (N-1) interior nodes of the tree represents the input port of an unequal, binary power splitter. The N leaves of the tree are the power divider outputs. Given a particular power divider topology, one is still free to map the power divider outputs to the array inputs in an arbitrary order. There are N! such permutations of the outputs. There are several considerations in choosing the desired ordering: 1) The power ratio for each binary coupler should be within a specified range of values. 2) The ordering should be chosen to simplify the mechanical routing of the transmission lines connecting the power divider. The number of outputs N of the BFN may range from, say, 6 to 22. I have already written a genetic algorithm optimizer that, given a particular BFN topology and desired array input power distribution, will search through the N! permutations of the BFN outputs to generate a design with compliant power ratios and good mechanical routing. I would now like to generalize my program to automatically generate and search through the space of possible BFN topologies. As I understand it, for N outputs (leaves of the binary tree), there are $C_{N-1}$ different topologies that can be constructed, where $C_N$ is the Nth Catalan number. I would like to know how to encode an arbitrary tree having N leaves in a way that is consistent with a chromosomal description for use in a genetic algorithm. Also associated with this is the need to generate random instances for filling the initial population, and to implement crossover and mutation operators for this type of chromosome. Any suggestions will be welcome. Please minimize the amount of CS lingo in your reply, since I am not likely to be acquainted with it. Thanks in advance, Peter
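
    One workable encoding, sketched below in C# (all names hypothetical), reads the chromosome in preorder: each gene says how many of the current subtree's leaves go to the left child. Every gene string whose values are in range decodes to a legal strict binary tree, which keeps random generation, crossover, and mutation simple; out-of-range genes produced by crossover are repaired by clamping. Note that drawing each split uniformly does not sample the Catalan-many topologies uniformly, which is usually acceptable for seeding a GA population:

        // Strict binary trees with n leaves as integer chromosomes.
        // Gene k, read in preorder, is the leaf count of the left child.
        using System;
        using System.Collections.Generic;

        class BfnNode
        {
            public int Leaves;              // leaves below this node
            public BfnNode Left, Right;     // both null for a leaf (output port)
        }

        static class TreeChromosome
        {
            // Random chromosome for a tree with n leaves: n - 1 genes,
            // one per internal node (one per binary power splitter).
            public static List<int> Random(int n, Random rng)
            {
                var genes = new List<int>();
                void Split(int leaves)
                {
                    if (leaves == 1) return;
                    int left = rng.Next(1, leaves);  // 1 .. leaves-1
                    genes.Add(left);
                    Split(left);
                    Split(leaves - left);
                }
                Split(n);
                return genes;
            }

            // Decode a chromosome back into a tree, clamping bad genes.
            public static BfnNode Decode(IReadOnlyList<int> genes, int n)
            {
                int pos = 0;
                BfnNode Build(int leaves)
                {
                    var node = new BfnNode { Leaves = leaves };
                    if (leaves == 1) return node;
                    int left = Math.Clamp(genes[pos++], 1, leaves - 1);
                    node.Left = Build(left);
                    node.Right = Build(leaves - left);
                    return node;
                }
                return Build(n);
            }
        }

    Mutation can re-draw a single gene, and one-point crossover of two equal-length gene lists always decodes after clamping; the existing N!-permutation search over output-to-element assignments can then run on each decoded topology.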

    Read the article

  • Idiomatic use of auto_ptr to transfer ownership to a container

    - by heycam
    I'm refreshing my C++ knowledge after not having used it in anger for a number of years. In writing some code to implement some data structure for practice, I wanted to make sure that my code was exception safe. So I've tried to use std::auto_ptrs in what I think is an appropriate way. Simplifying somewhat, this is what I have: class Tree { public: ~Tree() { /* delete all Node*s in the tree */ } void insert(const string& to_insert); ... private: struct Node { ... vector<Node*> m_children; }; Node* m_root; }; template<typename T> void push_back(vector<T*>& v, auto_ptr<T> x) { v.push_back(x.get()); x.release(); } void Tree::insert(const string& to_insert) { Node* n = ...; // find where to insert the new node ... push_back(n->m_children, auto_ptr<Node>(new Node(to_insert))); ... } So I'm wrapping the function that would put the pointer into the container, vector::push_back, and relying on the by-value auto_ptr argument to ensure that the Node* is deleted if the vector resize fails. Is this an idiomatic use of auto_ptr to save a bit of boilerplate in my Tree::insert? Any improvements you can suggest? Otherwise I'd have to have something like: Node* n = ...; // find where to insert the new node auto_ptr<Node> new_node(new Node(to_insert)); n->m_children.push_back(new_node.get()); new_node.release(); which kind of clutters up what would have been a single line of code if I wasn't worrying about exception safety and a memory leak. (Actually I was wondering if I could post my whole code sample (about 300 lines) and ask people to critique it for idiomatic C++ usage in general, but I'm not sure whether that kind of question is appropriate on stackoverflow.)

    Read the article

  • Basic Spatial Data with SQL Server and Entity Framework 5.0

    - by Rick Strahl
    In my most recent project we needed to do a bit of geo-spatial referencing. While spatial features have been in SQL Server for a while, using those features inside of .NET applications hasn't been as straightforward as it could be, because .NET natively doesn't support spatial types. There are workarounds for this with a few custom projects like SharpMap, or a hack using the SQL Server specific Geo types found in the Microsoft.SqlServer.Types assembly that ships with SQL Server. While these approaches work for manipulating spatial data from .NET code, they didn't work with database access if you're using Entity Framework. Other ORM vendors have been rolling their own versions of spatial integration. In Entity Framework 5.0 running on .NET 4.5 the Microsoft ORM finally adds support for spatial types as well. In this post I'll describe basic geography features that deal with single locations and distance calculations, which is probably the most common usage scenario. SQL Server Transact-SQL Syntax for Spatial Data Before we look at how things work with Entity Framework, let's take a look at how SQL Server allows you to use spatial data, to get an understanding of the underlying semantics. The following SQL examples should work with SQL 2008 and forward. Let's start by creating a test table that includes a Geography field and also a pair of Long/Lat fields that demonstrate how you can work with the geography functions even if you don't have geography/geometry fields in the database. Here's the CREATE command: CREATE TABLE [dbo].[Geo]( [id] [int] IDENTITY(1,1) NOT NULL, [Location] [geography] NULL, [Long] [float] NOT NULL, [Lat] [float] NOT NULL ) Now using plain SQL you can insert data into the table using the geography::STGeomFromText SQL CLR function: insert into Geo( Location , long, lat ) values ( geography::STGeomFromText ('POINT(-121.527200 45.712113)', 4326), -121.527200, 45.712113 ) insert into Geo( Location , long, lat ) values ( geography::STGeomFromText ('POINT(-121.517265 45.714240)', 4326), -121.517265, 45.714240 ) insert into Geo( Location , long, lat ) values ( geography::STGeomFromText ('POINT(-121.511536 45.714825)', 4326), -121.511536, 45.714825) The STGeomFromText function accepts a string that points to a geometric item (a point here, but it can also be a line or path or polygon and many others). You also need to provide an SRID (Spatial Reference System Identifier), which is an integer value that determines the rules for how geography/geometry values are calculated and returned. For mapping/distance functionality you typically want to use 4326, as this is the format used by most mapping software and geo-location libraries like Google and Bing. The spatial data in the Location field is stored in a binary format. Once the location data is in the database you can query the data and do simple distance computations very easily. For example, calculating the distance of each of the values in the database to another spatial point is very easy. Distance calculations compare two points in space using a direct line calculation. For our example I'll compare a new point to all the points in the database.
    Using the Location field the SQL looks like this: -- create a source point DECLARE @s geography SET @s = geography::STGeomFromText('POINT(-121.527200 45.712113)' , 4326); --- return the ids select ID, Location as Geo , Location.ToString() as Point , @s.STDistance(Location) as distance from Geo order by distance The code defines a new point which is the base point to compare each of the values to. You can also compare values from the database directly, but typically you'll want to match a location to another location and determine the difference, for which you can use the geography::STDistance function. This query produces the following output: The STDistance function returns the straight line distance between the passed in point and the point in the database field. The result for SRID 4326 is always in meters. Notice that the first value passed was the same point, so the difference is 0. The other two points are two points here in town in Hood River a little ways away - 808 and 1256 meters respectively. Notice also that you can order the result by the resulting distance, which effectively gives you results that are ordered radially out from closer to further away. This is great for searches of points of interest near a central location (YOU typically!). These geolocation functions are also available to you if you don't use the Geography/Geometry types, but plain float values. It's a little more work, as each point has to be created in the query using the string syntax, but the following code doesn't use a geography field yet produces the same result as the previous query: --- using float fields select ID, geography::STGeomFromText ('POINT(' + STR (long, 15,7 ) + ' ' + Str(lat ,15, 7) + ')' , 4326), geography::STGeomFromText ('POINT(' + STR (long, 15,7 ) + ' ' + Str(lat ,15, 7) + ')' , 4326).ToString(), @s.STDistance( geography::STGeomFromText ('POINT(' + STR(long ,15, 7) + ' ' + Str(lat ,15, 7) + ')' , 4326)) as distance from geo order by distance Spatial Data in the Entity Framework Prior to Entity Framework 5.0 on .NET 4.5, consuming the data above required using stored procedures or raw SQL commands to access the spatial data. In Entity Framework 5 however, Microsoft introduced the new DbGeometry and DbGeography types. These immutable location types provide a bunch of functionality for manipulating spatial points using geometry functions, which in turn can be used to do common spatial queries like I described in the SQL syntax above. The DbGeography/DbGeometry types are immutable, meaning that you can't write to them once they've been created. They are a bit odd in that you need to use factory methods in order to instantiate them - they have no constructor() and you can't assign to properties like Latitude and Longitude. Creating a Model with Spatial Data Let's start by creating a simple Entity Framework model that includes a Location property of type DbGeography: public class GeoLocationContext : DbContext { public DbSet<GeoLocation> Locations { get; set; } } public class GeoLocation { public int Id { get; set; } public DbGeography Location { get; set; } public string Address { get; set; } } That's all there is to it. When you run this now against SQL Server, you get a Geography field for the Location property, which looks the same as the Location field in the SQL examples earlier. Adding Spatial Data to the Database Next let's add some data to the table that includes some latitude and longitude data.
    An easy way to find lat/long locations is to use Google Maps to pinpoint your location, then right click and click on What's Here. Click on the green marker to get the GPS coordinates. To add the actual geolocation data, create an instance of the GeoLocation type and use the DbGeography.PointFromText() factory method to create a new point to assign to the Location property: [TestMethod] public void AddLocationsToDataBase() { var context = new GeoLocationContext(); // remove all context.Locations.ToList().ForEach( loc => context.Locations.Remove(loc)); context.SaveChanges(); var location = new GeoLocation() { // Create a point using native DbGeography Factory method Location = DbGeography.PointFromText( string.Format("POINT({0} {1})", -121.527200,45.712113) ,4326), Address = "301 15th Street, Hood River" }; context.Locations.Add(location); location = new GeoLocation() { Location = CreatePoint(45.714240, -121.517265), Address = "The Hatchery, Bingen" }; context.Locations.Add(location); location = new GeoLocation() { // Create a point using a helper function (lat/long) Location = CreatePoint(45.708457, -121.514432), Address = "Kaze Sushi, Hood River" }; context.Locations.Add(location); location = new GeoLocation() { Location = CreatePoint(45.722780, -120.209227), Address = "Arlington, OR" }; context.Locations.Add(location); context.SaveChanges(); } As promised, a DbGeography object has to be created with one of the static factory methods provided on the type, as the Location.Longitude and Location.Latitude properties are read only. Here I'm using PointFromText(), which uses a "Well Known Text" format to specify spatial data. In the first example I'm specifying to create a Point from a longitude and latitude value, using an SRID of 4326 (just like earlier in the SQL examples). You'll probably want to create a helper method to make the creation of Points easier, to avoid that string format and instead just pass in a couple of double values. Here's my helper called CreatePoint that's used for all but the first point creation in the sample above: public static DbGeography CreatePoint(double latitude, double longitude) { var text = string.Format(CultureInfo.InvariantCulture.NumberFormat, "POINT({0} {1})", longitude, latitude); // 4326 is most common coordinate system used by GPS/Maps return DbGeography.PointFromText(text, 4326); } Using the helper, the syntax becomes a bit cleaner, requiring only a latitude and longitude respectively. Note that my method intentionally swaps the parameters around, because latitude-then-longitude is the common order I've seen with mapping libraries (especially Google Mapping/Geolocation APIs with their LatLng type). When the context is saved, the data is written into the database using the SQL Geography type, which looks the same as in the earlier SQL examples shown. Querying Once you have some location data in the database it's now super easy to query the data and find out the distance between locations. A common query is to ask for a number of locations that are near a fixed point - typically your current location - and order it by distance.
    Using LINQ to Entities a query like this is easy to construct: [TestMethod] public void QueryLocationsTest() { var sourcePoint = CreatePoint(45.712113, -121.527200); var context = new GeoLocationContext(); // find any locations within 5 kilometers ordered by distance var matches = context.Locations .Where(loc => loc.Location.Distance(sourcePoint) < 5000) .OrderBy( loc=> loc.Location.Distance(sourcePoint) ) .Select( loc=> new { Address = loc.Address, Distance = loc.Location.Distance(sourcePoint) }); Assert.IsTrue(matches.Count() > 0); foreach (var location in matches) { Console.WriteLine("{0} ({1:n0} meters)", location.Address, location.Distance); } } This example produces: 301 15th Street, Hood River (0 meters) The Hatchery, Bingen (809 meters) Kaze Sushi, Hood River (1,074 meters) The first point in the database is the same as my source point I'm comparing against, so the distance is 0. The other two are within the 5 km radius, while the Arlington location, which is 65 miles or so out, is not returned. The result is ordered by distance from closest to furthest away. In the code, I first create a source point that is the basis for comparison. The LINQ query then selects all locations that are within 5km of the source point using the Location.Distance() function, which takes a source point as a parameter. You can either use a pre-defined value as I'm doing here, or compare against another database DbGeography property (say when you have two points in the same database for things like routes). What's nice about this query syntax is that it's very clean and easy to read and understand. You can calculate the distance and also easily order by the distance to provide a result that shows locations from closest to furthest away, which is a common scenario for any application that places a user in the context of several locations. It's now super easy to accomplish this. Meters vs. Miles As with the SQL Server functions, the Distance() method returns data in meters, so if you need to work with miles or feet you need to do some conversion. Here are a couple of helpers that might be useful (they can be found in GeoUtils.cs of the sample project): /// <summary> /// Convert meters to miles /// </summary> /// <param name="meters"></param> /// <returns></returns> public static double MetersToMiles(double? meters) { if (meters == null) return 0F; return meters.Value * 0.000621371192; } /// <summary> /// Convert miles to meters /// </summary> /// <param name="miles"></param> /// <returns></returns> public static double MilesToMeters(double? miles) { if (miles == null) return 0; return miles.Value * 1609.344; } Using these two helpers you can query on miles like this: [TestMethod] public void QueryLocationsMilesTest() { var sourcePoint = CreatePoint(45.712113, -121.527200); var context = new GeoLocationContext(); // find any locations within 5 miles ordered by distance var fiveMiles = GeoUtils.MilesToMeters(5); var matches = context.Locations .Where(loc => loc.Location.Distance(sourcePoint) <= fiveMiles) .OrderBy(loc => loc.Location.Distance(sourcePoint)) .Select(loc => new { Address = loc.Address, Distance = loc.Location.Distance(sourcePoint) }); Assert.IsTrue(matches.Count() > 0); foreach (var location in matches) { Console.WriteLine("{0} ({1:n1} miles)", location.Address, GeoUtils.MetersToMiles(location.Distance)); } } which produces: 301 15th Street, Hood River (0.0 miles) The Hatchery, Bingen (0.5 miles) Kaze Sushi, Hood River (0.7 miles) Nice 'n simple.
    .NET 4.5 Only Note that DbGeography and DbGeometry are exclusive to Entity Framework 5.0 (not 4.4, which ships in the same NuGet package or installer) and require .NET 4.5. That's because the new DbGeometry and DbGeography (and related) types are defined in the 4.5 version of System.Data.Entity, which is a CLR assembly and is only updated by major versions of .NET. Why this decision was made to add these types to System.Data.Entity rather than to the frequently updated EntityFramework assembly, which would have possibly made this work in .NET 4.0, is beyond me, especially given that there are no native .NET framework spatial types to begin with. I find it also odd that there is no native CLR spatial type. The DbGeography and DbGeometry types are specific to Entity Framework and live in those assemblies. They will also work for general purpose, non-database spatial data manipulation, but then you are forced into having a dependency on System.Data.Entity, which seems a bit silly. There's also a System.Spatial assembly that's apparently part of WCF Data Services, which in turn doesn't work with Entity Framework. Another example of multiple teams at Microsoft not communicating and implementing the same functionality (differently) in several different places. Perplexed as I may be, for EF-specific code the Entity Framework specific types are easy to use and work well. Working with pre-.NET 4.5 Entity Framework and Spatial Data If you can't go to .NET 4.5 just yet you can also still use spatial features in Entity Framework, but it's a lot more work, as you can't use the DbContext directly to manipulate the location data. You can still run raw SQL statements to write data into the database and retrieve results using the same TSQL syntax I showed earlier, using Context.Database.ExecuteSqlCommand(). Here's code that you can use to add location data into the database: [TestMethod] public void RawSqlEfAddTest() { string sqlFormat = @"insert into GeoLocations( Location, Address) values ( geography::STGeomFromText('POINT({0} {1})', 4326),@p0 )"; var sql = string.Format(sqlFormat,-121.527200, 45.712113); Console.WriteLine(sql); var context = new GeoLocationContext(); Assert.IsTrue(context.Database.ExecuteSqlCommand(sql,"301 N. 15th Street") > 0); } Here I'm using the STGeomFromText() function to add the location data. Note that I'm using string.Format here, which usually would be a bad practice, but it is required here. I was unable to use ExecuteSqlCommand() and its named parameter syntax, as the longitude and latitude parameters are embedded into a string. Rest assured it's required, as the following does not work: string sqlFormat = @"insert into GeoLocations( Location, Address) values ( geography::STGeomFromText('POINT(@p0 @p1)', 4326),@p2 )";context.Database.ExecuteSqlCommand(sql, -121.527200, 45.712113, "301 N. 15th Street") Explicitly assigning the point value with string.Format works, however. There are a number of ways to query location data. You can't get the location data directly, but you can retrieve the point string (which can then be parsed to get Latitude and Longitude) and you can return calculated values like distance.
    Here's an example of how to retrieve some geo data into a resultset using EF's SqlQuery method: [TestMethod] public void RawSqlEfQueryTest() { var sqlFormat = @" DECLARE @s geography SET @s = geography::STGeomFromText('POINT({0} {1})' , 4326); SELECT Address, Location.ToString() as GeoString, @s.STDistance(Location) as Distance FROM GeoLocations ORDER BY Distance"; var sql = string.Format(sqlFormat, -121.527200, 45.712113); var context = new GeoLocationContext(); var locations = context.Database.SqlQuery<ResultData>(sql); Assert.IsTrue(locations.Count() > 0); foreach (var location in locations) { Console.WriteLine(location.Address + " " + location.GeoString + " " + location.Distance); } } public class ResultData { public string GeoString { get; set; } public double Distance { get; set; } public string Address { get; set; } } Hopefully you don't have to resort to this approach, as it's fairly limited. Using the new DbGeography/DbGeometry types makes this sort of thing so much easier. When I had to use code like this before, I typically ended up retrieving only the data PKs and then running another query with just the PKs to retrieve the actual underlying DbContext entities. This was very inefficient and tedious, but it did work. Summary For the current project I'm working on, we actually made the switch to .NET 4.5 purely for the spatial features in EF 5.0. This app heavily relies on spatial queries, and it was worth taking a chance with pre-release code to get this ease of integration, as opposed to manually falling back to stored procedures or raw SQL string queries to return spatial specific queries. Using native Entity Framework code makes life a lot easier than the alternatives. It might be a late addition to Entity Framework, but it sure makes location calculations and storage easy. Where do you want to go today? ;-) Resources: Download Sample Project © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ADO.NET, SQL Server, .NET

    Read the article

  • ANTS CLR and Memory Profiler In Depth Review (Part 1 of 2 &ndash; CLR Profiler)

    - by ToStringTheory
    One of the things that people might not know about me is my obsession with making my code as efficient as possible.  Many people might not realize how much of a task or undertaking this might be, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind…  In trying to make code efficient, there are many different factors that play a part – size of project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code – the list can go on for quite some time. I spend quite a bit of time when developing trying to determine the best way to implement a feature to accomplish the efficiency that I look to achieve.  One program that I have recently come to learn about – Red Gate ANTS Performance (CLR) and Memory profiler – gives me tools to accomplish that job more efficiently as well.  In this review, I am going to cover some of the features of the ANTS profiler set by compiling some hideous example code to test against. Notice As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products in exchange for a free license to the program.  I have not let this affect my opinions of the product in any way, and neither Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way. Introduction The ANTS Profiler pack provided by Red Gate was something that I had not heard of before receiving an email regarding an offer to review it for a license.  Since I look to make my code efficient, it was a no-brainer for me to try it out!  One thing that I have to say took me by surprise is that upon downloading the program and installing it, you fill out a form for your usual contact information.  Sure enough, within 2 hours I received an email from a sales representative at Red Gate asking if she could help me to achieve the most out of my trial time so it wouldn’t go to waste.  After replying to her and explaining that I was looking to review its feature set, she put me in contact with someone that set up a demo session to give me a quick rundown of its features via an online meeting.  After having dealt with a massive ordeal with one of my utility companies and their complete lack of customer service, Red Gate’s friendly and helpful representatives were a breath of fresh air, and something I was thankful for. ANTS CLR Profiler The ANTS CLR profiler is the thing I want to focus on the most in this post, so I am going to dive right in now. Install was simple and took no time at all.  It installed both the CLR and Memory profilers, as well as Visual Studio extensions to facilitate the usage of the profilers (click any images for full size images): The Visual Studio menu options (under the ANTS menu) Starting the CLR Performance Profiler from the start menu yields this window If you follow the instructions after launching the program from the start menu (Click File > New Profiling Session to start a new project), you are given a dialog with plenty of options for profiling: The New Session dialog.  Lots of options.  One thing I noticed is that the buttons in the lower right were half-covered by the panel of the application.  If I had to guess, I would imagine that this is caused by my DPI setting being set to 125%.  This is a problem I have seen in other applications as well that don’t scale well to different DPI settings.
    The profiler options give you the ability to profile: .NET Executable ASP.NET web application (hosted in IIS) ASP.NET web application (hosted in IIS express) ASP.NET web application (hosted in Cassini Web Development Server) SharePoint web application (hosted in IIS) Silverlight 4+ application Windows Service COM+ server XBAP (local XAML browser application) Attach to an already running .NET 4 process Choosing each option provides a varying set of other variables/options that one can set, including options such as application arguments, operating path, whether to record I/O performance, performance counters to record (43 counters in all!), etc…  All in all, they give you the ability to profile many different .NET project types, and make it simple to do so.  In most cases of my using this application, I would be using the built-in Visual Studio extensions, as they automatically start a new profiling project in ANTS with the options set up, and start your program; however, Red Gate has made it easy enough to profile outside of Visual Studio as well. On the flip side of this, as someone who lives most of their work life in Visual Studio, one thing I do wish is that instead of opening an entirely separate application/GUI to perform profiling after launching, they would provide a Visual Studio panel with the information, and integrate more of the profiling project information into Visual Studio.  So, now that we have an idea of what options the profiler gives us, it’s time to test its abilities and features. Horrendous Example Code – Prime Number Generator One of my interests besides development is Physics and Math – what I went to college for.  I have especially always been interested in prime numbers, as they are something of a mystery…  So, to test the abilities of the profiler, I decided that I would go ahead and write a small program, website, and library to generate prime numbers in the quantity that you ask for.  I am going to start off with some terrible code, and show how I would see the profiler being used as a development tool. First off, the IPrimes interface (all code is downloadable at the end of the post): interface IPrimes { IEnumerable<int> GetPrimes(int retrieve); } Simple enough, right?  Anything that implements the interface will (hopefully) provide an IEnumerable of int, with the quantity specified in the parameter argument.  Next, I am going to implement this interface in the most basic way: public class DumbPrimes : IPrimes { public IEnumerable<int> GetPrimes(int retrieve) { //store a list of primes already found var _foundPrimes = new List<int>() { 2, 3 }; //if I ask for 1 or 2 primes, return what was asked for if (retrieve <= _foundPrimes.Count()) return _foundPrimes.Take(retrieve); //the next number to look at int _analyzing = 4; //since I already determined I don't have enough //execute at least once, and until quantity is sufficed do { //assume prime until otherwise determined bool isPrime = true; //start dividing at 2 //divide until number is reached, or determined not prime for (int i = 2; i < _analyzing && isPrime; i++) { //if (i) goes into _analyzing without a remainder, //_analyzing is NOT prime if (_analyzing % i == 0) isPrime = false; } //if it is prime, add to found list if (isPrime) _foundPrimes.Add(_analyzing); //increment number to analyze next _analyzing++; } while (_foundPrimes.Count() < retrieve); return _foundPrimes; } } This is the simplest way to get primes, in my opinion.
    Checking each number by the straight definition of a prime – is it divisible by anything besides 1 and itself? I have included this code in a base class library for my solution, as I am going to use it to demonstrate a couple of features of ANTS.  This class library is consumed by a simple non-MVVM WPF application, and a simple MVC4 website.  I will not post the WPF code here inline, as it is simply an ObservableCollection<int>, a label, two textboxes, and a button. Starting a new Profiling Session So, in Visual Studio, I have just completed my first stint developing the GUI and the DumbPrimes IPrimes class, so now I want to check my code’s efficiency by profiling it.  All I have to do is build the solution (surprised initiating a profiling session doesn’t do this, but I suppose I can understand it), and then click the ANTS menu, followed by Profile Performance.  I am then greeted by the profiler starting up and already monitoring my program live: You are provided with a realtime graph at the top, and a pane at the bottom giving you information on how to proceed.  I am going to start by asking my program to show me the first 15000 primes: After the program finally began responding again (I did all the work on the main UI thread – how bad!), I stopped the profiler, which did kill the process of my program too.  One important thing to note is that the profiler by default wants to give you a lot of detail about the operation – line hit counts, time per line, percent time per line, etc…  The important thing to remember is that this itself takes a lot of time.  When running my program without the profiler attached, it can generate the 15000 primes in 5.18 seconds, compared to 74.5 seconds with it – roughly a 1300 percent increase.  While this may seem like a lot, remember that there is a trade-off.  It may be WAY more inefficient; however, I am able to drill down and make improvements to specific problem areas, and then decrease execution time all around. Analyzing the Profiling Session After clicking ‘Stop Profiling’, the process running my application stopped, and the entire execution time was automatically selected by ANTS, and the results shown below: Now there are a number of interesting things going on here; I am going to cover each in a section of its own: Real Time Performance Counter Bar (top of screen) At the top of the screen is the real time performance bar.  As your application is running, this will constantly update with the currently selected performance counter’s status.  A couple of cool things to note are the fact that you can drag a selection around specific time periods to drill down the detail views in the lower 2 panels to information pertaining to only that period. After selecting a time period, you can bookmark a section and name it, so that it is easy to find later, even after the session is reloaded at a later time. You can also zoom in, out, or fit the graph to the space provided – useful for drilling down. It may be hard to see, but at the top of the processor time graph, below the time ticks and above the red usage graph, there is a green bar. This bar shows at what times a method that is selected in the ‘Call tree’ panel is called. Very cool to be able to click on a method and see at what times it made an impact. As I said before, ANTS provides 43 different performance counters you can hook into.
    Clicking the arrow next to the Performance tab at the top allows you to change between different counters if you have them selected: Method Call Tree, ADO.Net Database Calls, File IO – Detail Panel Red Gate really hit the mark here, I think. When you select a section of the run with the graph, the call tree populates to fill a hierarchical tree of method calls, with information regarding each of the methods.  By default, methods are hidden where the source is not provided (framework type code); however, Red Gate has integrated Reflector into ANTS, so even if you don’t have source for something, you can select a method and get the source if you want.  Methods are also hidden where the impact is seen as insignificant – methods that are only executed for 1% of the overall calling method’s time; in other words, working on making them better is not where your efforts should be focused. – Smart! Source Panel – Detail Panel The source panel is where you can see line-level information on your code, showing the code for the currently selected method from the Method Call Tree.  If the code is not available, Reflector takes care of it and shows the code anyways! As you may notice, there does seem to be a problem with how ANTS determines what line is the actual line that a call is completed on.  I have suspicions that this may be due to some of the inline code optimizations that the CLR applies upon compilation of the assembly.  In a method with comments, the problem is much more severe: As you can see here, apparently the most offending code in my base library was a comment – *gasp*!  Removing the comments does help quite a bit; however, I hope that Red Gate works on their counter algorithm soon to improve the logic on positioning for statistics: I did a small test just to demonstrate the lines are correct without comments. For me, it isn’t a deal breaker, as I can usually determine the correct placements by looking at the application code in the region and determining what makes sense, but it is something that would probably build up some irritation with time. Feature – Suggest Method for Optimization A neat feature to really help those in need of a pointer is the menu option under Tools to automatically suggest methods to optimize/improve: Nice feature – clicking it filters the call tree and stars methods that it thinks are good candidates for optimization.  I do wish that they would have made it more visible for those of us who aren’t great on sight: Process Integration I do think that this could have a place in my process.  After experimenting with the profiler, I do think it would be a great benefit to do some development, testing, and then after all the bugs are worked out, use the profiler to check on things to make sure nothing seems like it is hogging more than its fair share.  For example, with this program, I would have developed it, run it, tested it – it works, but slowly. After looking at the profiler, and seeing the massive amount of time spent in 1 method, I might go ahead and try to re-implement IPrimes (I actually would probably rewrite the offending code, but so that I can distribute both sets of code easily, I’m just going to make another implementation of IPrimes).
    Using two pieces of knowledge about prime numbers can make this method MUCH more efficient – prime numbers greater than 3 fall into two buckets, 6k±1, and a number is prime if it is not divisible by any of the primes before it: public class SmartPrimes : IPrimes { public IEnumerable<int> GetPrimes(int retrieve) { //store a list of primes already found var _foundPrimes = new List<int>() { 2, 3 }; //if I ask for 1 or 2 primes, return what was asked for if (retrieve <= _foundPrimes.Count()) return _foundPrimes.Take(retrieve); //the next number to look at int _k = 1; //since I already determined I don't have enough //execute at least once, and until quantity is sufficed do { //assume prime until otherwise determined bool isPrime = true; int potentialPrime; //analyze 6k-1 //assign the value to potential potentialPrime = 6 * _k - 1; //if any earlier prime divides this, it is NOT a prime number //using PLINQ for quick boost isPrime = !_foundPrimes.AsParallel() .Any(prime => potentialPrime % prime == 0); //if it is prime, add to found list if (isPrime) _foundPrimes.Add(potentialPrime); if (_foundPrimes.Count() == retrieve) break; //analyze 6k+1 //assign the value to potential potentialPrime = 6 * _k + 1; //if any earlier prime divides this, it is NOT a prime number //using PLINQ for quick boost isPrime = !_foundPrimes.AsParallel() .Any(prime => potentialPrime % prime == 0); //if it is prime, add to found list if (isPrime) _foundPrimes.Add(potentialPrime); //increment k to analyze next _k++; } while (_foundPrimes.Count() < retrieve); return _foundPrimes; } } Now there are definitely more things I can do to help make this more efficient, but for the scope of this example, I think this is fine (but still hideous)! Profiling this now yields a happy surprise: 27 seconds to generate the 15000 primes with the profiler attached, and only 1.43 seconds without.  One important thing I wanted to call out, though, was the performance graph now: Notice anything odd?  The %Processor time is above 100%.  This is because there is now more than 1 core in the operation.  A better label for the chart in my mind would have been %Core time, but to each their own. Another odd thing I noticed was that the profiler seemed to be spot on this time in my DumbPrimes class with line details in source, even with comments.  Odd. Profiling Web Applications The last thing that I wanted to cover, which means a lot to me as a web developer, is the great amount of work that Red Gate put into the profiler when profiling web applications.  In my solution, I have a simple MVC4 application set up with 1 page, a single input form, that will output prime values as my WPF app did.  Launching the profiler from Visual Studio as before, nothing is really different in the profiler window; however, I did receive a UAC prompt, without notification, for a Red Gate helper app to integrate with the web server. After requesting 500, 1000, 2000, and 5000 primes, and looking at the profiler session, things are slightly different from before: As you can see, there are 4 spikes of activity in the processor time graph, but there is also something new in the call tree: That’s right – ANTS will actually group method calls by GET/POST operations, so it is easier to find out what action/page is giving the largest problems…  Pretty cool in my mind! Overview Overall, I think that the Red Gate ANTS CLR Profiler has a lot to offer; however, I think it also has a long way to go.
    3 Biggest Pros: Ability to easily drill down from time graph, to method calls, to source code Wide variety of counters to choose from when profiling your application Excellent integration/grouping of methods being called from web applications by request – BRILLIANT! 3 Biggest Cons: Issue regarding line details in source view Nit pick – Processor time vs. Core time Nit pick – Lack of full integration with Visual Studio Ratings Ease of Use (7/10) – I marked down here because of the problems with the line-level details and the extra work that that entails, and the lack of better integration with Visual Studio. Effectiveness (10/10) – I believe that the profiler does EXACTLY what it purports to do.  Especially with its large variety of performance counters, a definite plus! Features (9/10) – Besides the real time performance monitoring and the drill downs that I’ve shown here, ANTS also has great integration with ADO.NET, with the ability to show database queries run by your application in the profiler.  This, with the line-level details, the web request grouping, Reflector integration, and various options to customize your profiling session, creates what I think is a great set of features! Customer Service (10/10) – My entire experience with Red Gate personnel has been nothing but good.  Their people are friendly, helpful, and happy! UI / UX (8/10) – The interface is very easy to get around, and all of the options are easy to find.  With a little bit of poking around, you’ll be optimizing Hello World in no time flat! Overall (8/10) – Overall, I am happy with the Performance Profiler and its features, as well as with the service I received when working with the Red Gate personnel.  I WOULD recommend trying the application and seeing if it would fit into your process, BUT remember there are still some kinks in it that will hopefully be worked out. My next post will definitely be shorter (hopefully), but thank you for reading up to here, or skipping ahead!  Please, if you do try the product, drop me a message and let me know what you think!  I would love to hear any opinions you may have on the product. Code Feel free to download the code I used above – download via DropBox

    Read the article

  • SQL SERVER – Disable Clustered Index and Data Insert

    - by pinaldave
    Earlier today I received the following email. “Dear Pinal, [Removed unrelated content] We looked at your script and found out that in your script of disabling indexes, you have only included non-clustered index during the bulk insert and missed to disabled all the clustered index. Our DBA [name removed] has changed your script a bit and included all the clustered indexes. Since our application is not working. When DBA [name removed] tried to enable clustered indexes again he is facing error incorrect syntax error. We are in deep problem [word replaced] [Removed Identity of organization and few unrelated stuff]“ I have replied to my client and helped them fix the problem. What really came to my attention is the concept of disabling a clustered index. Let us try to learn a lesson from this experience. In this case, there was no need to disable the clustered index at all. I had done the necessary work when I was called in to work on the tuning project. I had removed unused indexes, created a few optimal indexes, and wrote a script to disable a few selected high-cost indexes when bulk insert (and similar) operations are performed. There was another script which rebuilds all the indexes as well. The solution worked till they included the clustered index in the disabling script. A clustered index is in fact the original table (or heap) physically ordered (among other things – not the scope of this article) according to one or more keys (columns). When a clustered index is disabled, the data rows of the disabled clustered index cannot be accessed. This means no insert will be possible. When a non-clustered index is disabled, its data is physically deleted but the definition of the index is kept in the system. For the same reason, even reorganization of the index is not possible till the clustered index (which was disabled) is rebuilt. Now let us come to the second part of the question, regarding receiving the error when the clustered index is ‘enabled’. This is a very common question I receive on the blog. (The following statement is written keeping the syntax of T-SQL in mind.) Clustered indexes can be disabled but cannot be enabled; they have to be rebuilt. It is intuitive to think that something which we have ‘disabled’ can be ‘enabled’, but the syntax for the same is ‘rebuild’. This issue has been explained here: SQL SERVER – How to Enable Index – How to Disable Index – Incorrect syntax near ‘ENABLE’. Let us go over this example where inserting the data is not possible when the clustered index is disabled.

        USE AdventureWorks
        GO
        -- Create Table
        CREATE TABLE [dbo].[TableName](
            [ID] [int] NOT NULL,
            [FirstCol] [varchar](50) NULL,
            CONSTRAINT [PK_TableName] PRIMARY KEY CLUSTERED ([ID] ASC)
        )
        GO
        -- Create Nonclustered Index
        CREATE UNIQUE NONCLUSTERED INDEX [IX_NonClustered_TableName]
        ON [dbo].[TableName] ([FirstCol] ASC)
        GO
        -- Populate Table
        INSERT INTO [dbo].[TableName]
        SELECT 1, 'First'
        UNION ALL
        SELECT 2, 'Second'
        UNION ALL
        SELECT 3, 'Third'
        GO
        -- Disable Nonclustered Index
        ALTER INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] DISABLE
        GO
        -- Insert Data should work fine
        INSERT INTO [dbo].[TableName]
        SELECT 4, 'Fourth'
        UNION ALL
        SELECT 5, 'Fifth'
        GO
        -- Disable Clustered Index
        ALTER INDEX [PK_TableName] ON [dbo].[TableName] DISABLE
        GO
        -- Insert Data will fail
        INSERT INTO [dbo].[TableName]
        SELECT 6, 'Sixth'
        UNION ALL
        SELECT 7, 'Seventh'
        GO
        /*
        Error: Msg 8655, Level 16, State 1, Line 1
        The query processor is unable to produce a plan because the index 'PK_TableName' on table or view 'TableName' is disabled.
        */
        -- Reorganizing Index will also throw an error
        ALTER INDEX [PK_TableName] ON [dbo].[TableName] REORGANIZE
        GO
        /*
        Error: Msg 1973, Level 16, State 1, Line 1
        Cannot perform the specified operation on disabled index 'PK_TableName' on table 'dbo.TableName'.
        */
        -- Rebuilding should work fine
        ALTER INDEX [PK_TableName] ON [dbo].[TableName] REBUILD
        GO
        -- Insert Data should work fine
        INSERT INTO [dbo].[TableName]
        SELECT 6, 'Sixth'
        UNION ALL
        SELECT 7, 'Seventh'
        GO
        -- Clean Up
        DROP TABLE [dbo].[TableName]
        GO

    I hope this example is clear enough. There were a few additional posts I had written years ago; I am listing them here. SQL SERVER – Enable and Disable Index Non Clustered Indexes Using T-SQL SQL SERVER – Enabling Clustered and Non-Clustered Indexes – Interesting Fact Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Constraint and Keys, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
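    A quick footnote to the example above: one way to verify which indexes are currently disabled is to query sys.indexes, where is_disabled is 1 for a disabled index (a small sketch against the sample table):

        USE AdventureWorks
        GO
        SELECT name, type_desc, is_disabled
        FROM sys.indexes
        WHERE object_id = OBJECT_ID('dbo.TableName')
        GO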

    Read the article

  • ASP.NET Web API Exception Handling

    - by Fredrik N
    When I talk about exceptions with my product team I often talk about two kinds of exceptions: business and critical exceptions. Business exceptions are exceptions thrown based on “business rules”, for example if you aren’t allowed to make a purchase. Business exceptions in most cases aren’t important to log into a log file; they can be shown directly to the user. An example of a business exception could be “DeniedToPurchaseException”, or some validation exception such as “FirstNameIsMissingException” etc. Critical exceptions are all other kinds of exceptions, such as the SQL server being down. Those kinds of exception messages need to be logged and should not reach the user, because they can contain information that can be harmful if it reaches the wrong kind of users. I often distinguish business exceptions from critical exceptions by creating a base class called BusinessException; then in my error handling code I catch on the type BusinessException and all other exceptions will be handled as critical exceptions. This blog post will be about different ways to handle exceptions and how business and critical exceptions could be handled.

    Web API and Exceptions – the basics

    When an exception is thrown in an ApiController, a response message will be returned with the status code set to 500 and a response formatted by the formatters based on the “Accept” or “Content-Type” HTTP header, for example JSON or XML. Here is an example:

        public IEnumerable<string> Get()
        {
            throw new ApplicationException("Error!!!!!");

            return new string[] { "value1", "value2" };
        }

    The response message will be:

        HTTP/1.1 500 Internal Server Error
        Content-Length: 860
        Content-Type: application/json; charset=utf-8

        { "ExceptionType":"System.ApplicationException","Message":"Error!!!!!","StackTrace":" at ..."}

    The stack trace will be returned to the client; this is to make it easier to debug. Be careful so you don’t leak out sensitive information to the client. As long as you are developing your API, this is not harmful. In a production environment it can be better to log exceptions and return a user friendly exception instead of the original exception.
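    As a side note, the BusinessException base class mentioned above is not shown in the post; a minimal sketch of what such a base class might look like (my own illustration, not the author’s code):

        using System;

        // Marker base class: catching BusinessException separates
        // business rule failures from critical exceptions.
        public class BusinessException : Exception
        {
            public BusinessException(string message) : base(message) { }
        }

        // One of the business exceptions named in the post.
        public class DeniedToPurchaseException : BusinessException
        {
            public DeniedToPurchaseException(string message) : base(message) { }
        }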
    There is a specific exception shipped with ASP.NET Web API that will not use the formatters based on the “Accept” or “Content-Type” HTTP header: the HttpResponseException class. Here is an example where the HttpResponseException is used:

        // GET api/values
        [ExceptionHandling]
        public IEnumerable<string> Get()
        {
            throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.InternalServerError));

            return new string[] { "value1", "value2" };
        }

    The response will not contain any content, only header information and the status code based on the HttpStatusCode passed as an argument to the HttpResponseMessage. Because the HttpResponseException takes a HttpResponseMessage as an argument, we can give the response a content:

        public IEnumerable<string> Get()
        {
            throw new HttpResponseException(
                new HttpResponseMessage(HttpStatusCode.InternalServerError)
                {
                    Content = new StringContent("My Error Message"),
                    ReasonPhrase = "Critical Exception"
                });

            return new string[] { "value1", "value2" };
        }

    The code above will have the following response:

        HTTP/1.1 500 Critical Exception
        Content-Length: 5
        Content-Type: text/plain; charset=utf-8

        My Error Message

    The Content property of the HttpResponseMessage doesn’t need to be just plain text; it can also be other formats, for example JSON, XML etc.
    By using the HttpResponseException we can, for example, catch an exception and throw a user friendly exception instead:

        public IEnumerable<string> Get()
        {
            try
            {
                DoSomething();
                return new string[] { "value1", "value2" };
            }
            catch (Exception e)
            {
                throw new HttpResponseException(
                    new HttpResponseMessage(HttpStatusCode.InternalServerError)
                    {
                        Content = new StringContent("An error occurred, please try again or contact the administrator."),
                        ReasonPhrase = "Critical Exception"
                    });
            }
        }

    Adding a try catch to every ApiController method will only end in duplication of code. By using a custom ExceptionFilterAttribute or our own custom ApiController base class we can reduce the duplication of code and also have a more general exception handler for our ApiControllers. By creating a custom ApiController and overriding the ExecuteAsync method, we can add a try catch around the base.ExecuteAsync method, but I prefer to skip the creation of a custom ApiController; better to use a solution that requires few files to be modified. The ExceptionFilterAttribute has an OnException method that we can override and add our exception handling to. Here is an example:

        using System;
        using System.Diagnostics;
        using System.Net;
        using System.Net.Http;
        using System.Web.Http;
        using System.Web.Http.Filters;

        public class ExceptionHandlingAttribute : ExceptionFilterAttribute
        {
            public override void OnException(HttpActionExecutedContext context)
            {
                if (context.Exception is BusinessException)
                {
                    throw new HttpResponseException(
                        new HttpResponseMessage(HttpStatusCode.InternalServerError)
                        {
                            Content = new StringContent(context.Exception.Message),
                            ReasonPhrase = "Exception"
                        });
                }

                //Log critical errors
                Debug.WriteLine(context.Exception);

                throw new HttpResponseException(
                    new HttpResponseMessage(HttpStatusCode.InternalServerError)
                    {
                        Content = new StringContent("An error occurred, please try again or contact the administrator."),
                        ReasonPhrase = "Critical Exception"
                    });
            }
        }

    Note: Something to have in mind is that the ExceptionFilterAttribute will be ignored if the ApiController action method throws a HttpResponseException. The code above will always make sure a HttpResponseException is returned; it will also make sure the critical exceptions show a more user friendly message.
    The OnException method can also be used to log exceptions. By using an ExceptionFilterAttribute, the Get() method in the previous example can now look like this:

        public IEnumerable<string> Get()
        {
            DoSomething();
            return new string[] { "value1", "value2" };
        }

    To use an ExceptionFilterAttribute, we can for example add the ExceptionFilterAttribute to our ApiController methods or to the ApiController class definition, or register it globally for all ApiControllers. You can read more about it here. Note: If something goes wrong in the ExceptionFilterAttribute and an exception is thrown that is not of type HttpResponseException, a formatted exception will be thrown with stack trace etc. to the client.

    How about using a custom IHttpActionInvoker?

    We can create our own IHttpActionInvoker and add exception handling to the invoker. The IHttpActionInvoker will be used to invoke the ApiController’s ExecuteAsync method. Here is an example where the default IHttpActionInvoker, ApiControllerActionInvoker, is used to add exception handling:

        public class MyApiControllerActionInvoker : ApiControllerActionInvoker
        {
            public override Task<HttpResponseMessage> InvokeActionAsync(HttpActionContext actionContext, System.Threading.CancellationToken cancellationToken)
            {
                var result = base.InvokeActionAsync(actionContext, cancellationToken);

                if (result.Exception != null && result.Exception.GetBaseException() != null)
                {
                    var baseException = result.Exception.GetBaseException();

                    if (baseException is BusinessException)
                    {
                        return Task.Run<HttpResponseMessage>(() =>
                            new HttpResponseMessage(HttpStatusCode.InternalServerError)
                            {
                                Content = new StringContent(baseException.Message),
                                ReasonPhrase = "Error"
                            });
                    }
                    else
                    {
                        //Log critical error
                        Debug.WriteLine(baseException);

                        return Task.Run<HttpResponseMessage>(() =>
                            new HttpResponseMessage(HttpStatusCode.InternalServerError)
                            {
                                Content = new StringContent(baseException.Message),
                                ReasonPhrase = "Critical Error"
                            });
                    }
                }

                return result;
            }
        }
    You can register the IHttpActionInvoker with your own IoC container to resolve the MyApiControllerActionInvoker, or add it in the Global.asax:

        GlobalConfiguration.Configuration.Services.Remove(typeof(IHttpActionInvoker), GlobalConfiguration.Configuration.Services.GetActionInvoker());
        GlobalConfiguration.Configuration.Services.Add(typeof(IHttpActionInvoker), new MyApiControllerActionInvoker());

    How about using a Message Handler for exception handling?

    By creating a custom Message Handler, we can handle errors after the ApiController and the ExceptionFilterAttribute are invoked and in that way create a global exception handler, BUT the only thing we can take a look at is the HttpResponseMessage; we can’t add a try catch around the Message Handler’s SendAsync method. The last Message Handler that will be used in the Web API pipeline is the HttpControllerDispatcher, and this Message Handler is added to the HttpServer at an early stage. The HttpControllerDispatcher will use the IHttpActionInvoker to invoke the ApiController method. The HttpControllerDispatcher has a try catch that will turn ALL exceptions into a HttpResponseMessage, so that is the reason why a try catch around the SendAsync in a custom Message Handler won’t help us. If we create our own host for the Web API, we could create our own custom HttpControllerDispatcher and add our exception handler to that class, but that would be a little tricky, though it is possible. We can in a Message Handler take a look at the HttpResponseMessage’s IsSuccessStatusCode property to see if the request has failed, and if we throw the HttpResponseException in our ApiControllers, we could give it a reason phrase and use that to identify business exceptions or critical exceptions. I wouldn’t add an exception handler into a Message Handler; instead I would use the ExceptionFilterAttribute and register it globally for all ApiControllers. BUT, now to another interesting issue. What will happen if we have a Message Handler that throws an exception? Those exceptions will not be caught and handled by the ExceptionFilterAttribute. I found a bug in my previous blog post about “Log message Request and Response in ASP.NET WebAPI”, in the MessageHandler I use to log incoming and outgoing messages.
    Here is the code from my blog before I fixed the bug:

        public abstract class MessageHandler : DelegatingHandler
        {
            protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
            {
                var corrId = string.Format("{0}{1}", DateTime.Now.Ticks, Thread.CurrentThread.ManagedThreadId);
                var requestInfo = string.Format("{0} {1}", request.Method, request.RequestUri);

                var requestMessage = await request.Content.ReadAsByteArrayAsync();

                await IncommingMessageAsync(corrId, requestInfo, requestMessage);

                var response = await base.SendAsync(request, cancellationToken);

                var responseMessage = await response.Content.ReadAsByteArrayAsync();

                await OutgoingMessageAsync(corrId, requestInfo, responseMessage);

                return response;
            }

            protected abstract Task IncommingMessageAsync(string correlationId, string requestInfo, byte[] message);
            protected abstract Task OutgoingMessageAsync(string correlationId, string requestInfo, byte[] message);
        }

    If an ApiController throws a HttpResponseException, the Content property of the HttpResponseMessage from the SendAsync will be NULL, so a null reference exception is thrown within the MessageHandler. The yellow screen of death will be returned to the client, with HTML content and HTTP status code 500.
    The bug in the MessageHandler was solved by adding a check against the HttpResponseMessage’s IsSuccessStatusCode property:

        public abstract class MessageHandler : DelegatingHandler
        {
            protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
            {
                var corrId = string.Format("{0}{1}", DateTime.Now.Ticks, Thread.CurrentThread.ManagedThreadId);
                var requestInfo = string.Format("{0} {1}", request.Method, request.RequestUri);

                var requestMessage = await request.Content.ReadAsByteArrayAsync();

                await IncommingMessageAsync(corrId, requestInfo, requestMessage);

                var response = await base.SendAsync(request, cancellationToken);

                byte[] responseMessage;

                if (response.IsSuccessStatusCode)
                    responseMessage = await response.Content.ReadAsByteArrayAsync();
                else
                    responseMessage = Encoding.UTF8.GetBytes(response.ReasonPhrase);

                await OutgoingMessageAsync(corrId, requestInfo, responseMessage);

                return response;
            }

            protected abstract Task IncommingMessageAsync(string correlationId, string requestInfo, byte[] message);
            protected abstract Task OutgoingMessageAsync(string correlationId, string requestInfo, byte[] message);
        }

    If we don’t handle the exceptions that can occur in a custom Message Handler, we can have a hard time finding the problem causing the exception. The savior in this case is Global.asax’s Application_Error:

        protected void Application_Error()
        {
            var exception = Server.GetLastError();
            Debug.WriteLine(exception);
        }

    I would recommend you add Application_Error to the Global.asax and log all exceptions to make sure every kind of exception is handled.

    Summary

    There are different ways we could add exception handling to the Web API: we can use a custom ApiController, an ExceptionFilterAttribute, an IHttpActionInvoker or a Message Handler. The ExceptionFilterAttribute would be a good place to add global exception handling; it requires very few modifications: just register it globally for all ApiControllers. Even the IHttpActionInvoker can be used to minimize the modifications of files. Adding Application_Error to the Global.asax is a good way to catch all unhandled exceptions that can occur, for example exceptions thrown in a Message Handler.
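    As an illustration of the global registration recommended in the summary, here is a minimal sketch (assuming the ExceptionHandlingAttribute from earlier in the post; the registration typically goes in Global.asax’s Application_Start):

        using System.Web.Http;

        public class WebApiApplication : System.Web.HttpApplication
        {
            protected void Application_Start()
            {
                // Applies the exception filter to every ApiController action.
                GlobalConfiguration.Configuration.Filters.Add(new ExceptionHandlingAttribute());
            }
        }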
If you want to know when I have posted a blog post, you can follow me on twitter @fredrikn

    Read the article

  • Abstraction: The War between solving the problem and a general solution.

    - by Bryan Harrington
    As a programmer, I find myself in the dilemma where I want to make my program as abstract and as general as possible. Doing so usually would allow me to reuse my code and have a more general solution for a problem that might (or might not) come up again. Then this voice in my head says, “just solve the problem, dummy, it’s that easy! Why spend more time than you have to?” We all have indeed faced this question where Abstraction is on your right shoulder and Solve-it-stupid sits on the left. Which to listen to, and how often? What is your strategy for this? Should you abstract everything?

    Read the article

  • Speaking at SPTechCon SF 2011 and SPSNOLA 2011

    - by Brian Jackett
    From Feb 7th-9th I’ll be presenting two sessions at SPTechCon San Francisco 2011.  My first presentation is a new session called “The Expanding Developer Toolbox for SharePoint 2010” which covers many of the new tools and functionality available to SharePoint 2010 developers.  My second session is called “Real World Deployment of SharePoint 2007 Solutions” (presented at the last SPTechCon Boston) which covers tips, tricks, and advice on deploying SharePoint 2007 solutions.  If you hurry you may still be able to register for this SPTechCon.  Click here for registration information.  Hope to see you there.  In addition to SPTechCon, I’ll also be speaking at SharePoint Saturday New Orleans 2011 on Feb 26th.  My presentation is called “Managing SharePoint 2010 Farms with PowerShell”.  I’ve given this presentation at a number of recent conferences and it has been popular.  I’m excited for this weekend as well since it will be my first time visiting New Orleans.  Click here for registration information.

    Sessions

    Where: SPTechCon San Francisco 2011
    Title: The Expanding Developer Toolbox for SharePoint 2010
    Audience and Level: Developer, Beginner/Intermediate
    Abstract: LINQ to SharePoint, native Visual Studio 2010 support, easier access to logging, Business Connectivity Services… The list of new features and tools available to developers rapidly grew between SharePoint 2007 and 2010.  In this session we will cover these and many of the other newest features added for SharePoint developers to utilize.  This session is targeted to SharePoint 2007 developers upgrading their skills to SharePoint 2010 or developers new to SharePoint 2010.

    Where: SPTechCon San Francisco 2011
    Title: Real World Deployment of SharePoint 2007 Solutions
    Audience and Level: Admin/Developer, Intermediate
    Abstract: “All I have to do is run some STSADM commands to deploy my SharePoint solutions, right?”  If you are saying that to yourself then you are missing out on some of the more advanced processes you can employ to deploy and maintain your SharePoint solutions and farm.  In this session we will cover lessons learned from 3 years of deploying and automating SharePoint solutions.  This will include using a combination of STSADM, PowerShell, the SharePoint API and a number of other tools in a real world situation to deploy an entire suite of custom SharePoint solutions.  This session is targeted to farm administrators and developers.  Prior experience with SharePoint solutions, STSADM, and minimal PowerShell experience is suggested.

    Where: SharePoint Saturday New Orleans
    Title: Managing SharePoint 2010 Farms with PowerShell
    Audience and Level: Admin, Beginner
    Abstract: Have you been using STSADM (or worse, hand-editing processes) to manage your SharePoint 2007 farms? Are you hearing about needing to learn PowerShell to manage SharePoint 2010 farms? This session will serve as part introduction to PowerShell and part overview of how you can use PowerShell to more efficiently and effectively manage your SharePoint 2010 farm. This session is targeted to farm administrators and IT pros, and no previous experience with PowerShell is required.

    -Frog Out

    Read the article

  • Speaking at PASS 2012… Exciting and Scary… As usual…

    - by drsql
    Edit: As I reread this, I felt I should clarify: “as usual” refers mostly to the “scary” part. I have a lot of stage fright that I have to work through. And it is always exciting to be picked. I have been selected this year at the PASS Summit 2012 to do two sessions, and they are both going to be interesting.

    Pre-Con: Relational Database Design Workshop - Abstract
    Triggers: Born Evil or Misunderstood? - Abstract

    The pre-con session entitled Relational Database Design Workshop will be (at least) the...(read more)

    Read the article

  • Silverlight Cream for February 10, 2011 -- #1045

    - by Dave Campbell
    In this Issue: Mark Monster, Jaime Rodriguez, Mark Hopkins, WindowsPhoneGeek, David Anson, Jesse Liberty, Jeremy Likness, Martin Krüger(-2-), Beth Massi, Joost van Schaik, Laurent Bugnion, and Arik Poznanski.

    Above the Fold:
    Silverlight: "Parsing the Visual Tree with LINQ" Jeremy Likness
    WP7: "Silverlight-ready PNG encoder implementation shows one way to use .NET IEnumerables effectively" David Anson
    Lightswitch: "How to Send Automated Appointments from a LightSwitch Application" Beth Massi

    Shoutouts: Be sure to visit SilverlightShow... check out their top hits last week: SilverlightShow for Jan 31 - Feb 06, 2011. Jaime Rodriguez has a post up that all the WP7 folks will be interested in: FAQ about copy paste functionality in upcoming release.

    From SilverlightCream.com:

    Make use of WCF FaultContracts in Silverlight clients – Mark Monster takes a shot at answering the “The remote server returned an error: NotFound” problem we all see while connecting to a WCF Service.
    Communication between HTML in WebBrowser and Silverlight app – Jaime Rodriguez responds to questions he received about communication between HTML and Silverlight with this post about the bi-directional communication between the control and HTML.
    WP7 - Real Apps, Real Code – Mark Hopkins has a post up about some WP7 starter kits that you can get all the source for, and you can actually download the app from the Marketplace first to see if it interests you!
    WP7 AboutPrompt in depth – WindowsPhoneGeek has this cool post up about the AboutPrompt from the Coding4Fun toolkit in detail... great diagrams showing where all the elements are, and code examples with images.
    Silverlight-ready PNG encoder implementation shows one way to use .NET IEnumerables effectively – David Anson describes why he took it upon himself to write his own PNG encoder for Silverlight... and we all thank him for doing so and providing us with the code!
    Navigation 101 – Cancelling Navigation – Jesse Liberty's latest WP7 From Scratch episode is up (number 32), and he's talking about navigation and how to cancel it if you need to.
    Parsing the Visual Tree with LINQ – Jeremy Likness demonstrates using LINQ to rat out information in the visual tree of your XAML. To quote Jeremy: "you can easily check for intersections between elements and find any type of element no matter how deep within the tree it is".
    SpriteAnimationBehavior – Martin Krüger has a couple more fun things in the Expression Gallery that I haven't discussed. First up is a behavior that animates up to 999 images and lets you control the FramesPerSecond... great demo on the Expression Gallery to play with.
    Second alternative: Storyboard should not start before the Silverlight application is loaded – Martin Krüger's latest is a way to programmatically wait for the Loaded event so that you know you can let your animations fly.
    How to Send Automated Appointments from a LightSwitch Application – Beth Massi's latest LightSwitch post follows up her Outlook automation one with sending appointments using the standard iCalendar format... all the code included, of course.
    The case for the Bindable Application Bar for Windows Phone 7 – Joost van Schaik posts about a bindable Application Bar for your WP7 apps... grab the code and don't leave home without it :)
    MVVM Light V4 preview (BL0014) release notes – Laurent Bugnion posted an update to MVVMLight to CodePlex a couple days ago. This is an early preview of what he plans on having in version 4, so check out the post for what's new and fun.
    Search Digg on Windows Phone 7 – Arik Poznanski followed up his RSS post from last week with this one on searching Digg on WP7... and he's discussing and providing a utility class for doing it.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Simple tips to design a Customer Journey Map

    - by Isabel F. Peñuelas
    “A model can abstract to a level that is comprehensible to humans, without getting lost in details.” - The Unified Modeling Language Reference Manual.

    Inception using Post-its, Storyboards, Lego or Mindmapping Techniques

    The first step in a Customer Experience project is to describe customer interactions by creating a customer journey map. Modeling is never easy, so to succeed in this effort, it is very convenient that your CX team has some “abstract thinking” skills. Besides, it is very helpful to consult a Business Service Design offered by an Interactive Agency to lead your inception process. Initially, you may start with a free discussion using post-it cards, storyboards, even Lego or any other brainstorming technique you like. This will help you to get your mind into the path followed by the customer to purchase your product or to consume any business service you actually offer to your customers, or plan to offer in the near future. (from www.servicedesigntools.org) Colorful mind maps are very useful to document and share meeting ideas. Some mind map software providers such as ThinkBuzzan provide trial versions, and you will find more mindmapping options in this post by Mashable. Finally, to produce a quick one, I do recommend Wise, an entirely online mindmapping service. In my view, the best results in terms of communication will always come from an artistic hand-made drawing. Customer Experience Mind Map Example

    Making your first Customer Journey Map

    To add some more formalization to your thoughts, there is a wide offering for designing Customer Journey Maps. A Customer Journey Map can be represented as a directed graph in which each step is followed by the next. The one below is the simplest Customer Journey you can draw: nothing more than a couple of pictures, numbers and lines to describe the sequence of customer steps in the purchase process. Very simple Customer Journey for Social Mobile Shopping. There are many more sophisticated Customer Journey templates available on the Web, using a variety of styles, as for example this one with a focus on underlining the emotional experience, or this other worksheet template. Representing different interaction devices on the vertical axis, and touchpoints / requirements and existing gaps horizontally, is today’s most common format for Customer Journeys.

    From Customer Journey Maps to CX Technology Adoption Plans

    Once you have your map ready, you can start to identify the IT infrastructure requirements for your CX project. By analyzing customer problems and improvement opportunities with maps, you will then identify the technology gaps and the new investment requirements in your IT infrastructure. Going deeper step by step from the more abstract to the more concrete is the best guarantee of making the right IT investment decisions. Remember to keep your initial customer journey map safe in your pocket in every one of your CX project meetings: that’s your map to success!

    Read the article

  • SQL SERVER – OVER clause with FIRST_VALUE and LAST_VALUE – Analytic Functions Introduced in SQL Server 2012 – ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING

    - by pinaldave
    Yesterday I discussed two analytical functions, FIRST_VALUE and LAST_VALUE. After reading the blog post I received a very interesting question: “Don’t you think there is a bug in your first example, where FIRST_VALUE remains the same but the LAST_VALUE changes on every line? I think the LAST_VALUE should be the highest value in the window or set of results.” I find this question very interesting because this is a very commonly made mistake. No, there is no bug in the code. I think what we need is a bit more explanation. Let me attempt that first. Before that, I suggest you read yesterday’s blog post, as this question is related to it. Now let’s have fun with the following query:

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
            FIRST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) FstValue,
            LAST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) LstValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    The above query will give us the following result: As per the reader’s question, the value of the LAST_VALUE function should always be 114 and not increase as the rows increase. Let me re-write the above code once again with a bit of extra T-SQL syntax. Please pay special attention to the ROWS clause which I have added below.

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
            FIRST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID
                ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) FstValue,
            LAST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID
                ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) LstValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    Now once again check the result of the above query. The result of both queries is the same, because when the OVER clause has an ORDER BY but no explicit frame, the default frame is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW (which behaves like the ROWS version here, since SalesOrderDetailID is unique). If you want the maximum value of the window with the OVER clause, you need to change the frame to UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING in the ROWS clause. Now run the following query and pay special attention to the ROWS clause again.

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
            FIRST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID
                ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FstValue,
            LAST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID
                ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) LstValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    Here is the resultset of the above query, which is what the questioner was asking for. So in simple words, there is no bug, but there is additional syntax to add to get your desired answer. The same logic also applies when the PARTITION BY clause is used. Here is a quick example of how we can further partition the query by SalesOrderID with these new functions.
        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
            FIRST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID
                ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FstValue,
            LAST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID
                ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) LstValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    The above query will give us a windowed resultset on SalesOrderID, as well as the FIRST and LAST value for each window. There is a lot to discuss for these two functions and we have just explored the tip of the iceberg. In a future post I will explore them in more depth. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
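    A footnote to the example above (my own illustration, not from the original post): because the window is ordered ascending by SalesOrderDetailID, LAST_VALUE over the full frame is equivalent to a windowed MAX, which matches the reader’s intuition about “the highest value in the window”:

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
            MAX(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID) MaxValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO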

    Read the article

  • JavaOne 2012 Call for Papers

    - by Tori Wieldt
    JavaOne 2012 is happening Sept. 30-Oct 4 in San Francisco. The Call For Papers for this conference is now open. Java Evangelist Arun Gupta, who was on one of the selection committees and will be again this year, provided some great tips for submission (and a peek into the submission process):

    JavaOne is a technology-focused conference, so any product, marketing or seemingly marketish talks are put at the bottom of the list. Oracle Open World and Oracle Develop are better options for submitting product-specific talks.
    Make your title catchy. Remember the attendees are more likely to read the abstract if they like the title.
    We try our best to recategorize a talk to a different track if needed, but please ensure that you are filing in the right track to have all the right eyeballs looking at it. Also, it does not hurt marking an alternate track if your talk meets the criteria.
    Make sure to coordinate within your team before the submission - multiple sessions from the same team or company do not ensure that the best speaker is picked. In such cases we rely upon your "google presence" and/or the review committee's prior knowledge of the speaker.
    The reviewers may not know you or your product at all, and you get 750 characters to pitch your idea. Make sure to use all of them, to the last 750th character.
    Make sure to read your abstract multiple times to ensure that you are giving all the relevant information. Think through your presentation and see if you are leaving out any important aspects. Also look at whether the abstract has any redundant information that is not required by the reviewers.
    There are additional sections that allow you to share information about the speaker and the presentation summary. Use them to blow the horn about yourself and any other relevant details. Please don't say "call me at xxx-xxx-xxxx to find out the details." :-)

    The tracks this year are:

    Core Java Platform
    Development Tools and Techniques
    Emerging Languages on the JVM
    Enterprise Services Architectures and the Cloud
    Java EE Web Profile and Platform Technologies
    Java ME, Java Card, Embedded, and Devices
    JavaFX and Rich User Experiences

    IMPORTANT: Submit your proposal as soon as possible; the Call for Papers closes April 9th, a mere three weeks away!  Follow these channels to get the latest news about #JavaOne 2012.  originally posted on blogs.oracle.com/javaone

    Read the article

  • Understanding the JSF Lifecycle and ADF Optimized Lifecycle

    - by Steven Davelaar
    While coaching ADF development teams over the years, I have noticed that many developers lack a basic understanding of JavaServer Faces, in particular the JSF lifecycle and how ADF optimizes this lifecycle in specific situations. As a result, ADF developers who are tasked to build a seemingly simple ADF page can get extremely frustrated by the (in their eyes) unexpected or illogical behavior of ADF.  They start to play with the immediate property and the partialTriggers property in a trial-and-error manner. Often, they play with these properties until their specific issue is solved, unaware of other more severe bugs that might be introduced by the values they choose for these properties. So, I decided to submit a presentation for the UKOUG entitled "What you need to know about JSF to be successful with ADF".  The abstract was accepted, and I started putting together the presentation and demo application. I built up a demo application step by step, trying to cover the top JSF-related issues and challenges I encountered over the years in a simple "Hello World" demo. This turned out to be both a very time-consuming and very interesting journey. I had never thought I would learn so much myself in preparing this presentation. I never thought I would end up with potentially controversial conclusions like "Never set immediate=true on an editable component".  I did not realize beforehand the sometimes immense implications of the ADF optimized lifecycle. I never thought that "Hello World" demos could get so complex. But as I went on I was confident this was valuable material, even for experienced ADF developers with a good understanding of JSF. When I finished, I realized the original title and abstract were misleading, as was the target audience. Yes, it was covering the JSF lifecycle, but not the other aspects of JSF you need to know for ADF development. Yes, it was covering some JSF basics as mentioned in the abstract, but all in all it had become a pretty advanced presentation. At the same time, the issues discussed are very common, and novice ADF developers might easily run into them while building their first pages. I ran out of time, so I decided to just present what I had, apologizing at the beginning for the misleading title and showing a second slide with a better title: "18 invaluable lessons about ADF-JSF interaction". I think the presentation was well received overall, although people who don't like it or don't understand it usually don't come and tell you afterwards.... I am still struggling with the title; for this blog post I used yet another title. Anyway, you can download the presentation-that-still-lacks-a-good-title here. The finished JDev 11.1.1.6 demo app can be downloaded here.  The 18 lessons mentioned in the presentation are summarized here. As mentioned on the last slide, print out the lessons and learn them by heart; I am pretty sure it will save you lots of time and frustration!

    Read the article

  • How to recover data files from xampp-windows to xampp-linux after crash?

    - by David Buehler
    My Windows box died after I developed a database in xampp on it; fortunately I have a backup of the entire F:/TestWeb/Xampp partition. Unfortunately, I did not do an Export (nor dump) of the "Lws2" database before the crash. I have replaced the defunct machine with one running Mint7 (based on Ubuntu 9.04 "Jaunty Jackalope") and installed xampp-linux into the /opt partition, so the new xampp now runs fine in /opt/lampp, and says all the elements are secured by passwords (which I just assigned during this installation). I assumed that xampp-windows installed in November would migrate easily to xampp-linux installed in February -- a bad assumption. It apparently would have been simple if I had known enough to do an Export or a Dump before the crash, but.... The backup was done to a Network Attached Storage drive, which is formatted as "vfat", so the backup does not carry with it any valid ownership permissions from MySql on NTFS. I now see from my backup that the old data resided in \TestWeb\Xampp\Mysql\Data\Lws2\ and consists of 7 ".frm" files which define my tables. The actual data -- I suppose a ".sql" file or files -- has disappeared, and I am resigning myself to two days of retyping it. But I do not wish to do the table layouts all over again. So I copied the Data tree to /opt/lampp/Data -- PhpMyAdmin does not see it. So I copied the Lws2 tree to /opt/lampp/Lws2 -- PhpMyAdmin does not see it. So I copied the Data tree to /opt/lampp/var/mysql/Data -- PhpMyAdmin does not see it. So I copied the Lws2 tree to /opt/lampp/var/mysql/Lws2 -- PhpMyAdmin does not see it. So I adjusted all the permissions, changing the owner from "nobody" to "root", and gave full permissions to all groups and to all others, with permissions percolating down, in all 4 trees. You guessed it -- PhpMyAdmin does not see any database named Lws2, only its 4 default ones. I double-checked the permissions and rebooted Linux and repeated the tests. At some point in that process I did see PhpMyAdmin showing "lws2(7)" but when I clicked on it I saw a "no table found" message. I have not been able to recreate that experience. Apparently there are some setup files for MySql and for PhpMyAdmin which need to be set up by running a wizard or two or by editing the files directly. I grepped the TestWeb tree and found an old "ldir = "C:TestWeb\Xampp\MySql\" and a "DataDir = C:TestWeb\Xampp\MySql\" in a .php file and in a .bat file, but I cannot find the corresponding config file names on the /opt partition -- so it looks as if these wizards have not been run to create them. What config files does Linux use to set up the MySql config files for PhpMyAdmin? What wizards do I need to run to point the MySql engine and PhpMyAdmin at the folder /opt/lampp/data/ with its lws2 folder inside it? Or which files do I need to edit, with a sample of what it normally says under Linux? Incidentally, I remember I converted from MyISAM with its .MYD and .MYI files to InnoDB after entering only a small amount of the data -- and I do not know what file types to look for -- perhaps my data is still there but under another guise or in another place? Is it something as simple as linux needing to see "/data/" instead of /Data? I will check that out while waiting for a response. If anyone can point me to documentation that discusses this level of detail -- I will read it avidly! In any case, thanks for any clarification you can give on this thorny problem. wizdum
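    One possible starting point, sketched under assumptions: XAMPP for Linux keeps its MySQL data under /opt/lampp/var/mysql and runs MySQL as the nobody user by default (check /opt/lampp/etc/my.cnf to confirm), and the backup paths below are placeholders for wherever the NAS copy is mounted. Because the tables were converted to InnoDB, the row data lives in the shared ibdata1 file (plus the ib_logfile* files), not in per-table files, so the .frm files alone are not enough:

        # stop the bundled MySQL before touching the data directory
        sudo /opt/lampp/lampp stop

        # copy the database folder AND the shared InnoDB files from the backup
        # (this overwrites the fresh install's empty InnoDB files)
        sudo cp -R /media/nas/TestWeb/Xampp/Mysql/Data/Lws2 /opt/lampp/var/mysql/
        sudo cp /media/nas/TestWeb/Xampp/Mysql/Data/ibdata1 /opt/lampp/var/mysql/
        sudo cp /media/nas/TestWeb/Xampp/Mysql/Data/ib_logfile* /opt/lampp/var/mysql/

        # vfat loses ownership information, so hand the files back to the MySQL user
        sudo chown -R nobody:root /opt/lampp/var/mysql

        sudo /opt/lampp/lampp start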

    Read the article

  • JavaOne 2011: Content review process and Tips for submissions

    - by arungupta
    The Technical Sessions, Birds of Feather, Panels, and Hands-on labs (basically all the content delivered at JavaOne) form the backbone of the conference. At this year's JavaOne conference you'll have access to rock star speakers, the ability to engage with luminaries in the hallways, and beer (or two) with community peers in designated areas. Even though the conference is Oct 2-6, 2011, and will be bigger and better than last year's conference, the Call for Papers submission and review/selection evaluation started much earlier. In previous years, I've participated in the review process, and this year I was honored to serve as co-lead for the "Enterprise Service Architecture and Cloud" track with Ludovic Champenois. We had a stellar review team with an equal mix of Oracle and external community reviewers. The review process is very overwhelming, with the reviewers going through multiple voting iterations on each submission in order to ensure that the selected content is the BEST of the submitted lot. Our ultimate goal was to ensure that the content best represented the track and, most importantly, would draw interest and excitement from attendees. As always, the number and quality of submissions were just superb, making for a truly challenging (and rewarding) experience for the reviewers. As co-lead I tried to ensure that I applied a fair and balanced process in the evaluation of content in my track. Here are some key steps followed by all track leads:

    Vote on sessions - Each reviewer is required to vote on the sessions on a scale of 1-5 and also provide a justifying comment.
    Create buckets - Divide the submissions into different buckets to ensure a fair representation of different topics within a track. This ensures that if a particular bucket got higher votes then the track is not exclusively skewed towards it.
    Top 7 - The review committee provides a list of the top 7 talks that can be used in the promotional material by the JavaOne team. Generally these talks are easy to identify and a consensus is reached upon them fairly quickly.
    First cut - Each track is allocated a total number of sessions (including panels), BoFs, and Hands-on labs that can be approved. The track leads then start creating the first cut of the approvals using the cast votes coupled with their prior experience in the subject matter. In our case, Ludo and I have been attending/speaking at JavaOne (and other popular Java-focused conferences) for double-digit years.
    The Grind - The first cut is then refined and refined and refined using multiple selection criteria such as sorting on the bucket, speaker quality, topic popularity, cumulative vote total, and individual vote scale. The sessions that don't make the cut are reviewed again as well, to see whether they should replace one of the selected ones as a potential alternate.

    I would like to thank the entire Java community for all the submissions, and many thanks to the reviewers who spent countless hours reading each abstract, voting on them, and helping us refine the list. I think approximately 3-4 cumulative hours were spent on each submission to reach an evaluation, specifically for the borderline cases. We gave our recommendations to the JavaOne Program Committee Chairperson (Sharat Chander), and accept/decline notifications should show up in submitter inboxes in the next few weeks.
Here are some points to keep in mind when submitting a session to JavaOne next time: JavaOne is a technology-focused conference so any product, marketing or seemingly marketish talk are put at the bottom of the list.Oracle Open World and Oracle Develop are better options for submitting product specific talks. Make your title catchy. Remember the attendees are more likely to read the abstract if they like the title. We try our best to recategorize the talk to a different track if it needs to but please ensure that you are filing in the right track to have all the right eyeballs looking at it. Also, it does not hurt marking an alternate track if your talk meets the criteria. Make sure to coordinate within your team before the submission - multiple sessions from the same team or company does not ensure that the best speaker is picked. In such case we rely upon your "google presence" and/or review committee's prior knowledge of the speaker. The reviewers may not know you or your product at all and you get 750 characters to pitch your idea. Make sure to use all of them, to the last 750th character. Make sure to read your abstract multiple times to ensure that you are giving all the relevant information ? Think through your presentation and see if you are leaving out any important aspects.Also look if the abstract has any redundant information that will not required by the reviewers. There are additional sections that allow you to share information about the speaker and the presentation summary. Use them to blow the horn about yourself and any other relevant details. Please don't say "call me at xxx-xxx-xxxx to find out the details" :-) The review committee enjoyed reviewing the submissions and we certainly hope you'll have a great time attending them. Happy JavaOne!

    Read the article
