Search Results

Search found 8381 results on 336 pages for 'bad neighbor'.


  • Using Dynamic Proxies to centralize JPA code

    - by Daziplqa
    Hi all. Actually, this is not a question; I really need your opinions on a matter. I put this post here because I know you are always active, so please don't consider it a bad question, and do share your opinions. I've used Java dynamic proxies to centralize the JPA code that I use in standalone mode. Here's the dynamic proxy code:

        package com.forat.service;

        import java.lang.reflect.InvocationHandler;
        import java.lang.reflect.Method;
        import java.lang.reflect.Proxy;
        import java.util.logging.Level;
        import java.util.logging.Logger;
        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.EntityTransaction;
        import javax.persistence.Persistence;
        import com.forat.service.exceptions.DAOException;

        /**
         * Example of usage:
         * <pre>
         * OnlineFromService onfromService =
         *     (OnlineFromService) DAOProxy.newInstance(new OnlineFormServiceImpl());
         * try {
         *     Student s = new Student();
         *     s.setName("Mohammed");
         *     s.setNationalNumber("123456");
         *     onfromService.addStudent(s);
         * } catch (Exception ex) {
         *     System.out.println(ex.getMessage());
         * }
         * </pre>
         * @author mohammed hewedy
         */
        public class DAOProxy implements InvocationHandler {
            private Object object;
            private Logger logger = Logger.getLogger(this.getClass().getSimpleName());

            private DAOProxy(Object object) {
                this.object = object;
            }

            public static Object newInstance(Object object) {
                return Proxy.newProxyInstance(object.getClass().getClassLoader(),
                        object.getClass().getInterfaces(), new DAOProxy(object));
            }

            @Override
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                EntityManagerFactory emf = null;
                EntityManager em = null;
                EntityTransaction et = null;
                Object result = null;
                try {
                    emf = Persistence.createEntityManagerFactory(Constants.UNIT_NAME);
                    em = emf.createEntityManager();
                    Method entityManagerSetter = object.getClass()
                            .getDeclaredMethod(Constants.ENTITY_MANAGER_SETTER_METHOD, EntityManager.class);
                    entityManagerSetter.invoke(object, em);
                    et = em.getTransaction();
                    et.begin();
                    result = method.invoke(object, args);
                    et.commit();
                    return result;
                } catch (Exception ex) {
                    et.rollback();
                    Throwable cause = ex.getCause();
                    logger.log(Level.SEVERE, cause.getMessage());
                    if (cause instanceof DAOException)
                        throw new DAOException(cause.getMessage(), cause);
                    else
                        throw new RuntimeException(cause.getMessage(), cause);
                } finally {
                    em.close();
                    emf.close();
                }
            }
        }

    And here's the link that contains more info: http://m-hewedy.blogspot.com/2010/04/using-dynamic-proxies-to-centralize-jpa.html (please don't consider this an ad; I can copy and paste the entire topic here if you want). So, please give me your opinions. Thanks.
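    One point worth flagging in the invoke method above, offered as a review comment rather than the author's own fix: if createEntityManagerFactory or createEntityManager throws, et (and possibly em or emf) is still null, so the catch and finally blocks will mask the original error with a NullPointerException. A minimal sketch of a more defensive invoke, assuming the same Constants, logger and DAOException used in the post:

        @Override
        public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
            EntityManagerFactory emf = null;
            EntityManager em = null;
            EntityTransaction et = null;
            try {
                emf = Persistence.createEntityManagerFactory(Constants.UNIT_NAME);
                em = emf.createEntityManager();
                Method setter = object.getClass()
                        .getDeclaredMethod(Constants.ENTITY_MANAGER_SETTER_METHOD, EntityManager.class);
                setter.invoke(object, em);
                et = em.getTransaction();
                et.begin();
                Object result = method.invoke(object, args);
                et.commit();
                return result;
            } catch (Exception ex) {
                if (et != null && et.isActive()) {
                    et.rollback(); // only roll back a transaction that was actually started
                }
                Throwable cause = (ex.getCause() != null) ? ex.getCause() : ex;
                logger.log(Level.SEVERE, cause.getMessage(), cause);
                if (cause instanceof DAOException)
                    throw new DAOException(cause.getMessage(), cause);
                throw new RuntimeException(cause.getMessage(), cause);
            } finally {
                if (em != null) em.close();   // guard the cleanup as well
                if (emf != null) emf.close();
            }
        }

    Creating an EntityManagerFactory on every call is also expensive, so caching one factory per persistence unit would likely be the first change most reviewers suggest.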


  • SSRS 2008 printing single page renders different for print

    - by user270437
    I have a problem with SSRS 2008 reports rendering differently on the reporting server than the way it renders when you print the report. I’m trying to figure out to print a single page and have the print show that same records as I see on the report on the screen. As a test, I created a simple report with no headers or footers and just added a Tablix table to display the records (no groupings). My data set for this test displays 2 ¼ pages of records when I deploy it to our reporting services server and run it. If I click the print Icon and preview the report is 2 ¾ pages. I haven’t found anything searching on this so it makes me think it is something simple I’m missing. A basically want the report to render the same records on each page in Report Manager as it does when it prints, how do I accomplish this? (In response to answer posted by Chris)…If that is the case then it is disappointing. Customers are accustomed to WYSIWYG and will have a hard time understanding that, I imagine we will be getting a lot of support calls. This still leaves an issue. I tried using print preview and could not find any way to single out a page. If I select a page up front to print, or preview it renders different so I get different records. And if I preview the entire document, I can only print the entire document. You mentioned the Excel render; we have customers that will want that also. The problem I have found with Excel exports is that even a basic report winds up merging some cells and that messes up sorting. I’m going to try your tip about grouping to see if I can get a clean export to a page. It would have been nice if they would have created a property for certain controls like the tablix table called “ExcelSheet”. Then all you would have to do is give it a name and it would create a new sheet for each control with a name, the name becoming the sheet title. Thanks for the information you supplied it is very useful as I’m new to SSRS. If you know how I can Preview in print render and select individual pages to print from the render let me know. Update 02/19/2010 After testing this more I now realize it is just a bad design of Report managers print driver or a limitation because it is server based. The options work differently than Windows apps drivers, But I did find a work around. Here is the test I performed comparing Excel to Report Manager. I bring up a report that will render more than 1 page when printed. I then export to Excel, in Excel I select print preview. I can navigate the pages in preview and then select a single page like page 3. I can then print just page 3 without leaving print preview and it prints just like it rendered. I cannot do this using print in report manager. If I select print preview in report manager then try to print while in preview it always prints the entire document. However if I close out of print preview, I can then select page 3 and print it as rendered. It is just one additional step once you know what to do, but it took some time to figure it out.


  • Java Thread execution on same data

    - by AR89
    First of all, here is the code; you can just copy and paste it:

        import java.util.ArrayList;

        public class RepetionCounter implements Runnable {
            private int x;
            private int y;
            private int[][] matrix;
            private int xCounter;
            private int yCounter;
            private ArrayList<Thread> threadArray;
            private int rowIndex;
            private boolean[] countCompleted;

            public RepetionCounter(int x, int y, int[][] matrix) {
                this.x = x;
                this.y = y;
                this.matrix = matrix;
                this.threadArray = new ArrayList<Thread>(matrix.length);
                this.rowIndex = 0;
                for (int i = 0; i < matrix.length; i++) {
                    threadArray.add(new Thread(this));
                }
                countCompleted = new boolean[matrix.length];
            }

            public void start() {
                for (int i = 0; i < threadArray.size(); i++) {
                    threadArray.get(i).start();
                    this.rowIndex++;
                }
            }

            public void count(int rowIndex) {
                for (int i = 0; i < matrix[rowIndex].length; i++) {
                    if (matrix[rowIndex][i] == x) {
                        this.xCounter++;
                    } else if (matrix[rowIndex][i] == y) {
                        this.yCounter++;
                    }
                }
            }

            @Override
            public void run() {
                count(this.rowIndex);
                countCompleted[this.rowIndex] = true;
            }

            public int getxCounter() { return xCounter; }

            public void setxCounter(int xCounter) { this.xCounter = xCounter; }

            public int getyCounter() { return yCounter; }

            public void setyCounter(int yCounter) { this.yCounter = yCounter; }

            public boolean[] getCountCompleted() { return countCompleted; }

            public void setCountCompleted(boolean[] countCompleted) { this.countCompleted = countCompleted; }

            public static void main(String args[]) {
                int[][] matrix = {{0, 2, 1}, {2, 3, 4}, {3, 2, 0}};
                RepetionCounter rc = new RepetionCounter(0, 2, matrix);
                rc.start();
                boolean ready = false;
                while (!ready) {
                    for (int i = 0; i < matrix.length; i++) {
                        if (rc.getCountCompleted()[i]) {
                            ready = true;
                        } else {
                            ready = false;
                        }
                    }
                }
                if (rc.getxCounter() > rc.getyCounter()) {
                    System.out.println("There are more x than y");
                } else {
                    System.out.println("There are:" + rc.getxCounter() + " x and:" + rc.getyCounter() + " y");
                }
            }
        }

    What I want this code to do: I give the object a matrix and two numbers, and I want to know how many times these two numbers occur in the matrix. I create as many threads as there are rows in the matrix (that's why there is that ArrayList), so in this object I have k threads (supposing k is the number of rows), and each of them counts the occurrences of the two numbers. The problem is: the first time I run it, everything works, but if I try to execute it another time I get an IndexOutOfBoundsException, or a bad count of the occurrences. The odd thing is that if I get the error and modify the code, after that it works again, but just once. Can you explain to me why this is happening?
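    The symptoms described are a data race. Every thread shares one Runnable, and run() reads this.rowIndex while the main thread keeps incrementing it inside start(), so a late-starting thread can read an index past the last row (hence the IndexOutOfBoundsException) or two threads can count the same row; the unsynchronized xCounter++ / yCounter++ updates can also lose increments. A sketch of one way to avoid both problems, giving each worker its own row and using atomic counters (the class and method names here are invented for illustration):

        import java.util.concurrent.atomic.AtomicInteger;

        public class RowCounter {
            // shared counters that are safe to increment from several threads
            private final AtomicInteger xCounter = new AtomicInteger();
            private final AtomicInteger yCounter = new AtomicInteger();

            public void countAll(final int x, final int y, final int[][] matrix) throws InterruptedException {
                Thread[] workers = new Thread[matrix.length];
                for (int row = 0; row < matrix.length; row++) {
                    final int[] myRow = matrix[row]; // fixed per worker at creation time
                    workers[row] = new Thread(new Runnable() {
                        @Override
                        public void run() {
                            for (int value : myRow) {
                                if (value == x) xCounter.incrementAndGet();
                                else if (value == y) yCounter.incrementAndGet();
                            }
                        }
                    });
                    workers[row].start();
                }
                for (Thread t : workers) {
                    t.join(); // wait for the workers instead of busy-waiting on a flag array
                }
            }

            public static void main(String[] args) throws InterruptedException {
                int[][] matrix = {{0, 2, 1}, {2, 3, 4}, {3, 2, 0}};
                RowCounter counter = new RowCounter();
                counter.countAll(0, 2, matrix);
                System.out.println("x occurred " + counter.xCounter.get()
                        + " times, y occurred " + counter.yCounter.get() + " times");
            }
        }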


  • Basic C question, concerning memory allocation and value assignment

    - by VHristov
    Hi there, I have recently started working on my master thesis in C that I haven't used in quite a long time. Being used to Java, I'm now facing all kinds of problems all the time. I hope someone can help me with the following one, since I've been struggling with it for the past two days. So I have a really basic model of a database: tables, tuples, attributes and I'm trying to load some data into this structure. Following are the definitions: typedef struct attribute { int type; char * name; void * value; } attribute; typedef struct tuple { int tuple_id; int attribute_count; attribute * attributes; } tuple; typedef struct table { char * name; int row_count; tuple * tuples; } table; Data is coming from a file with inserts (generated for the Wisconsin benchmark), which I'm parsing. I have only integer or string values. A sample row would look like: insert into table values (9205, 541, 1, 1, 5, 5, 5, 5, 0, 1, 9205, 10, 11, 'HHHHHHH', 'HHHHHHH', 'HHHHHHH'); I've "managed" to load and parse the data and also to assign it. However, the assignment bit is buggy, since all values point to the same memory location, i.e. all rows look identical after I've loaded the data. Here is what I do: char value[10]; // assuming no value is longer than 10 chars int i, j, k; table * data = (table*) malloc(sizeof(data)); data->name = "table"; data->row_count = number_of_lines; data->tuples = (tuple*) malloc(number_of_lines*sizeof(tuple)); tuple* current_tuple; for(i=0; i<number_of_lines; i++) { current_tuple = &data->tuples[i]; current_tuple->tuple_id = i; current_tuple->attribute_count = 16; // static in our system current_tuple->attributes = (attribute*) malloc(16*sizeof(attribute)); for(k = 0; k < 16; k++) { current_tuple->attributes[k].name = attribute_names[k]; // for int values: current_tuple->attributes[k].type = DB_ATT_TYPE_INT; // write data into value-field int v = atoi(value); current_tuple->attributes[k].value = &v; // for string values: current_tuple->attributes[k].type = DB_ATT_TYPE_STRING; current_tuple->attributes[k].value = value; } // ... } While I am perfectly aware, why this is not working, I can't figure out how to get it working. I've tried following things, none of which worked: memcpy(current_tuple->attributes[k].value, &v, sizeof(int)); This results in a bad access error. Same for the following code (since I'm not quite sure which one would be the correct usage): memcpy(current_tuple->attributes[k].value, &v, 1); Not even sure if memcpy is what I need here... Also I've tried allocating memory, by doing something like: current_tuple->attributes[k].value = (int *) malloc(sizeof(int)); only to get "malloc: * error for object 0x100108e98: incorrect checksum for freed object - object was probably modified after being freed." As far as I understand this error, memory has already been allocated for this object, but I don't see where this happened. Doesn't the malloc(sizeof(attribute)) only allocate the memory needed to store an integer and two pointers (i.e. not the memory those pointers point to)? Any help would be greatly appreciated! Regards, Vassil


  • Break in Class Module vs. Break on Unhandled Errors (VB6 Error Trapping, Options Setting in IDE)

    - by Erx_VB.NExT.Coder
    Basically, I'm trying to understand the difference between the "Break in Class Module" and "Break on Unhandled Errors" that appear in the Visual Basic 6.0 IDE under the following path: Tools --> Options --> General --> Error Trapping The three options appear to be: Break on All Errors Break in Class Module Break on Unhandled Errors Now, apparently, according to MSDN, the second option (Break in Class Module) really just means "Break on Unhandled Errors in Class Modules". Also, this option appears to be set by default (ie: I think its set to this out of the box). What I am trying to figure out is, if I have the second option selected, do I get the third option (Break on Unhandled Errors) for free? In that, does it come included by default for all scenarios outside of the Class Module spectrum? To advise, I don't have any Class Modules in my currently active project. I have .bas modules though. Also, is it possible that by Class Mdules they may be referring to normal .bas Modules as well? (this is my second sub-question). Basically, I just want the setting to ensure there won't be any surprises once the exe is released. I want as many errors to display as possible while I am developing, and non to be displayed when in release mode. Normally, I have two types of On Error Resume Next on my forms where there isn't explicit error handling, they are as follows: On Error Resume Next ' REQUIRED On Error Resume Next ' NOT REQUIRED The required ones are things like, checking to see if an array has any length, if a call to its UBound errors out, that means it has no length, if it returns a value 0 or more, then it does have length (and therefore, exists). These types of Error Statements need to remain active even while I am developing. However, the NOT REQUIRED ones shouldn't remain active while I am developing, so I have them all commented out to ensure that I catch all the errors that exist. Once I am ready to release the exe, I do a CTRL+H to find all occurrences of: 'On Error Resume Next ' NOT REQUIRED (You may have noticed they are commented out)... And replace them with: On Error Resume Next ' NOT REQUIRED ... The uncommented version, so that in release mode, if there are any leftover errors, they do not show to users. For more on the description by MSDN on the three options (which I've read twice and still don't find adequate) you can visit the following link: http://webcache.googleusercontent.com/search?q=cache:yUQZZK2n2IYJ:support.microsoft.com/kb/129876&hl=en&lr=lang_en%7Clang_tr&gl=au&tbs=lr:lang_1en%7Clang_1tr&prmd=imvns&strip=1 I’m also interested in hearing your thoughts if you feel like volunteering them (and this would be my tentative/totally optional third sub-question, that being, your thoughts on fall-back error handling techniques). Just to summarize, the first two questions were, do we get option 3 included in all non-class scenarios if we choose option 2? And, is it possible that when they use the term "Class Module" they may be referring to .bas Modules as well? (Since a .bad Module is really just a class module that is pre-instantiated in the background during start-up). Thank you.


  • How to reserve public API to internal usage in .NET?

    - by mark
    Dear ladies and sirs. Let me first present the case, which will explain my question. This is going to be a bit long, so I apologize in advance :-). I have objects and collections, which should support the Merge API (it is my custom API, the signature of which is immaterial for this question). This API must be internal, meaning only my framework should be allowed to invoke it. However, derived types should be able to override the basic implementation. The natural way to implement this pattern as I see it, is this: The Merge API is declared as part of some internal interface, let us say IMergeable. Because the interface is internal, derived types would not be able to implement it directly. Rather they must inherit it from a common base type. So, a common base type is introduced, which would implement the IMergeable interface explicitly, where the interface methods delegate to respective protected virtual methods, providing the default implementation. This way the API is only callable by my framework, but derived types may override the default implementation. The following code snippet demonstrates the concept: internal interface IMergeable { void Merge(object obj); } public class BaseFrameworkObject : IMergeable { protected virtual void Merge(object obj) { // The default implementation. } void IMergeable.Merge(object obj) { Merge(obj); } } public class SomeThirdPartyObject : BaseFrameworkObject { protected override void Merge(object obj) { // A derived type implementation. } } All is fine, provided a single common base type suffices, which is usually true for non collection types. The thing is that collections must be mergeable as well. Collections do not play nicely with the presented concept, because developers do not develop collections from the scratch. There are predefined implementations - observable, filtered, compound, read-only, remove-only, ordered, god-knows-what, ... They may be developed from scratch in-house, but once finished, they serve wide range of products and should never be tailored to some specific product. Which means, that either: they do not implement the IMergeable interface at all, because it is internal to some product the scope of the IMergeable interface is raised to public and the API becomes open and callable by all. Let us refer to these collections as standard collections. Anyway, the first option screws my framework, because now each possible standard collection type has to be paired with the respective framework version, augmenting the standard with the IMergeable interface implementation - this is so bad, I am not even considering it. The second option breaks the framework as well, because the IMergeable interface should be internal for a reason (whatever it is) and now this interface has to open to all. So what to do? My solution is this. make IMergeable public API, but add an extra parameter to the Merge method, I call it a security token. The interface implementation may check that the token references some internal object, which is never exposed to the outside. If this is the case, then the method was called from within the framework, otherwise - some outside API consumer attempted to invoke it and so the implementation can blow up with a SecurityException. 
Here is the modified code snippet demonstrating this concept: internal static class InternalApi { internal static readonly object Token = new object(); } public interface IMergeable { void Merge(object obj, object token); } public class BaseFrameworkObject : IMergeable { protected virtual void Merge(object obj) { // The default implementation. } public void Merge(object obj, object token) { if (!object.ReferenceEquals(token, InternalApi.Token)) { throw new SecurityException("bla bla bla"); } Merge(obj); } } public class SomeThirdPartyObject : BaseFrameworkObject { protected override void Merge(object obj) { // A derived type implementation. } } Of course, this is less explicit than having an internally scoped interface and the check is moved from the compile time to run time, yet this is the best I could come up with. Now, I have a gut feeling that there is a better way to solve the problem I have presented. I do not know, may be using some standard Code Access Security features? I have only vague understanding of it, but can LinkDemand attribute be somehow related to it? Anyway, I would like to hear other opinions. Thanks.


  • If I use a facade class with generic methods to access the JPA API, how should I provide additional processing for specific types?

    - by Shaun
    Let's say I'm making a fairly simple web application using JAVA EE specs (I've heard this is possible). In this app, I only have about 10 domain/data objects, and these are represented by JPA Entities. Architecturally, I would consider the JPA API to perform the role of a DAO. Of course, I don't want to use the EntityManager directly in my UI (JSF) and I need to manage transactions, so I delegate these tasks to the so-called service layer. More specifically, I would like to be able to handle these tasks in a single DataService class (often also called CrudService) with generic methods. See this article by Adam Bien for an example interface: http://www.adam-bien.com/roller/abien/entry/generic_crud_service_aka_dao My project differs from that article in that I can't use EJBs, so my service classes are essentially just named beans and I handle transactions manually. Regardless, what I want is a single interface for simple CRUD operations on my data objects because having a different class for each data type would lead to a lot of duplicate and/or unnecessary code. Ideally, my views would be able to use a method such as public <T> List<T> findAll(Class<T> type) { ... } to retrieve data. Using JSF, it might look something like this: <h:dataTable value="#{dataService.findAll(data.class)}" var="d"> ... </h:dataTable> Similarly, after validating forms, my controller could submit the data with a method such as: public <T> void add(T entity) { ... } Granted, you'd probably actually want to return something useful to the caller. In any case, this works well if your data can be treated as homogenous in this manner. Alas, it breaks down when you need to perform additional processing on certain objects before passing them on to JPA. For example, let's say I'm dealing with Books and Authors which have a many-to-many relationship. Each Book has a set of IDs referring to its authors, and each Author has a set of IDs referring to their books. Normally, JPA can manage this kind of relationship for you, but in some cases it can't (for example, the google app engine JPA provider doesn't support this). Thus, when I persist a new book for example, I may need to update the corresponding author entities. My question, then, is if there's an elegant way to handle this or if I should reconsider the sanity of my whole design. Here's a couple ways I see of dealing with it: The instanceof operator. I could use this to target certain classes when special processing is needed. Perhaps maintainability suffers and it isn't beautiful code, but if there's only 10 or so domain objects it can't be all that bad... could it? Make a different service for each entity type (ie, BookService and AuthorService). All services would inherit from a generic DataService base class and override methods if special processing is needed. At this point, you could probably also just call them DAOs instead. As always, I appreciate the help. Let me know if any clarifications are needed, as I left out many smaller details.
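    For what it's worth, the second option described above (a generic base service plus a subclass only for the entities that need extra work) can stay quite small. A rough sketch under the question's assumptions — DataService, BookService and the beforePersist hook are invented names, transaction handling is omitted, and Book stands for the entity from the example:

        import java.util.List;
        import javax.persistence.EntityManager;

        // Generic facade for the plain CRUD cases.
        public class DataService {
            protected final EntityManager em;

            public DataService(EntityManager em) {
                this.em = em;
            }

            public <T> List<T> findAll(Class<T> type) {
                // assumes the JPQL entity name matches the simple class name
                return em.createQuery("SELECT e FROM " + type.getSimpleName() + " e", type)
                         .getResultList();
            }

            public <T> T add(T entity) {
                beforePersist(entity); // hook for types that need extra processing
                em.persist(entity);
                return entity;
            }

            protected void beforePersist(Object entity) {
                // no-op by default
            }
        }

        // Only the handful of entities with special needs get a subclass.
        class BookService extends DataService {
            public BookService(EntityManager em) {
                super(em);
            }

            @Override
            protected void beforePersist(Object entity) {
                if (entity instanceof Book) {
                    Book book = (Book) entity;
                    // update the Author side of the relationship by hand here,
                    // since the provider in the question cannot manage it
                }
            }
        }

    A JSF page could keep using a single #{dataService.findAll(...)} call for the homogeneous cases, while the controller injects BookService (or another subclass) only where the extra processing matters.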


  • "undefined reference" error with namespace across multiple files

    - by user1330734
    I've looked at several related posts but no luck with this error. I receive the undefined reference error message below when my namespace exists across multiple files. If I compile only ConsoleTest.cpp with the contents of Console.cpp dumped into it, the source compiles. I would appreciate any feedback on this issue; thanks in advance.

        g++ Console.cpp ConsoleTest.cpp -o ConsoleTest.o -Wall
        /tmp/cc8KfSLh.o: In function `getValueTest()':
        ConsoleTest.cpp:(.text+0x132): undefined reference to `void Console::getValue<unsigned int>(unsigned int&)'
        collect2: ld returned 1 exit status

    Console.h

        #include <iostream>
        #include <sstream>
        #include <string>

        namespace Console {
            std::string getLine();
            template <typename T> void getValue(T &value);
        }

    Console.cpp

        #include "Console.h"

        using namespace std;

        namespace Console {
            string getLine() {
                string str;
                while (true) {
                    cin.clear();
                    if (cin.eof()) {
                        break; // handle eof (Ctrl-D) gracefully
                    }
                    if (cin.good()) {
                        char next = cin.get();
                        if (next == '\n') break;
                        str += next; // add character to string
                    } else {
                        cin.clear(); // clear error state
                        string badToken;
                        cin >> badToken;
                        cerr << "Bad input encountered: " << badToken << endl;
                    }
                }
                return str;
            }

            template <typename T> void getValue(T &value) {
                string inputStr = Console::getLine();
                istringstream strStream(inputStr);
                strStream >> value;
            }
        }

    ConsoleTest.cpp

        #include "Console.h"

        void getLineTest() {
            std::string str;
            std::cout << "getLinetest" << std::endl;
            while (str != "next") {
                str = Console::getLine();
                std::cout << "<string>" << str << "</string>" << std::endl;
            }
        }

        void getValueTest() {
            std::cout << "getValueTest" << std::endl;
            unsigned x = 0;
            while (x != 12345) {
                Console::getValue(x);
                std::cout << "x: " << x << std::endl;
            }
        }

        int main() {
            getLineTest();
            getValueTest();
            return 0;
        }


  • Problems with this code?

    - by J4C3N-14
    I'm trying to use this code which is an example taken from here https://gist.github.com/2383248 , but it is coming up with a error on the public void onClick which is Multiple markers at this line - implements android.view.View.OnClickListener.onClick - Syntax error, insert "}" to complete MethodBody, but when I add the brace it just throws another error after many tries and fails of different suggestions and ideas. It may be a syntax error and bad coding from me (just started learning to program) but does anyone have any ideas how to resolve this or point me in the right direction I would be very grateful. public class ICSCalendarActivity extends Activity implements View.OnClickListener{ Button button1; int year1; int month1; int day1; int ShiftPattern; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); button1 = (Button)findViewById(R.id.openButton); button1.setText("open"); button1.setOnClickListener(this); Bundle extras = getIntent().getExtras(); year1 = extras.getInt("year1"); day1 = extras.getInt("day1"); month1 = extras.getInt("month1"); ShiftPattern = extras.getInt("ShiftPattern"); } public void onClick(View v){ private static void addToCalendar(Context ICSCalendarActivity, final String title, final long dtstart, final long dtend) { final ContentResolver cr = ICSCalendarActivity.getContentResolver(); Cursor cursor ; if (Integer.parseInt(Build.VERSION.SDK) >= 8 ) cursor = cr.query(Uri.parse("content://com.android.calendar/calendars"), new String[]{ "_id", "displayname" }, null, null, null); else cursor = cr.query(Uri.parse("content://calendar/calendars"), new String[]{ "_id", "displayname" }, null, null, null); if ( cursor.moveToFirst() ) { final String[] calNames = new String[cursor.getCount()]; final int[] calIds = new int[cursor.getCount()]; for (int i = 0; i < calNames.length; i++) { calIds[i] = cursor.getInt(0); calNames[i] = cursor.getString(1); cursor.moveToNext(); } AlertDialog.Builder builder = new AlertDialog.Builder(ICSCalendarActivity); builder.setSingleChoiceItems(calNames, -1, new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { ContentValues cv = new ContentValues(); cv.put("calendar_id", calIds[which]); cv.put("title", title); cv.put("dtstart", dtstart ); cv.put("hasAlarm", 1); cv.put("dtend", dtend); Uri newEvent ; if (Integer.parseInt(Build.VERSION.SDK) >= 8 ) newEvent = cr.insert(Uri.parse("content://com.android.calendar/events"), cv); else newEvent = cr.insert(Uri.parse("content://calendar/events"), cv); if (newEvent != null) { long id = Long.parseLong( newEvent.getLastPathSegment() ); ContentValues values = new ContentValues(); values.put( "event_id", id ); values.put( "method", 1 ); values.put( "minutes", 15 ); // 15 minutes if (Integer.parseInt(Build.VERSION.SDK) >= 8 ) cr.insert( Uri.parse( "content://com.android.calendar/reminders" ), values ); else cr.insert( Uri.parse( "content://calendar/reminders" ), values ); } dialog.cancel(); } }); builder.create().show(); } cursor.close(); } } Thank you.
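    The compiler error quoted above comes from declaring the addToCalendar method inside onClick; Java does not allow a method declaration inside another method, so the parser keeps asking for a closing brace. The usual shape is to declare the helper at class level and only call it from onClick. A minimal sketch (the title and start/end times are placeholders; the body of addToCalendar stays as in the linked gist):

        import android.app.Activity;
        import android.content.Context;
        import android.view.View;

        public class ICSCalendarActivity extends Activity implements View.OnClickListener {

            @Override
            public void onClick(View v) {
                // Call the helper here instead of declaring it here.
                long start = System.currentTimeMillis();   // placeholder start time
                long end = start + 60 * 60 * 1000;         // placeholder end time, one hour later
                addToCalendar(this, "Shift", start, end);
            }

            // Declared as a member of the class, at the same level as onClick and onCreate.
            private static void addToCalendar(Context context, String title, long dtstart, long dtend) {
                // ... body exactly as in the gist, using 'context' where it used the activity ...
            }
        }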


  • database design help for game / user levels / progress

    - by sprugman
    Sorry this got long and all prose-y. I'm creating my first truly gamified web app and could use some help thinking about how to structure the data. The Set-up Users need to accomplish tasks in each of several categories before they can move up a level. I've got my Users, Tasks, and Categories tables, and a UserTasks table which joins the three. ("User 3 has added Task 42 in Category 8. Now they've completed it.") That's all fine and working wonderfully. The Challenge I'm not sure of the best way to track the progress in the individual categories toward each level. The "business" rules are: You have to achieve a certain number of points in each category to move up. If you get the number of points needed in Cat 8, but still have other work to do to complete the level, any new Cat 8 points count toward your overall score, but don't "roll over" into the next level. The number of Categories is small (five currently) and unlikely to change often, but by no means absolutely fixed. The number of points needed to level-up will vary per level, probably by a formula, or perhaps a lookup table. So the challenge is to track each user's progress toward the next level in each category. I've thought of a few potential approaches: Possible Solutions Add a column to the users table for each category and reset them all to zero each time a user levels-up. Have a separate UserProgress table with a row for each category for each user and the number of points they have. (Basically a Many-to-Many version of #1.) Add a userLevel column to the UserTasks table and use that to derive their progress with some kind of SUM statement. Their current level will be a simple int in the User table. Pros & Cons (1) seems like by far the most straightforward, but it's also the least flexible. Perhaps I could use a naming convention based on the category ids to help overcome some of that. (With code like "select cats; for each cat, get the value from Users.progress_{cat.id}.") It's also the one where I lose the most data -- I won't know which points counted toward leveling up. I don't have a need in mind for that, so maybe I don't care about that. (2) seems complicated: every time I add or subtract a user or a category, I have to maintain the other table. I foresee synchronization challenges. (3) Is somewhere in between -- cleaner than #2, but less intuitive than #1. In order to find out where a user is, I'd have mildly complex SQL like: SELECT categoryId, SUM(points) from UserTasks WHERE userId={user.id} & countsTowardLevel={user.level} groupBy categoryId Hmm... that doesn't seem so bad. I think I'm talking myself into #3 here, but would love any input, advice or other ideas.


  • SQL SERVER – Shrinking NDF and MDF Files – Readers’ Opinion

    - by pinaldave
    Previously, I had written a blog post about SQL SERVER – Shrinking NDF and MDF Files – A Safe Operation. After that, I have written the following blog post that talks about the advantage and disadvantage of Shrinking and why one should not be Shrinking a file SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008. On this subject, SQL Server Expert Imran Mohammed left an excellent comment. I just feel that his comment is worth a big article itself. For everybody to read his wonderful explanation, I am posting this blog post here. Thanks Imran! Shrinking Database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say Shrinking Database is bad and evil, here I am saying it first and loud. Now, the comment of Imran is written while keeping in mind only the process showing how the Shrinking Database Operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran’s comments, as there are a few corrections. Comments from Imran - Before I explain to you the concept of Shrink Database, let us understand the concept of Database Files. When we create a new database inside the SQL Server, it is typical that SQl Server creates two physical files in the Operating System: one with .MDF Extension, and another with .LDF Extension. .MDF is called as Primary Data File. .LDF is called as Transactional Log file. If you add one or more data files to a database, the physical file that will be created in the Operating System will have an extension of .NDF, which is called as Secondary Data File; whereas, when you add one or more log files to a database, the physical file that will be created in the Operating System will have the same extension as .LDF. The questions now are, “Why does a new data file have a different extension (.NDF)?”, “Why is it called as a secondary data file?” and, “Why is .MDF file called as a primary data file?” Answers: Note: The following explanation is based on my limited knowledge of SQL Server, so experts please do comment. A data file with a .MDF extension is called a Primary Data File, and the reason behind it is that it contains Database Catalogs. Catalogs mean Meta Data. Meta Data is “Data about Data”. An example for Meta Data includes system objects that store information about other objects, except the data stored by the users. sysobjects stores information about all objects in that database. sysindexes stores information about all indexes and rows of every table in that database. syscolumns stores information about all columns that each table has in that database. sysusers stores how many users that database has. Although Meta Data stores information about other objects, it is not the transactional data that a user enters; rather, it’s a system data about the data. Because Primary Data File (.MDF) contains important information about the database, it is treated as a special file. It is given the name Primary Data file because it contains the Database Catalogs. This file is present in the Primary File Group. You can always create additional objects (Tables, indexes etc.) in the Primary data file (This file is present in the Primary File group), by mentioning that you want to create this object under the Primary File Group. 
Any additional data file that you add to the database will have only transactional data but no Meta Data, so that’s why it is called as the Secondary Data File. It is given the extension name .NDF so that the user can easily identify whether a specific data file is a Primary Data File or a Secondary Data File(s). There are many advantages of storing data in different files that are under different file groups. You can put your read only in the tables in one file (file group) and read-write tables in another file (file group) and take a backup of only the file group that has read the write data, so that you can avoid taking the backup of a read-only data that cannot be altered. Creating additional files in different physical hard disks also improves I/O performance. A real-time scenario where we use Files could be this one: Let’s say you have created a database called MYDB in the D-Drive which has a 50 GB space. You also have 1 Database File (.MDF) and 1 Log File on D-Drive and suppose that all of that 50 GB space has been used up and you do not have any free space left but you still want to add an additional space to the database. One easy option would be to add one more physical hard disk to the server, add new data file to MYDB database and create this new data file in a new hard disk then move some of the objects from one file to another, and put the file group under which you added new file as default File group, so that any new object that is created gets into the new files, unless specified. Now that we got a basic idea of what data files are, what type of data they store and why they are named the way they are, let’s move on to the next topic, Shrinking. First of all, I disagree with the Microsoft terminology for naming this feature as “Shrinking”. Shrinking, in regular terms, means to reduce the size of a file by means of compressing it. BUT in SQL Server, Shrinking DOES NOT mean compressing. Shrinking in SQL Server means to remove an empty space from database files and release the empty space either to the Operating System or to SQL Server. Let’s examine this through an example. Let’s say you have a database “MYDB” with a size of 50 GB that has a free space of about 20 GB, which means 30GB in the database is filled with data and the 20 GB of space is free in the database because it is not currently utilized by the SQL Server (Database); it is reserved and not yet in use. If you choose to shrink the database and to release an empty space to Operating System, and MIND YOU, you can only shrink the database size to 30 GB (in our example). You cannot shrink the database to a size less than what is filled with data. So, if you have a database that is full and has no empty space in the data file and log file (you don’t have an extra disk space to set Auto growth option ON), YOU CANNOT issue the SHRINK Database/File command, because of two reasons: There is no empty space to be released because the Shrink command does not compress the database; it only removes the empty space from the database files and there is no empty space. Remember, the Shrink command is a logged operation. When we perform the Shrink operation, this information is logged in the log file. If there is no empty space in the log file, SQL Server cannot write to the log file and you cannot shrink a database. Now answering your questions: (1) Q: What are the USEDPAGES & ESTIMATEDPAGES that appear on the Results Pane after using the DBCC SHRINKDATABASE (NorthWind, 10) ? 
A: According to Books Online (For SQL Server 2000): UsedPages: the number of 8-KB pages currently used by the file. EstimatedPages: the number of 8-KB pages that SQL Server estimates the file could be shrunk down to. Important Note: Before asking any question, make sure you go through Books Online or search on the Google once. The reasons for doing so have many advantages: 1. If someone else already has had this question before, chances that it is already answered are more than 50 %. 2. This reduces your waiting time for the answer. (2) Q: What is the difference between Shrinking the Database using DBCC command like the one above & shrinking it from the Enterprise Manager Console by Right-Clicking the database, going to TASKS & then selecting SHRINK Option, on a SQL Server 2000 environment? A: As far as my knowledge goes, there is no difference, both will work the same way, one advantage of using this command from query analyzer is, your console won’t be freezed. You can do perform your regular activities using Enterprise Manager. (3) Q: What is this .NDF file that is discussed above? I have never heard of it. What is it used for? Is it used by end-users, DBAs or the SERVER/SYSTEM itself? A: .NDF File is a secondary data file. You never heard of it because when database is created, SQL Server creates database by default with only 1 data file (.MDF) and 1 log file (.LDF) or however your model database has been setup, because a model database is a template used every time you create a new database using the CREATE DATABASE Command. Unless you have added an extra data file, you will not see it. This file is used by the SQL Server to store data which are saved by the users. Hope this information helps. I would like to as the experts to please comment if what I understand is not what the Microsoft guys meant. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • Bug Triage

    In this blog post brain dump, I'll attempt to describe the process my team tries to follow when dealing with new bug reports (specifically, code defect reports). This is not official Microsoft policy, just the way we do things… if you do things differently and want to share, you can do so at the bottom in the comments (or on your blog).Feature Triage TeamA subset of the feature crew, the triage team (which has representations from the PM, Dev and QA disciplines), looks at all unassigned bugs at regular intervals. This can be weekly or daily (or other frequency) dependent on which part of the product cycle we are in and what the untriaged bug load looks like. They discuss each bug considering the evidence and make a decision of whether the bug goes from Not Yet Assigned to Assigned (plus the name of the DEV to fix this) or whether it goes from Active to Resolved (which means it gets assigned back to the requestor for closure or further debate if they were not present at the triage meeting). Close to critical milestones, the feature triage team needs to further justify bugs they take to additional higher-level triage teams.Bug Opened = Not Yet AssignedSomeone (typically an SDET from the QA team) creates the bug item (e.g. in TFS), ensuring they populate all the relevant fields including: Title, Description, Repro Steps (including the Actual Result at the end of the steps), attachments of code and/or screenshots, Build number that they observed the issue in, regression details if applicable, how it was found, if a test case exists or needs to be created etc. They also indicate their opinion on the Priority and Severity. The bug status is left as Not Yet Assigned."Issue" versus "Fix for issue"The solution to some bugs is easy to determine, e.g. "bug: the column name is misspelled". Obviously the fix is to correct the spelling – still, the triage team should be explicit and enter the correct spelling in the bug's Description. Note that a bad bug name here would be "bug: fix the spelling of the column" (it describes the solution, rather than the problem).Other solutions are trickier to establish, e.g. "bug: the column header is not accessible (can only be clicked on with the mouse, not reached via keyboard)". What is the correct solution here? The last thing to do is leave this undetermined and just assign it to a developer. The solution has to be entered in the description. Behind this type of a bug usually hides a spec defect or a new feature request.The person opening the bug should focus on describing the issue, rather than the solution. The person indicates what the fix is in their opinion by stating the Expected Result (immediately after stating the Actual Result). If they have a complex suggested solution, that should be split out in a separate part, but the triage team has the final say before assigning it. If the solution is lengthy/complicated to describe, the bug can be assigned to the PM. Note: the strict interpretation suggests that any bug with no clear, obvious solution is always a hole in the spec and should always go to the PM. This also ensures the spec gets updated.Not Yet Assigned - Not Yet Assigned (on someone else's plate)If the bug is observed in our feature, but the cause is actually another team, we change the Area Path (which is the way we identify teams in TFS) and leave it as Not Yet Assigned. The triage team may add more comments as appropriate including potentially changing the repro steps. 
In some cases, we may even resolve the bug in our area path and open a new bug in the area path of the other team.Even though there is no action on a dev on the team, the bug still needs to be tracked. One way of doing this is to implement some notification system that informs the team when the tracked bug changed status; another way is to occasionally run a global query (against all area paths) for bugs that have been opened by a member of the team and follow up with the current owners for stale bugs.Not Yet Assigned - ResolvedThis state transition can only be made by the Feature Triage Team.0. Sometimes the bug description is not clear and in that case it gets Resolved as More Information Needed, so the original requestor can provide it.After understanding what the bug item is about, the first decision is to determine whether it needs to go to a dev.1. If it is a known bug, it gets resolved as "Duplicate" and linked to the existing bug.2. If it is "By Design" it gets resolved as such, indicating that the triage team does not think this is a bug.3. If the bug does not repro on latest bits, it is resolved as "No Repro"4. The most painful: If it is decided that we cannot fix it for this release it gets resolved as "Postponed" or "Won't Fix". The former is typically due to resources and time constraints, while the latter is due to deciding that it is not important enough to consume our resources in any release (yes, not all bugs must be fixed!). For both cases, there are other factors that contribute to the decision such as: existence of a reasonable workaround, frequency we expect users to encounter the issue, dependencies on other team to offer a solution, whether it breaks a core scenario, whether it prohibits customer feedback on a major feature, is it a regression from a previous release, impact of the fix on other partner teams (e.g. User Education, User Experience, Localization/Globalization), whether this is the right fix, does the fix impact performance goals, and last but not least, severity of bug (e.g. loss of customer data, security threat, crash, hang). The bar for fixing a bug goes up as the release date approaches. The triage team becomes hardnosed about which bugs to take, while the developers are busy resolving assigned bugs thus everyone drives for Zero Bug Bounce (ZBB). ZBB is when you have 0 active bugs older than 48 hours.Not Yet Assigned - AssignedIf the bug is something we decide to fix in this release and the solution is known, then it is assigned to a DEV. This is either the developer that will do the work, or a Lead that can further assign it to one of his developer team based on a load balancing algorithm of their choosing.Sometimes, the triage team needs the dev to do some investigation work before deciding whether to take the fix; similarly, the checkin for the fix may be gated on code review by the triage team. In these cases, these instructions are provided in the comments section of the bug and when the developer is done they notify the triage team for final decision.Additionally, a Priority and Severity (from 0 to 4) has to be entered, e.g. a P0 means "drop anything you are doing and fix this now" whereas a P4 is something you get to after all P0,1,2,3 bugs are fixed.From a testing perspective, if the bug was found through ad-hoc testing or an external team, the decision is made whether test cases should be added to avoid future regressions. 
This is communicated to the QA team.Assigned - ResolvedWhen the developer receives the bug (they should be checking daily for new bugs on their plate looking at bugs in order of priority and from older to newer) they can send it back to triage if the information is not clear. Otherwise, they investigate the bug, setting the Sub Status to "Investigating"; if they cannot make progress, they set the Sub Status to "Blocked" and discuss this with triage or whoever else can help them get unblocked. Once they are unblocked, they set the Sub Status to "Working on Solution"; once they are code complete they send a code review request, setting the Sub Status to "Fix Available". After the iterative code review process is over and everyone is happy with the fix, the developer checks it in and changes the state of the bug from Active (and Assigned to them) to Resolved (and Assigned to someone else).The developer needs to ensure that when the status is changed to Resolved that it is assigned to a QA person. For example, maybe the PM opened the bug, but it should be a QA person that will verify the fix - the developer needs to manually change the assignee in that case. Typically the QA person will send an email to the original requestor notifying them that the fix is verified.Resolved - ??In all cases above, note that the final state was Resolved. What happens after that? The final step should be Closed. The bug is closed once the QA person verifying the fix is happy with it. If the person is not happy, then they change the state from Resolved to Active, thus sending it back to the developer. If the developer and QA person cannot reach agreement, then triage can be brought into it. An easy way to do that is change the status back to Not Yet Assigned with appropriate comments so the triage team can re-review.It is important to note that only QA can close a bug. That means that if the opener of the bug was a PM, when the bug gets resolved by the dev it may land on the PM's plate and after a quick review, the PM would re-assign to an SDET, which is the only role that can close bugs. One exception to this is if the person that filed the bug is external: in that case, we leave it Resolved and assigned to them and also send them a notification that they need to verify the fix. Another exception is if specialized developer knowledge is needed for verifying the bug fix (e.g. it was a refactoring suggestion bug typically not observable by the user) in which case it is fine to have a developer verify the fix, and ideally a different developer to the one that opened the bug.Other links on bug triageA quick search reveals that others have talked about this subject, e.g. here, here, here, here and here.Your take?If you have other best practices your team uses to deal with incoming bug reports, feel free to share in the comments below or on your blog. Comments about this post welcome at the original blog.


  • When should I use Areas in TFS instead of Team Projects

    - by Martin Hinshelwood
    Well, it depends…. If you are a small company that creates a finite number of internal projects then you will find it easier to create a single project for each of your products and have TFS do the heavy lifting with reporting, SharePoint sites and Version Control. But what if you are not… Update 9th March 2010 Michael Fourie gave me some feedback which I have integrated. Ed Blankenship via @edblankenship offered encouragement and a nice quote. Ewald Hofman gave me a couple of Cons, and maybe a few more soon. Ewald’s company, Avanade, currently uses Areas, but it looks like the manual management is getting too much and the project is getting cluttered. What if you are likely to have hundreds of projects, possibly with a multitude of internal and external projects? You might have 1 project for a customer or 10. This is the situation that most consultancies find themselves in and thus they need a more sustainable and maintainable option. What I am advocating is that we should have 1 “Team Project” per customer, and use areas to create “sub projects” within that single “Team Project”. "What you describe is what we generally do internally and what we recommend. We make very heavy use of area path to categorize the work within a larger project." - Brian Harry, Microsoft Technical Fellow & Product Unit Manager for Team Foundation Server   "We tend to use areas to segregate multiple projects in the same team project and it works well." - Tiago Pascoal, Visual Studio ALM MVP   "In general, I believe this approach provides consistency [to multi-product engagements] and lowers the administration and maintenance costs. All good." - Michael Fourie, Visual Studio ALM MVP   “@MrHinsh BTW, I'm very much a fan of very large, if not huge, team projects in TFS. Just FYI :) Use Areas & Iterations.” Ed Blankenship, Visual Studio ALM MVP   This would mean that SSW would have a single Team Project called “SSW” that contains all of our internal projects and consequently all of the Areas and Iteration move down one hierarchy to accommodate this. Where we would have had “\SSW\Sprint 1” we now have “\SSW\SqlDeploy\Sprint1” with “SqlDeploy” being our internal project. At the moment SSW has over 70 internal projects and more than 170 total projects in TFS. This method has long term benefits that help to simplify the support model for companies that often have limited internal support time and many projects. But, there are implications as TFS does not provide this model “out-of-the-box”. These implications stretch across Areas, Iterations, Queries, Project Portal and Version Control. Michael made a good comment, he said: I agree with your approach, assuming that in a multi-product engagement with a client, they are happy to adopt the same process template across all products. If they are not, then it’ll either be easy to convince them or there is a valid reason for having a different template - Michael Fourie, Visual Studio ALM MVP   At SSW we have a standard template that we use and this is applied across the board, to all of our projects. We even apply any changes to the core process template to all of our existing projects as well. If you have multiple projects for the same clients on multiple templates and you want to keep it that way, then this approach will not work for you. However, if you want to standardise as we have at SSW then this approach may benefit you as well. Implications around Areas Areas should be used for topological classification/isolation of work items. 
You can think of this as architecture areas, organisational areas or even the main features of your application. In our scenario there is an additional top level item that represents the Project / Product that we want to chop our Team Project into. Figure: Creating a sub area to represent a product/project is easy. <teamproject> <teamproject>\<Functional Area/module whatever> Becomes: <teamproject> <teamproject>\<ProjectName>\ <teamproject>\<ProjectName>\<Functional Area/module whatever> Implications around Iterations Iterations should be used for chronological classification/isolation of work items. This could include isolated time boxes, milestones or release timelines and really depends on the logical flow of your project or projects. Due to the new level in Area we need to add the same level to Iteration. This is primarily because it is unlikely that the sprints in each of your projects/products will start and end at the same time. This is just a reality of managing multiple projects. Figure: Adding the same Area value to Iteration as the top level item adds flexibility to Iteration. <teamproject>\Sprint 1 Or <teamproject>\Release 1\Sprint 1 Becomes: <teamproject>\<ProjectName>\Sprint 1 Or <teamproject>\<ProjectName>\Release 1\Sprint 1 Implications around Queries Queries are used to filter your work items based on a specified level of granularity. There are a number of queries that are built into a project created using the MSF Agile 5.0 template, but we now have multiple projects and it would be a pain to have to edit all of the work items every time we changed project, and that would only allow one team to work on one project at a time.   Figure: The Queries that are created in a normal MSF Agile 5.0 project do not quite suit our new needs. In order for project contributors to be able to query based on their project we need a couple of things. The first thing I did was to create an “_Area Template” folder that has a copy of the project layout with all the queries setup to filter based on the “_Area Template” Area and the “_Sprint template” you can see in the Area and Iteration views. Figure: The template is currently easily drag and drop, but you then need to edit the queries to point at the right Area and Iteration. This needs a tool. I then created an “Areas” folder to hold all of the area specific queries. So, when you go to create a new TFS Sub-Project you just drag “_Area Template” while holding “Ctrl” and drop it onto “Areas”. There is a little setup here. That said I managed it in around 10 minutes which is not so bad, and I can imagine it being quite easy to build a tool to create these queries Figure: These new queries can be configured in around 10 minutes, which includes setting up the Area and Iteration as well. Version Control What about your source code? Well, that is the easiest of the lot. Just create a sub folder for each of your projects/products.   Figure: Creating sub folders in source control is easy as “Right click | Create new folder”. <teamproject>\DEV\Main\ Becomes: <teamproject>\<ProjectName>\DEV\Main\ Conclusion I think it is up to each company to make a call on how you want to configure your Team Projects and it depends completely on how many projects/products you are going to have for each customer including yourself. If we decide to utilise this route it will require some configuration to get our 170+ projects into this format, and I will probably be writing some tools to help. 
Pros
- You only have one project to upgrade when a process template changes – after going through an upgrade of over 170 projects prior to the changes in the RC I can tell you that that many projects is no fun.
- Standardises your Process Template – you will always have the same process implementation across projects/products without exception.
- You get tighter control over the permissions – yes, you can do this on a standard Team Project, but it gets a lot easier with practice.
- You can “move” work items from one “product” to another – have we not always wanted to do that?
- You can rename your projects – wahoo: everyone wants to do this, now you can.
- One set of Reporting Services reports to manage – you set an area and iteration to run reports anyway, so you may as well set both.
- Simplified check-in policies – there is only one set of check-in policies per client, which simplifies administration of policies.
- Simplified alerts – as alerts are applied across multiple projects this simplifies your alert rules per client.
Cons
All of these cons could be mitigated by a custom tool that helps automate creation of “sub-projects” within Team Projects. This custom tool could create areas, iterations, permissions, SharePoint sites and queries. It just does not exist yet :)
- You need to configure the Areas and Iterations.
- You need to configure the permissions.
- You may need to configure sub sites for SharePoint (depends on your requirement) – if you have two projects/products in the same Team Project then you will not see the burndown for each one out-of-the-box, but rather a cumulative for the Team Project. This is not really that much of a problem as you would have to configure your burndown graphs for your current iteration anyway. Note: when you create a sub site to a TFS linked portal it will inherit the settings of its parent site :) This is fantastic as it means that you can easily create sub sites and then set the Area and Iteration path in each of the reports to be the correct one.
- Every team wants their own customization (via Ewald Hofman) – small teams of 2 persons versus teams of 30 – or even outsourcing – need their own process, and you cannot allow that because everybody gets the same work item types. Note: luckily at SSW this is not a problem as our template is standardised across all projects and customers.
- Large list of builds (via Ewald Hofman) – as the build list in Team Explorer is just a flat list it can get very cluttered. Note: I would mitigate this by removing any build that has not been run in over 30 days. The build template and workflow will still be available in version control, but it will clean the list.
Feedback
Now that I have explained this method, what do you think? What other pros and cons can you see? What do you think of this approach? Will you be using it? What tools would you like to support you?   Technorati Tags: Visual Studio ALM,TFS Administration,TFS,Team Foundation Server,Project Planning,TFS Customisation

    Read the article

  • ASP.NET GZip Encoding Caveats

    - by Rick Strahl
GZip encoding in ASP.NET is pretty easy to accomplish using the built-in GZipStream and DeflateStream classes and applying them to the Response.Filter property.  While applying GZip and Deflate behavior is pretty easy there are a few caveats that you have to watch out for, as I found out today for myself with an application that was throwing up some garbage data. But before looking at caveats let’s review GZip implementation for ASP.NET.
ASP.NET GZip/Deflate Basics
Response filters basically are applied to the Response.OutputStream and transform it as data is written to it through the ASP.NET Response object. So a Response.Write eventually gets written into the output stream and – if a filter is assigned – is also written through the filter stream’s interface. To perform the actual GZip (and Deflate) encoding typically used by Web pages, .NET includes the GZipStream and DeflateStream stream classes which can be readily assigned to the Response.Filter property. With these two stream classes in place it’s almost trivially easy to create a couple of reusable methods that allow you to compress your HTTP output. In my standard WebUtils utility class (from the West Wind Web Toolkit) I created two static utility methods – IsGZipSupported and GZipEncodePage – that check whether the client supports GZip encoding and then actually encode the current output (note that although the method includes ‘Page’ in its name this code will work with any ASP.NET output). /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } /// <summary> /// Sets up the current page or handler to use GZip through a Response.Filter /// IMPORTANT: /// You have to call this method before any output is generated! /// </summary> public static void GZipEncodePage() { HttpResponse Response = HttpContext.Current.Response; if (IsGZipSupported()) { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (AcceptEncoding.Contains("deflate")) { Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "deflate"); } else { Response.Filter = new System.IO.Compression.GZipStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "gzip"); } // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); } } As you can see the actual assignment of the Filter is as simple as: Response.Filter = new DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); which applies the filter to the OutputStream. You also need to ensure that your response reflects the new GZip or Deflate encoding and ensure that any pages that are cached in Proxy servers can differentiate between pages that were encoded with the various different encodings (or no encoding). 
To use this utility function now is trivially easy: In any ASP.NET code that wants to compress its Response output you simply use: protected void Page_Load(object sender, EventArgs e) { WebUtils.GZipEncodePage(); Entry = WebLogFactory.GetEntry(); var entries = Entry.GetLastEntries(App.Configuration.ShowEntryCount, "pk,Title,SafeTitle,Body,Entered,Feedback,Location,ShowTopAd", "TEntries"); if (entries == null) throw new ApplicationException("Couldn't load WebLog Entries: " + Entry.ErrorMessage); this.repEntries.DataSource = entries; this.repEntries.DataBind(); } Here I use an ASP.NET page, but the above WebUtils.GZipEncode() method call will work in any ASP.NET application type including HTTP Handlers. The only requirement is that the filter needs to be applied before any other output is sent to the OutputStream. For example, in my CallbackHandler service implementation by default output over a certain size is GZip encoded. The output that is generated is JSON or XML and if the output is over 5k in size I apply WebUtils.GZipEncode(): if (sbOutput.Length > GZIP_ENCODE_TRESHOLD) WebUtils.GZipEncodePage(); Response.ContentType = ControlResources.STR_JsonContentType; HttpContext.Current.Response.Write(sbOutput.ToString()); Ok, so you probably get the idea: Encoding GZip/Deflate content is pretty easy. Hold on there Hoss –Watch your Caching Or is it? There are a few caveats that you need to watch out for when dealing with GZip content. The fist issue is that you need to deal with the fact that some clients don’t support GZip or Deflate content. Most modern browsers support it, but if you have a programmatic Http client accessing your content GZip/Deflate support is by no means guaranteed. For example, WinInet Http clients don’t support GZip out of the box – it has to be explicitly implemented. Other low level HTTP clients on other platforms too don’t support GZip out of the box. The problem is that your application, your Web Server and Proxy Servers on the Internet might be caching your generated content. If you return content with GZip once and then again without, either caching is not applied or worse the wrong type of content is returned back to the client from a cache or proxy. The result is an unreadable response for *some clients* which is also very hard to debug and fix once in production. You already saw the issue of Proxy servers addressed in the GZipEncodePage() function: // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); This ensures that any Proxy servers also check for the Content-Encoding HTTP Header to cache their content – not just the URL. The same thing applies if you do OutputCaching in your own ASP.NET code. If you generate output for GZip on an OutputCached page the GZipped content will be cached (either by ASP.NET’s cache or in some cases by the IIS Kernel Cache). But what if the next client doesn’t support GZip? She’ll get served a cached GZip page that won’t decode and she’ll get a page full of garbage. Wholly undesirable. 
To fix this you need to add some custom OutputCache rules by way of the GetVaryByCustom() HttpApplication method in your global_ASAX file: public override string GetVaryByCustomString(HttpContext context, string custom) { // Override Caching for compression if (custom == "GZIP") { string acceptEncoding = HttpContext.Current.Response.Headers["Content-Encoding"]; if (string.IsNullOrEmpty(acceptEncoding)) return ""; else if (acceptEncoding.Contains("gzip")) return "GZIP"; else if (acceptEncoding.Contains("deflate")) return "DEFLATE"; return ""; } return base.GetVaryByCustomString(context, custom); } In a page that use Output caching you then specify: <%@ OutputCache Duration="180" VaryByParam="none" VaryByCustom="GZIP" %> To use that custom rule. It’s all Fun and Games until ASP.NET throws an Error Ok, so you’re up and running with GZip, you have your caching squared away and your pages that you are applying it to are jamming along. Then BOOM, something strange happens and you get a lovely garbled page that look like this: Lovely isn’t it? What’s happened here is that I have WebUtils.GZipEncode() applied to my page, but there’s an error in the page. The error falls back to the ASP.NET error handler and the error handler removes all existing output (good) and removes all the custom HTTP headers I’ve set manually (usually good, but very bad here). Since I applied the Response.Filter (via GZipEncode) the output is now GZip encoded, but ASP.NET has removed my Content-Encoding header, so the browser receives the GZip encoded content without a notification that it is encoded as GZip. The result is binary output. Here’s what Fiddler says about the raw HTTP header output when an error occurs when GZip encoding was applied: HTTP/1.1 500 Internal Server Error Cache-Control: private Content-Type: text/html; charset=utf-8 Date: Sat, 30 Apr 2011 22:21:08 GMT Content-Length: 2138 Connection: close ?`I?%&/m?{J?J??t??` … binary output striped here Notice: no Content-Encoding header and that’s why we’re seeing this garbage. ASP.NET has stripped the Content-Encoding header but left our filter intact. So how do we fix this? In my applications I typically have a global Application_Error handler set up and in this case I’ve been using that. One thing that you can do in the Application_Error handler is explicitly clear out the Response.Filter and set it to null at the top: protected void Application_Error(object sender, EventArgs e) { // Remove any special filtering especially GZip filtering Response.Filter = null; … } And voila I get my Yellow Screen of Death or my custom generated error output back via uncompressed content. BTW, the same is true for Page level errors handled in Page_Error or ASP.NET MVC Error handling methods in a controller. 
Another and possibly even better solution is to check whether a filter is attached just before the headers are sent to the client as pointed out by Adam Schroeder in the comments: protected void Application_PreSendRequestHeaders() { // ensure that if GZip/Deflate Encoding is applied that headers are set // also works when error occurs if filters are still active HttpResponse response = HttpContext.Current.Response; if (response.Filter is GZipStream && response.Headers["Content-encoding"] != "gzip") response.AppendHeader("Content-encoding", "gzip"); else if (response.Filter is DeflateStream && response.Headers["Content-encoding"] != "deflate") response.AppendHeader("Content-encoding", "deflate"); } This uses the Application_PreSendRequestHeaders() pipeline event to check for compression encoding in a filter and adjusts the content accordingly. This is actually a better solution since this is generic – it’ll work regardless of how the content is cleaned up. For example, an error Response.Redirect() or short error display might get changed and the filter not cleared and this code actually handles that. Sweet, thanks Adam. It’s unfortunate that ASP.NET doesn’t natively clear out Response.Filters when an error occurs just as it clears the Response and Headers. I can’t see where leaving a Filter in place in an error situation would make any sense, but hey - this is what it is and it’s easy enough to fix as long as you know where to look. Riiiight! IIS and GZip I should also mention that IIS 7 includes good support for compression natively. If you can defer encoding to let IIS perform it for you rather than doing it in your code by all means you should do it! Especially any static or semi-dynamic content that can be made static should be using IIS built-in compression. Dynamic caching is also supported but is a bit more tricky to judge in terms of performance and footprint. John Forsyth has a great article on the benefits and drawbacks of IIS 7 compression which gives some detailed performance comparisons and impact reviews. I’ll post another entry next with some more info on IIS compression since information on it seems to be a bit hard to come by. Related Content Built-in GZip/Deflate Compression in IIS 7.x HttpWebRequest and GZip Responses © Rick Strahl, West Wind Technologies, 2005-2011Posted in ASP.NET   IIS7  
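One follow-up to the point above about WinInet and other low-level clients not handling GZip automatically: if the consumer of your compressed endpoint is .NET code, it has to opt in explicitly. A hedged sketch of that client side, with a placeholder URL:

using System;
using System.IO;
using System.Net;

class CompressedClient
{
    static void Main()
    {
        // Placeholder URL - any endpoint that uses the GZipEncodePage() approach above.
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/service.ashx");

        // Sends "Accept-Encoding: gzip, deflate" and transparently decompresses the
        // response - without this a WinInet-style client sees the raw compressed bytes.
        request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}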

    Read the article

  • Cleaner HTML Markup with ASP.NET 4 Web Forms - Client IDs (VS 2010 and .NET 4.0 Series)

    - by ScottGu
This is the sixteenth in a series of blog posts I’m doing on the upcoming VS 2010 and .NET 4 release. Today’s post is the first of a few blog posts I’ll be doing that talk about some of the important changes we’ve made to make Web Forms in ASP.NET 4 generate clean, standards-compliant, CSS-friendly markup.  Today I’ll cover the work we are doing to provide better control over the “ID” attributes rendered by server controls to the client. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]
Clean, Standards-Based, CSS-Friendly Markup
One of the common complaints developers have often had with ASP.NET Web Forms is that when using server controls they don’t have the ability to easily generate clean, CSS-friendly output and markup.  Some of the specific complaints with previous ASP.NET releases include:
- Auto-generated ID attributes within HTML make it hard to write JavaScript and style with CSS
- Use of tables instead of semantic markup for certain controls (in particular the asp:menu control) makes styling ugly
- Some controls render inline style properties even if no style property on the control has been set
- ViewState can often be bigger than ideal
ASP.NET 4 provides better support for building standards-compliant pages out of the box.  The built-in <asp:> server controls with ASP.NET 4 now generate cleaner markup and support CSS styling – and help address all of the above issues.
Markup Compatibility When Upgrading Existing ASP.NET Web Forms Applications
A common question people often ask when hearing about the cleaner markup coming with ASP.NET 4 is “Great - but what about my existing applications?  Will these changes/improvements break things when I upgrade?” To help ensure that we don’t break assumptions around markup and styling with existing ASP.NET Web Forms applications, we’ve enabled a configuration flag – controlRenderingCompatibilityVersion – within web.config that lets you decide if you want to use the new cleaner markup approach that is the default with new ASP.NET 4 applications, or for compatibility reasons render the same markup that previous versions of ASP.NET used. When the controlRenderingCompatibilityVersion flag is set to “3.5” your application and server controls will by default render output using the same markup generation used with VS 2008 and .NET 3.5.  When the controlRenderingCompatibilityVersion flag is set to “4.0” your application and server controls will strictly adhere to the XHTML 1.1 specification, have cleaner client IDs, render with semantic correctness in mind, and have extraneous inline styles removed. This flag defaults to 4.0 for all new ASP.NET Web Forms applications built using ASP.NET 4. Any previous application that is upgraded using VS 2010 will have the controlRenderingCompatibilityVersion flag automatically set to 3.5 by the upgrade wizard to ensure backwards compatibility.  You can then optionally change it (either at the application level, or scope it within the web.config file to be on a per page or directory level) if you move your pages to use CSS and take advantage of the new markup rendering.
Today’s Cleaner Markup Topic: Client IDs
The ability to have clean, predictable, ID attributes on rendered HTML elements is something developers have long asked for with Web Forms (ID values like “ctl00_ContentPlaceholder1_ListView1_ctrl0_Label1” are not very popular).  
Having control over the ID values rendered helps make it much easier to write client-side JavaScript against the output, makes it easier to style elements using CSS, and on large pages can help reduce the overall size of the markup generated. New ClientIDMode Property on Controls ASP.NET 4 supports a new ClientIDMode property on the Control base class.  The ClientIDMode property indicates how controls should generate client ID values when they render.  The ClientIDMode property supports four possible values: AutoID—Renders the output as in .NET 3.5 (auto-generated IDs which will still render prefixes like ctrl00 for compatibility) Predictable (Default)— Trims any “ctl00” ID string and if a list/container control concatenates child ids (example: id=”ParentControl_ChildControl”) Static—Hands over full ID naming control to the developer – whatever they set as the ID of the control is what is rendered (example: id=”JustMyId”) Inherit—Tells the control to defer to the naming behavior mode of the parent container control The ClientIDMode property can be set directly on individual controls (or within container controls – in which case the controls within them will by default inherit the setting): Or it can be specified at a page or usercontrol level (using the <%@ Page %> or <%@ Control %> directives) – in which case controls within the pages/usercontrols inherit the setting (and can optionally override it): Or it can be set within the web.config file of an application – in which case pages within the application inherit the setting (and can optionally override it): This gives you the flexibility to customize/override the naming behavior however you want. Example: Using the ClientIDMode property to control the IDs of Non-List Controls Let’s take a look at how we can use the new ClientIDMode property to control the rendering of “ID” elements within a page.  To help illustrate this we can create a simple page called “SingleControlExample.aspx” that is based on a master-page called “Site.Master”, and which has a single <asp:label> control with an ID of “Message” that is contained with an <asp:content> container control called “MainContent”: Within our code-behind we’ll then add some simple code like below to dynamically populate the Label’s Text property at runtime:   If we were running this application using ASP.NET 3.5 (or had our ASP.NET 4 application configured to run using 3.5 rendering or ClientIDMode=AutoID), then the generated markup sent down to the client would look like below: This ID is unique (which is good) – but rather ugly because of the “ct100” prefix (which is bad). Markup Rendering when using ASP.NET 4 and the ClientIDMode is set to “Predictable” With ASP.NET 4, server controls by default now render their ID’s using ClientIDMode=”Predictable”.  This helps ensure that ID values are still unique and don’t conflict on a page, but at the same time it makes the IDs less verbose and more predictable.  This means that the generated markup of our <asp:label> control above will by default now look like below with ASP.NET 4: Notice that the “ct100” prefix is gone. Because the “Message” control is embedded within a “MainContent” container control, by default it’s ID will be prefixed “MainContent_Message” to avoid potential collisions with other controls elsewhere within the page. 
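The screenshots that illustrated these settings in the original post do not survive in this text capture, so here is a hedged sketch of what the three places to set ClientIDMode might look like in markup and configuration – treat the exact attribute names as something to double-check against the ASP.NET 4 documentation rather than as copied from the post:

<%-- On an individual control --%>
<asp:Label ID="Message" runat="server" ClientIDMode="Static" />

<%-- At the page or user control level --%>
<%@ Page Language="C#" MasterPageFile="~/Site.Master" ClientIDMode="Predictable" %>

<!-- In web.config, for the whole application -->
<system.web>
  <pages clientIDMode="Predictable" controlRenderingCompatibilityVersion="4.0" />
</system.web>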
Markup Rendering when using ASP.NET 4 and the ClientIDMode is set to “Static” Sometimes you don’t want your ID values to be nested hierarchically, though, and instead just want the ID rendered to be whatever value you set it as.  To enable this you can now use ClientIDMode=static, in which case the ID rendered will be exactly the same as what you set it on the server-side on your control.  This will cause the below markup to be rendered with ASP.NET 4: This option now gives you the ability to completely control the client ID values sent down by controls. Example: Using the ClientIDMode property to control the IDs of Data-Bound List Controls Data-bound list/grid controls have historically been the hardest to use/style when it comes to working with Web Form’s automatically generated IDs.  Let’s now take a look at a scenario where we’ll customize the ID’s rendered using a ListView control with ASP.NET 4. The code snippet below is an example of a ListView control that displays the contents of a data-bound collection — in this case, airports: We can then write code like below within our code-behind to dynamically databind a list of airports to the ListView above: At runtime this will then by default generate a <ul> list of airports like below.  Note that because the <ul> and <li> elements in the ListView’s template are not server controls, no IDs are rendered in our markup: Adding Client ID’s to Each Row Item Now, let’s say that we wanted to add client-ID’s to the output so that we can programmatically access each <li> via JavaScript.  We want these ID’s to be unique, predictable, and identifiable. A first approach would be to mark each <li> element within the template as being a server control (by giving it a runat=server attribute) and by giving each one an id of “airport”: By default ASP.NET 4 will now render clean IDs like below (no ctl001-like ids are rendered):   Using the ClientIDRowSuffix Property Our template above now generates unique ID’s for each <li> element – but if we are going to access them programmatically on the client using JavaScript we might want to instead have the ID’s contain the airport code within them to make them easier to reference.  The good news is that we can easily do this by taking advantage of the new ClientIDRowSuffix property on databound controls in ASP.NET 4 to better control the ID’s of our individual row elements. To do this, we’ll set the ClientIDRowSuffix property to “Code” on our ListView control.  This tells the ListView to use the databound “Code” property from our Airport class when generating the ID: And now instead of having row suffixes like “1”, “2”, and “3”, we’ll instead have the Airport.Code value embedded within the IDs (e.g: _CLE, _CAK, _PDX, etc): You can use this ClientIDRowSuffix approach with other databound controls like the GridView as well. It is useful anytime you want to program row elements on the client – and use clean/identified IDs to easily reference them from JavaScript code. Summary ASP.NET 4 enables you to generate much cleaner HTML markup from server controls and from within your Web Forms applications.  In today’s post I covered how you can now easily control the client ID values that are rendered by server controls.  In upcoming posts I’ll cover some of the other markup improvements that are also coming with the ASP.NET 4 release. Hope this helps, Scott
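Editor’s note: the figures in this article are images that do not reproduce here, so the ListView markup being described might look roughly like the following hedged sketch. The Airport class’s Code property comes from the post; the City property and the control names are made up for illustration:

<asp:ListView ID="AirportsList" runat="server" ClientIDMode="Predictable" ClientIDRowSuffix="Code">
  <LayoutTemplate>
    <ul><li id="itemPlaceholder" runat="server"></li></ul>
  </LayoutTemplate>
  <ItemTemplate>
    <li id="airport" runat="server"><%# Eval("City") %> (<%# Eval("Code") %>)</li>
  </ItemTemplate>
</asp:ListView>

With ClientIDRowSuffix="Code" each rendered <li> id should end in the bound airport code (something like MainContent_AirportsList_airport_CLE) rather than a numeric row index.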

    Read the article

  • Gone With the Wind?

    - by antony.reynolds
    Where Have All the Composites Gone? I was just asked to help out with an interesting problem at a customer.  All their composites had disappeared from the EM console, none of them showed as loading in the log files and there was an ominous error message in the logs. Symptoms After a server restart the customer noticed that none of his composites were available, they didn’t show in the EM console and in the log files they saw this error message: SEVERE: WLSFabricKernelInitializer.getCompositeList Error during parsing and processing of deployed-composites.xml file This indicates some sort of problem when parsing the deployed-composites.xml file.  This is very bad because the deployed-composites.xml file is basically the table of contents that tells SOA Infrastructure what composites to load and where to find them in MDS.  If you can’t read this file you can’t load any composites and your SOA Server now has all the utility of a chocolate teapot. Verification We can look at the deployed-composites.xml file from MDS either by connecting JDeveloper to MDS, exporting the file using WLST or exporting the whole soa-infra MDS partition by using EM->SOA->soa-infra->Administration->MDS Configuration.  Exporting via EM is probably the easiest because it then prepares you to fix the problem later.  After exporting the partition to local storage on the SOA Server I then ran an XSLT transform across the file deployed-composites/deployed-composites.xml. <?xml version="1.0" encoding="utf-8"?> <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="http://www.w3.org/1999/xhtml">     <xsl:output indent="yes"/>     <xsl:template match="/">         <testResult>             <composite-series>                 <xsl:attribute name="elementCount"><xsl:value-of select="count(deployed-composites/composite-series)"/></xsl:attribute>                 <xsl:attribute name="nameAttributeCount"><xsl:value-of select="count(deployed-composites/composite-series[@name])"/></xsl:attribute>                 <xsl:attribute name="defaultAttributeCount"><xsl:value-of select="count(deployed-composites/composite-series[@default])"/></xsl:attribute>                 <composite-revision>                     <xsl:attribute name="elementCount"><xsl:value-of select="count(deployed-composites/composite-series/composite-revision)"/></xsl:attribute>                     <xsl:attribute name="dnAttributeCount"><xsl:value-of select="count(deployed-composites/composite-series/composite-revision[@dn])"/></xsl:attribute>                     <xsl:attribute name="stateAttributeCount"><xsl:value-of select="count(deployed-composites/composite-series/composite-revision[@state])"/></xsl:attribute>                     <xsl:attribute name="modeAttributeCount"><xsl:value-of select="count(deployed-composites/composite-series/composite-revision[@mode])"/></xsl:attribute>                     <xsl:attribute name="locationAttributeCount"><xsl:value-of select="count(deployed-composites/composite-series/composite-revision[@location])"/></xsl:attribute>                     <composite>                         <xsl:attribute name="elementCount"><xsl:value-of select="count(deployed-composites/composite-series/composite-revision/composite)"/></xsl:attribute>                         <xsl:attribute name="dnAttributeCount"><xsl:value-of select="count(deployed-composites/composite-series/composite-revision/composite[@dn])"/></xsl:attribute>                         <xsl:attribute 
name="deployedTimeAttributeCount"><xsl:value-of select="count(deployed-composites/composite-series/composite-revision/composite[@deployedTime])"/></xsl:attribute>                     </composite>                 </composite-revision>                 <xsl:apply-templates select="deployed-composites/composite-series"/>             </composite-series>         </testResult>     </xsl:template>     <xsl:template match="composite-series">             <xsl:if test="not(@name) or not(@default) or composite-revision[not(@dn) or not(@state) or not(@mode) or not(@location)]">                 <ErrorNode>                     <xsl:attribute name="elementPos"><xsl:value-of select="position()"/></xsl:attribute>                     <xsl:copy-of select="."/>                 </ErrorNode>             </xsl:if>     </xsl:template> </xsl:stylesheet> The output from this is not pretty but it shows any <composite-series> tags that are missing expected attributes (name and default).  It also shows how many composites are in the file (111) and how many revisions of those composites (115). <?xml version="1.0" encoding="UTF-8"?> <testResult xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="http://www.w3.org/1999/xhtml">    <composite-series elementCount="111" nameAttributeCount="110" defaultAttributeCount="110">       <composite-revision elementCount="115" dnAttributeCount="114" stateAttributeCount="115"                           modeAttributeCount="115"                           locationAttributeCount="114">          <composite elementCount="115" dnAttributeCount="114" deployedTimeAttributeCount="115"/>       </composite-revision>       <ErrorNode elementPos="82">          <composite-series xmlns="">             <composite-revision state="on" mode="active">                <composite deployedTime="2010-12-15T11:50:16.067+01:00"/>             </composite-revision>          </composite-series>       </ErrorNode>    </composite-series> </testResult> From this I could see that one of the <composite-series> elements (number 82 of 111) seemed to be corrupt. Having found the problem I now needed to fix it. Fixing the Problem The solution was really quite easy.  First for safeties sake I took a backup of the exported MDS partition.  I then edited the deployed-composites/deployed-composites.xml file to remove the offending <composite-series> tag. Finally I restarted the SOA domain and was rewarded by seeing that the deployed composites were now visible. Summary One possible cause of not being able to see deployed composites after a SOA 11g system restart is a corrupt deployed-composites.xml file.  Retrieving this file from MDS, repairing it, and replacing it back into MDS can solve the problem.  This still leaves the problem of how did this file become corrupt!

    Read the article

  • Recreating OMS instances in a HA environment when instances on all nodes are lost

    - by rnigam
    Oracle highly recommends deploying EM in a HA environment. The best practices for HA deployments, backup and housekeeping of your Enterprise Manager environment are documented in the Oracle Enterprise Manager Advanced Configuration Guide. It is imperative that there is a good disaster recovery plan in place for your EM deployment. In this post I want to talk about a customer who failed to do the correct planning and housekeeping for EM and landed in a situation where we the all the OMSes were nearly blown away had we not jumped to help. We recently hit an issue at a customer site where we had a two node OMS setup of the Enterprise Manager and a RAC Database being used as the EM repository. An accidental delete of the OMS oracle home left us with a single node deployment. While we were trying to figure out a possible path to recover the first node, the second node was rebooted under a maintenance window. What followed was a complete site outage as the Admin and managed servers would not start on either of the nodes. In my situation there were - No backups of the Oracle Homes from any node - No OMS Configuration snapshots (created using the “emctl exportconfig oms” command) and the instance home was completely lost on node 1 which also had the Admin Server  We did however have: - A copy of the emkey.ora that I found under the OMS_ORACLE_HOME/ of the second node (NOTE: it is a bad practice to have your emkey present under the OMS Oracle home directory on the same server as the OMS. The backup of the emkey should be maintained on some other server. In this case however it was a savior in my situation since there were no backups - The oms oracle home on the second node but missing a number of files and had a number of changes done to the files in the home. There were a number of attempts to start the server by modifying various files based on the Weblogic server logs to have atleast node up and running but all of them failed. Here is how you can recover from this scenario: Follow these steps: STEP 1: Check status of emkey.ora Check whether the emkey exists is present in the EM repository or not. Run the following command: $OMS_ORACLE_HOME/bin/emctl status emkey If the output is something like this below then you are good to go and the key is present in the repository ./emctl status emkey Oracle Enterprise Manager 11g Release 1 Grid Control Copyright (c) 1996, 2010 Oracle Corporation. All rights reserved. Enter Enterprise Manager Root (SYSMAN) Password : The EMKey is configured properly. Here are the messages that you might see as the emctl status emkey output depending upon whether the EM Admin Server is up and if the key is configured properly: Case1:  AdminServer is up, emkey is proper in CredStore & not in repos. This is same as the output of the command shown above:The EMKey is configured properly Case 2: AdminServer is up, emkey is proper in CredStore & exists in repos:The EMKey is configured properly, but is not secure. Secure the EMKey by running "emctl config emkey -remove_from_repos".Case 3: AdminServer is down or emkey is corrupted in CredStore) & (emkey exists in repos): The EMKey exists in the Management Repository, but is not configured properly or is corrupted in the credential store.Configure the EMKey by running "emctl config emkey -copy_to_credstore".Case 4: (AdminServer is down or emkey is corrupted in CredStore) & (emkey does not exist in repos): The EMKey is not configured properly or is corrupted in the credential store and does not exist in the Management Repository. 
To correct the problem:1) Get the backed up emkey.ora file.2) Configure the emkey by running "emctl config emkey -copy_to_credstore_from_file". If not the key was not secured properly, we will have to be put in the repository before proceeding. Look at the next step 2 for doing this There may be cases (like mine) where running emctl may give errors like the following: $OMS_ORACLE_HOME/bin/emctl status emkey Exception in thread “Main Thread” java.lang.NoClassDefFoundError: oracle/security/pki/OracleWallet At oracle.sysman.emctl.config.oms.EMKeyCmds.main (EMKeyCmds.java:658) Just move to the next step to put the key back in the repository STEP 2: Put emkey.ora back in the repository Skip this step if your emkey.ora is present in the repository. If not, you need to put the key back in the repository See if you can run the following command (with sample output): $OMS_ORACLE_HOME/bin/emctl config emkey –copy_to_repos Oracle Enterprise Manager 11g Release 1 Grid Control Copyright (c) 1996, 2010 Oracle Corporation. All rights reserved. The EMKey has been copied to the Management Repository. This operation will cause the EMKey to become unsecure. After the required operation has been completed, secure the EMKey by running "emctl config emkey -remove_from_repos". Typically the key is present under $OMS_ORACLE_HOME/sysman/config directory before being removed after the install as a best practice. If you hit any errors while running emctl commands like the one mentioned in step 1, jump to step 3 and we will take care of the emkey.ora in Step 5 STEP 3: Get the port information Check for the existing port information in the emd.properties file under EM_INSTANCE_DIRECTORY (typically gc_inst directory right above the Middleware home where you have deployed em. For eg. /u01/app/oracle/product/gc_inst in case your oms home is /u01/app/oracle/product/Middleware/oms11g) In my case I got the information from the emgc.properties present in the gc_inst on the second node. If you can run emctl you may want to try the following command as well $OMS_ORACLE_HOME/bin/emctl status oms –details Note this information as this will be used in the next step STEP 4: Perform cleanup on Node 1 Note the oracle home of the Weblogic and OMS, get the list of applied patches in the homes (using opatch lsinventory command), take a backup copy of the home just in case we need it and then de-install/remove oracle homes, update inventory and cleanup processes on the first node STEP 5: Perform Software Only Installation of OMS on Node 1 Perform Weblogic 10.3.2 installation exactly under the same location as present in the earlier installation. Perform software only installation of the OMS using the following command. This will not run any configuration assistants and bypass all user interface validations runInstaller –noconfig -validationaswarnings Select the “Additional OMS” option while performing the installation. Provide the same path for OMS and Instance directories like the previous installation Use the port information collected in Step 3 while performing the installation. Once the installation is complete run the allroot.sh script to complete the binary deployment STEP 6: Apply one-off patches At this point you can apply any patches to the OMS Oracle Home previously. 
You only need to run opatch to install the patch in the home and not required to run the SQLs STEP 7: Copy EM key This step is only required if you were not able to use emctl command to put the emkey back into the EM repository in STEP 2 Copy the emkey.ora file of the old installation you have under $OMS_ORACLE_HOME/sysman/config directory of the newly installed OMS STEP 8: Configure Grid Control Domain Run the following command to configure the EM domain and OMS. Note that you need to use a different GC Domain name than what you used earlier. For example I have used GCDOMAIN11 as the new domain name when my previous domain name was GCDOMAIN $OMS_ORACLE_HOME/bin/omsca new –AS_USERNAME weblogic –EM_DOMAIN_NAME GCDOMAIN11 –NM_USER nodemanager -nostart This command as shown below will prompt for a number of inputs like Admin Server hostname, port, password, etc. Verify if the defaults shown are correct by pressing enter or provide a new value STEP 9: Run Add-ON Configuration Assistant After this step run the following add-on configuration assistant. This was used in my case to configure the virtualization add-on $OMS_ORACLE_HOME/addonca -oui -omsonly -name vt -install gc STEP 10: Start the OMS Now start the OMS using $OMS_ORACLE_HOME/bin/emctl start oms In a multi-node setup like mine you would either have a software load balancer or DNS round robin (using a virtual host name that resolves to one of multiple OMS hostnames) being used for load balancing. Secure the OMS against the SLB or DNS virtual hostname using the following $ OMS_HOME/bin/emctl secure oms -host slb.example.com -secure_port 1159 -slb_port 1159 -slb_console_port 443 STEP 11: Configure the Agent From the $AGENT_ORACLE_HOME/bin run the ./agentca –f At this point you should have your OMS on node 1 fully re-covered. Clean up node 2 and use the normal Additional OMS installation process documented in the official installation guide to add the additional OMS on node 2 Summary It took us nearly a little over two days to completely recover the environment with some other non-EM related issues that hit us along the way as well. In the end a situation like this could have been completely avoided had the proper housekeeping and backup of the Enterprise Manager Deployment been done in the first place. This is going to a topic that we cover in the next post. In the meantime please do refer to the Oracle Enterprise Manager Advanced Configuration Guide for planning your EM installation, backup and housekeeping procedures. This can be found here: http://download.oracle.com/docs/cd/E11857_01/index.htm Thanks This post would not have been possible without Raj Aggarwal, Prasad Chebrolu and Ravikumar Basa who helped to recover the environment and provided all the support we needed
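The housekeeping that would have avoided most of this pain boils down to two recurring jobs. A hedged sketch with placeholder paths – verify the emctl options against the EM 11g documentation for your release:

# Snapshot the OMS configuration (run on each OMS host, on a schedule)
$OMS_ORACLE_HOME/bin/emctl exportconfig oms -dir /backup/em/oms_snapshots

# Keep a copy of the emkey outside the OMS Oracle home, and off the OMS host entirely
cp $OMS_ORACLE_HOME/sysman/config/emkey.ora /backup/em/emkey.ora
scp /backup/em/emkey.ora backuphost:/secure/em/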

    Read the article

  • CodePlex Daily Summary for Friday, June 04, 2010

    CodePlex Daily Summary for Friday, June 04, 2010New Projects23 Umbraco addons: 23 Umbraco addonsAdd-ons for EPiServer Relate+: In the Add-ons for EPiServer Relate+ you will find add-ons, extensions and modules that work together with EPiServer Relate+.Advanced Mail Merge (AMM) for Microsoft Office: Advanced Mail Merge for Microsoft Word 2007/2010, offers great extensable functionality: - Merge to document (PDF) - Merge to attachment - Use Out...Cenobith RLS Sample: Simple implementation of Row Level Security for Microsoft SQL ServerCodingWheels.DataTypes: DataTypes tries to make it easier for developers to have concrete typesafe objects for working with many common forms of data. Many times these dat...DigitArchive: Digit Archive makes it easy for the DIGIT magazine readers to find the correct software or movie bundled in the media along with the magazine. You'...dNet.DB: dNetDB is a .net framework that simplifies model and data access by providing a database independent object-based persistence, where objects are pe...Dynamic Application Framework: The Dynamic Application Framework provides a highly flexible environment for creating applications. Multiple UI and Execution Environments, along w...ECoG: ECoG toolkitFB Toolkit with Contracts: This is a research project where I have inserted code contracts into the Facebook Toolkit source code., version 3.1 beta. This delivers an efficien...GeneCMS: GeneCMS allows users to generate static HTML based websites by offering an ASP.NET editing front-end that can be run in the local machine. It is ta...HooIzDat: HooIzDat is game that asks, who the heck is that?! It's a two player game where your task is to guess your opponent's person before he or she guess...JingQiao.Interacting: JingQiao Interacting MessagingKanbanBoard: Visual task board for Kanban and Scrum.Learning CSharp: Just Learning CSharpMammoth: mammothMapWindow Mobile: MapWindow Mobile is mobile GIS Software which can run on windows mobile, developed in C# .NET Compact Framework. It provides basic GIS functionalit...Mindless Setback: Setback is a card game popular in New England. This project uses a combination of brute force and Monte Carlo methods to play Setback. This is an e...MSNCore(DirectUI) Element Viewer: MSNCore Element Viewer is an application designed to enumerate the elements with in applications built with MSNCore.dll and UXCore.dll. This appli...MSVN Team: bài tập thầy lườngNugget: Web Socket Server: A web socket server implemented in c#. The goal of the projects is to create an easy way to start using HTML5 web sockets in .NET web applications.oSoft ColorPicker Control for Visual Studio 2010: oSoft ColorPicker is an user control that can be used instead of the ColorDialog when you want to allow your users to select a color in a windows f...Prism Software Factory: The Prism Software Factory is a software factory for Visual Studio 2010 assisting developers in the process of building WPF & Silverlight applicati...Project Lion: Project lion is forum developed in Silverlight technology. Refix - .NET dependency management: Refix is an attempt to solve the problem of binary dependency management in large .NET solutions. It will achieve the goal using (amongst other thi...Rich Task List: Rich Task List is a tutorial project for DotNetNuke Module Development.SharePoint PowerRSS: Easy/Clean way to get SharePoint list data via more standard RSS feed. 
I found CleanRSS.aspx as part of SPRSS: Enhanced RSS Functionality for WSS ...SOAPI - StackOverflow API Generator: Generates, directly from the self documenting StackOverflow API specification, an end-to-end, fully documented API wrapper library with Visual Stu...SQL Script Application Utility: This C# project allows you to apply scripts to a database for table creation, data creation, etc. You can keep DDL in separate SQL scripts which c...Sql Server Reports Viewer: Sql Server Reports Viewer makes it easier to render Sql Server Reports without the need to setup a SSRS Server. This makes deployments a breeze. ...StorageHD: StorageHD system for large video filesUrzaGatherer: UrzaGatherer is a WPF 4.0 client application to handle Magic The Gathering cards collections. You can manage expansions, blocks and all informatio...webrel: This tool executes simple relational algebra expressions. It is useful for learning of Database course. Javascript and xhtml is used to develop thi...World Wide Grab: World Wide Grab allows retrieval and integration of various semi-structured data sorces, expecially Web applications. It turns every available res...New Releases3FD - Framework For Fast Development (C++): Alpha 3: This release was compiled in Visual Studio Release mode. It means you can use it in whatever compiler you want. However, the compatibility with ano...Advanced Mail Merge (AMM) for Microsoft Office: Advanced MailMerge 2007.zip: Release 1.1.0.0Army Bodger: Bodger 3 Archetype Test: Ok so it's later and I've largely finished it. Right now the Space Wolves have their Troops written and one HQ unit. The equipment panel largely wo...AwesomiumDotNet: AwesomiumDotNet 1.6 beta: Preview of AwesomiumDotNet 1.6.Bojinx: Bojinx Core V4.6: New features in this release: Greatly improved logging for INFO and DEBUG. Improved the getClassName function in ObjectUtils. Added the ability ...Cenobith RLS Sample: Sample App: Change connection strings in App.config and Web.config files.Christoc's DotNetNuke C# Module Development Template: 00.00.02: A minor update from the original release with a few fixes including Localization and some updated documentation.Community Forums NNTP bridge: Community Forums NNTP Bridge V25: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...DEWD: DEWD for Umbraco v1.0: Beta release of the package. Functional feature set and fairly stable. Since the alpha: Validation on input fields Custom view controls Ability...DotNetNuke Developers Help File: DNNHelpSystem 05.04.02: Release of the developer core API help documentation of DotNetNuke in MSDN style format, both as .CHM stand alone file as well as a html website ba...Drive Backup: Drive Backup V.0604: This release includes the following fixes/features: * Fixed incompatibility with some USB drives (those marked as “fixed” by Windows) * Ad...Event Scavenger: Version 3.3 (Refresh): Archiving bit added to database plus archiving stored procedure updated. Rest of items just refreshed. Database set to version 3.3Expression Encoder Batch Processor: Expression Batch v0.3: Now set the newly-converted file's Created DateTime to equal the source file's. 
This helps keep your videos sorterd chronologically in media librar...Folder Bookmarks: Folder Bookmarks 1.6.1: The latest version of Folder Bookmarks (1.6.1), with Mini-Menu bug fixes and 'Help' feature - all the instructions needed to use the software (If y...Genesis Smart Client Framework: Genesis v2.0 - Ruby User Experience Platform (UXP): This is the start of the rewrite of the entire framework. The rewrite will include support for XAML through WPF and Silverlight, WCF, Workflow Serv...Global: http requester tool: Added a brnad new console app for making http requests.GMap.NET - Great Maps for Windows Forms & Presentation: Hot Build: this is latest change-set build, unstable previewHERB.IQ: Alpha 0.1 Source code release 4: As of 6-23-10 @ 9:48ESTInfragistics Analytics Framework: Infragistics Analytics Framework 1.0: This project includes wrappers for the Infragistics controls that integrate with the recently launched Microsoft Silverlight Analytics Framework. T...Innovative Games: Cube Mapper: Cube Mapper is a small tool that takes in six textures and outputs a cube map that is a combination of the six textures. Cube Mapper supports .tga...jQuery Library for SharePoint Web Services: SPServices 0.5.6: This release is in an alpha state. Please only download it if you know what you are getting and are willing to test it. In any case, it's a bad ide...linq to jquery: jlinq v1.00 no doc: First public version of jlinq! no doc yet, soon too come!LinqSpecs: Version 1.0.1: Fabio Maulo has sent several patchs in order to make LinqSpecs to work with any linq provider other than in memory. Big KUDOS for him.mojoPortal: 2.3.4.4: see release notes on mojoportal.com Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on ...Nugget: Web Socket Server: Initial POC release: The initial proof of concept release. To try it out, open the Sample App.sln, set the ChatServer project as the start-up project, start debugging ...oSoft ColorPicker Control for Visual Studio 2010: oSoft ColorPicker Control for VS 2010 Beta 1: Beta 1Refix - .NET dependency management: Refix v0.1.0.48 ALPHA: First preview version of Refix command-line tool.SharePoint 2010 CSV Bulk Term Set Importer: Bulk Term Set Importer: Initial ReleaseSOAPI - StackOverflow API Generator: SOAPI Wrappers: SOAPI-JS First release as SOAPI-JS, SOAPI-CS coming shortly. Tests and example includedSQL Compact Toolbox: Beta 0.8.1: Initial test release - mind the bumps. Requires Visual Studio 2010.Thumb nail creator and image resizer: ThumbnailCreator1.2: this release fixes and issue that was occuring when the control was used inside paged dataTS3QueryLib.Net: TS3QueryLib.Net Version 0.23.17.0: Changelog Added Properties "IsSpacer" and "SpacerInfo" to ChannelListEntry. "IsSpacer" allows you to check whether the channel is a spacer channel ...UI Accessibility Checker: UI Accessibility Checker v.2.0: We are excited to announce the release of AccChecker 2.0! 
In addition to numerous bug fixes and usability improvements, these major features have...webrel: webrel 1.0: webrel 1.0WindStyle SlugHelper for Windows Live Writer: 1.2.0.0: 增加:可以配置是否忽略已经包含slug的日志,请在插件选项中配置; 增加:插件图标; 更新:支持最新Windows Live Writer,版本号14.0.8117.416。Work Recorder - Hold on own time!: WorkRecorder 1.1: +Only one instance can run #Change histogram to pie chartMost Popular ProjectsWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)PHPExcelpatterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesASP.NETMost Active ProjectsCommunity Forums NNTP bridgeRawrIonics Isapi Rewrite Filterpatterns & practices – Enterprise LibraryGMap.NET - Great Maps for Windows Forms & PresentationN2 CMSBlogEngine.NETFarseer Physics EngineMain projectMirror Testing System

    Read the article

  • Do’s and Don’ts Building SharePoint Applications

    - by Bil Simser
    SharePoint is a great platform for building quick LOB applications. Simple things from employee time trackers to server and software inventory to full blown Help Desks can be crafted up using SharePoint from just customizing Lists. No programming necessary. However there are a few tricks I’ve painfully learned over the years that you can use for your own solutions. DO What’s In A Name? When you create a new list, column, or view you’ll commonly name it something like “Expense Reports”. However this has the ugly effect of creating a url to the list as “Expense%20Reports”. Or worse, an internal field name of “Expense_x0x0020_Reports” which is not only cryptic but hard to remember when you’re trying to find the column by internal name. While “Expense Reports 2011” is user friendly, “ExpenseReports2011” is not (unless you’re a programmer). So that’s not the solution. Well, not entirely. Instead when you create your column or list or view use the scrunched up name (I can’t think of the technical term for it right now) of “ExpenseReports2011”, “WomenAtTheOfficeThatAreMen” or “KoalaMeatIsGoodWhenBroiled”. After you’ve created it, go back and change the name to the more friendly “Silly Expense Reports That Nobody Reads”. The original internal name will be the url and code friendly one without spaces while the one used on data entry forms and view headers will be the human version. Smart Columns When building a view include columns that make sense. By default when you add a column the “Add to default view” is checked. Resist the urge to be lazy and leave it checked. Uncheck that puppy and decide consciously what columns should be included in the view. Pick columns that make sense to what the user is trying to do. This means you have to talk to the user. Yes, I know. That can be trying at times and even painful. Go ahead, talk to them. You might learn something. Find out what’s important to them and why. If they’re doing something repetitively as part of their job, try to make their life easier by including what’s most important to them. Do they really need to see the Created *and* Modified date of a document or do they just need the title and author? You’ll only find out after talking to them (or getting them drunk in a bar and leaving them in the back alley handcuffed to a garbage bin, don’t ask). Gotta Keep it Separated Hey, views are there for a reason. Use them. While “All Items” is a fine way to present a list of well, all items, it’s hardly sufficient to present a list of servers built before the Y2K bug hit. You’ll be scrolling the list for hours finally arriving at Page 387 of 12,591 and cursing that SharePoint guy for convincing you that putting your hardware into a list would be of any use to anyone. Next to collecting the data, presenting it is just as important. Views are often overlooked and many times ignored or misused. They’re the way you can slice and dice the data up so that you’re not trying to consume 3,000 years of human evolution on a single web page. Remember views can be filtered so feel free to create a view for each status or one for each operating system or one for each species of Information Worker you might be putting in that list or document library. Not only will it reduce the number of items someone sees at one time, it’ll also make the information that much more relevant. Also remember that each view is a separate page. Use it in navigation by creating a menu on the Quick Launch to each view. 
The discoverability of the Views menu isn’t overly obvious and if you violate the rule of columns (see Horizontally Scrolling below) the view menu doesn’t even show up until you shuffle the scroll bar to the left. Navigation links, big giant buttons, a screaming flashing “CLICK ME NOW” will help your users find their way. Sort It! Views are great so we’re building nice, rich views for the user. Awesomesauce. However sort is not very discoverable by the user. For example when you’re looking at a view how do you know if it’s ascending or descending and what is it sorted on. Maybe it’s sorted using two fields so what’s that all about? Help your users by letting them know the information they’re looking at is sorted. Maybe you name the view something appropriate like “Bogus Expense Claims Sorted By Deadbeats”. If you use the naming strategy just make sure you keep the name consistent with the description. In the previous example their better be a Deadbeat column so I can see the sort in action. Having a “Loser” column, while equally correct, is a little obtuse to the average Information Worker. Remember, they usually don’t use acronyms and even if they knew how to, it’s not immediately obvious to them that’s what you’re trying to convey. Another option is to simply drop a Content Editor Web Part above the list and explain exactly the view they’re looking at. Each view is it’s own page so one CEWP won’t be used across the board. Be descriptive in what the user is seeing but try to keep it brief. Dumping the first chapter of I, Claudius might be informative to the data but can gobble up screen real estate and miss the point of having the list. DO NOT Useless Attachments The attachments column is, in a word, useless. For the most part. Sure it indicates there’s an attachment on the list item but in the grand scheme of things that’s not overly informative. Maybe it is and by all means, if it makes sense to you include it. Colour it. Make it shine and stand like the Return of Clippy on every SharePoint list. Without it being functional it can be boring. EndUserSharePoint.com has an article to make the son of Clippy that much more useful so feel free to head over and check out this blog post by Paul Grenier on the task (Warning code ahead! Danger Will Robinson!) In any case, I would suggest you remove it from your views. Again if it’s important then include it but consider the jQuery solution above to make it functional. It’s added by default to views and one of things that people forget to clean up. Horizontal Scrolling Screen real estate is premium so building a list that contains 8,000 columns and stretches horizontally across 15 screens probably isn’t the most user friendly experience. Most users can’t figure out how to scroll vertically let alone horizontally so don’t make it even that more confusing for them. Take the Steve Krug approach in your view designs and try not to make the user think. Again views are your friend. Consider splitting up the data into views where one view contains 10 columns and other view contains the other 10. Okay, maybe your information doesn’t work that way but humans can only process 7 pieces of data at a time, 10 at most (then their heads explode and you don’t want to clean that mess up, especially on a Friday night before the big dance). It drives me batshit crazy when I see a view with 80 columns of data. I often ask the user “So what do you do with all this information”. 
The response is usually “With this data [the first 10 columns] I decide if I’m going to fire everyone, and with this data [the next 10 columns] I decide if I’m going to set the building on fire and collect the insurance”. It’s at that point I show them how to create two new views, “People Who Are About To Get The Axe” and “Beach Time For The Executives”. Again, talk to your users and try to reason with them on cutting down the number of columns they see at once.
Vertical Scrolling Another big faux pas I find is the use of multi-line comment fields in views. It’s not so bad when you have a statement like this in your view: “I really like, oh my god, thought I was going to scream when I saw this turtle then I decided what I was going to have for dinner and frankly I hate having to work late so when I was talking to the customer I thought, oh my god, what if the customer has turtles and then it appeared to me that I really was hungry so I'm going to have lunch now.” It’s fine if that’s the only column along with two or three others, but once you slap those 20 columns of data into the list, the comment field wraps and forms a new multi-page novel that takes up your entire screen. Do everyone a favour and just avoid adding the column to views. Train the user to just click through to the item if they need to see the contents.
Duplicate Information Duplication is never good. Views are great as you can group data together. For example, create a view of project status reports grouped by author. Then you can see which project manager is being a dip and not submitting their report. However, if you group by author do you really need the Created By field as well in the view? Or if the view is grouped by Project then Author, do you need both? Horizontal real estate is always at a premium, so try not to clutter up the view with duplicate data like this. Oh yeah, if you’re scratching your head saying “But Bil, if I don’t include the Project name in the view and I have a lot of items then how do I know which one I’m looking at?”, that’s a hint that your grouping is too vague or you have too much data in the view based on that criteria. Filter it down a notch, create some views, and try to keep the group down to a single screen where you can see the group header at the top of the page. Again, it’s just managing the information you have.
Redundant, See Redundant This partially relates to duplicate information and smart columns, but basically remember to not include the obvious in a view. Remember, don’t make me think. If you’ve gone to the trouble (and it was a lot of trouble, wasn’t it?) to create separate views of your data by creating a “September Zombie Brain Sales”, “October Zombie Brain Sales”, etc., then please for the love of all that is holy do not include the Month and Product columns in your view. Similarly, if you create a “My” view of anything (“My Favourite Brands of Spandex”, “My Co-Workers I Find The Urge To Disinfect”) then again, do not include the owner or author field (or whatever field you use to identify “My”). That’s just silly. Hope that helps! Happy customizing!

    Read the article

  • Building the Ultimate SharePoint 2010 Development Environment

    - by Manesh Karunakaran
    It’s been more than a month since SharePoint 2010 RTMed, and a lot of people have downloaded and set up their very own SharePoint 2010 development rigs. Quite a few people have written blogs about setting up good development environments; there is even an MSDN article on it. Two of the blogs worth noting are from MVPs Sahil Malik and Wictor Wilén. Make sure that you check these out as well. Part of the bad side-effects of being a geek is the need to do the technical stuff the best way possible (pragmatic or otherwise), but the problem with this is that what is considered “best” is relative. Precisely the reason why you are reading this post now. Most of the posts that I read are outdated, need updating, or are using the wrong OS’es or virtualization solutions (again, opinions vary) or using them the wrong way. Here’s a developer’s view of Building the Ultimate SharePoint 2010 Development Rig. If you are a sales guy, it’s time to close this window.
Confusion 1: Which Host Operating System and Virtualization Solution to use? This point has been beaten to death in numerous blog posts in the past; if you have time to invest, read this excellent post by our very own SharePoint Joel on this subject. But if you are planning to build the Ultimate Development Rig, then Windows Server 2008 R2 with Hyper-V is the option that you should be looking at. I have been using this as my primary OS for about 6-7 months now, and I haven’t had any driver issues or application compatibility issues. In my experience all the Windows 7 drivers work fine with WIN2008 R2 also. You can enable Aero for eye candy (and the Windows 7 look and feel) and, except for a few things like Hibernation support (which can be enabled if you really want it), Windows Server 2008 R2 is the best Workstation OS that I have used to date. But frankly the answer to this question of which OS to use depends primarily on one question - Are you willing to change your primary OS? If the answer to that is ‘Yes’, then Windows 2008 R2 with Hyper-V is the best option; if not, look at VMware or VirtualBox, both are equally good. Those who are familiar with a Virtual PC background might prefer Sun VirtualBox. Besides, these provide support for running 64 bit guest machines on 32 bit hosts if the underlying hardware is truly 64 bit. See my earlier post on this. Since we are going to make the ultimate rig, we will use Windows Server 2008 R2 with Hyper-V, for the reasons mentioned above.
Confusion 2: Should I use a multi-(virtual) server set up? A lot of people use multiple servers for their development environments - like Wictor Wilén is suggesting - one server hosting the Active Directory, one hosting SharePoint Server and another one for SQL Server. True, this mimics the production environment the best possible way, but as somebody who has fallen for this set up earlier, I can tell you that you don’t really get anything by doing this. Microsoft has done well to ensure that if you can do it on one machine, you can do it in a farm environment as well. Besides, when you run multiple Server class machine instances in parallel, there are a lot of unwanted processor cycles wasted for no good use. In my personal experience, as somebody who needs to switch between MOSS 2007/SharePoint 2010 environments from time to time, the best possible solution is to:
Make the host Windows Server 2008 R2 machine your Domain Controller (AD Server)
Make all your Virtual Guest OS’es join this domain
Have each individual Guest OS image have its own local SQL Server instance. The advantages are that you can reuse the users and groups in each of the Guest operating systems, you can manage the users in one place, AD is lightweight and doesn’t take up too many resources on your host machine, and having separate SQL instances for each of the development images gives you maximum flexibility in terms of configuration; for example, your SharePoint rigs can have simpler DB configurations compared to your MS BI blast pits.
Confusion 3: Which Operating System should I use to run SharePoint 2010? Now that’s a no brainer. Use Windows 2008 R2 as your Guest OS. When you are building the ultimate rig, why compromise? If you are planning to run Windows Server 2008 as your Guest OS, there are a few patches that you need to install at different times during the installation; for that, follow the steps mentioned here. Okay, now that we have made our choices, let’s get to the interesting part of building the rig.
Step 1: Prepare the host machine – Install Windows Server 2008 R2. Install Windows Server 2008 R2 on your best Desktop/Laptop. If you have read this far, I am quite sure that you are somebody who can install an OS on your own, so go ahead and do that. Make sure that you run the compatibility wizard before you go ahead and nuke your current OS. There are plenty of blogs telling you how to make a good Windows 2008 R2 Workstation that feels and behaves like a Windows 7 machine; follow one and once you are done, head to Step 2.
Step 2: Configure the host machine as a Domain Controller. Before we begin this, let me tell you, this step is completely optional; you don’t really need to do this, you can simply use the local users on the Guest machines instead, but this is a much cleaner approach to manage users and groups if you run multiple guest operating systems. This post neatly explains how to configure your Windows Server 2008 R2 host machine as a Domain Controller. Follow those simple steps and you are good to go. If you are not able to get it to work, try this.
Step 3: Prepare the guest machine – Install Windows Server 2008 R2.
Open Hyper-V Manager
Choose to create a new Guest Operating system
Allocate at least 2 GB of memory to the Guest OS
Choose the Windows 2008 R2 installation media
Start the Virtual Machine to commence installation
Once the installation is done, activate the OS.
Step 4: Make the Guest operating systems join the Domain. This step is quite simple, just follow the steps below:
Fire up Hyper-V Manager and open your Guest OS
Click on Start, right click on ‘Computer’ and choose ‘Properties’
On the window that pops up, click on ‘Change Settings’
On the ‘System Properties’ window that comes up, click on the ‘Change’ button
A window named ‘Computer Name/Domain Changes’ opens up; in the text box titled Domain, type in the Domain name from Step 2
Click OK and Windows will show you the welcome to domain message and ask you to restart the machine; click OK to restart.
If the addition to the domain fails, that means that you have not set up networking in Hyper-V for the Guest OS to communicate with the Host. To enable it, follow the steps I had mentioned in this post earlier.
Step 5: Install SQL Server 2008 R2 on the Guest Machine. SQL Server 2008 R2 gets installed without hassle on Windows Server 2008 R2. SQL Server 2008 needs SP2 to work properly on WIN2008 R2. Also, SQL Server 2008 R2 allows you to directly add PowerPivot support to SharePoint.
Choose to install in SharePoint Integrated Mode in the Reporting Server Configuration.
Step 6: Install KB971831 and SharePoint 2010 Pre-requisites. Now install the WCF Hotfix for Microsoft Windows (KB971831) from this location, and the SharePoint 2010 Pre-requisites from the SP2010 installation media.
Step 7: Install and Configure SharePoint 2010. Install SharePoint 2010 from the installation media; after the installation is complete, you are prompted to start the SharePoint Products and Technologies Configuration Wizard. If you are using a local instance of Microsoft SQL Server 2008, install the Microsoft SQL Server 2008 KB 970315 x64 before starting the wizard. If your development environment uses a remote instance of Microsoft SQL Server 2008, or if it has a pre-existing installation of Microsoft SQL Server 2008 on which KB 970315 x64 has already been applied, this step is not necessary. With the wizard open, do the following: Install SQL Server 2008 KB 970315 x64. After the Microsoft SQL Server 2008 KB 970315 x64 installation is finished, complete the wizard. Alternatively, you can choose not to run the wizard by clearing the SharePoint Products and Technologies Configuration Wizard check box and closing the completed installation dialog box. Install SQL Server 2008 KB 970315 x64, and then manually start the SharePoint Products and Technologies Configuration Wizard by opening a Command Prompt window and executing the following command: C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\BIN\psconfigui.exe The SharePoint Products and Technologies Configuration Wizard may fail if you are using a computer that is joined to a domain but that is not connected to a domain controller.
Step 8: Install Visual Studio 2010 and the SharePoint 2010 SDK. Install Visual Studio 2010, then download and install the Microsoft SharePoint 2010 SDK.
Step 9: Install PowerPivot for SharePoint and Configure Reporting Services. Pop in the SQL Server 2008 R2 installation media once again and install PowerPivot for SharePoint. This will get added as another instance named POWERPIVOT. Configure Reporting Services by following the steps mentioned here; if you need to get down to the details of how the integration between SharePoint 2010 and SQL Server 2008 R2 works, see Working Together: SQL Server 2008 R2 Reporting Services Integration in SharePoint 2010, an excellent article by Alan Le Marquand.
Step 10: Download and Install Sample Databases for Microsoft SQL Server 2008 R2. SharePoint 2010 comes with a lot of cool stuff like PerformancePoint Services and BCS; if you need to try these out, you need to have data in your databases. So if you want to save yourself the trouble of creating sample data for your PerformancePoint and BCS experiments, download and install the Sample Databases for Microsoft SQL Server 2008 R2 from CodePlex. And you are done! Fire up your Visual Studio 2010 and start coding away!!
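As a side note on Step 10: if you would rather attach the downloaded sample database files by hand than run their installer, a minimal T-SQL sketch looks like the following (the database name and file path are assumptions; point them at wherever you extracted the sample files):

-- Attach a sample database directly from its data file (path is an assumption)
CREATE DATABASE AdventureWorks2008R2
ON (FILENAME = 'C:\Samples\AdventureWorks2008R2_Data.mdf')
FOR ATTACH_REBUILD_LOG;  -- rebuilds the log file when only the .mdf is supplied
GO

Some of the sample downloads ship as a data file only, which is why ATTACH_REBUILD_LOG is used here instead of a plain FOR ATTACH.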

    Read the article

  • Your Day-by-Day Guide to Agile PLM at Oracle OpenWorld 2012

    - by Kerrie Foy
    This year’s Oracle OpenWorld conference is nearly here, and we’re all excited about what we have planned! With five days of activities and customer presenters from market leaders and top innovators like The Coca-Cola Company, Starbucks, JDSU, Facebook, GlobalFoundries, and more, this is an event you don't want to miss. I've compiled this day-by-day guide to help anyone keep track of all the “Product Lifecycle Management and Product Value Chain” sessions and activities at OpenWorld 2012, September 30 – October 4 in San Francisco, California. Monday, October 1 There are great networking activities on Sunday September 30, but PLM specific sessions start after general conference keynotes on Monday, October 1 at 10:45 a.m. at the InterContinental Hotel in room Telegraph Hill. In fact, most of our sessions this year will be held in this room, which is still close to the conference keynotes in Moscone, but just far enough away to allow some focused networking and discussions. This first session, 10:45 – 11:45 a.m., is a joint session with the Agile and AutoVue teams, entitled “Streamline PLM Design-to-Manufacturing Processes with AutoVue Visualization Solutions” featuring presenters from Oracle as well as joint AutoVue and Agile PLM customer GlobalFoundries. In the following 12:15 – 1:15 p.m. slot, there are two sessions to choose from, so if you have a team of representatives attending OpenWorld, you may consider splitting up to catch both of these: a) Our General Session will be held in the InterContinental Hotel Ballroom C, which will cover our complete enterprise PLM strategy, product updates, and roadmaps. It’s our pleasure to feature a customer keynote presentation from Chris Bedi, CIO, and Rajeev Sethi, Director IT Business Engagement, of JDSU. b) A focused session on integrating PLM with Engineering and Supply Chain Systems will be held on the second floor of Moscone West (next to the InterContinental) in room 2022. Join to discover how these types of integrations help companies manage common and integrated design information across all MCAD, ECAD, and software components. After a lunch break and perhaps a visit to the Demogrounds in Moscone West, select from two product roadmap sessions in the next time slot (3:15 – 4:15 p.m.): an Agile 9.3.x session located in the InterContinental’s Ballroom C, and an Agile PLM for Process session located back in the InterContinental’s Telegraph Room. Both sessions will have strong content around each product line’s latest releases, vision, and customer examples. We are very pleased to feature Daniel Soosai of Facebook in the A9 session and Vinnie D’Agostino of The Coca-Cola Company in the PLM for Process session. Afterwards, hang in there for one last session of the day from 4:45 – 5:45 p.m.; it’s an insightful discussion on leveraging Agile PLM as the Foundation for Enterprise Quality Management, and it’s sure to be one of the best. In the Telegraph Room, this session will feature Oracle experts, partner co-presenter David Bartlett from CPG Solutions, and customer co-presenter Thomas Crowe, CIO of PL Developments. Hear their experience around implementing collaborative, integrated solutions to ensure effective knowledge transfer throughout an organization, and how to perform analysis in real time to resolve product quality issues swiftly and efficiently.
On Monday evening there will be plenty of industry, product, and partner dinners, so take advantage of all the networking opportunities and catch some great tunes at the 5 day Oracle OpenWorld Music Festival! Tuesday, October 2 Tuesday starts early with a special PLM Networking Brunch, sponsored by several partners, from 8:30 a.m. – 10:30 a.m. at the B Restaurant that sits atop Yerba Buena Gardens. You’ll have the unique opportunity to meet with like-minded industry peers and a PLM partner to discuss a topic of your choosing while enjoying a delicious meal. Registration is required, so to inquire about attending this brunch, please email Terri.Hiskey-AT-oracle.com. After wrapping up your conversations over brunch, head over to the Marriott Marquis in the Nob Hill CD room for a chance to experience the Oracle Product Lifecycle Analytics solution in a Hands-On Lab, open from 10:15 a.m. – 12:45 p.m. Experts will be there to answer your questions. Back in the InterContinental Hotel’s Telegraph room, the session on “Ideation and Requirements Management: Capturing the Voice of the Customer” begins at 11:45 a.m. – 12:45 p.m. This may be the session for you if you’re struggling with challenges like too many repositories of customer needs, requests, and ideas; limited visibility into which ideas are being advanced by customers and field resources; or if you’re unable to leverage internal expertise to expose effort and potential risks. This session will discuss how Agile PLM can help you overcome ideation challenges to deliver the right products to their targeted markets and fulfill customer desires. Next, from 1:15 – 2:15 p.m. join us for a session on Managing Profitable Innovation with Oracle Product Lifecycle Analytics. If you missed the Hands-on Lab, have more questions, or simply want to be inspired by the product’s forward-thinking vision and capabilities, this is a great opportunity to meet the progressive-minded executives behind the application. After this session, it may be a good opportunity to swing by the Demogrounds in Moscone West and visit the Agile PLM demos at exhibit booths #81 for Agile PLM for Discrete Manufacturing, #70 for Agile PLM for Process, and #82 for AutoVue and Agile PLM Enterprise Visualization. Check out the related Supply Chain Management booths close by if you’re interested - here's the map. There’s always lots to see and do around the exhibit area. But don’t forget the last session of the day from 5:00 p.m. – 6:00 p.m. in Telegraph Hill on Managing Product Innovation and Compliance in Life Science Companies, a “must-see” if you’re in this industry. Launching innovative products quickly is already a high-stakes challenge, but companies in the life sciences industry face uniquely severe consequences when new products don’t perform or comply as required. In recent years, more and more regulations have become mandatory, and new ones, such as REACH, are currently going into effect for several companies. Customer presenters from pharmaceutical leader Eli Lilly will share how they’ve leveraged Agile PLM to deliver high-quality, innovative products in a fast-paced, heavily regulated market environment. Tuesday evening unwind at the Supply Chain Management Reception from 6:00 – 8:00 p.m. at the premier boutique Roe Nightclub and Lounge, which is located about three blocks down on Howard Street (on the other side of Moscone from the InterContinental Hotel). Registration is required. Click here for the details.   
Wednesday, October 3 We have another full line-up on Wednesday, so be ready for an action-packed day. We start with a session at 10:15 – 11:15 a.m. in the Telegraph Room where we have a session on “PLM for Consumer Products: Building an Engine for Quality and Innovation” with featured presenters from Starbucks and partner Kalypso. This is a rare opportunity to learn directly from Starbucks how they instill quality and innovation throughout their organization, products, and processes, leveraging PLM disciplines with strong support from their partner.  If you’re not in the consumer products industry, we recommend attending another session at 10:15 – 11:15 a.m. in Moscone West room 3005: “Eco-Enterprise Innovation Awards and the Business Case for Sustainability” featuring Jeff Henley, Oracle’s Chairman of the Board and Jon Chorley, Chief Sustainability Officer. Oracle will honor select customers with Oracle’s Eco-Enterprise Innovation award, which recognizes customers and their respective partners who rely on Oracle products to support their green business practices to reduce their environmental impact while improving business efficiencies and reducing costs. The awards presentation is followed by a panel discussion with customers and Oracle executives, who describe how these award-winning organizations are embracing environmental initiatives as a central part of their business strategy and how information technology plays a pivotal role. Next at 11:45 a.m. – 12:45 p.m. in Telegraph Hill attend our session devoted to exploring Product Lifecycle Management’s role in Software Lifecycle Management. This is a thought leadership session with Oracle experts in the field on the importance of change management, and we’ll discuss how Oracle has for years leveraged Agile PLM to develop Agile PLM. If software lifecycle management doesn’t apply to your business or you’d rather engage in some lively one-on-one discussions, we also have a “Supply Chain Meet the Experts” session in Moscone West Room 2001A. Product experts, thought leaders and executives will be on hand to discuss your questions/topics, so come prepared. This session tends to fill up fast so try to get in early. At 1:15 – 2:15 p.m. join us back in Telegraph Hill for a session focused on leveraging the Agile Product Portfolio Management application as the Product Development Master Schedule to improve efficiencies, optimize resources, and gain visibility across projects enterprise-wide to improve portfolio profitability. Customer presenters from Broadcom will explain how they’ve leveraged the product to enable a master schedule with enterprise-level, phase-gate program and project collaboration and resource optimization. Again in Telegraph Hill from 3:30 – 4:30 p.m. we have an interesting session with leading semiconductor customer LSI and partner Kalypso on how LSI leveraged Agile PLM to advance from homegrown applications to complete Product Value Chain Management. That type of transition can be challenging, and LSI details how they were able to achieve their goals and the value they gained along the journey – a fascinating account for any company interested in leveraging best practices to innovate their business processes and even end products. Lastly, we’ll wrap up in Telegraph Hill from 5:00 – 6:00 p.m. 
with a session on “Ensuring New Product Success by Achieving Excellence in New Product Introduction.” This is a cross-industry session, guaranteed to deliver insight into the often elusive practice of creating winning products, and we’re very excited about it. According to IDC Manufacturing Insights analyst Joe Barkai, “Product Failures are not necessarily a result of bad ideas…they are a result of suboptimal decisions.” We’ll show you how to wire your business processes to enhance decision-making and maximize product potential. Now, quickly hit your hotel room to freshen up and then catch one of the many complimentary shuttles to the much-anticipated Oracle Customer Appreciation Event on Treasure Island. We have a very exciting show planned – check out what’s in store here. Thursday, October 4 PLM has a light schedule on Thursday this year with just one session, but this again is one of our best sessions on managing the Product Value Chain: at 11:15 a.m. – 12:15 p.m. in Telegraph Hill, it’s a customer and partner driven session with Sonoco Products and Deloitte telling their story about how to achieve integrated change control by interfacing Agile PLM with Oracle E-Business Suite. Sonoco Products, a global manufacturer of consumer and industrial packaging materials, with its systems integrator, Deloitte, is doing this by implementing prebuilt integration (Oracle Design-to-Release Integration Pack for Agile Product Lifecycle Management for Process and Oracle Process) to integrate Agile with Oracle Product Hub/Oracle Product Information Management and Oracle E-Business Suite. This session presents a case study of how Sonoco is leveraging this solution to improve data quality and build a framework for stronger master data governance. Even though that ends our PLM line-up at OpenWorld, there will still be many sessions and activities at the conference, so visit the Oracle OpenWorld website to review agendas and build your schedule. And of course, download and bring this guide and the latest version of the Agile PLM Focus-On Document (available soon!). San Francisco is a wonderful city to explore, and we’re glad you’re considering joining the Agile PLM team at Oracle OpenWorld! I hope to see you there! Follow me before the conference and on site for real-time updates about #OOW12 on Twitter @Kerrie_Foy or @AgilePLM.

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #033

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane.
2007
Spatial Database Definition and Research Documents Here is the definition from Wikipedia about spatial databases: A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types.
Select Only Date Part From DateTime – Best Practice A very common question which I receive is how to get only the Date or Time part from a datetime value. In this blog post I explain the same in very simple words.
T-SQL Paging Query Technique Comparison (OVER and ROW_NUMBER()) – CTE vs. Derived Table I have received a few emails and comments about my post SQL SERVER – T-SQL Paging Query Technique Comparison – SQL 2000 vs SQL 2005. The main question was: can this be done using a CTE? Absolutely! What about performance? It is identical! Please refer to the above-mentioned article for the history of paging.
SQL SERVER – Cannot resolve collation conflict for equal to operation One of the very first errors I ever encountered in my career was this collation conflict. I have blogged about it, and I have realized that many others like me are facing this error.
LEN and DATALENGTH of NULL Simple Example Here is a question for you: what is the LEN of a NULL value? Well, it is very easy – just read the blog.
Recovery Models and Selection Very simple and easy explanation of the Database Backup Recovery Model and how to select the best option for you.
Explanation of SQL SERVER Hash Join Hash join gives the best performance when two or more tables are joined and at least one of them has no index or is not sorted. It is also expected that the smaller of the tables can be read into memory completely (though this is not necessary).
Easy Sequence of SELECT FROM JOIN WHERE GROUP BY HAVING ORDER BY
SELECT yourcolumns FROM tablenames JOIN tablenames WHERE condition GROUP BY yourcolumns HAVING aggregatecolumn condition ORDER BY yourcolumns
NorthWind Database or AdventureWorks Database – Sample Databases In this blog post we learn how to install the Northwind database. I also shared the source where one can download this database, as it is used in many examples in the MSDN help files.
sp_HelpText for sp_HelpText – Puzzle A simple quick puzzle – do you know the answer to it? If not, go ahead and read the blog.
2008
SQL SERVER – 2008 – Step By Step Installation Guide With Images When SQL Server 2008 was newly introduced, lots of people had no clue how to install SQL Server 2008, and the number of questions I used to receive was huge. I wrote this blog post with the spirit that it would help all the newbies install SQL Server 2008 with the help of images. Still today this blog post has been the bible for all of the people who are confused with SQL Server installation.
Inline Variable Assignment I loved this feature. I have always wanted this feature to be present in SQL Server. The last time I met developers from Microsoft SQL Server, I had talked about this feature. I think this feature saves some time and makes the code more readable.
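To make the inline variable assignment feature concrete, here is a minimal T-SQL sketch (the variable names are made up); declaring and initializing in a single statement, together with the compound assignment operators, arrived with SQL Server 2008:

DECLARE @StartDate DATETIME = GETDATE();  -- declare and assign in one statement
DECLARE @Counter INT = 1, @Step INT = 5;  -- also works for multiple variables
SET @Counter += @Step;                    -- compound assignment, new in SQL Server 2008
SELECT @StartDate AS StartDate, @Counter AS Counter;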
Introduction to Policy Management – Enforcing Rules on SQL Server If our company policy is to create all stored procedures with the prefix ‘usp’, then developers should simply be prevented from creating stored procedures with any other prefix. Let us see a small tutorial on how to create conditions and a policy which will prevent any future SP from being created with any other prefix.
2009
Performance Counters from System Views – By Kevin Mckenna Many of you are not aware that access to performance information is readily available in SQL Server, and that too without querying performance counters using a custom application or via perfmon. Till now this fact has remained undisclosed, but through this post I would like to explain how you can easily access SQL Server performance counter information. Without putting in much effort you will come across the system view sys.dm_os_performance_counters. As the name suggests, this provides you easy access to the SQL Server performance counter information that is passed on to perfmon, but you can get at it via T-SQL.
Customize Toolbar – Remove Debug Button from Toolbar I was fond of the SQL Server Debugger feature in SQL Server 2000. To my utter disappointment, this feature was withdrawn from SQL Server 2005. The button of the debugger is similar to a play button and is used to run debugging commands of Visual Studio. Because of this, it gets very infuriating for developers when they are developing on both – Visual Studio and SSMS. Let us now see how we can remove the debugging button from SQL Server Management Studio.
Effect of Normalization on Index and Performance A very interesting conversation which started on Twitter. If you want to read one link, this is the link I encourage you to read.
SSMS Feature – Multi-server Queries Using SQL Server Management Studio (SSMS) DBAs can now query multiple servers from one window. It is quite common for DBAs with a large number of servers to maintain and gather information from multiple SQL Servers and create reports. This feature is a blessing for the DBAs, as they can now assemble all the information instantaneously without going anywhere.
Query Optimizer Hint ROBUST PLAN – Question to You “ROBUST PLAN” is a kind of query hint which works quite differently than other hints. It does not improve joins or force any indexes to be used; it just makes sure that a query does not crash due to an over-the-limit row size. Let me elaborate upon it in the blog post.
2010
Do you really know the difference between various date functions available in SQL Server 2012? Here is a three part story where we explored the same with examples:
Fastest Way to Restore the Database
Difference Between DATETIME and DATETIME2
Difference Between DATETIME and DATETIME2 – WITH GETDATE
Shrinking NDF and MDF Files – Readers’ Opinion Shrinking a database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say Shrinking Database is bad and evil, here I am saying it first and loud. Now, the comment of Imran is written while keeping in mind only the process showing how the Shrinking Database operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran’s comments, as there are a few corrections.
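For readers who have never actually seen the operation under discussion, here is a minimal sketch of a data file shrink (the database, logical file, and table names are made up; per the warning above, expect index fragmentation and plan to rebuild afterwards if you must shrink at all):

USE SampleDB;
GO
-- Shrink the data file to roughly 512 MB (logical file name is an assumption)
DBCC SHRINKFILE (N'SampleDB_Data', 512);
GO
-- Shrinking typically fragments indexes, so rebuild them afterwards
ALTER INDEX ALL ON dbo.SampleTable REBUILD;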
2011
Solution – Puzzle – SELECT * vs SELECT COUNT(*) This is a very interesting question, and I am very confident that not everyone knows the answer to it. Let me ask you again – which will be faster, SELECT * or SELECT COUNT(*), or do you think this is an apples and oranges comparison?
2012
Service Broker and CAP_CPU_PERCENT – Limiting SQL Server Instances to CPU Usage In SQL Server 2012 there are a few enhancements with regards to the SQL Server Resource Governor. One of the enhancements is how the resources are allocated. Let me explain with examples. Let us understand the entire discussion with the help of three different examples.
Finding Size of a Columnstore Index Using DMVs One very common question I often see is the need for a list of columnstore indexes along with their size and corresponding table name. I quickly re-wrote a script using the DMVs sys.indexes and sys.dm_db_partition_stats. This script gives the size of the columnstore index on disk only. I am sure there will be more advanced scripts to retrieve details related to the components associated with the columnstore index. However, I believe the following script is sufficient to start getting an idea of columnstore index size.
Developer Training Resources and Summary Roundup
Developer Training - Importance and Significance - Part 1 In this part we discussed the importance of training in the real world. The most important and valuable resource any company has is its employees. Employees who have been well-trained will be better at their jobs and produce a better product. An employee who is well trained obviously knows more about their job and all the technical aspects. I have a very high opinion about training employees, and it is the most important task.
Developer Training – Employee Morals and Ethics – Part 2 In this part we discussed the most crucial components of training. Often employees expect the company to pay for their training while the company expresses no interest in training the employee. Quite often training expenses are the real issue for both the employee and employer.
Developer Training – Difficult Questions and Alternative Perspective - Part 3 This part was the most difficult to write, as I tried to address a few difficult questions and answers. Training is such a sensitive issue that many developers, when not receiving a chance for training, think about leaving the organization.
Developer Training – Various Options for Developer Training – Part 4 In this part I tried to explore a few methods and options for training. The generic feedback I received on this blog post was that it was short and I should have explored each of the training subjects in detail. I believe there are two big buckets of training: 1) Instructor-Led Training and 2) Self-Led Training.
Developer Training – A Conclusive Summary - Part 5 There is no better motivation than a personal desire to learn new technology. Honestly, there is nothing more personal than learning. That “change is the only constant” and “adapt & overcome” are the essential lessons of life. One cannot stop the learning and resist the change. In the IT industry the “ego of knowing all” and the “resistance to change” are the most challenging issues.
A Quick Look at Logging and Ideas around Logging Question: What is the first thing that comes to your mind when you hear the word “Logging”? Strangely enough, I got a different answer every single time. Let me just list the answers I got from my friends. Let us go over them one by one.
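One flavor of logging that often comes up in such discussions is writing errors from T-SQL into a log table. As an illustration only (all object names here are made up, not taken from the original post), a minimal sketch could look like this:

CREATE TABLE dbo.ErrorLog (
    ErrorLogId   INT IDENTITY(1,1) PRIMARY KEY,
    LoggedAt     DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
    ErrorNumber  INT NOT NULL,
    ErrorMessage NVARCHAR(4000) NOT NULL
);
GO
BEGIN TRY
    SELECT 1 / 0;  -- force an error for demonstration
END TRY
BEGIN CATCH
    INSERT INTO dbo.ErrorLog (ErrorNumber, ErrorMessage)
    VALUES (ERROR_NUMBER(), ERROR_MESSAGE());
END CATCH;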
Beginning Performance Tuning with SQL Server Execution Plan
Solution of Puzzle – Swap Value of Column Without Case Statement Earlier this week I asked a question about how to swap the values of a column without using a CASE statement. Read here: SQL SERVER – A Puzzle – Swap Value of Column Without Case Statement. I have proposed 3 different solutions in the blog post itself. I had requested the help of the community to come up with alternate solutions, and honestly I am stunned and amazed by the qualified entries.
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • .htaccess file size causes 500 Internal Server Error

    - by moobot
    As soon as my .htaccess goes over approx 8410 bytes, I get a 500 Internal Server Error. I don't think this is due to a bad redirect, as I have experimented with redirects in the .htaccess and then with just text that is commented out #. (no actual commands in the .htaccess file) Is there anything obvious that can cause this? Update: The site is on WordPress. Here are the redirects I was originally trying to add: RewriteEngine On ## 301 Redirects of old URLs to new # 301 Redirect 1 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^accesseries/underlay/prod_37\.html$ /product-category/accessories/underlays? [R=301,NE,NC,L] # 301 Redirect 2 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^accessories/acoustic-underlay/prod_29\.html$ /product/acoustic-underlay/? [R=301,NE,NC,L] # 301 Redirect 3 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^accessories/cat_4\.html$ /product-category/accessories/? [R=301,NE,NC,L] # 301 Redirect 4 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-bamboo-flooring/accessories/cat_8\.html$ /product-category/accessories/? [R=301,NE,NC,L] # 301 Redirect 5 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-bamboo-flooring/bamboo-floor/natural-strandwoven-bamboo-semi-gloss-wide-board-135mm-click/prod_151\.html$ /product/natural-strand-woven-bamboo-semi-gloss-wide-board-135mm-click/? [R=301,NE,NC,L] # 301 Redirect 6 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-bamboo-flooring/bamboo-floor/strandwoven-chocolate-135mm-bamboo-flooring/prod_174\.html$ /product/strand-woven-chocolate-135mm-bamboo-flooring/? [R=301,NE,NC,L] # 301 Redirect 7 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-bamboo-flooring/bamboo-floor/strand-woven-kempas-bamboo-flooring/prod_173\.html$ /product/strand-woven-kempas-bamboo-flooring/? [R=301,NE,NC,L] # 301 Redirect 8 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-bamboo-flooring/bamboo-floor/strandwoven-walnut-wired-135mm-bamboo-flooring/prod_176\.html$ /product/strand-woven-walnut-wired-135mm-bamboo-flooring/? [R=301,NE,NC,L] # 301 Redirect 9 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-bamboo-flooring/cat_7\.html$ /product-category/bamboo-floor/? [R=301,NE,NC,L] # 301 Redirect 10 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-bamboo-installation/info_8\.html$ /bamboo-installation/? [R=301,NE,NC,L] # 301 Redirect 11 RewriteCond %{QUERY_STRING} ^act=cart$ [NC] RewriteRule ^cart\.php$ /cart/? [R=301,NE,NC,L] # 301 Redirect 12 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^contact-us/info_2\.html$ /contact-us/? [R=301,NE,NC,L] # 301 Redirect 13 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^faqs/info_9\.html$ /faqs/? [R=301,NE,NC,L] # 301 Redirect 14 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-floating-timber-floor/black-butt-engineered-floating-timber/prod_213\.html$ /product/black-butt-engineered-floating-timber/? [R=301,NE,NC,L] # 301 Redirect 15 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-floating-timber-floor/doussie-engineered-floating-timber/prod_208\.html$ /product/doussie-engineered-floating-timber/? [R=301,NE,NC,L] # 301 Redirect 16 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-floating-timber-floor/smoked-oak-engineered-floating-timber/prod_217\.html$ /product/smoked-oak-engineered-floating-timber/? [R=301,NE,NC,L] # 301 Redirect 17 RewriteCond %{QUERY_STRING} ^act=thanks$ [NC] RewriteRule ^index\.php$ http://www.xxxxxxxxxx.com/? [R=301,NE,NC,L] # 301 Redirect 18 RewriteCond %{QUERY_STRING} ^act=viewCat&catId=13$ [NC] RewriteRule ^index\.php$ /product-category/samples/bamboo-flooring-samples/? 
[R=301,NE,NC,L] # 301 Redirect 19 RewriteCond %{QUERY_STRING} ^act=viewCat&catId=18$ [NC] RewriteRule ^index\.php$ /product/bamboo-plastic-composite/? [R=301,NE,NC,L] # 301 Redirect 20 RewriteCond %{QUERY_STRING} ^act=viewCat&catId=2$ [NC] RewriteRule ^index\.php$ /product-category/bamboo-floor/? [R=301,NE,NC,L] # 301 Redirect 21 RewriteCond %{QUERY_STRING} ^act=viewCat&catId=20$ [NC] RewriteRule ^index\.php$ /products/? [R=301,NE,NC,L] # 301 Redirect 22 RewriteCond %{QUERY_STRING} ^act=viewCat&catId=3$ [NC] RewriteRule ^index\.php$ /product-category/floating-timber-floor/? [R=301,NE,NC,L] # 301 Redirect 23 RewriteCond %{QUERY_STRING} ^act=viewCat&catId=5$ [NC] RewriteRule ^index\.php$ /product-category/laminate-flooring/? [R=301,NE,NC,L] # 301 Redirect 24 RewriteCond %{QUERY_STRING} ^act=viewCat&catId=6$ [NC] RewriteRule ^index\.php$ /product-category/accessories/? [R=301,NE,NC,L] # 301 Redirect 25 RewriteCond %{QUERY_STRING} ^act=viewCat&catId=saleItems$ [NC] RewriteRule ^index\.php$ /product-category/clearance-sale/? [R=301,NE,NC,L] # 301 Redirect 26 RewriteCond %{QUERY_STRING} ^act=viewDoc&docId=3$ [NC] RewriteRule ^index\.php$ /faqs/? [R=301,NE,NC,L] # 301 Redirect 27 RewriteCond %{QUERY_STRING} ^act=viewDoc&docId=4$ [NC] RewriteRule ^index\.php$ /faqs/? [R=301,NE,NC,L] # 301 Redirect 28 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=137$ [NC] RewriteRule ^index\.php$ /product/laminate-flooring-goustein-wood/? [R=301,NE,NC,L] # 301 Redirect 29 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=164$ [NC] RewriteRule ^index\.php$ /product/modern-black-brushed-finish-strand-woven-flooring/? [R=301,NE,NC,L] # 301 Redirect 30 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=165$ [NC] RewriteRule ^index\.php$ /product/lime-wash-strand-woven-bamboo-flooring/? [R=301,NE,NC,L] # 301 Redirect 31 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=168$ [NC] RewriteRule ^index\.php$ /product/country-bark/? [R=301,NE,NC,L] # 301 Redirect 32 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=173$ [NC] RewriteRule ^index\.php$ /product-category/bamboo-floor/14mm-bamboo-flooring/? [R=301,NE,NC,L] # 301 Redirect 33 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=178$ [NC] RewriteRule ^index\.php$ /product/blue-gum-136-floating-timber/? [R=301,NE,NC,L] # 301 Redirect 34 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=199$ [NC] RewriteRule ^index\.php$ /product/jarrah-laminate-floor-sample/? [R=301,NE,NC,L] # 301 Redirect 35 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=205$ [NC] RewriteRule ^index\.php$ /product/elm-12mm-laminate-floor-sample/? [R=301,NE,NC,L] # 301 Redirect 36 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=209$ [NC] RewriteRule ^index\.php$ /product/iroko-engineered-floating-timber/? [R=301,NE,NC,L] # 301 Redirect 37 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=222$ [NC] RewriteRule ^index\.php$ /product/european-oak-engineered-floating-timber-sample/? [R=301,NE,NC,L] # 301 Redirect 38 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=236$ [NC] RewriteRule ^index\.php$ /product/black-forest-5mm-vinyl-flooring/? [R=301,NE,NC,L] # 301 Redirect 39 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=65$ [NC] RewriteRule ^index\.php$ /product/stair-nose/? [R=301,NE,NC,L] # 301 Redirect 40 RewriteCond %{QUERY_STRING} ^act=viewProd&productId=83$ [NC] RewriteRule ^index\.php$ /product/laminate-flooring-warm-teak/? 
[R=301,NE,NC,L] # 301 Redirect 41 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-laminate-flooring/12mm-laminate-flooring/blackbutt/prod_156\.html$ /product/blackbutt-12mm-laminate-floor/? [R=301,NE,NC,L] # 301 Redirect 42 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-laminate-flooring/12mm-laminate-flooring/tasmanian-oak/prod_171\.html$ /product/tasmanian-oak/? [R=301,NE,NC,L] # 301 Redirect 43 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-laminate-flooring/8-3mm-laminate-flooring/laminate-flooring-warm-teak/prod_8\.html$ /product/laminate-flooring-warm-teak/? [R=301,NE,NC,L] # 301 Redirect 44 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-laminate-flooring/accessories/cat_6\.html$ /product-category/accessories/? [R=301,NE,NC,L] # 301 Redirect 45 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-laminate-flooring/cat_5\.html$ /product-category/laminate-flooring/? [R=301,NE,NC,L] # 301 Redirect 46 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-laminate-flooring/country-classic-12mm-laminate/cat_19\.html$ /product-category/laminate-flooring/12mm-country-classic-laminate-floor/? [R=301,NE,NC,L] # 301 Redirect 47 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-laminate-installation/info_7\.html$ /laminate-installation/? [R=301,NE,NC,L] # 301 Redirect 48 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^privacy-policy/info_4\.html$ /faqs/? [R=301,NE,NC,L] # 301 Redirect 49 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^-quotation-request/info_5\.html$ /quotation-request/? [R=301,NE,NC,L] # 301 Redirect 50 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^rainbow-flooring/cat_16\.html$ /product-category/rainbow-flooring/? [R=301,NE,NC,L] # 301 Redirect 51 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^rainbow-flooring/walnut-rainbow-flooring/prod_112\.html$ /product/walnut-rainbow-flooring/? [R=301,NE,NC,L] # 301 Redirect 52 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^samples/12mm-laminate-floor-samples/kempas-laminate-floor-sample/prod_195\.html$ /product/kempas-laminate-floor-sample/? [R=301,NE,NC,L] # 301 Redirect 53 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^samples/12mm-laminate-floor-samples/spotted-gum-laminate-floor-sample/prod_196\.html$ /product/spotted-gum-laminate-floor-sample/? [R=301,NE,NC,L] # 301 Redirect 54 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^samples/12mm-laminate-floor-samples/tasmanian-oak-laminate-floor-sample/prod_197\.html$ /product/tasmanian-oak-laminate-floor-sample/? [R=301,NE,NC,L] # 301 Redirect 55 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^samples/bamboo-flooring-samples/cat_13\.html$ /product-category/samples/bamboo-flooring-samples/? [R=301,NE,NC,L] # 301 Redirect 56 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^samples/bamboo-flooring-samples/rosewood-strandwoven-bamboo-floor-135mm-click-sample/prod_191\.html$ /product/rosewood-strand-woven-bamboo-floor-135mm-click-sample/? [R=301,NE,NC,L] # 301 Redirect 57 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^samples/cat_9\.html$ /samples/? [R=301,NE,NC,L] # 301 Redirect 58 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^samples/floating-timber-floor-samples/iroko-engineered-floating-timber-floor-sample/prod_223\.html$ /product/iroko-engineered-floating-timber-floor-sample/? [R=301,NE,NC,L] # 301 Redirect 59 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^samples/floating-timber-floor-samples/jarrah-engineered-floating-timber-sample/prod_224\.html$ /product/jarrah-engineered-floating-timber-sample/? 
[R=301,NE,NC,L] # 301 Redirect 60 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^samples/floating-timber-floor-samples/merbau-engineered-floating-timber-sample/prod_226\.html$ /product/merbau-engineered-floating-timber-sample/? [R=301,NE,NC,L] # 301 Redirect 61 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^samples/floating-timber-floor-samples/spotted-gum-engineered-floating-timber-sample/prod_228\.html$ /product/spotted-gum-engineered-floating-timber-sample/? [R=301,NE,NC,L] # 301 Redirect 62 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^samples/floating-timber-floor-samples/sydney-blue-gum-engineered-floating-timber-sample/prod_220\.html$ /product/sydney-blue-gum-engineered-floating-timber-sample/? [R=301,NE,NC,L] # 301 Redirect 63 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^shop\.php/-laminate-flooring/accessories/laminate-flooring-accessories-click-stairnose/prod_251\.html$ /product/stair-nose/? [R=301,NE,NC,L] # 301 Redirect 64 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^shop\.php/-laminate-flooring/country-classic-12mm-laminate/country-classic-polar-white/prod_243\.html$ /product/country-classic-polar-white/? [R=301,NE,NC,L] # 301 Redirect 65 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^shop\.php/samples/12mm-laminate-floor-samples/country-classic-polar-white/prod_244\.html$ /product/country-classic-polar-white-sample/? [R=301,NE,NC,L] # 301 Redirect 66 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^shop\.php/samples/12mm-laminate-floor-samples/rustic-oak-12mm-laminate-floor/prod_248\.html$ /product/rustic-oak-12mm-laminate-floor-sample/? [R=301,NE,NC,L] # 301 Redirect 67 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^shop\.php/samples/vinyl-flooring-samples/cat_25\.html$ /product-category/samples/vinyl-flooring-samples/? [R=301,NE,NC,L] # 301 Redirect 68 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^shop\.php/vinyl-flooring/cat_24\.html$ /product-category/vinyl-floor/? [R=301,NE,NC,L] # 301 Redirect 69 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^solardeck-tiles/cat_22\.html$ /product-category/solardeck-tiles/? [R=301,NE,NC,L] # 301 Redirect 70 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^solardeck-tiles/solardeck-tiles/prod_206\.html$ /product/solardeck-tiles/? [R=301,NE,NC,L] # 301 Redirect 71 RewriteCond %{QUERY_STRING} ^$ RewriteRule ^terms-conditions/info_3\.html$ /faqs/? [R=301,NE,NC,L] I'm getting errors like this in my log: Invalid command 'aminate-flooring/tasmanian-oak/prod_171\\.html$', perhaps misspelled or defined by a module not included in the server configuration, referer: http://www.xxxxxxxx.com/laminate-installation/ Invalid command ',NE,NC,L]', perhaps misspelled or defined by a module not included in the server configuration Invalid command ',L]#', perhaps misspelled or defined by a module not included in the server configuration

    Read the article

  • Part 2: Career development as a Software Developer without becoming a manager.

    - by albertpascual
    Seems like my previous post inspired by the work of Michael “Doc” Norton was a great success, judging by the amount of emails I have received. Yet I am amazed how many people didn’t want to discuss their questions in the comments section. I would encourage people to be more public; still, I would like to reply to all of you on this public medium. I still welcome those emails. What I found out is that many people feel like me: they want to be developers and still be compensated for their experience without wanting to take a job as a manager. Their perfect day is a full day of coding and learning. Many believe their companies will never pay a manager’s salary to a developer no matter what. Most of you ask how to get the ball rolling. And it is the latter group that I’m addressing here; the previous group will never try.
Which companies understand a developer’s value, and where can I find them? This is a very difficult question to answer. I don’t have a list of those companies or departments, but I have seen in my past the signs of companies bending backwards to compensate, in more ways than monetary, a developer that is a good resource to them. Allowing the person to move out of the state and still letting them work for the company from home is a sign that the company goes by individual cases. Allowing them to go to a conference that will not benefit the company is another big sign. There are simple signs too, like flexible hours and letting some people work from home. To see those signs you need to be working in that company for a while and look at the departments where the manager is taking care of their employees in individual cases. Look for the department where people quietly get extra perks, where some people in the department work from home or remotely. In my experience, though not always true, medium to big companies are quicker to recognize good developers. Then again, some companies just don’t get it, and that is when you see many technical people managing developers. For all the people that email me stating that developers can also be very good managers: I do not disagree, I just think that a good developer loves writing code, and when you remove that part the better salary isn’t enough to keep a developer happy. Burned out developers appreciate being promoted to managers.
How do I know I work in a bad company? I have been a consultant and seen many companies; a few signs I have learned about companies that will not recognize good developers are: The turnover is pretty high and developers are moving out at a high rate; no rocket scientist needs to tap you on the shoulder. The company is always looking to outsource its development resources. The product is not that interesting, nor does the company care too much about the final result and support. Code sweat shops. You’ll know when you start working in one of those. Run for the hills!
Where do I start? Disclaimer: I have only based this post on Michael “Doc” Norton; this is just my interpretation and ideas. The first thing is to look at Michael “Doc” Norton’s presentation Take Control of Your Development Career http://docondev.blogspot.com/ That should be the first thing any developer looks at and follows like it was a pattern. I would personally recommend finding some language or pattern you are interested in and learning it; learn something that will make you happy. Second, join a User Group and get involved in the community. There are hundreds of user groups, and I’m sure you’ll find one in your city or near your town. Code Camps and Developer Meet Ups are also good resources.
Third, I would join an open source project you are interested in or, better yet, create a new open source project with the new technology that you have learned, and get coding. Fourth, create a Twitter account and follow the people that talk about the technology you are interested in. If you follow these 4 steps above I think you’ll be on your way; after they are complete, when you release your Open Source project, you can say that you accomplished the first steps. Now, do not expect anything to change in your career life; you are changing and should not expect anything in return, besides borrowing some time from sleeping and your family. Creating a good schedule may help you; I find wasted time in many places that I can use. Flying for work is actually one of those, as it allows me to do my best work on an airplane, so I don’t need to borrow time from anywhere else. Making sure you always have a light, charged laptop is so important.
Next steps following the Michael “Doc” Norton Pattern, or my interpretation of it. First, help run a user group or, better yet, start a new user group. I’ll add, as well: go to one conference a year and to free development events around your city; Code Camps, Geek Dinners, etc. There are many free events sponsored by different companies for developers to get to know their products; I highly recommend those as the way to get connected. Second, choose a mentor. This is a very hard thing to do, I have found; find an expert in the technology you are learning that has the time for you. It is difficult, so I wish you the best of luck. Third, learn another technology or pattern and open your horizons a little bit more. Why not; if you had fun previously, keep doing it. Fourth, get involved in forums to answer and ask questions; getting noticed in public forums is rewarding for your ego after such a long journey.
Final steps following the Michael “Doc” Norton Pattern: Teach what you know, be humble about your knowledge, find as many opportunities as you can to teach and to get involved with the community, and bring all that to your day job. Mr. Norton talks about getting naked: expose yourself to others in what you know and what you do not know. You are never too important for small opportunities, yet don’t be afraid to take anything big and learn from the experience. Anytime you have the opportunity to talk to somebody that has reached the point where the community knows his or her name, it means that you should learn from it. Take opportunities that won’t make you money, yet will make you happy. Sometimes you need to spend money and time. Register talks at Code Camps and Dev Meet Ups, those are free, and also go to Conferences, Development Summits and Geek Dinners, for example. One day, people will pay you to attend.
When will all this pay off? I don’t know. I’m still on the path; during your journey you may get little acknowledgements that you are on the correct path. In my case I think those are the little signs that tell you about your journey. I was awarded the Microsoft Most Valuable Professional for ASP.NET in 2007, 2008, 2009 and 2010. I got selected to speak at DevConnections in Las Vegas in 2010 and Orlando in 2011. I do believe that I have a long way to go, yet what I do makes me happy and I hope I can keep doing it for years to come. Every year I can see an improvement in my code, and more frameworks and languages are under my belt; I have learned to embrace them all in my daily job, and I have been able to work on a few projects beyond my department.
I am a learner in, and a believer of, the Michael "Doc" Norton pattern, and I look forward to learning more about it so I can apply it better. In my short journey I now see my mistakes, and I also did a few things right: I listened to intelligent people and was not afraid to move along with technology changes. In my professional life I have tried to avoid being tied to a single technology or product. I have always shared my code and never left anybody who wanted to take over one of my projects confused; I never treated anything I created as my own, nor did I care too much when politics didn't share my vision. I stayed flexible, ready and visible, yet humble. I keep my head just below the clouds and avoid managers' meetings. I credit my manager for my successes, and publicly I fault only myself for the failures. Hope this helps. Cheers, Al. Follow me on Twitter and read my previous post.

    Read the article

  • CodePlex Daily Summary for Monday, June 10, 2013

    Popular Releases

- NexusCamera: NexusCamera: Nexus Camera is a control for Windows Phone 7 & 8 that can be used as a menu on the camera. The idea for the control came from using the camera on a Nexus; thanks, Nexus. Requires the Windows Phone Toolkit (https://phone.codeplex.com/). View sample camera: http://tctechcrunch2011.files.wordpress.com/2012/11/nexus4-camera.jpg
- VR Player: VR Player 0.3 ALPHA: New plugin system with individual folders; TrackIR support; Maya and 3ds Max format support; dual screen support; mono layouts (left and right); cylinder height parameter; barrel effect factor parameter; Razer Hydra filter parameter; VRPN bug fixes; UI improvements; performance improvements; stabilization and logging with Log4Net; new default values based on user feedback; CTRL key to open the menu.
- ZXMAK2: Version 2.7.5.4: Add Hayes modem device (thanks to Eltaron); add host joystick selection; fix joystick bits (swapped in the previous version).
- SimCityPak: SimCityPak 0.1.0.8: New features: import BMP color palettes for vehicles; import RASTER files (uncompressed 8.8.8.8 DDS files); view different channels of RASTER files or a preview of all layers combined; find text in JavaScripts; TGA viewer; ground textures added to the lot editor; many additional identified instances and properties.
- Wsus Package Publisher: Release v1.2.1306.09: Adds more verification on certificate validation; WPP will not let the user try to publish an update until the certificate is valid. Adds the certificate expiration date on the 'About' form. Filters approbation so a user cannot try to approve an update for uninstallation when the update does not support uninstallation. Adds the server and console version on the 'About' form. WPP will not let the user publish an update until the server and console are at the same level. WPP does not let the user ...
- AJAX Control Toolkit: June 2013 Release: AJAX Control Toolkit Release Notes - June 2013 Release, Version 7.0607. AJAX Control Toolkit .NET 4.5: toolkit for .NET 4.5 and sample site (recommended). AJAX Control Toolkit .NET 4: toolkit for .NET 4 and sample site (recommended). AJAX Control Toolkit .NET 3.5: toolkit for .NET 3.5 and sample site (recommended). Notes: instructions for using the AJAX Control Toolkit with ASP.NET 4.5 can be found at...
- Rawr: Rawr 5.2.1: This is the downloadable WPF version of Rawr. For the web-based version see http://elitistjerks.com/rawr.php. Version notes: http://rawr.codeplex.com/wikipage?title=VersionNotes. Rawr Addon (not updated yet for MoP): there is now an official Rawr addon for in-game exporting and importing of character data, hosted on Curse. The addon does not perform calculations like Rawr; it simply shows your exported Rawr data in WoW tooltips and lets you export your character to Rawr (including ba...
- VG-Ripper & PG-Ripper: PG-Ripper 1.4.13: Changes: NEW: added support for "ImageJumbo.com" links; FIXED: ripping of threads with multiple pages.
- Xomega Framework: Xomega.Framework 1.4: Adds support for Visual Studio 2012 and the .NET Framework 4.5; minor bug fixes and enhancements.
- sb0t v.5: sb0t 5.14: Stability fix in the script engine; Avatar.exists property fixed in scripting; cb0t custom font protocol re-added and updated to support the new Ares.
- ASP.NET MVC Forum: MVCForum v1.3.5: This is a bug-fix release with a couple of small usability features and UI changes. All of the small number of bugs reported in v1.3 have been fixed; no upgrade is needed, just overwrite the files and everything should work.
- Json.NET: Json.NET 5.0 Release 6: New feature: added serialized/deserialized JSON to verbose tracing. New feature: added support for using type name handling with ISerializable content. Fix: not using default serializer settings with primitive values and JToken.ToObject. Fix: error writing BigIntegers with JsonWriter.WriteToken. Fix: serializing and deserializing flag enums with the EnumMember attribute (a small illustrative sketch of this scenario appears after this summary). Fix: error deserializing interfaces with a valid type converter. Fix: error deser...
- Christoc's DotNetNuke Module Development Template: DotNetNuke 7 Project Templates V2.3 for VS2012: V2.3, release date 6/5/2013. Items addressed in this 2.3 release: fixed a bad namespace for BusinessController in one of the C# templates; updated documentation in all templates. See: Setting up your DotNetNuke Module Development Environment; Installing Christoc's DotNetNuke Module Development Templates; Customizing the latest DotNetNuke Module Development Project Templates.
- Pulse: Pulse 0.6.7.0: A number of small bug fixes to stabilize the previous beta. Sorry about the never-ending "New Version" bug!
- QlikView Extension - Animated Scatter Chart: Animated Scatter Chart - v1.0: Version 1.0, including source code, qar file, and an example QlikView application. Tested with browsers Firefox 20 (x64), Google Chrome 27 (x64) and Internet Explorer 9, and with QlikView Desktop 11 SR2 (x64), QlikView Desktop 11.2 SR1 (x64) and QlikView Ajax Client 11.2 SR2 (based on x64).
- BarbaTunnel: BarbaTunnel 7.2: Warning: the HTTP tunnel is not compatible with version 6.x and earlier; the HTTP packet format has changed. Check the version history for more information about this release.
- SuperWebSocket, a .NET WebSocket Server: SuperWebSocket 0.8: This release includes the following changes: upgraded SuperSocket to 1.5.3, which is much more stable; added a handshake request validating API (WebSocketServer.ValidateHandshake(TWebSocketSession session, string origin)); fixed a bug where m_Filters in SubCommandBase could be null if the command's method LoadSubCommandFilters(IEnumerable<SubCommandFilterAttribute> globalFilters) was not invoked; fixed the compatibility issue with Origin handling across different protocol versions; marked ISub...
- BlackJumboDog: Ver5.9.0: 2013.06.04 Ver5.9.0 (1) ?????????????????????????????????($Remote.ini Tmp.ini) (2) ThreadBaseTest?? (3) ????POP3??????SMTP???????????????? (4) Web???????、?????????URL??????????????? (5) Ftp???????、LIST?????????????? (6) ?????????????????????
- Media Companion: Media Companion MC3.569b: New: Movies - autoscrape/batch rescrape extra fanart and/or extra thumbs; Movies - the alternative editor can add actors manually; TV - batch rescraper autoscrapes extra fanart, if the option is enabled. Fixed: Movies - slow performance switching to the movie tab, by adding the option 'Disable "Not Matching Rename Pattern"' to Movie Preferences - General; Movies - fixed that only actors with images were scraped and added to the nfo; Movies - fixed a filter reset if the selected tab was above Home Movies. Updated Medi...
- Nearforums - ASP.NET MVC forum engine: Nearforums v9.0: Version 9.0 of Nearforums with great new features for users and developers: SQL Azure support; admin UI for forum categories; skip HTML validation for certain roles; improved profile picture moderation and support; warn, suspend, and ban users; web administration of site settings; extensions support. Visit the Roadmap for more details. Webdeploy package SHA1 checksum: 9.0.0.0: e687ee0438cd2b1df1d3e95ecb9d66e7c538293b

    New Projects

- ASP.NET MVC 4 and RequireJS: ASP.NET MVC 4 application with Areas and RequireJS.
- BaseX - Base converter and calculator: Dealing with numbers of any base in .NET.
- C# Exercises: C# Exercises.
- Classfinder: Classfinder.
- Creative OS ALPHA: This is an OS!
- CSS Exercises: CSS Exercises.
- Custom Workflow Action: Project showing how to create and use a custom workflow action for SharePoint Designer 2013.
- Devshed Tools: Provides an easy-to-use, compile-time-supported solution for various types of projects on the .NET Framework. Currently Devshed.Web is in development.
- Envar Editor: Edit environment variables easily on Windows.
- Excel To Sql: A simplified tool for importing Excel data into SQL.
- HTML Exercises: HTML Exercises.
- Knockout.js with ASP.NET MVC: This project implements a system that maps .NET ViewModels to JavaScript ViewModels for use with knockout.js, using Razor markup syntax.
- LogoBids: LOGO??????, ORM?? OpenAccess ORM.
- Managistics: Management logistics application (including warehouse, sales, purchasing, ...).
- Matrix Switch Preset Utility: A small utility for managing the inputs and outputs of a matrix switch via RS-232. Developed in WPF (VB9) and running on the .NET 3.5 SP1 framework.
- MvcSystemsCommander: An ASP.NET C# MVC4 web app to help systems administrators consolidate common systems administration tasks.
- NewspaperAgent: My small project.
- Outlook Recovery Software - Efficiently Repair Damaged PST File: This project describes the easiest way to recover an Outlook PST file. Complete information is given here to help users.
- Pattern: Testprocedure: A new procedural programming framework based on .NET; by using lambda expressions it can handle async I/O in a friendly way and provides a fully lock-free solution.
- SharePoint 2013 custom field samples: A research project that aims to provide samples for developing custom fields in SharePoint 2013.
- SharePoint 2013 List Forms: This small framework allows you to manage custom list forms using rendering templates and controls stored in a SharePoint library.
- The Coconut Cranium Decision Engine: A boolean decision engine built in the most mind-bendingly worst way of working.
- TxtToSeq: Command-line utility to convert Commodore SEQ files to TXT files and vice versa.
- ultgw: ult gw.
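To illustrate the kind of scenario behind the Json.NET flag-enum fix mentioned above, here is a minimal C# sketch of round-tripping a [Flags] enum decorated with EnumMember through Json.NET's StringEnumConverter. The enum, class and member names are made up for the example, and it assumes a Json.NET version that includes this fix; it is not code from the release itself.

using System;
using System.Runtime.Serialization;
using Newtonsoft.Json;
using Newtonsoft.Json.Converters;

[Flags]
public enum DeliveryOptions
{
    // Hypothetical enum for illustration; EnumMember supplies the JSON names.
    [EnumMember(Value = "none")]
    None = 0,
    [EnumMember(Value = "standard")]
    Standard = 1,
    [EnumMember(Value = "express")]
    Express = 2
}

public class Order
{
    public DeliveryOptions Delivery { get; set; }
}

public static class FlagEnumDemo
{
    public static void Main()
    {
        var order = new Order { Delivery = DeliveryOptions.Standard | DeliveryOptions.Express };

        // StringEnumConverter writes the EnumMember names instead of the numeric value,
        // producing a comma-separated list for combined flags.
        string json = JsonConvert.SerializeObject(order, new StringEnumConverter());
        Console.WriteLine(json); // {"Delivery":"standard, express"}

        // Deserialize back into the typed object; the combined flags are restored.
        var settings = new JsonSerializerSettings { Converters = { new StringEnumConverter() } };
        var roundTripped = JsonConvert.DeserializeObject<Order>(json, settings);
        Console.WriteLine(roundTripped.Delivery); // Standard, Express
    }
}

With those settings the Delivery property is written as "standard, express" rather than a number, and it deserializes back to the combined flag values.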

    Read the article
