Search Results

Search found 4067 results on 163 pages for 'dukes choice award'.


  • What's going on with INETA and the Regional Speakers Bureau?

    - by Chris Williams
    For those of you that have been waiting patiently (and not so patiently) I'm happy to say that we're very near completion on some changes/enhancements/improvements that will allow us to finally go live with the INETA Regional Speakers Bureau. I know quite a few of you have already registered, which is great (though some of you may need to come back and update your info) and we've had a few folks submit requests, mostly in a test capacity, but soon we'll be up and live. Here's how it breaks down. Be sure to read this, because things have changed a bit from when we initially announced it.

    1. The majority of our speaker/event funding is going into the Regional Speakers Bureau. The National Bureau still exists, but it's a good bit smaller than it was before, and it's not an "every group" benefit anymore. We'll be using the National Bureau as more of a strategic task force, targeting high impact events and areas that need some community building love from INETA. These will be identified and handled on a case by case basis, and may include more than just user group events.

    2. You're going to get more events per group, per year than you did before. Not only are we focusing more resources on this program, but we're also making a lot of efforts to use it more effectively. With the INETA Regional Speakers Bureau, you should be able to get 2-3 INETA speakers per year, on average. Not every geographical area will have exactly the same experience, but we're doing the best we can.

    3. It's not a farm team program for the National Bureau. Unsurprisingly, I managed to offend a number of people when I previously made the comment that the Regional Speakers Bureau program was a farm team or stepping stone to the National Bureau. It was a poor choice of words. Anyone can participate in the Regional Speakers Bureau, and I look forward to working with all of you.

    4. There is assistance for your efforts. The exact final details are still being hammered out, but expect it to look something like this (all distances listed are based on a round trip):
       - Distances < 120 miles = $0
       - 121 miles - 240 miles = $50 (effectively 1 to 2 hours, each way)
       - 241 miles - 360 miles = $100 (effectively 2 to 3 hours, each way)
       - 361 miles - 480 miles = $200 (effectively 3 to 4 hours, each way)
       For those of you who travel a lot, we're working on a solution to handle group visits when you're away from home. These will (for now) be handled on a case by case basis.

    5. We're going to make it as easy as possible to work with the program. In order to do this, we need a few things from you. For speakers, that means your home address. It also means (maybe) filling out a simple 1 line expense report via the INETA website. For user groups, it means making sure your meeting address is up to date as well.

    6. Distances will be automatically calculated from your home of record to the user group event and back. We realize that this is not a perfect solution to every instance, but we're not paying you to speak at an event, and you won't be taxed on this money. It's simply some assistance to make your community efforts easier. Our way of saying thanks for everything you do.

    7. Sounds good so far, what's the catch? There's always a catch, right? In this case there are two of them:
       1) At this time, Microsoft employees are welcome to use the website to line up speaking engagements with user groups, but are not eligible for financial assistance.
       2) Anyone can register and use the website to line up speaking engagements with user groups, however you must receive and maintain a net score of 3+ positive ratings (we're implementing a thumbs up / thumbs down system) in order to receive financial assistance. These ratings are provided by the User Group leaders after the meeting has taken place.

    8. Involvement by the User Group leaders is a key factor in the success of this program. Your job isn't done once you request a speaker. After you've had your meeting, it's critical that you go back to the website and take a very small survey. Doing this ensures that the speaker gets rated (and compensated if eligible) and also ensures that you can make another request, since you won't be able to make a new request if you have an old one outstanding.

    9. What about Canada? We're definitely working on that. Unfortunately nothing new to report on that front, other than to say that we're trying.

    So... this is where things stand currently. We're working very quickly to get this in place and get speakers and groups together. If you have any questions, please leave a comment below and I'll answer them as quickly as possible. If I've forgotten anything, or if things change, I'll update it here. Thanks, Chris G. Williams, INETA Board of Directors

    Read the article

  • Advanced donut caching: using dynamically loaded controls

    - by DigiMortal
    Yesterday I solved one caching problem with local community portal. I enabled output cache on SharePoint Server 2007 to make site faster. Although caching works fine I needed to do some additional work because there are some controls that show different content to different users. In this example I will show you how to use “donut caching” with user controls – powerful way to drive some content around cache. About donut caching Donut caching means that although you are caching your content you have some holes in it so you can still affect the output that goes to user. By example you can cache front page on your site and still show welcome message that contains correct user name. To get better idea about donut caching I suggest you to read ScottGu posting Tip/Trick: Implement "Donut Caching" with the ASP.NET 2.0 Output Cache Substitution Feature. Basically donut caching uses ASP.NET substitution control. In output this control is replaced by string you return from static method bound to substitution control. Again, take a look at ScottGu blog posting I referred above. Problem If you look at Scott’s example it is pretty plain and easy by its output. All it does is it writes out current user name as string. Here are examples of my login area for anonymous and authenticated users:    It is clear that outputting mark-up for these views as string is pretty lame to implement in code at string level. Every little change in design will end up with new version of controls library because some parts of design “live” there. Solution: using user controls I worked out easy solution to my problem. I used cache substitution and user controls together. I have three user controls: LogInControl – this is the proxy control that checks which “real” control to load. AnonymousLogInControl – template and logic for anonymous users login area. AuthenticatedLogInControl – template and logic for authenticated users login area. This is the control we render for each user separately because it contains user name and user profile fill percent. Anonymous control is not very interesting because it is only about keeping mark-up in separate file. Interesting parts are LogInControl and AuthenticatedLogInControl. Creating proxy control The first thing was to create control that has substitution area where “real” control is loaded. This proxy control should also be available to decide which control to load. The definition of control is very primitive. <%@ Control EnableViewState="false" Inherits="MyPortal.Profiles.LogInControl" %> <asp:Substitution runat="server" MethodName="ShowLogInBox" /> But code is a little bit tricky. Based on current user instance we decide which login control to load. Then we create page instance and load our control through it. When control is loaded we will call DataBind() method. In this method we evaluate all fields in loaded control (it was best choice as Load and other events will not be fired). Take a look at the code. 
        public static string ShowLogInBox(HttpContext context)
        {
            var user = SPContext.Current.Web.CurrentUser;
            string controlName;

            if (user != null)
                controlName = "AuthenticatedLogInControl.ascx";
            else
                controlName = "AnonymousLogInControl.ascx";

            var path = "~/_controltemplates/" + controlName;
            var output = new StringBuilder(10000);

            using(var page = new Page())
            using(var ctl = page.LoadControl(path))
            using(var writer = new StringWriter(output))
            using(var htmlWriter = new HtmlTextWriter(writer))
            {
                ctl.DataBind();
                ctl.RenderControl(htmlWriter);
            }
            return output.ToString();
        }

    When the control is bound to data we ask it to render its contents to the StringBuilder. Now we have the output of the control as a string and we can return it from our method. Of course, notice how correct I am with disposing resources. :) The method that returns the contents for the substitution control is a static method with no connection to a control instance, because when the page is read from cache there are no control instances available.

    Conclusion
    As you saw, it was not very hard to use donut caching with user controls. Instead of writing the mark-up of controls as strings inside the static method bound to the substitution control, we can still use our user controls.
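    For comparison, here is a minimal sketch of the same donut-caching idea in a plain ASP.NET Web Forms page outside SharePoint (the page name, callback name and greeting text are illustrative assumptions, not taken from the original post): everything on the page is served from the output cache except the fragment produced by the substitution callback.

        <%@ Page Language="C#" %>
        <%@ OutputCache Duration="300" VaryByParam="none" %>
        <script runat="server">
            // Must be static: when the page is served from the output cache no page or
            // control instances exist, only this callback is invoked per request.
            public static string GetGreeting(HttpContext context)
            {
                var name = (context.User != null && context.User.Identity.IsAuthenticated)
                    ? context.User.Identity.Name
                    : "guest";
                return "Hello, " + HttpUtility.HtmlEncode(name);
            }
        </script>
        <html>
        <body>
            <form runat="server">
                <!-- Cached part of the "donut" -->
                <h1>Welcome</h1>
                <!-- The hole: re-evaluated on every request, even on cache hits -->
                <asp:Substitution runat="server" MethodName="GetGreeting" />
            </form>
        </body>
        </html>

    The user-control approach above is the same pattern; the only difference is that the callback renders a dynamically loaded .ascx instead of returning a hand-built string.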

    Read the article

  • Personal search – the future of search

    - by jamiet
    [Four months ago I wrote a meandering blog post on another blogging site entitled Personal search – the future of search. The points I made therein are becoming more relevant to what I'm reading about and hoping to get involved in in the future so I'm re-posting here to a wider audience to hopefully get some more feedback and guage reaction to it. This has been prompted by the book Pull by David Siegel that is forming my current holiday reading (recommended to me by a commenter on my previous post Interesting things – Twitter annotations and your phone as a web server) and in particular by Siegel's notion of us all in the future having a personal online data vault.] My one-time colleague Paul Dawson recently wrote an article called The Future of Search and in it he proposed some interesting ideas. Some choice quotes: The growth of Chinese search giant Baidu is an indicator that fully localised and tailored content and offerings have great traction with local audiences This trend is already driving an increase in the use of specialist searches … Look at how Farecast is now integrated into Bing for example, or how Flightstats is now integrated into Google. Search does not necessarily have to begin with a keyword, but could start instead with a click or a touch. Take a look at Retrievr. Start drawing a picture in the box and see what happens. This is certainly search without the need for typing in keywords search technology has advanced greatly in recent years. The recent launch of Microsoft Live Labs’ Pivot has given us a taste of what we can expect to see in the future This really got me thinking about where search might go in the future and as my mind wandered I realised that as the amount of data that we collect about ourselves increases so too will the need and the desire to search it. The amount of electronic data that exists about each and every person is increasing and in the near future I fully expect that we are going to be able to store personal data such as: A history of our location (in fact Google Latitude already offers this facility) Recordings of all our phone conversations Health information history (weight, blood pressure etc…) Energy usage Spending history What films we watch, what radio stations we listen to Voting history Of course, most of this stuff is already stored somewhere but crucially we don’t have easy access to it. My utilities supplier knows how much electricity I’m using but if I want to know for myself I have to go and dig through my statements (assuming I have kept them). Similarly my doctor probably has ready access to all of my health records, my bank knows exactly what I have spent my money on, my cable supplier knows what I watch on TV and my mobile phone supplier probably knows exactly where I am and where I’ve been for the past few years. Strange then that none of this electronic information is available to me in a way that I can really make use of it; after all, its MY information. Its MY data. I created it. That is set to change. As technologies mature and customers become more technically cognizant they will demand more access to the data that companies hold about them. The companies themselves will realise the benefit that they derive from giving users what they want and will embrace ways of providing it. 
    As a result the amount of data that we store about ourselves is going to increase exponentially and the desire to search and derive value from that data is going to grow with it; we are about to enter the era of the “personal datastore” and we will want, and need, to search through it in order to make sense of it all. It’s interesting then that today when we think of search we think of search engines, and yet in these personal datastores we’re referring to data that search engines can’t touch because WE own it and we (hopefully) choose to keep it private. Someone, I know not who, is going to lead in this space by making it easy for us to search our data and retrieve information that we have either forgotten or maybe didn’t even know in the first place. We will learn new things about ourselves and about our habits; we will share these findings with whomever we choose; we will compare what we discover with others; we will collaborate for mutual benefit and, most of all, we will educate ourselves as to how to live our lives better. Search will be the means to that end; it will enable us to make sense of the wealth of information that we will collect day in, day out. The future of search is personal, why would we be interested in anything else? @Jamiet

    Read the article

  • Towards an F# .NET Reflector add-in

    - by CliveT
    When I had the opportunity to spent some time during Red Gate's recent "down tools" week on a project of my choice, the obvious project was an F# add-in for Reflector . To be honest, this was a bit of a misnomer as the amount of time in the designated week for coding was really less than three days, so it was always unlikely that very much progress would be made in such a small amount of time (and that certainly proved to be the case), but I did learn some things from the experiment. Like lots of problems, one useful technique is to take examples, get them to work, and then generalise to get something that works across the board. Unfortunately, I didn't have enough time to do the last stage. The obvious first step is to take a few function definitions, starting with the obvious hello world, moving on to a non-recursive function and finishing with the ubiquitous recursive Fibonacci function. let rec printMessage message  =     printfn  message let foo x  =    (x + 1) let rec fib x  =     if (x >= 2) then (fib (x - 1) + fib (x - 2)) else 1 The major problem in decompiling these simple functions is that Reflector has an in-memory object model that is designed to support object-oriented languages. In particular it has a return statement that allows function bodies to finish early. I used some of the in-built functionality to take the IL and produce an in-memory object model for the language, but then needed to write a transformer to push the return statements to the top of the tree to make it easy to render the code into a functional language. This tree transform works in some scenarios, but not in others where we simply regenerate code that looks more like CPS style. The next thing to get working was library level bindings of values where these values are calculated at runtime. let x = [1 ; 2 ; 3 ; 4] let y = List.map  (fun x -> foo x) x The way that this is translated into a set of classes for the underlying platform means that the code needs to follow references around, from the property exposing the calculated value to the class in which the code for generating the value is embedded. One of the strongest selling points of functional languages is the algebraic datatypes, which allow definitions via standard mathematical-style inductive definitions across the union cases. type Foo =     | Something of int     | Nothing type 'a Foo2 =     | Something2 of 'a     | Nothing2 Such a definition is compiled into a number of classes for the cases of the union, which all inherit from a class representing the type itself. It wasn't too hard to get such a de-compilation happening in the cases I tried. What did I learn from this? Firstly, that there are various bits of functionality inside Reflector that it would be useful for us to allow add-in writers to access. In particular, there are various implementations of the Visitor pattern which implement algorithms such as calculating the number of references for particular variables, and which perform various substitutions which could be more generally useful to add-in writers. I hope to do something about this at some point in the future. Secondly, when you transform a functional language into something that runs on top of an object-based platform, you lose some fidelity in the representation. The F# compiler leaves attributes in place so that tools can tell which classes represent classes from the source program and which are there for purposes of the implementation, allowing the decompiler to regenerate these constructs again. 
    However, decompilation technology is a long way from being able to take unannotated IL and transform it into a program in a different language. For a simple function definition, like Fibonacci, I could write a simple static function and have it come out in F# as the same function, but it would be practically impossible to take a mass of class definitions and have a decompiler translate it automatically into an F# algebraic data type. What have we got out of this? Some data on the feasibility of implementing an F# decompiler inside Reflector, though it's hard at the moment to say how long this would take to do. The work we did is included in the 6.5 EAP for Reflector that you can get from the EAP forum. All things considered though, it was a useful way to gain more familiarity with the process of writing an add-in and understand difficulties other add-in authors might experience. If you'd like to check out a video of Down Tools Week, click here.
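    To make the algebraic data type point above more concrete, here is a rough C# sketch of the kind of class hierarchy the F# compiler emits for a union such as type Foo = Something of int | Nothing. The member names are simplified assumptions for illustration; the real compiler output also carries tags, attributes and structural-equality members.

        // Simplified illustration of the compiled shape of an F# discriminated union.
        public abstract class Foo
        {
            // One nested subclass per union case, each inheriting from the type itself.
            public sealed class Something : Foo
            {
                public int Item { get; private set; }
                public Something(int item) { Item = item; }
            }

            public sealed class Nothing : Foo
            {
                public static readonly Nothing Instance = new Nothing();
                private Nothing() { }
            }
        }

    Recovering the original one-line union definition from a shape like this, minus the F#-specific attributes the compiler leaves behind, is exactly the reconstruction problem the add-in faces.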

    Read the article

  • Re: Help with Boost Grammar

    - by Decmac04
    I have redesigned and extended the grammar I asked about earlier as shown below: // BIFAnalyser.cpp : Defines the entry point for the console application. // // /*============================================================================= Copyright (c) Temitope Jos Onunkun 2010 http://www.dcs.kcl.ac.uk/pg/onun/ Use, modification and distribution is subject to the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) =============================================================================*/ //////////////////////////////////////////////////////////////////////////// // // // B Machine parser using the Boost "Grammar" and "Semantic Actions". // // // //////////////////////////////////////////////////////////////////////////// include include include include include include //////////////////////////////////////////////////////////////////////////// using namespace std; using namespace boost::spirit; //////////////////////////////////////////////////////////////////////////// // // Semantic Actions // //////////////////////////////////////////////////////////////////////////// // // namespace { //semantic action function on individual lexeme void do_noint(char const* start, char const* end) { string str(start, end); if (str != "NAT1") cout << "PUSH(" << str << ')' << endl; } //semantic action function on addition of lexemes void do_add(char const*, char const*) { cout << "ADD" << endl; // for(vector::iterator vi = strVect.begin(); vi < strVect.end(); ++vi) // cout << *vi << " "; } //semantic action function on subtraction of lexemes void do_subt(char const*, char const*) { cout << "SUBTRACT" << endl; } //semantic action function on multiplication of lexemes void do_mult(char const*, char const*) { cout << "\nMULTIPLY" << endl; } //semantic action function on division of lexemes void do_div(char const*, char const*) { cout << "\nDIVIDE" << endl; } // // vector flowTable; //semantic action function on simple substitution void do_sSubst(char const* start, char const* end) { string str(start, end); //use boost tokenizer to break down tokens typedef boost::tokenizer Tokenizer; boost::char_separator sep(" -+/*:=()",0,boost::drop_empty_tokens); // char separator definition Tokenizer tok(str, sep); Tokenizer::iterator tok_iter = tok.begin(); pair dependency; //create a pair object for dependencies //create a vector object to store all tokens vector dx; // int counter = 0; // tracks token position for(tok.begin(); tok_iter != tok.end(); ++tok_iter) //save all tokens in vector { dx.push_back(*tok_iter ); } counter = dx.size(); // vector d_hat; //stores set of dependency pairs string dep; //pairs variables as string object // dependency.first = *tok.begin(); vector FV; for(int unsigned i=1; i < dx.size(); i++) { // if(!atoi(dx.at(i).c_str()) && (dx.at(i) !=" ")) { dependency.second = dx.at(i); dep = dependency.first + "|-" + dependency.second + " "; d_hat.push_back(dep); vector<string> row; row.push_back(dependency.first); //push x_hat into first column of each row for(unsigned int j=0; j<2; j++) { row.push_back(dependency.second);//push an element (column) into the row } flowTable.push_back(row); //Add the row to the main vector } } //displays internal representation of information flow table cout << "\n****************\nDependency Table\n****************\n"; cout << "X_Hat\tDx\tG_Hat\n"; cout << "-----------------------------\n"; for(unsigned int i=0; i < flowTable.size(); i++) { for(unsigned int j=0; j<2; j++) { cout << 
flowTable[i][j] << "\t "; } if (*tok.begin() != "WHILE" ) //if there are no global flows, cout << "\t{}"; //display empty set cout << "\n"; } cout << "***************\n\n"; for(int unsigned j=0; j < FV.size(); j++) { if(FV.at(j) != dependency.second) dep = dependency.first + "|-" + dependency.second + " "; d_hat.push_back(dep); } cout << "PUSH(" << str << ')' << endl; cout << "\n*******\nDependency pairs\n*******\n"; for(int unsigned i=0; i < d_hat.size(); i++) cout << d_hat.at(i) << "\n...\n"; cout << "\nSIMPLE SUBSTITUTION\n\n"; } //semantic action function on multiple substitution void do_mSubst(char const* start, char const* end) { string str(start, end); cout << "PUSH(" << str << ')' << endl; //cout << "\nMULTIPLE SUBSTITUTION\n\n"; } //semantic action function on unbounded choice substitution void do_mChoice(char const* start, char const* end) { string str(start, end); cout << "PUSH(" << str << ')' << endl; cout << "\nUNBOUNDED CHOICE SUBSTITUTION\n\n"; } void do_logicExpr(char const* start, char const* end) { string str(start, end); //use boost tokenizer to break down tokens typedef boost::tokenizer Tokenizer; boost::char_separator sep(" -+/*=:()<",0,boost::drop_empty_tokens); // char separator definition Tokenizer tok(str, sep); Tokenizer::iterator tok_iter = tok.begin(); //pair dependency; //create a pair object for dependencies //create a vector object to store all tokens vector dx; for(tok.begin(); tok_iter != tok.end(); ++tok_iter) //save all tokens in vector { dx.push_back(*tok_iter ); } for(unsigned int i=0; i cout << "PUSH(" << str << ')' << endl; cout << "\nPREDICATE\n\n"; } void do_predicate(char const* start, char const* end) { string str(start, end); cout << "PUSH(" << str << ')' << endl; cout << "\nMULTIPLE PREDICATE\n\n"; } void do_ifSelectPre(char const* start, char const* end) { string str(start, end); //if cout << "PUSH(" << str << ')' << endl; cout << "\nPROTECTED SUBSTITUTION\n\n"; } //semantic action function on machine substitution void do_machSubst(char const* start, char const* end) { string str(start, end); cout << "PUSH(" << str << ')' << endl; cout << "\nMACHINE SUBSTITUTION\n\n"; } } //////////////////////////////////////////////////////////////////////////// // // Machine Substitution Grammar // //////////////////////////////////////////////////////////////////////////// // Simple substitution grammar parser with integer values removed struct Substitution : public grammar { template struct definition { definition(Substitution const& ) { machine_subst = ( (simple_subst) | (multi_subst) | (if_select_pre_subst) | (unbounded_choice) )[&do_machSubst] ; unbounded_choice = str_p("ANY") ide_list str_p("WHERE") predicate str_p("THEN") machine_subst str_p("END") ; if_select_pre_subst = ( ( str_p("IF") predicate str_p("THEN") machine_subst *( str_p("ELSIF") predicate machine_subst ) !( str_p("ELSE") machine_subst) str_p("END") ) | ( str_p("SELECT") predicate str_p("THEN") machine_subst *( str_p("WHEN") predicate machine_subst ) !( str_p("ELSE") machine_subst) str_p("END")) | ( str_p("PRE") predicate str_p("THEN") machine_subst str_p("END") ) )[&do_ifSelectPre] ; multi_subst = ( (machine_subst) *( ( str_p("||") (machine_subst) ) | ( str_p("[]") (machine_subst) ) ) ) [&do_mSubst] ; simple_subst = (identifier str_p(":=") arith_expr) [&do_sSubst] ; expression = predicate | arith_expr ; predicate = ( (logic_expr) *( ( ch_p('&') (logic_expr) ) | ( str_p("OR") (logic_expr) ) ) )[&do_predicate] ; logic_expr = ( identifier (str_p("<") arith_expr) | (str_p("<") arith_expr) | 
(str_p("/:") arith_expr) | (str_p("<:") arith_expr) | (str_p("/<:") arith_expr) | (str_p("<<:") arith_expr) | (str_p("/<<:") arith_expr) | (str_p("<=") arith_expr) | (str_p("=") arith_expr) | (str_p("=") arith_expr) | (str_p("=") arith_expr) ) [&do_logicExpr] ; arith_expr = term *( ('+' term)[&do_add] | ('-' term)[&do_subt] ) ; term = factor ( ('' factor)[&do_mult] | ('/' factor)[&do_div] ) ; factor = lexeme_d[( identifier | +digit_p)[&do_noint]] | '(' expression ')' | ('+' factor) ; ide_list = identifier *( ch_p(',') identifier ) ; identifier = alpha_p +( alnum_p | ch_p('_') ) ; } rule machine_subst, unbounded_choice, if_select_pre_subst, multi_subst, simple_subst, expression, predicate, logic_expr, arith_expr, term, factor, ide_list, identifier; rule<ScannerT> const& start() const { return predicate; //return multi_subst; //return machine_subst; } }; }; //////////////////////////////////////////////////////////////////////////// // // Main program // //////////////////////////////////////////////////////////////////////////// int main() { cout << "*********************************\n\n"; cout << "\t\t...Machine Parser...\n\n"; cout << "*********************************\n\n"; // cout << "Type an expression...or [q or Q] to quit\n\n"; string str; int machineCount = 0; char strFilename[256]; //file name store as a string object do { cout << "Please enter a filename...or [q or Q] to quit:\n\n "; //prompt for file name to be input //char strFilename[256]; //file name store as a string object cin strFilename; if(*strFilename == 'q' || *strFilename == 'Q') //termination condition return 0; ifstream inFile(strFilename); // opens file object for reading //output file for truncated machine (operations only) if (inFile.fail()) cerr << "\nUnable to open file for reading.\n" << endl; inFile.unsetf(std::ios::skipws); Substitution elementary_subst; // Simple substitution parser object string next; while (inFile str) { getline(inFile, next); str += next; if (str.empty() || str[0] == 'q' || str[0] == 'Q') break; parse_info< info = parse(str.c_str(), elementary_subst !end_p, space_p); if (info.full) { cout << "\n-------------------------\n"; cout << "Parsing succeeded\n"; cout << "\n-------------------------\n"; } else { cout << "\n-------------------------\n"; cout << "Parsing failed\n"; cout << "stopped at: " << info.stop << "\"\n"; cout << "\n-------------------------\n"; } } } while ( (*strFilename != 'q' || *strFilename !='Q')); return 0; } However, I am experiencing the following unexpected behaviours on testing: The text files I used are: f1.txt, ... containing ...: debt:=(LoanRequest+outstandingLoan1)*20 . f2.txt, ... containing ...: debt:=(LoanRequest+outstandingLoan1)*20 || newDebt := loanammount-paidammount || price := purchasePrice + overhead + bb . f3.txt, ... containing ...: yy < (xx+7+ww) . f4.txt, ... containing ...: yy < (xx+7+ww) & yy : NAT . When I use multi_subst as start rule both files (f1 and f2) are parsed correctly; When I use machine_subst as start rule file f1 parse correctly, while file f2 fails, producing the error: “Parsing failed stopped at: || newDebt := loanammount-paidammount || price := purchasePrice + overhead + bb” When I use predicate as start symbol, file f3 parse correctly, but file f4 yields the error: “ “Parsing failed stopped at: & yy : NAT” Can anyone help with the grammar, please? It appears there are problems with the grammar that I have so far been unable to spot.

    Read the article

  • Extending QuickBooks Reporting with the QuickBooks ADO.NET Data Provider

    - by dataintegration
    The ADO.NET Provider for QuickBooks comes with several reports you may request from QuickBooks by default. However, there are many more that are not readily available. The ADO.NET Provider for QuickBooks makes it easy for you to create new reports and customize existing ones. In this article, we will illustrate how to create your own report and retrieve it from the Server Explorer in Visual Studio. For this example we will show how to create an Item Profitability Report. Creating the report script file Step 1: Download the sample reports available here. Extract them to a folder of your choice. Step 2: Make a copy of the ReportGeneralSummary.rsd file and rename it to ItemProfitability.rsd. Then open the file in any text editor. Step 3: Open the installation directory of the ADO.NET Provider for QuickBooks. Under the \db\ folder, locate the ReportJob.rsb file. Open this file in another text editor. Note: Although we are using ReportJob.rsb for this example, other reports may be contained in other Report*.rsb files. We recommend consulting the included help file and first locating the Report stored procedure and ReportType you are looking for. Otherwise, you may open each Report*.rsb file and look under the "reporttype" input for the report you are attempting to create. Step 4: First, let's rename the title of ItemProfitability.rsd. Near the top of the file you will see a title and description. Change the title to match the name of the file. Change the description to anything you like. For example: <rsb:info title="ItemProfitability" description="Executes my custom report."> Just below the Title, there are a number of columns. The Id represents the row number. The RowType represents the type of data returned by QuickBooks. The ColumnValue* columns represent all of the column data returned by QuickBooks. In some instances, we may need to add additional ColumnValue columns. Step 5: To add additional ColumnValue columns, simply copy the last column, paste it directly below, and continue increasing the numerical value at end of the attribute name. For example: <attr name="ColumnValue9" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/> <attr name="ColumnValue10" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/> <attr name="ColumnValue11" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/> <attr name="ColumnValue12" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/> ... Caution: Do not rename the ColumnValue* definitions themselves. They are generalized so that we can understand each type of report returned by QuickBooks. Renaming them to something other than ColumnValue* will cause your columns to return with null values. Step 6: Now let's update the available inputs for the table. From the ReportJob.rsb file, copy all of the input elements into ItemProfitability under the "Psuedo-Column definitions" comment. You will be replacing the existing input elements in ItemProfitability with inputs from ReportJob. When you are done, it should look like this: <!-- Psuedo-Column definitions --> <input name="reporttype" description="The type of the report." 
value="ITEMESTIMATESVSACTUALS,ITEMPROFITABILITY,JOBESTIMATESVSACTUALSDETAIL,JOBESTIMATESVSACTUALSSUMMARY,JOBPROFITABILITYDETAIL,JOBPROFITABILITYSUMMARY," default="ITEMESTIMATESVSACTUALS" /> <input name="reportperiod" description="Report date range in the format (fromdate:todate), and either value may be omitted for an open ended range (e.g. 2009-12-25:). Supported date format: yyyy-MM-dd." /> <input name="reportdaterangemacro" description="Use a predefined date range." value="ALL,TODAY,THISWEEK,THISWEEKTODATE,THISMONTH,THISMONTHTODATE,THISQUARTER,THISQUARTERTODATE,THISYEAR,THISYEARTODATE,YESTERDAY,LASTWEEK,LASTWEEKTODATE,LASTMONTH,LASTMONTHTODATE,LASTQUARTER,LASTQUARTERTODATE,LASTYEAR,LASTYEARTODATE,NEXTWEEK,NEXTFOURWEEKS,NEXTMONTH,NEXTQUARTER,NEXTYEAR," default="ALL" /> ... Step 7: Now let's update the operationname attribute. This needs to match the same operationname used by ReportJob. After you have copied the correct value from ReportJob.rsb, the operationname in ItemProfitability should look like so: <rsb:set attr="operationname" value="qbReportJob"/> Step 8: There is one more thing we can do to make this a true Item Profitability report. We can remove the reporttype input and hardcode the value. To do this, copy and paste the rsb:set used for operationname. Then rename the attr and value to match the name and value you want to use. For example: <rsb:set attr="operationname" value="qbReportJob"/> <rsb:set attr="reporttype" value="ITEMPROFITABILITY"/> After this you can remove the input for reporttype. Now that you have your own report file, we can move on to displaying the report in the Visual Studio server explorer. Accessing the report through the Data Provider Step 1: Open Visual Studio. In the Server Explorer, configure a new connection with the QuickBooks Data Provider. Step 2: For the Location connection string property, enter the directory where the new report has been saved to. Step 3: The new report should appear as a new view in the Server Explorer. Let's retrieve data from it. Step 4: You can specify any inputs in the WHERE clause. New Report Example Script To help you get started using this new QuickBooks Data Provider report, you will need to download the QuickBooks ADO.NET Data Provider and the fully functional sample script.

    Read the article

  • Demystified - BI in SharePoint 2010

    - by Sahil Malik
    Ad:: SharePoint 2007 Training in .NET 3.5 technologies (more information). Frequently, my clients ask me if there is a good guide on deciphering the seemingly daunting choice of products from Microsoft when it comes to business intelligence offerings in a SharePoint 2010 world. These are all described in detail in my book, but here is a one (well maybe two) page executive overview. Microsoft Excel: Yes, Microsoft Excel! Your favorite and most commonly used in the world database. No it isn’t a database in technical pure definitions, but this is the most commonly used ‘database’ in the world. You will find many business users craft up very compelling excel sheets with tonnes of logic inside them. Good for: Quick Ad-Hoc reports. Excel 64 bit allows the possibility of very large datasheets (Also see 32 bit vs 64 bit Office, and PowerPivot Add-In below). Audience: End business user can build such solutions. Related technologies: PowerPivot, Excel Services Microsoft Excel with PowerPivot Add-In: The powerpivot add-in is an extension to Excel that adds support for large-scale data. Think of this as Excel with the ability to deal with very large amounts of data. It has an in-memory data store as an option for Analysis services. Good for: Ad-hoc reporting and logic with very large amounts of data. Audience: End business user can build such solutions. Related technologies: Excel, and Excel Services Excel Services: Excel Services is a Microsoft SharePoint Server 2010 shared service that brings the power of Excel to SharePoint Server by providing server-side calculation and browser-based rendering of Excel workbooks. Thus, excel sheets can be created by end users, and published to SharePoint server – which are then rendered right through the browser in read-only or parameterized-read-only modes. They can also be accessed by other software via SOAP or REST based APIs. Good for: Sharing excel sheets with a larger number of people, while maintaining control/version control etc. Sharing logic embedded in excel sheets with other software across the organization via REST/SOAP interfaces Audience: End business users can build such solutions once your tech staff has setup excel services on a SharePoint server instance. Programmers can write software consuming functionality/complex formulae contained in your sheets. Related technologies: PerformancePoint Services, Excel, and PowerPivot. Visio Services: Visio Services is a shared service on the Microsoft SharePoint Server 2010 platform that allows users to share and view Visio diagrams that may or may not have data connected to them. Connected data can update these diagrams allowing a visual/graphical view into the data. The diagrams are viewable through the browser. They are rendered in silverlight, but will automatically down-convert to .png formats. Good for: Showing data as diagrams, live updating. Comes with a developer story. Audience: End business users can build such solutions once your tech staff has setup visio services on a SharePoint server instance. Developers can enhance the visualizations Related Technologies: Visio Services can be used to render workflow visualizations in SP2010 Reporting Services: SQL Server reporting services can integrate with SharePoint, allowing you to store reports and data sources in SharePoint document libraries, and render these reports and associated functionality such as subscriptions through a SharePoint site. 
    In SharePoint 2010, you can also write reports against SharePoint lists (access services uses this technique). Good for: Showing complex reports running in an industry-standard data store, such as SQL Server. Audience: This is definitely developer land. Don’t expect end users to craft up reports, unless a report model has previously been published. Related Technologies: PerformancePoint Services PerformancePoint Services: PerformancePoint Services in SharePoint 2010 is now fully integrated with SharePoint, and comes with features that can either be used in the BI center site definition, or on their own as activated features in existing site collections. PerformancePoint Services allows you to build reports and dashboards that target a variety of back-end datasources including: SQL Server reporting services, SQL Server analysis services, SharePoint lists, excel services, simple tables, etc. Using these you have the ability to create dashboards, scorecards/KPIs, and simple reports. You can also create reports targeting hierarchical multidimensional data sources. The visual decomposition tree is a new report type that lets you quickly break down multi-dimensional data. Good for: Mostly everything :), except your wallet – it’s not free! But this is the most comprehensive offering. If you have SharePoint server, forget everything and go with PerformancePoint. Audience: Developers need to set up the back-end sources and manageability story. DBAs need to set up data warehouses with cubes. Moderately sophisticated business users, or developers, can craft up reports using dashboard designer, which is a click-once app that deploys with PerformancePoint. Related Technologies: Excel services, reporting services, etc. Other relevant technologies to know about: Business Connectivity Services: Allows for consumption of external data in SharePoint as columns or external lists. This can be paired with one or more of the above BI offerings allowing insight into such data. Access Services: Allows the representation/publishing of an access database as a SharePoint 2010 site, leveraging many SharePoint features. Reporting services is used by Access services. Secure Store Service: The SP2010 Secure Store Service is a replacement for the SP2007 single sign on feature. This acts as a credential policeman providing credentials to various applications running with SharePoint. BCS, PerformancePoint Services, Excel Services, and many other apps use the SSS (Secure Store Service) for credential control.
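    Going back to Excel Services for a moment, here is a small hedged example of the REST access mentioned above; the server, library, workbook and range names are placeholders, and the exact URL shape should be verified against your own SharePoint 2010 farm.

        // Fetches a published workbook range rendered as HTML via the Excel Services REST endpoint.
        // All names in the URL are assumptions for illustration.
        using System;
        using System.Net;

        class ExcelRestDemo
        {
            static void Main()
            {
                var rangeUrl = "http://intranet.example.com/_vti_bin/ExcelRest.aspx/" +
                               "Shared%20Documents/Sales.xlsx/Model/Ranges('Sheet1!A1%7CC10')?$format=html";

                using (var client = new WebClient { UseDefaultCredentials = true })
                {
                    string html = client.DownloadString(rangeUrl); // range is calculated server-side, returned as HTML
                    Console.WriteLine(html);
                }
            }
        }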

    Read the article

  • Answers to “What source control system do you use?” (and some winners)

    - by jamiet
    About a month ago I posed a question here on my blog SQL Server devs–what source control system do you use, if any? (answer and maybe win free stuff) in which I asked SQL Server developers to answer the following questions: Are you putting your SQL Server code into a source control system? If so, what source control server software (e.g. TFS, Git, SVN, Mercurial, SourceSafe, Perforce) are you using? What source control client software are you using (e.g. TFS Team Explorer, Tortoise, Red Gate SQL Source Control, Red Gate SQL Connect, Git Bash, etc…)? Why did you make those particular software choices? Any interesting anecdotes to share in regard to your use of source control and SQL Server? I had some really great responses (I highly recommend going and reading them). I promised that the five best, most thought-provoking, responses (as determined by me) would win one of five pairs of licenses for Red Gate SQL Source Control and Red Gate SQL Connect; here are the five that I chose (note that if you responded but did not leave a means of getting in touch then you weren’t considered for one of the prizes – sorry): In general, I don't think the management overhead and licensing cost associated with TFS is worthwhile if all you're doing is using source control. To get value from TFS, at a minimum you need to be using team build, and possibly other stuff as well, such as the sharepoint integration. If that's all you need, then svn with Tortoise would be my first choice. If you want to add build automation later, you can do this with cruisecontrol (is it still called that?), JetBrains, etc. For a long time I thought that Redgate's claims about "bridging the SSMS-VS divide" were a load of hot air, since in my experience anyone who knew what they were doing was using Visual Studio, in particular SSDT and its predecessors. However, on a recent client I was putting in source control for the first time, and I discovered that the "divide" really does exist. That client has ended up using svn with Redgate SQL Source Control, with no build automation, but with scope to add it in the future. Gavin Campbell I think putting the DB under source control is a great idea.  I have issues with the earlier versions of SQL Source Control in that it provides little help in versioning the DB. I think the latest version merges SQL Compare and SQL Source Control together.  Which is how it should have been all along. Sure I have the DB scripts in SVN, but I can't automate DB builds and changes without more tools.  Frankly I'm surprised databases don't have some sort of versioning built into them. Nick Portelli Source control has been immensely useful and saved me from a lot of rework on more than one occasion.  I have learned that you have to be extremely careful checking in data.  Our system is internal only so during the system production run once a week, if there is a problem that I can fix easily(for example, a control table points to a file in the wrong environment), I'll do it directly in production so the run can continue as soon as possible since we have a specified time window.  We do full test runs to minimize this but it has come up once or twice.  We use Red-Gate source control to "push" from the test environment to the production environment.  There have been a couple of occasions where the test environment with the wrong setting was pushed back over the production environment because the change was made only in production.  Gotta keep an eye on that. Alan Dykes Goodness is it manual.  
And can be extremely painful at times.  Not only are we running thin, we are constrained on the tools we can get ($$ must mean free).  Certainly no excuse, and a great opportunity to improve my skills by learning new things.  But...  Getting buy in a on a proven process or methodology is hard, takes time, and diverts us from development.  If SQL Source Control is easy to use and proven oh boy could you get some serious fans around here!  Seriously though, as the "accidental dba" of this shop any new ideas / easy to implement tools can make a world of difference in productivity and most importantly accuracy.  Manual = bad. :) John Hennesey (who left his email address) The one thing I would love to know more about is the unique challenges of working with databases as source code - you can store scripts, but are they written as deployment scripts with all the logic about how to apply them to an existing DB? Where is that baseline DB? Where's the data? How does a team share the data and the code? It's a real challenge. Merrill Aldrich Congratulations to the five of you. Red Gate will be in touch with you soon about your free licenses. Thank you to all those that responded. And again, go and check out all the responses – those above are only small proportion from what is a very interesting comment thread. @Jamiet

    Read the article

  • Record Screen Activity with CamStudio

    - by Asian Angel
    Sometimes a visual demonstration works much better than a list of instructions. If you need to make a demo video for family and/or friends then you might want to have a look at CamStudio. Using CamStudio To get properly set up you will need to install two different files (the main program followed by the codec). Once that is done you are ready to get started. When you start the program you will see a surprisingly small window. Notice the highlighted Record to text…it serves as a visual indicator for the video type selected for recording. Before you start creating a video it would be a good idea to look through some of the settings. The first one to look at is the region or area that you want to record. Next you will want to look through the video options since these will affect the quality and final size of your video files. The default setting for quality is 70…adjust that to the level that best suits your needs. Note: For our example we maxed out the various video settings for best quality. On our system Microsoft Video 1 was listed as the default compressor but as you can see there were other options available. You can configure the settings for the compressor you want to use if desired. Keep in mind that each compressor will have unique settings of their own, so if you change it, be certain to go back and check. We decided to use the CamStudio Lossless Codec for our example (it gave the best results while trying the software). Going back to the main window you can toggle back and forth between .avi and .swf output using the last button. Once you are satisfied with the settings click on the red record button to start. If you need to pause while recording or stop recording click on the system tray icon and select the appropriate command. When you are finished recording, you will be presented with the save file window. Browse for the desired save location and name your new file. Once you have saved the file the movie player window will automatically open so that you view your new video. Our sample video shown here is at 50% of original size so may look slightly “gritty”. The detail was much better at 100%. If you decide to record and save as .swf the process will be identical to recording in .avi format until the movie player window opens. At that time the conversion process from .avi to .swf will begin. When complete you will have a new flash video and html file that goes with it. Depending on which browser you have set as default, you may run into a small problem when the preview for your new .swf file tries to open. There is a small bug in the generated html file. You can use this work-around or… Just open the .swf file directly in your favorite browser. Conclusion CamStudio may not produce the highest quality videos, but it’s free and does a very nice job nonetheless. If you are working on a tight budget or only need to make an occasional video then CamStudio is a very sensible choice. Links Download CamStudio Stable Version & CamStudio Codec *Download links are approximately half-way down the page. Download CamStudio Stable Version & CamStudio Codec at SourceForge *Beta version also available here. 

    Read the article

  • Online Password Security Tactics

    - by BuckWoody
    Recently two more large databases were attacked and compromised, one at the popular Gawker Media sites and the other at McDonald’s. Every time this kind of thing happens (which is FAR too often) it should remind the technical professional to ensure that they secure their systems correctly. If you write software that stores passwords, it should be heavily encrypted, and not human-readable in any storage. I advocate a different store for the login and password, so that if one is compromised, the other is not. I also advocate that you set a bit flag when a user changes their password, and send out a reminder to change passwords if that bit isn’t changed every three or six months.

    But this post is about the *other* side – what to do to secure your own passwords, especially those you use online, either in a cloud service or at a provider. While you’re not in control of these breaches, there are some things you can do to help protect yourself. Most of these are obvious, but they contain a few little twists that make the process easier.

    Use Complex Passwords
    This is easily stated, and probably one of the most un-heeded pieces of advice. There are three main concepts here:
    - Don’t use a dictionary-based word
    - Use mixed case
    - Use punctuation, special characters and so on

    So this: password
    Isn’t nearly as safe as this: P@ssw03d

    Of course, this only helps if the site that stores your password encrypts it. Gawker does, so theoretically if you had the second password you’re in better shape, at least, than the first. Dictionary words are quickly broken, regardless of the encryption, so the more unusual characters you use, and the farther away from the dictionary words you get, the better. Of course, this doesn’t help, not even a little, if the site stores the passwords in clear text, or the key to their encryption is broken. In that case…

    Use a Different Password at Every Site
    What? I have hundreds of sites! Are you kidding me? Nope – I’m not. If you use the same password at every site, when a site gets attacked, the attacker will store your name and password value for attacks at other sites. So the only safe thing to do is to use different names or passwords (or both) at each site. Of course, most sites use your e-mail as a username, so you’re kind of hosed there. So even though you have hundreds of sites you visit, you need to have at least a different password at each site. But it’s easier than you think – if you use an algorithm. What I’m describing is to pick a “root” password, and then modify that based on the site or purpose. That way, if the site is compromised, you can still use that root password for the other sites. Let’s take that second password: P@ssw03d. Now you can append, prepend or intersperse that password with other characters to make it unique to the site. That way you can easily remember the root password, but make it unique to the site. For instance, perhaps you read a lot of information on Gawker – how about these:

    P@ssw03dRead
    ReadP@ssw03d
    PR@esasdw03d

    If you have lots of sites, tracking even this can be difficult, so I recommend you use password software such as Password Safe or some other tool to have a secure database of your passwords at each site. DO NOT store this on the web. DO NOT use an Office document (Microsoft or otherwise) that is “encrypted” – the encryption office automation packages use is very trivial, and easily broken. A quick web search for tools to do that should show you how bad a choice this is.

    Change Your Password on a Schedule
    I know. It’s a real pain. And it doesn’t seem worth it…until your account gets hacked. A quick note here – whenever a site gets hacked (and I find out about it) I change the password at that site immediately (or quit doing business with them) and then change the root password on every site, as quickly as I can. If you follow the tip above, it’s not as hard. Just add another number, year, month, day, something like that into the mix. It’s not unlike making a Primary Key in an RDBMS.

    P@ssw03dRead10242010

    Change the site, and then update your password database. I do this about once a month, on the first or last day, during staff meetings. :)

    If you have other tips, post them here. We can all learn from each other on this.
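    As a small illustration of the root-plus-modifier scheme described above (this is a sketch, not code from the post, and the combination rule is an arbitrary assumption), a site-specific password can be assembled from the memorized root, a per-site tag and a date stamp:

        // Builds a site-specific password by appending a site tag and a date stamp to a memorized root.
        using System;

        class PasswordHelper
        {
            static string SitePassword(string root, string siteTag, DateTime changedOn)
            {
                // e.g. root "P@ssw03d", tag "Read", changed 2010-10-24 -> "P@ssw03dRead10242010"
                return root + siteTag + changedOn.ToString("MMddyyyy");
            }

            static void Main()
            {
                Console.WriteLine(SitePassword("P@ssw03d", "Read", new DateTime(2010, 10, 24)));
            }
        }

    Combined with a password manager such as Password Safe to record which tag and date each site uses, the root itself never has to be written down.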

    Read the article

  • e-interview: SunSpace to WebCenter migration

    - by me
    I had the pleasure to do an e-interview with Ana Neves around the SunSpace to WebCenter migration project.  Below is the english version of the interview.  Enjoy   Peter, you joined Oracle in 2009 through the acquisition of Sun. Becoming a part of Oracle meant many changes. The internal collaboration platform was one of them, as per a post you wrote back in 2011. Sun had SunSpace. How would you describe SunSpace? SunSpace was the internal Community and Social Collaboration platform for the Sun's Global Sales and Services Organization. SunSpace served around 600 communities with a main focus around technology, products and services. SunSpace was a big success. Within 3 months of its launch SunSpace had over 20,000 users and it won the Atlassian "Not just another wiki" Award for the best use of Confluence (https://blogs.oracle.com/peterreiser/entry/goodbye_sunspace_hello_webcenter). What made SunSpace so special? 1. People centric versus  Web centric The main concept of SunSpace put the person in the middle of everything. All relevant information, resources  etc. where dynamically pushed to a person's  myProfile ( Facebook like interface) based on the person's interest and  needs.  2. Ease to use  SunSpace was really easy to use. We spent a lot of time on social interaction design to optimize the user experience.  Also we integrated some sophisticated technology to hide complexity from the user. As example - when a user added a document to SunSpace - we analyzed the content of the document and suggested related metadata and tags to the user based on a sophisticated algorithm which was integrated with the corporate taxonomy. Based on this metadata the document was automatically shared with the relevant communities.  3. Easy to find One of the main use cases for SunSpace was that  a user could quickly find the content and information they needed for their job.  The search implementation was based on:  optimized search engine algorithm using social value based ranking enhancements community facilitated search optimization  faceted search which recommended highly relevant  content like products, communities and experts 4. Social Adoption  - How to build vibrant communities You can deploy the coolest social technology but what if the users are not using it?   To drive user adoption we implemented two  complementary models: 4.1 Community Methodology  We developed a set of best practices on how to create, run and sustain communities including: community structure and types (e.g. Community of Practice, Community of Interest etc.) & tips and tricks on how to build a "vibrant " communities, Community Health check etc.  These best practices where constantly tuned and updated by the community of community drivers. 4.2. Social Value System To drive user adoption there is ONE key  question you  have to answer for each individual user: What's In It For Me (WIIFM) We developed a Social Value System called Community Equity which measures the social value flow between People, Content and Metadata. Based on this technology we added "Gamfication" techniques (although at that time this term did not exist ) to SunSpace to honor people for the active contribution and participation.  As example: All  social credentials a user earned trough active community participation where dynamically displayed on her/his myProfile. How would you describe WebCenter? Oracle WebCenter (@oraclewebcenter) is the Oracle's  user engagement platform for social business. 
It helps people work together more efficiently through contextual collaboration tools that optimize connections between people, information, and applications and ensures users have access to the right information in the context of the business process in which they are engaged. Oracle WebCenter can help your organization deliver contextual and targeted web experiences to users and enable employees to access information and applications through intuitive portals, composite applications, and mash-ups.

How does it compare to SunSpace in terms of functionality?

Before I answer this question, I would like to point out some limitations we started to see with the SunSpace implementation. Due to the massive growth of the user population (>20,000 users), we experienced performance and scalability challenges with the technology we had. Also, at the time Sun Internal Communications and SunIT planned to replace the entire Sun Intranet with SunSpace, so we kicked off a project to evaluate the enterprise-level technology which would eventually replace the good old static Intranet. And then Oracle acquired Sun. We had already defined the functional requirements for the Intranet replacement with a Social Enterprise Stack, and we just needed to evaluate those requirements against WebCenter. Below is a summary of this evaluation.

MyProfile - the SunSpace feature, its WebCenter equivalent, and how it works in WebCenter:
- Home → MyProfile (to access, click on your name at the top of any WebCenter page): Your name, title, and reporting line are displayed. Sub-tabs show your activity stream (Activities); people in your network (Connections); files you have uploaded (Documents); your contact information (Organization); and any personal information you wish to share (About).
- Files → MyFiles: Allows you to upload, download and store documents or wiki pages within folders and subfolders. The WebDav interface allows you to download / upload files / folders with a simple drag and drop to / from your local machine. Tagging is supported and recommended.
- Network → Home and MyConnections: Home displays the activity stream of individuals in your network. MyConnections shows individuals in your network; click on a person's name to see their contact info and a link to their profile.
- Status Updates → MyProfile > Activities: Add and display your recent activities and status updates.
- Watches → Preferences > Subscriptions > Current Subscriptions: Receive email notifications when pages / spaces you watch are modified.
- Drafts → N/A: WebCenter does not support Drafts.
- Settings → Preferences (to access, click on 'Preferences' at the top of any WebCenter page): Set your general preferences, as well as your WebCenter messaging, search and mail settings.
- MyCommunities → MySpaces (to access, click on 'Spaces' at the top of any WebCenter page): Displays MySpaces (communities you are a member of) and Recent Spaces (communities you have recently visited).

Community - the SunSpace feature, its WebCenter equivalent, and how it works in WebCenter:
- Home → Home: Displays a community introduction and activity stream. Members can add messages, links or documents via the Community Message Board. No Top Contributors widget.
- People → Members: Lists members of the community. The Mail All Members feature allows moderators and participants to send a message to all members of the community. Membership Management can be found under Manage > Members.
- News → News: Members can post and access the latest community news, and they can subscribe to news using an RSS reader.
- Documents → Documents: Allows community members to upload, download and store documents or wiki pages within folders and subfolders. The WebDav interface allows participants to download / upload files / folders with a simple drag and drop to / from your local machine. Tagging is supported and recommended.
- Wiki → Wiki: Allows community members to create and update web pages with a WYSIWYG editor. Note: WebCenter does not support macros or portlet embedding.
- Forum → Forum: Post community forum topics and contribute to community forum conversations.
- N/A → Calendar: Update and/or view the Community Calendar.
- N/A → Analytics: Displays detailed analytics data (views, downloads, unique users etc.) for Pages, Wiki, Documents, and Forum in a given community space.

What is the adoption of WebCenter at Oracle?

The entire Intranet, serving around 100,000 users, is running on WebCenter Content. For professional communities we use WebCenter Portal and Spaces. Currently we have around 6,000 community spaces with around 40,000 members.

Does Oracle have any metrics to assess usage and impact of WebCenter? Can you give us some examples?

Sure - we have a lot of metrics. For the Intranet we use traditional metrics like pageviews, monthly unique visitors and unique visits. For communities we use the WebCenter Portal/Spaces analytics service, which gives us a wealth of data. The key metrics we track are: space traffic (pageviews, unique users); Wiki and Documents (views, downloads etc.); Forum (users, views, posts etc.); and registered members over time. Depending on the community, we can filter/segment the metrics by user properties, e.g. country, organization, job role etc.

What are you doing to improve usage and impact?

1. We are integrating the WebCenter social services/fabric into all main business applications. For example, the Fusion CRM deployment is seamlessly integrated with Oracle Social Network (OSN), and all conversation around an opportunity or customer engagement is done in OSN (see the YouTube video).

2. We drive social best practices through a program called the "Social Networking & Business Collaboration (SNBC) program".

You worked both with WebCenter and SunSpace. Knowing what you know today, if you had the chance to choose between the two, which one would you choose? Why?

That's a tricky question. In the early days of the Social Enterprise implementation (we started SunSpace in 2006), we needed an agile and easy-to-deploy technology to keep up with the users' requirements. Sometimes we pushed two releases per day and we were in perpetual beta mode - SunSpace was perfect for that. After the social implementation matured over time, community-generated content became business critical and we saw a change in the requirements from agility to stability, scalability and reliability of the infrastructure. WebCenter is the right choice for such an enterprise-level deployment.

You are a WebCenter Evangelist at Oracle. What do you do as part of that role?

Our role is to help position Oracle as one of the key thought leaders and solution providers for social business. In addition, we drive social innovation through our Oracle AppsLab team.

Is that a full-time role?

Yes.

How many other Evangelists are there in Oracle?
We are currently 5 people in the WebCenter evangelist team (@webcentervoices):
- Christian Finn (@cfinn) leads the team. Christian came from the Microsoft SharePoint product management team and is a recognized expert in social business and enterprise collaboration.
- Noël Jaffré (@noeljaffre) is our Web Experience Management (WEM) guru and came to Oracle via the FatWire acquisition (now WebCenter Sites).
- Jake Kuramoto (@theapplab) is part of the Oracle AppsLab innovation team. Jake is well known as the driving force behind http://theappslab.com, a blog around social and innovation.
- Noel Portugal (@noelportugal) is a developer in the Oracle AppsLab innovation team. He is the inventor of OraTweet, Oracle's internal tweeting platform.
- Peter Reiser (@peterreiser) is a social business guru and the inventor of SunSpace and Community Equity.

What area of the business do you and the rest of the Evangelists sit in? What area of the organisation is responsible for WebCenter?

We are part of the WebCenter product management organization.

Is WebCenter part of the Knowledge Management strategy?

Oracle WebCenter is Oracle's user engagement platform for social business. It brings together the most complete portfolio of portal, web experience management, content, social and collaboration technologies into a single product suite and is the product foundation of the Oracle Knowledge Management strategy.

I am aware Oracle also uses Beehive internally. How would you describe Beehive?

Oracle Beehive provides an integrated set of communication and collaboration services built on a single scalable, secure, enterprise-class platform. Beehive is used internally for enterprise-wide mail, calendar and real-time collaboration (web conferencing) services.

Are Beehive and WebCenter connected?

Historically, Beehive and WebCenter Portal & Content had some overlap in functionality. (Hey - if a company has an acquisition strategy to strengthen its product offering and accelerate innovation, it's pretty normal that functional overlap exists :-) ) A key objective of the WebCenter strategy is to combine all social and collaboration offerings under the WebCenter product family. That means that certain Beehive components will be integrated into the overall WebCenter product offering.

Are there any other internal collaboration tools at Oracle? Which ones?

There are two other main social tools which are widely used at Oracle.

Oracle Connect was the first social tool the Oracle AppsLab team created, in 2007 (see Jake's blog post for details). It is still extensively used. ... and as a former Sun guy I like this quote from the blog post: "Traffic to Connect peaked right after the Sun merger in 2010, when it served several hundred thousand pageviews each month; since then, traffic has subsided, but still averages tens of thousands of pageviews to several thousand users each month."

OraTweet, Oracle's internal microblogging platform, has been in use since June 2008 and is still growing. It's entirely written in Oracle Application Express (APEX), which is a rapid web application development tool for the Oracle database. Wanna try it out? Here you can download the code.

What is Oracle's strategy regarding (all these) collaboration tools?

Pretty straightforward: the strategy is to seamlessly integrate the WebCenter social and collaboration services into all business applications to help customers socialize their enterprise.

    Read the article

  • Using SQL Source Control with Fortress or Vault &ndash; Part 1

    - by AjarnMark
    I am fanatical when it comes to managing the source code for my company.  Everything that we build (in source form) gets put into our source control management system.  And I'm not just talking about the UI and middle-tier code written in C# and ASP.NET, but also the back-end database stuff, which at times has been a pain.  We even script out our Scheduled Jobs and keep a copy of those under source control.

The UI and middle-tier stuff has long been easy to manage as we mostly use Visual Studio, which has integration with source control systems built in.  But the SQL code has been a little harder to deal with.  I have been doing this for many years, well before Microsoft came up with Data Dude, so I had already established a methodology that, while not as smooth as VS, nonetheless let me keep things well controlled, and allowed me to do my database development in my tool of choice: Query Analyzer in days gone by, and now SQL Server Management Studio.  It just makes sense to me that if I'm going to do database development, let's use the database tool set.  (Although, I have to admit I was pretty impressed with the demo of Juneau that Don Box did at the PASS Summit this year.)  So as I was saying, I had developed a methodology that worked well for us (and one I'll probably outline in a future post), but it could use some improvement.

When Solutions and Projects were first introduced in SQL Management Studio, I thought we were finally going to get the same experience that we have in Visual Studio.  Well, let's say I was underwhelmed by Version 1 in SQL 2005, and apparently so were enough other people that by the time SQL 2008 came out, Microsoft decided that Solutions and Projects would be deprecated and completely removed from a future version.  So much for that idea.

Then I came across SQL Source Control from Red-Gate.  I have used several tools from Red-Gate in the past, including my favorites SQL Compare, SQL Prompt, and SQL Refactor.  SQL Prompt is worth its weight in gold, and the others are great, too.  Earlier this year, we upgraded from our earlier product bundles to the new Developer Bundle, and in the process added SQL Source Control to our collection.  I thought this might really be the golden ticket I was looking for.  But my hopes were quickly dashed when I discovered that it only integrated with Microsoft Team Foundation Server and Subversion as the source code repositories.  We have been using SourceGear's Vault and Fortress products for years, and I wholeheartedly endorse them.  So I was out of luck for the time being, although there were a number of people voting for Vault/Fortress support on their feedback forum (as did I) so I had hope that maybe next year I could look at it again.

But just a couple of weeks ago, I was pleasantly surprised to receive notice in my email that Red-Gate had an Early Access version of SQL Source Control that worked with Vault and Fortress, so I quickly downloaded it and have been putting it through its paces.  So far, I really like what I see, and I have been quite impressed with Red-Gate's responsiveness when I have contacted them with any issues or concerns that I have had.  I have had several communications with Gyorgy Pocsi at Red-Gate and he has been immensely helpful and responsive.

I must say that development with SQL Source Control is very different from what I have been used to.  
This post is getting long enough, so I’ll save some of the details for a separate write-up, but the short story is that in my regular mode, it’s all about the script files.  Script files are King and you dare not make a change to the database other than by way of a script file, or you are in deep trouble.  With SQL Source Control, you make your changes to your development database however you like.  I still prefer writing most of my changes in T-SQL, but you can also use any of the GUI functionality of SSMS to make your changes, and SQL Source Control “manages” the script for you.  Basically, when you first link your database to source control, the tool generates scripts for every primary object (tables and their indexes are together in one script, not broken out into separate scripts like DB Projects do) and those scripts are checked into your source control.  So, if you needed to, you could still do a GET from your source control repository and build the database from scratch.  But for the day-to-day work, SQL Source Control uses the same technique as SQL Compare to determine what changes have been made to your development database and how to represent those in your repository scripts.  I think that once I retrain myself to just work in the database and quit worrying about having to find and open the right script file, that this will actually make us more efficient. And for deployment purposes, SQL Source Control integrates with the full SQL Compare utility to produce a synchronization script (or do a live sync).  This is similar in concept to Microsoft’s DACPAC, if you’re familiar with that. If you are not currently keeping your database development efforts under source control, definitely examine this tool.  If you already have a methodology that is working for you, then I still think this is worth a review and comparison to your current approach.  You may find it more efficient.  But remember that the version which integrates with Vault/Fortress is still in pre-release mode, so treat it with a little caution.  I have found it to be fairly stable, but there was one bug that I found which had inconvenient side-effects and could have really been frustrating if I had been running this on my normal active development machine.  However, I can verify that that bug has been fixed in a more recent build version (did I mention Red-Gate’s responsiveness?).

    Read the article

  • Monitoring Events in your BPEL Runtime - RSS Feeds?

    - by Ramkumar Menon
    @10g - It had been a while since I'd tried something different, so here's what I did this week! Whenever our developers deployed processes to the BPEL runtime, or perhaps when a process gets turned off due to connectivity issues, or maybe someone retired a process, I needed to know. So here's what I did.

Step 1: Downloaded the Quartz libraries and went through the documentation to understand what it takes to schedule a recurring job.

Step 2: Cranked out two components using Oracle JDeveloper. [Within a new Web Project]

a) A simple Java class named FeedUpdater that implements org.quartz.Job. All this class does is connect to your BPEL runtime [via opmn:ormi] and fetch all events that occurred in the last "n" minutes. Events? If that doesn't ring a bell, they are right there on the BPEL Console: if you click on "Administration > Process Log", what you see are events. The API to retrieve the events is:

    //get the locator reference for the domain you are interested in.
    Locator l = ....
    //Predicate to retrieve events for last "n" minutes
    WhereCondition wc = new WhereCondition(...)
    //get all those events you needed.
    BPELProcessEvent[] events = l.listProcessEvents(wc);

After you get all these events, write them out into an RSS feed XML structure and stream it into a file that resides either in your Apache htdocs, or wherever it can be accessed via HTTP. You can read all about RSS 2.0 here. At a high level, here is how it looks:

    <?xml version = '1.0' encoding = 'UTF-8'?>
    <rss version="2.0">
      <channel>
        <title>Live Updates from the Development Environment</title>
        <link>http://soadev.myserver.com/feeds/</link>
        <description>Live Updates from the Development Environment</description>
        <lastBuildDate>Fri, 19 Nov 2010 01:03:00 PST</lastBuildDate>
        <language>en-us</language>
        <ttl>1</ttl>
        <item>
          <guid>1290213724692</guid>
          <title>Process compiled</title>
          <link>http://soadev.myserver.com/BPELConsole/mdm_product/administration.jsp?mode=processLog&amp;processName=&amp;dn=all&amp;eventType=all&amp;eventDate=600&amp;Filter=++Filter++</link>
          <pubDate>Fri Nov 19 00:00:37 PST 2010</pubDate>
          <description>SendPurchaseOrderRequestService: 3.0 Time : Fri Nov 19 00:00:37 PST 2010</description>
        </item>
        ......
      </channel>
    </rss>

For writing out XML content, read through the Oracle XML Parser APIs [search around for oracle.xml.parser.v2].

b) Now that my "Job" was done, my job was half done. Next, I wrote up a simple scheduler servlet that schedules the above "Job" class to be executed every "n" minutes. It is very straightforward. Here is the primary section of the code:

    try {
        Scheduler sched = StdSchedulerFactory.getDefaultScheduler();
        //get n and make a trigger that executes every "n" seconds
        Trigger trigger = TriggerUtils.makeSecondlyTrigger(n);
        trigger.setName("feedTrigger" + System.currentTimeMillis());
        trigger.setGroup("feedGroup");
        JobDetail job = new JobDetail("SOA_Feed" + System.currentTimeMillis(), "feedGroup", FeedUpdater.class);
        sched.scheduleJob(job, trigger);
    } catch(Exception ex) {
        ex.printStackTrace();
        throw new ServletException(ex.getMessage());
    }

Look up the Quartz API and documentation; it will make this look much simpler. Now that both components were ready, I packaged the application into a war file and deployed it onto my Application Server. When the servlet initialized, the "n" second schedule was set/initialized.
From then on, the servlet kept populating the RSS feed file. I just ensured that my "Job" code keeps only the 30 latest events in it, so that the feed file stays small and under control [a few KBs]. Next I opened up the feed XML in my browser - it requested a subscription - and here I was, watching new deployments/lifecycle events popping up in my browser toolbar every 5 (actually n) minutes! Well, you could do this in a browser/reader of your choice - or perhaps read the events like you read email in Thunderbird.

    Read the article

  • MSSQL: Copying data from one database to another

    - by DigiMortal
    I have a database that has data imported from another server using the import and export wizard of SQL Server Management Studio. There is also an empty database with the same tables, but it also has primary keys, foreign keys and indexes. How do we get the data from the first database to the other? Here is the description of my crusade. And believe me - it is not a nice one.

Bugs in the import and export wizard

There are some awful bugs in the import and export wizard that make data imports and exports possible only in a very limited manner: the wizard is not able to analyze foreign keys, and the wizard always wants to create tables, whatever you say in the settings. The result is a faulty and useless package. Now let's go step by step and make things work in our scenario.

Databases

There are two databases. Let's name them like this:
PLAIN - contains data imported from the remote server (no indexes, no keys, no nothing, just plain dumb data)
CORRECT - empty database with the same structure as the remote database (indexes, keys and everything else, but no data)
Our goal is to get data from PLAIN to CORRECT.

1. Create the import and export package

At this point we will create a faulty SSIS package using SQL Server Management Studio. Run the import and export wizard and let it create an SSIS package that reads data from CORRECT and writes it to, let's say, CORRECT-2. Make sure you enable identity insert. Make sure there are no views selected. Make sure you don't let the package create tables (you can miss this step because it wants to create tables anyway). Save the package to SSIS.

2. Modify the import and export package

Now let's clean up the package and remove all the faulty crap. Connect SQL Server Management Studio to the SSIS instance. Select the package you just saved and export it to your hard disc. Run Business Intelligence Studio. Create a new SSIS project (DON'T MISS THIS STEP). Add the package from disc as an existing item to the project and open it. Move to the Control Flow page and do one of the following: remove all preparation SQL tasks and connect the Data Flow tasks, or modify all preparation SQL tasks so the existence of tables is checked before a table is created (yes, you have to do it manually).

Add a new Execute-SQL task as the first task in the control flow: open the task properties, assign the destination connection as the connection to use, and insert the following SQL as the command:

    EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
    GO

    EXEC sp_MSForEachTable 'DELETE FROM ?'
    GO

Save the task. Add a new Execute-SQL task as the last task in the control flow: open the task properties, assign the destination connection as the connection to use, and insert the following SQL as the command:

    EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
    GO

Save the task. Now connect the first Execute-SQL task with the first Data Flow task, and the last Data Flow task with the second Execute-SQL task. Then move to the Package Explorer tab and change the connections under the Connection Managers folder: make the source connection use database PLAIN and the destination connection use database CORRECT. Save the package and rebuild the project. Update the package using SQL Server Management Studio. Some hints: make sure you take the package from the solution folder, because it is saved there now; don't overwrite the existing package - use a numeric suffix and let Management Studio create a new version of the package.

Now you are done with your package. Run it to test it and clean out all the errors you find.

TRUNCATE vs DELETE

You can see that I used DELETE FROM instead of TRUNCATE. Why?
Because TRUNCATE has some nasty limits (taken from MSDN): "You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint; instead, use DELETE statement without a WHERE clause. Because TRUNCATE TABLE is not logged, it cannot activate a trigger. TRUNCATE TABLE may not be used on tables participating in an indexed view." As I am not sure what tables you have and how they are used, I provided here a solution that should work for all scenarios. If you need better performance, then in some cases you can use TRUNCATE TABLE instead of DELETE.

Conclusion

My conclusion is bitter this time, although I am a very positive guy. It is A.D. 2010 and we still have to write stupid hacks for simple things. Simple tools that existed before are long gone, and we have to live with mysterious bloatware that is our only choice when using the default tools. If you take a look at the length of this posting and the number of steps I had to go through for one easy thing, you should treat it as a signal that something has gone wrong in recent years. Although I got my job done, I would still be happier if the out-of-the-box tools become more intelligent one day.

References:
T-SQL Trick for Deleting All Data in Your Database (Mauro Cardarelli)
TRUNCATE TABLE (MSDN Library)
Error Handling in SQL 2000 – a Background (Erland Sommarskog)
Disable/Enable Foreign Key and Check constraints in SQL Server (Decipher)
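If you would rather keep the whole copy in code instead of in an SSIS package, the same constraint-toggling pattern can be scripted with ADO.NET and SqlBulkCopy. The following C# sketch is not part of the original recipe and only illustrates the idea - the connection strings and the dbo.Customers table are placeholders, and you would still need to enumerate your own tables:

    using System;
    using System.Data.SqlClient;

    class PlainToCorrectCopy
    {
        // Placeholder connection strings - adjust server names and credentials.
        const string PlainCs   = "Server=.;Database=PLAIN;Integrated Security=true";
        const string CorrectCs = "Server=.;Database=CORRECT;Integrated Security=true";

        static void Main()
        {
            using (var correct = new SqlConnection(CorrectCs))
            {
                correct.Open();

                // Same commands as the Execute-SQL tasks above: relax FKs and clear old data.
                Exec(correct, "EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'");
                Exec(correct, "EXEC sp_MSForEachTable 'DELETE FROM ?'");

                using (var plain = new SqlConnection(PlainCs))
                {
                    plain.Open();

                    // One table shown for brevity; loop over all tables in practice.
                    using (var reader = new SqlCommand("SELECT * FROM dbo.Customers", plain).ExecuteReader())
                    using (var bulk = new SqlBulkCopy(correct, SqlBulkCopyOptions.KeepIdentity, null))
                    {
                        bulk.DestinationTableName = "dbo.Customers";
                        bulk.WriteToServer(reader);
                    }
                }

                // Re-enable the constraints once the data is back in place.
                Exec(correct, "EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'");
            }
        }

        static void Exec(SqlConnection cn, string sql)
        {
            using (var cmd = new SqlCommand(sql, cn) { CommandTimeout = 0 })
                cmd.ExecuteNonQuery();
        }
    }

KeepIdentity plays the same role here as enabling identity insert in the wizard, so the copied rows keep their original key values.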

    Read the article

  • Windows Azure Recipe: Enterprise LOBs

    - by Clint Edmonson
    Enterprises are more dependent on their specialized internal Line of Business (LOB) applications than ever before. Naturally, the more software they leverage on-premises, the more infrastructure they need to manage. It's frequently the case that our customers simply can't scale up their hardware purchases and operational staff as fast as internal demand for software requires. The result is that getting new or enhanced applications into the hands of business users becomes slower and more expensive every day. Being able to quickly deliver applications in a rapidly changing business environment while maintaining high standards of corporate security is a challenge that can be met right now by moving enterprise LOBs out into the cloud and leveraging Azure's Access Control services. In fact, we're seeing many of our customers (both large and small) realize huge benefits from moving their web based business applications such as corporate help desks, expense tracking, travel portals, timesheets, and more to Windows Azure.

Drivers
- Cost reduction
- Time to market
- Security

Solution

Here's a sketch of how many Windows Azure Enterprise LOBs are being architected and deployed:

Ingredients
- Web Role – this will host the core of the application. Each web role is a virtual machine hosting an application written in ASP.NET (or optionally php, or node.js). The number of web roles can be scaled up or down as needed to handle peak and non-peak traffic loads. Many Java based applications are also being deployed to Windows Azure with a little more effort.
- Database – every modern web application needs to store data. SQL Azure databases look and act exactly like their on-premises siblings but are fault tolerant and have data redundancy built in. (A minimal connection sketch follows the training lab links below.)
- Access Control – this service is necessary to establish federated identity between the cloud hosted application and an enterprise's corporate network. It works in conjunction with a secure token service (STS) that is hosted on-premises to establish the corporate user's identity and credentials. The source code for an on-premises STS is provided in the Windows Azure training kit and merely needs to be customized for the corporate environment and published on a publicly accessible corporate web site. Once set up, corporate users see a near seamless single sign-on experience.
- Reporting – businesses live and die by their reports, and SQL Azure Reporting, based on SQL Server Reporting 2008 R2, can serve up reports with tables, charts, maps, gauges, and more. These reports can be accessed from the Windows Azure Portal, through a web browser, or directly from applications.
- Service Bus (optional) – if deep integration with other applications and systems is needed, the service bus is the answer. It enables secure service layer communication between applications hosted behind firewalls in on-premises or partner datacenters and applications hosted inside Windows Azure. The Service Bus provides the ability to securely expose just the information and services that are necessary to create a simpler, more secure architecture than opening up a full blown VPN.
- Data Sync (optional) – in cases where the data stored in the cloud needs to be shared internally, establishing a secure one-way or two-way data-sync connection between the on-premises and off-premises databases is a perfect option. It can be very granular, allowing us to specify exactly what tables and columns to synchronize, set up filters to sync only a subset of rows, set the conflict resolution policy for two-way sync, and specify how frequently data should be synchronized.

Training Labs

These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.)

Windows Azure (16 labs) – Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services which can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.

SQL Azure (7 labs) – Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.

Windows Azure Services (9 labs) – As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.

See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
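The Database ingredient really is just SQL Server from the application's point of view. As a minimal illustration (not from the original post - the server, database, table and credential names are all placeholders), a web role could query a SQL Azure database with plain ADO.NET; the only Azure-specific details are the tcp: prefix, the *.database.windows.net host, and the encrypted connection:

    using System;
    using System.Data.SqlClient;

    class ExpenseData
    {
        // Placeholder connection string - replace server, database, and credentials.
        const string ConnectionString =
            "Server=tcp:yourserver.database.windows.net,1433;" +
            "Database=ExpenseTracking;" +
            "User ID=appuser@yourserver;Password=<password>;" +
            "Encrypt=True;Connection Timeout=30;";

        // Counts a user's open expense reports - the query itself is ordinary T-SQL.
        public static int CountOpenReports(string employeeId)
        {
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand(
                "SELECT COUNT(*) FROM dbo.ExpenseReports WHERE EmployeeId = @emp AND Status = 'Open'",
                connection))
            {
                command.Parameters.AddWithValue("@emp", employeeId);
                connection.Open();
                return (int)command.ExecuteScalar();
            }
        }
    }

Because the programming model is unchanged, existing on-premises data access code typically moves to SQL Azure by swapping the connection string.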

    Read the article

  • CodePlex Daily Summary for Tuesday, November 08, 2011

    CodePlex Daily Summary for Tuesday, November 08, 2011Popular ReleasesFacebook C# SDK: v5.3.2: This is a RTW release which adds new features and bug fixes to v5.2.1. Query/QueryAsync methods uses graph api instead of legacy rest api. removed dependency from Code Contracts enabled Task Parallel Support in .NET 4.0+ (experimental) added support for early preview for .NET 4.5 (binaries not distributed in codeplex nor nuget.org, will need to manually build from Facebook-Net45.sln) added additional method overloads for .NET 4.5 to support IProgress<T> for upload progress added ne...Delete Inactive TS Ports: List and delete the Inactive TS Ports: List and delete the Inactive TS Ports - The InactiveTSPortList.EXE accepts command line arguments The InactiveTSPortList.Standalone.WithoutPrompt.exe runs as a standalone exe without the need for any command line arguments.ClosedXML - The easy way to OpenXML: ClosedXML 0.60.0: Added almost full support for auto filters (missing custom date filters). See examples Filter Values, Custom Filters Fixed issues 7016, 7391, 7388, 7389, 7198, 7196, 7194, 7186, 7067, 7115, 7144Microsoft Research Boogie: Nightly builds: This download category contains automatically released nightly builds, reflecting the current state of Boogie's development. We try to make sure each nightly build passes the test suite. If you suspect that was not the case, please try the previous nightly build to see if that really is the problem. Also, please see the installation instructions.DotSpatial: Release 821: Updated releaseGoogleMap Control: GoogleMap Control 6.0: Major design changes to the control in order to achieve better scalability and extensibility for the new features comming with GoogleMaps API. GoogleMap control switched to GoogleMaps API v3 and .NET 4.0. GoogleMap control is 100% ScriptControl now, it requires ScriptManager to be registered on the pages where and before it is used. Markers, polylines, polygons and directions were implemented as ExtenderControl, instead of being inner properties of GoogleMap control. Better perfomance. Better...WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: V2.1: Version 2.1 (click on the right) this uses V4.0 of .net Version 2.1 adds the following features: (apologize if I forget some, added a lot of little things) Manual Lookup with TV or Movie (finally huh!), you can look up a movie or TV episode directly, you can right click on anythign, and choose manual lookup, then will allow you to type anything you want to look up and it will assign it to the file you right clicked. No Rename: a very popular request, this is an option you can set so that t...SubExtractor: Release 1020: Feature: added "baseline double quotes" character to selector box Feature: added option to save SRT files as ANSI (instead of previous UTF-8 only) Feature: made "Save Sup files to Source directory" apply to both Sup and Idx source files. Fix: removed SDH text (...) or [...] that is split over 2 lines Fix: better decision-making in when to prefix a line with a '-' because SDH was removedAcDown????? - Anime&Comic Downloader: AcDown????? v3.6.1: ?? ● AcDown??????????、??????,??????????????????????,???????Acfun、Bilibili、???、???、???、Tucao.cc、SF???、?????80????,???????????、?????????。 ● AcDown???????????????????????????,???,???????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7 ????????????? 
??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??????????????,??????????: ??"AcDown?????"????????? ?? v3.6.1?? ??.hlv...Track Folder Changes: Track Folder Changes 1.1: Fixed exception when right-clicking the root nodeMapWindow 4: MapWindow GIS v4.8.6 - Final release - 32Bit: This is the final release of MapWindow v4.8. It has 4.8.6 as version number. This version has been thoroughly tested. If you do get an exception send the exception to us. Don't forget to include your e-mail address. Use the forums at http://www.mapwindow.org/phorum/ for questions. Please consider donating a small portion of the money you have saved by having free GIS tools: http://www.mapwindow.org/pages/donate.php What’s New in 4.8.6 (Final release) · A few minor issues have been fixed Wha...Kinect Paint: Kinect Paint 1.1: Updated for Kinect for Windows SDK v1.0 Beta 2!Kinect Mouse Cursor: Kinect Mouse Cursor 1.1: Updated for Kinect for Windows SDK v1.0 Beta 2!Coding4Fun Kinect Toolkit: Coding4Fun Kinect Toolkit 1.1: Updated for Kinect for Windows SDK v1.0 Beta 2!Async Executor: 1.0: Source code of the AsyncExecutorMedia Companion: MC 3.421b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) TV Show Resolutions... Fix to show the season-specials.tbn when selecting an episode from season 00. Before, MC would try & load season00.tbn Fix for issue #197 - new show added by 'Manually Add Path' not being picked up. Also made non-visible the same thing in Root Folders...?????????? - ????????: All-In-One Code Framework ??? 2011-11-02: http://download.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1codechs&DownloadId=216140 ??????,11??,?????20????Microsoft OneCode Sample,????6?Program Language Sample,2?Windows Base Sample,2?GDI+ Sample,4?Internet Explorer Sample?6?ASP.NET Sample。?????????????。 ????,?????。http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 Program Language CSImageFullScreenSlideShow VBImageFullScreenSlideShow CSDynamicallyBuildLambdaExpressionWithFie...Python Tools for Visual Studio: 1.1 Alpha: We’re pleased to announce the release of Python Tools for Visual Studio 1.1 Alpha. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python programming language. This release includes new core IDE features, a couple of new sample libraries for interacting with Kinect and Excel, and many bug fixes for issues reported since the release of 1.0. For the core IDE features we’ve added many new features which improve the basic edit...BExplorer (Better Explorer): Better Explorer 2.0.0.631 Alpha: Changelog: Added: Some new functions in ribbon Added: Possibility to choose displayed columns Added: Basic Search Fixed: Some bugs after navigation Fixed: Attempt to fix slow navigation and slow start Known issues: - BreadcrumbBar fails on some situations - Basic search does not work well in some situations Please if anyone finds bugs be kind and report them at the Issue Tracker! 
Thanks!DotNetNuke® Community Edition: 05.06.04: Major Highlights Fixed issue with upgrades on systems that had upgraded the Telerik library to 6.0.0 Fixed issue with Razor Host upgrade to 5.6.3 The logic for module administration checks contains incorrect logic in 1 place, opening the possibility of a user with edit permissions gaining access to functionality they should not have through a particularly crafted url Security FixesBrowsers support the ability to remember common strings such as usernames/addresses etc. Code was adde...New ProjectsAddThis for DotNetNuke: A simple AddThis module (plugin) for DotNetNukeAgnisExchange: <AGNIS>as A Growable Network Infor;ation System is developped in <C#>Blogger auto poster: Do you want to have a links blog? Share your favorites links in Google+ and then allow BAP(Blogger auto poster) to read its JSON feed and post its daily summary at your Blogger weblog automatically. CarbonEmissions: CarbonEmissionsCredivale: CredivaleCRM 2011 TreeView for Lookup: CRM 2011 WebResource Utility for showing Self-Joined Lookup data in TreeView form.Dafuscator: Dafuscator is a database data obfuscation system that allows you to tactically obfuscate or delete data out of your production database while leaving most of the data intact.DigDesDevSchoolHomeSaleSystem: This is study project. Done by Russian students. Use ASP.NET MVC.Dynamic Data Extensions: Dynamic Data Extensions add new features to the current ASP.Net 4 Dynamic Data versionEchosoft NoCrash From Scratch: Nocrash From scratch, a operating system designed (but not promised to be) almost invincible from viruses and %20 Windows-like using DoubleTime emulation to make it act like Windows 2000EdiProj: Just a PHP and Asp.Net project.experteer: decision theory class projectFlan Project: Fast-paced Action-RPG (2D Scrolling)GameXNA_Master: XNA MASETR GD 2011 Gyrf: SecretJogo Da Velha em CSharp: Jogo da velha feito para o curso de c#.Jogo das cores: Projeto da aula de extensão de C#Kookaburra: Kookaburra is a framework based on Windows PowerShell 2.0 that can be used for automating SharePoint 2010 administration. It contains a menu, several functions and a well defined structure for adding PowerShell scriptlets.LaunchWithParams: LaunchWithParams is a Windows Explorer add-on that allows you to launch an application executable with specific command-line parameters without having to open a command prompt.MerchantTribe - Shopping Cart and Ecommerce Platform in C# MVC ASP.NET: MerchantTribe is a free, open-source ASP.NET MVC ecommerce platform in C#. For more than a decade, we've been building and selling commercial .NET shopping cart software. Now, we're releasing an open-source project based on our award winning BV Commerce software. Thousands of companies use our shopping cart include Pebble Beach Resorts, DataPipe, TastyKake, MathBlaster and Chesapeake Energy. We need as many people as possible using MerchantTribe for it to thrive. We need Merchants, De...NaiveAlgorithms: Naive C# implementation of basic, general purpose algorithms, with an emphasis on readability, maintainability and correctness. While asymptotic complexity of algorithms is preserved, there is no premature optimisation performed to improve performance. NeonMikaWebserver: NeonMikaWebserver is easy to set up on your .Net MF device, as long as it has an ethernet port. It's espacially created for the Netduino, but I think it's no big work to port it to other plattforms. It provides the following functionalities: -XML Responses ... 
Just create an Instance of XMLResponse, give it an URI to listen for and fill a Hashtable with your response keys and values... Voila ;) -File Response ... No XML Response fitting your URL? Probably you want to watch a file sa...Next Action: The Next Action web application is a GTD task manager implemented with using MVC3, jQuery, Entity Framework. Orchard Comments Refactoring: Refactoring Orchard's comments moduleoutsource: outsourceScutex: Scutex, pronounced (sec-u-techs), is a 100% managed .Net Framework licensing platform for your applications. Scutex is a flexible licensing system allowing multiple licensing schemes allowing you the most control over how you licensing your products. Unlike any other licensing system on the market, Scutex provides 4 distinct licensing schemes, allowing you to protect your products at different levels using completely different licensing schemes, key types and protection systems. Using Scut...sigem2: Proyecto sigem2SSDL: An easy way to call and manage stored procedures in .NET.Trabalho C# - Datilografia: Software de datilografia produzido para o curso de extensão em C# da UFSCar Sorocaba.triliporra: Triliporra Euro 2012User Interface Toolkit: Ever dreamed about having one framework to write UI independently of your targetted client ? Now your dream has come true !vimrc: my vimrc fileVisuospatial Web Browsing using Kinect: The Kinect Web Browser project explores how users might use the Kinect sensor as a natural interface to browse the visual web. It's developed in C#.VNManager: VNManager or Version Number Manager is a simple Windows command line application which will update your Microsoft Visual Studio AssemblyInfo.cs files AssemblyVersion and AssemblyFileVersion attributes based on rules defined as comments on the same lines.WeddingAssistant: My site will be a ‘gift wish’ site for couples getting married. It will also be a place where people can come and look at ideas for hen and stag nights. Users log in and create a list of items they wish to receive for their wedding or look for ideas for hen and stag nights. The site will allow users to compile a list of items they have found on the internet from specific listed shops and order them in preference. The compiled list is then sent out to friends and relatives who can create a...WMS.NET: thewms is warehouse manager system! For the logistics industry! The team learning project! Built on the mvc3.0 and wcf ! Using jquery js framework as front endWorkflowRobot: Workflow based automation software.

    Read the article

  • SQLAuthority Guest Post – Lessons from Life and Work by Srini Chandra (Author of 3 Lives, in search of bliss)

    - by pinaldave
    Work and life are confusing terms when put together. How can one consider work outside of life? Work should be part of life - or are we considering ourselves dead when we are at work? I have often seen developers and DBAs complaining and confused about their job, work and life. Complaining is easy and everyone can do it. I have quite often heard the expression - "I do not have any other option." I requested Srini Chandra (renowned author of the Amazon best seller 3 Lives, in search of bliss (Amazon | Flipkart)) to write a guest post on this subject which developers can read and appreciate. Let us see Srini's thoughts in his own words.

Each of us who works in the technology industry carries an especially heavy burden nowadays. For, fate has placed in our hands an awesome power to shape our society and its consciousness. For that reason, we must pay more and more attention to issues of professionalism, social responsibility and ethics. Equally importantly, the responsibility lies in our hands to ensure that we view our work and career as an opportunity to enlighten and lift ourselves up.

Story: A Prisoner, 20 years and a Wheel

Many years ago, I heard this story from a professor when I was a student at Carnegie Mellon. A man was sentenced to 20 years in prison. During his time in prison, he was asked to turn a wheel every day. So, every day he turned the wheel. At times, when he was tired or puzzled and stopped turning the wheel, he would be flogged with a whip. The man did not know anything about the wheel other than that it was placed outside his jail somewhere. He wondered if the wheel crushed corn or if it ground wheat or something similar. He wondered if turning the wheel was useful to anyone. At the end of his jail term, he rushed out to see what the wheel was doing. To his disappointment, he found that the wheel was not connected to anything. All these years, he had been toiling for nothing. He gave a loud, frustrated shout and dropped dead.

How many of us are turning wheels, wondering what they are connected to? How many of us have unstated, uncaring attitudes towards our careers? How many of us view work as drudgery, as no more than a way to earn that next paycheck? How many of us have wondered about the spiritually uplifting aspect of work? Can a workforce that views work as merely a chore be ethical? Can it produce truly life-enhancing technology? Can it make positive contributions to the quality of life of a society? I think not.

Thanks to Pinal and you, his readers, for giving me this opportunity to share my thoughts in a series of guest posts. I'd like to present, over the next few weeks, a few ways in which we can tap into the liberating potential of work and make our lives better in the process. Now, please allow me to tell you another version of the story that the good professor shared with us in the classroom that day.

Story: A Prisoner, 20 years, a Wheel and the LIFE

A man was sentenced to 20 years in prison. During his time in prison, he was asked to turn a wheel every day. So, every day he turned the wheel. At first, his whole body and mind rebelled against his predicament. So, his limbs grew weary and his mind became numb and confused. And then, his self-awareness began to grow. He began to wonder how he came to be in the prison in the first place. He looked around and saw all his fellow prisoners also turning the wheel. His wife, his parents, his friends and his children - they were all in the prison too, and turning their own wheels! He began to wonder how this came about.
As he wondered more and more, he began to focus less on his physical drudgery and boredom. And he began to clearly see his inner spirit, which guided him in ways that allowed him to see the world with a universal view. His inner spirit guided him towards the source of eternal wisdom and happiness. He began to see the source of happiness in everything around him - his prison-bound relationships, even his jailers and his wheel. He became a source of light to those around him. His wheel jokes and humor infected them with joy and happiness. Finally, the day came for his release from jail. He walked calmly outside the jail and laughed aloud when he saw that the wheel was not connected to anything. He knelt down, kissed it and thanked it for the wisdom it had taught him.

Life is the prison. The wheel is your work. Both are sacred. Both have enormous powers to teach us wisdom and bring us happiness. Whether we allow them to do so is a choice we have to make. Over the next few weeks, I hope to share with you a few lessons that I have learnt at the wheel over the two decades of my career (prison). Thank you for reading, and do let me know what you think.

Reference: Srini Chandra (3 Lives, in search of bliss), Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, T SQL, Technology

    Read the article

  • What's New In 11.1.2.1 (Talleyrand SP1)

    - by russ.bishop
    This release is primarily about bug fixes and that's what we spent the most time on, but we also addressed a number of other things: 1. Performance improvements We've done a lot of work to improve the performance of page load and execution times. For example, the View Compare page is about half the size it was previously! We've also done a lot of work on the server to improve performance of queries, exports, action scripts, etc. We implemented some finer-grained locking so fewer operations will block other users while they are in progress. We made some optimizations to improve performance when you have a lot of network or database latency as well. Just a few examples: An Import that previously took 8 GB of memory and hours to complete now runs in about 30 minutes and never takes more than 1 GB of RAM. Searching by exact Node Name now completes within 2 seconds even for a hierarchy with millions of nodes. Another search that was taking 30 seconds to run now completes in less than 5 seconds. 2. Upgrade support This release supports automatic upgrade from previous releases, built right into the console. 3. Console Improvements The Console has been reorganized and made easier to use. It is also much more multi-threaded so it responds quicker without freezing up when you save changes or when it needs to get status. 4. Property Namespaces Properties now have a concept called a Namespace. This is tied into the Application Templates to prevent conflicts with duplicate property names. Right now, if you have an AccountType and you pull in the HFM template, it also has AccountType so you end up creating properties with decorations on the name like "Account Type (HFM)". This is no longer necessary. In addition, properties within a namespace must have unique labels but they can be duplicated across namespaces. So in the Property Grid when you click on the HFM category, you just see "AccountType". When you click on MyCategory, you see "AccountType", but they are different properties with different values. Within formulas, the names are still unique (eg: Custom.AccountType vs HFM.AccountType). I'll write more about this one later. 5. Single Sign On DRM now supports Single Sign-On via HSS. For example, if you are using Oracle's OAM as your SSO solution then you configure HSS to use OAM just like you would before. You also configure DRM to use HSS, again just like before. Then you configure OAM to protect the DRM web app, like you would any other website. However once you do those things, users are no longer prompted to enter their username/password. They simply get redirected to OAM if they don't already have a login token, otherwise they pick their application and sail right into DRM. You can also avoid having to pick an application (see the next item) 6. URL-based navigation You can now specify the application you want to log into via the URL. Combined with SSO and your Intranet, it becomes easy to provide links on our intranet portal that take users directly into a specific DRM application. We also support specifying the Version, Hierarchy, and Node. Again, this can be used on your internal portal, but the scenarios get even more interesting when you are using workflow like Oracle BPEL you can automatically generate links within emails that will take users directly to a specific node in the UI. 7. Job status and cancellation A lot of the jobs now report their status and support true cancellation. 
Action Scripts also report a progress complete percentage since the amount of work is known ahead of time. 8. Action Script Options Action scripts support Option declarations at the top of the file so a script can self-describe (when specified in the file, the corresponding item in the file is ignored). For example: Option|DetectDelimiter Option|UsePropertyNames|true This will tell DRM to automatically detect the delimiter (a pipe symbol in this case) and that all references to properties are by Name, not by Label. Note that when you load a script in the UI, if you use Labels we automatically try to match them up if they are unique. Any duplicates are indicated and you are presented with a choice to pick which property you actually referred to. This is somewhat similar to Version substitution, but tailored for properties. There are other more minor changes and like I said earlier a lot of bug fixes and performance improvements. Hopefully I will get a chance to dig into some of these things in future blog posts.
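As a small illustration of the Option header format described above, here is a hedged C# sketch of a hypothetical consumer (this is not DRM's own parser): a pre-processor could peel the leading Option declarations off a pipe-delimited action script before handing the remaining lines to whatever executes them.

    using System;
    using System.Collections.Generic;
    using System.IO;

    class ActionScriptOptions
    {
        // Reads "Option|Name" and "Option|Name|Value" declarations from the top of a
        // delimited action script. A flag with no value (e.g. DetectDelimiter) is stored as "true".
        public static Dictionary<string, string> ReadHeader(string path)
        {
            var options = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
            foreach (var line in File.ReadLines(path))
            {
                var parts = line.Split('|');
                if (parts.Length < 2 || !parts[0].Equals("Option", StringComparison.OrdinalIgnoreCase))
                    break;                      // option declarations only appear at the top of the file
                options[parts[1]] = parts.Length > 2 ? parts[2] : "true";
            }
            return options;
        }

        static void Main()
        {
            // Hypothetical script file containing, for example:
            //   Option|DetectDelimiter
            //   Option|UsePropertyNames|true
            var opts = ReadHeader("sample-actionscript.txt");
            foreach (var kv in opts)
                Console.WriteLine("{0} = {1}", kv.Key, kv.Value);
        }
    }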

    Read the article

  • Design review for application facing memory issues

    - by Mr Moose
    I apologise in advance for the length of this post, but I want to paint an accurate picture of the problems my app is facing and then pose some questions below; I am trying to address some self-inflicted design pain that is now leading to my application crashing due to out-of-memory errors.

An abridged description of the problem domain is as follows:
- The application takes in a "dataset" that consists of numerous text files containing related data.
- An individual text file within the dataset usually contains approx 20 "headers" that contain metadata about the data it contains. It also contains a large tab-delimited section containing data that is related to data in one of the other text files contained within the dataset. The number of columns per file is very variable, from 2 to 256+ columns.

The original application was written to allow users to load a dataset and map certain columns of each of the files, which basically indicates key information on the files to show how they are related, as well as identifies a few expected column names. Once this is done, a validation process takes place to enforce various rules and ensure that all the relationships between the files are valid. Once that is done, the data is imported into a SQL Server database. The database design is an EAV (Entity-Attribute-Value) model used to cater for the variable columns per file. I know EAV has its detractors, but in this case, I feel it was a reasonable choice given the disparate data and variable number of columns submitted in each dataset.

The memory problem

Given that the combined size of all text files was at most about 5 megs, and in an effort to reduce the database transaction time, it was decided to read ALL the data from the files into memory and then perform the following:
- perform all the validation whilst the data was in memory
- relate it using an object model
- start a DB transaction and write the key columns row by row, noting the Id of the written row (all tables in the database utilise identity columns); the Id of the newly written row is then applied to all related data
- once all related data had been updated with the key information to which it relates, write these records using SqlBulkCopy. Due to our EAV model, we essentially have x columns by y rows to write, where x can be 256+ and rows often run into the tens of thousands
- once all the data is written without error (this can take several minutes for large datasets), commit the transaction.

The problem now comes from the fact that we are receiving individual files containing over 30 megs of data. In a dataset, we can receive any number of files. We've started seeing datasets of around 100 megs coming in, and I expect it is only going to get bigger from here on in. With files of this size, data can't even be read into memory without the app falling over, let alone be validated and imported. I anticipate having to modify large chunks of the code to allow validation to occur by parsing files line by line, and I am not exactly decided on how to handle the import and transactions.

Potential improvements
- I've wondered about using GUIDs to relate the data rather than relying on identity fields. This would allow data to be related prior to writing to the database. It would certainly increase the storage required though, especially in an EAV design. Do you think this is a reasonable thing to try, or do I simply persist with identity fields (natural keys can't be trusted to be unique across all submitters)?
- Use staging tables to get data into the database, and only perform the transaction to copy data from the staging area to the actual destination tables.

Questions
- For systems like this that import large quantities of data, how do you go about keeping transactions small? I've kept them as small as possible in the current design, but they are still active for several minutes and write hundreds of thousands of records in one transaction. Is there a better solution?
- The tab-delimited data section is read into a DataTable to be viewed in a grid. I don't need the full functionality of a DataTable, so I suspect it is overkill. Is there any way to turn off various features of DataTables to make them more lightweight?
- Are there any other obvious things you would do in this situation to minimise the memory footprint of the application described above?

Thanks for your kind attention.
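One way to attack the line-by-line parsing and the staging-table idea together is to stream each file and push rows to a staging table in fixed-size batches, so memory use is bounded by the batch size rather than the file size. This is only a rough sketch under assumed names - dbo.StagingValues and its columns are invented, the metadata headers at the top of the real files are skipped over crudely, and real validation and error handling are omitted:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.IO;

    class StreamingImporter
    {
        // Parses one tab-delimited file line by line and bulk-copies the values
        // into a staging table in batches of 10,000 rows.
        public static void Import(string path, string connectionString)
        {
            var batch = new DataTable();
            batch.Columns.Add("RowNumber", typeof(int));
            batch.Columns.Add("ColumnName", typeof(string));
            batch.Columns.Add("Value", typeof(string));

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                using (var bulk = new SqlBulkCopy(connection) { DestinationTableName = "dbo.StagingValues" })
                using (var reader = new StreamReader(path))
                {
                    string[] headers = null;
                    string line;
                    int rowNumber = 0;

                    while ((line = reader.ReadLine()) != null)
                    {
                        var fields = line.Split('\t');
                        if (headers == null) { headers = fields; continue; }  // first data line holds column names

                        rowNumber++;
                        // Per-line validation would go here; invalid lines can be logged and skipped.
                        for (int i = 0; i < fields.Length && i < headers.Length; i++)
                            batch.Rows.Add(rowNumber, headers[i], fields[i]);

                        if (batch.Rows.Count >= 10000)   // flush the batch and release the memory
                        {
                            bulk.WriteToServer(batch);
                            batch.Clear();
                        }
                    }

                    if (batch.Rows.Count > 0)
                        bulk.WriteToServer(batch);
                }
            }
        }
    }

With the raw values parked in staging tables, the final move into the EAV destination tables can be done in a short set-based transaction on the server, which also addresses the question about keeping transactions small.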

    Read the article

  • Master Data Management Implementation Styles

    - by david.butler(at)oracle.com
    In any Master Data Management solution deployment, one of the key decisions to be made is the choice of the MDM architecture. Gartner and other analysts describe several different Hub deployment styles, which must be supported by a best-of-breed MDM solution in order to guarantee the success of the deployment project.
    Registry Style: In a Registry Style MDM Hub, the various source systems publish their data and a subscribing Hub stores only the source system IDs, the foreign keys (record IDs on source systems) and the key data values needed for matching. The Hub runs the cleansing and matching algorithms and assigns unique global identifiers to the matched records, but does not send any data back to the source systems. The Registry Style MDM Hub uses data federation capabilities to build the "virtual" golden view of the master entity from the connected systems.
    Consolidation Style: The Consolidation Style MDM Hub has a physically instantiated "golden" record stored in the central Hub. The authoring of the data remains distributed across the spoke systems, and the master data can be updated based on events but is not guaranteed to be up to date. The master data in this case is usually not used for transactions, but rather supports reporting; however, it can also be used for reference operationally.
    Coexistence Style: The Coexistence Style MDM Hub involves master data that's authored and stored in numerous spoke systems, but includes a physically instantiated golden record in the central Hub and harmonized master data across the application portfolio. The golden record is constructed in the same manner as in the Consolidation Style, and, in the operational world, Consolidation Style MDM Hubs often evolve into the Coexistence Style. The key difference is that in this architectural style the master data stored in the central MDM system is selectively published out to the subscribing spoke systems.
    Transaction Style: In this architecture, the Hub stores, enhances and maintains all the relevant (master) data attributes. It becomes the authoritative source of truth and publishes this valuable information back to the respective source systems. The Hub publishes and writes back the various data elements to the source systems after the linking, cleansing, matching and enriching algorithms have done their work. Upstream, transactional applications can read master data from the MDM Hub, and, potentially, all spoke systems subscribe to updates published from the central system in a form of harmonization. The Hub needs to support merging of master records. Security and visibility policies at the data attribute level need to be supported by the Transaction Style hub as well.
    Adaptive Transaction Style: This is similar to the Transaction Style, but additionally provides the capability to respond to diverse information and process requests across the enterprise. This style emerged most recently to address the limitations of the above approaches. With the Adaptive Transaction Style, the Hub is built as a platform for consolidating data from disparate third-party and internal sources and for serving unified master entity views to operational applications, analytical systems or both. This approach delivers a real-time Hub that has a reliable, persistent foundation of master reference and relationship data, along with all the history and lineage of data changes needed for audit and compliance tracking. On top of this persistent master data foundation, the Hub can dynamically aggregate transaction data on demand from different source systems to deliver the unified golden view to downstream systems. Data can also be accessed through batch interfaces, published to a message bus or served through a real-time services layer. New data sources can be readily added in this approach by extending the data model and by configuring the new source mappings and the survivorship rules, meaning that all legacy data hubs can be leveraged to contribute their records/rules into the new transaction hub. Finally, through rich user interfaces for data stewardship, it allows exception handling by business analysts to keep it current with business rules/practices while maintaining the reliability of best-of-breed master records.
    Confederation Style: In this architectural style, several Hubs are maintained at departmental and/or agency and/or territorial level, and each of them is connected to the other Hubs either directly or via a central Super-Hub. Each domain-level Hub can be implemented using any of the previously described styles, but normally the central Super-Hub is a Registry Style one. This is particularly important for Public Sector organizations, where most of the time it is practically or legally impossible to store all the relevant constituent information from all departments in a single central hub.
    Oracle MDM Solutions can be deployed according to any of the above MDM architectural styles, and have been specifically designed to fully support the Transaction and Adaptive Transaction styles. Oracle MDM Solutions provide strong data federation and integration capabilities, which are key to enabling the use of the Confederation Style as a possible architectural approach. Don't lock yourself into a solution that cannot evolve with your needs. With Oracle's support for any type of deployment architecture, its ability to leverage the outstanding capabilities of the Oracle technology stack, and its open interfaces for non-Oracle technology stacks, Oracle MDM Solutions provide a low TCO and a quick ROI by enabling a phased implementation strategy.
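    To make the Registry Style concrete, here is a small, purely illustrative C# sketch of the kind of cross-reference such a hub maintains: source-system pointers, a match key and a hub-assigned global identifier, with no master attributes copied into the hub. The types and the exact-match logic are hypothetical placeholders and are not part of any Oracle MDM product or API.

        // Illustrative only: the registry keeps pointers and match keys, not master data.
        using System;
        using System.Collections.Generic;

        public sealed class RegistryEntry
        {
            public string SourceSystem { get; set; }    // e.g. "CRM", "ERP" (hypothetical)
            public string SourceRecordId { get; set; }  // the record's ID inside that system
            public string MatchKey { get; set; }        // normalized value used for matching
            public Guid GlobalId { get; set; }          // identifier assigned by the hub
        }

        public sealed class RegistryHub
        {
            private readonly Dictionary<string, Guid> _globalIdsByMatchKey =
                new Dictionary<string, Guid>();
            private readonly List<RegistryEntry> _entries = new List<RegistryEntry>();

            // Register a source record; the exact-key lookup stands in for the real
            // cleansing and matching algorithms described in the article.
            public Guid Register(string sourceSystem, string sourceRecordId, string matchKey)
            {
                Guid globalId;
                if (!_globalIdsByMatchKey.TryGetValue(matchKey, out globalId))
                {
                    globalId = Guid.NewGuid();
                    _globalIdsByMatchKey[matchKey] = globalId;
                }

                _entries.Add(new RegistryEntry
                {
                    SourceSystem = sourceSystem,
                    SourceRecordId = sourceRecordId,
                    MatchKey = matchKey,
                    GlobalId = globalId
                });
                return globalId;
            }

            // The "virtual" golden view is simply the set of source pointers sharing a global ID.
            public IEnumerable<RegistryEntry> GoldenView(Guid globalId)
            {
                foreach (var entry in _entries)
                    if (entry.GlobalId == globalId)
                        yield return entry;
            }
        }

    The other styles differ mainly in where the physical golden record lives and in which direction data flows, but a cross-reference of this shape remains the backbone in every case.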

    Read the article

  • CodePlex Daily Summary for Saturday, November 05, 2011

    CodePlex Daily Summary for Saturday, November 05, 2011
    Popular Releases
    - AcDown (Anime & Comic Downloader): AcDown v3.6.1: a downloader for Acfun, Bilibili, Tucao.cc, SF and other sites, written in C# on .NET Framework 2.0 and supporting 32- and 64-bit Windows XP/Vista/7. (The full release notes were written in Chinese and were mis-encoded in the source; they are not recoverable here.)
    - Track Folder Changes: Track Folder Changes 1.1: Fixed exception when right-clicking the root node.
    - Kinect Toolbox: Kinect Toolbox v1.1.0.2: This version adds support for the Kinect for Windows SDK beta 2.
    - MapWindow 4: MapWindow GIS v4.8.6 - Final release - 32Bit: This is the final release of MapWindow v4.8, with 4.8.6 as version number. This version has been thoroughly tested. If you do get an exception, send the exception to us and don't forget to include your e-mail address. Use the forums at http://www.mapwindow.org/phorum/ for questions. Please consider donating a small portion of the money you have saved by having free GIS tools: http://www.mapwindow.org/pages/donate.php What's new in 4.8.6 (final release): a few minor issues have been fixed...
    - Kinect Mouse Cursor: Kinect Mouse Cursor 1.1: Updated for Kinect for Windows SDK v1.0 Beta 2!
    - Coding4Fun Kinect Toolkit: Coding4Fun Kinect Toolkit 1.1: Updated for Kinect for Windows SDK v1.0 Beta 2!
    - Async Executor: 1.0: Source code of the AsyncExecutor.
    - Media Companion: MC 3.421b Weekly: Ensure .NET 4.0 Full Framework is installed (available from http://www.microsoft.com/download/en/details.aspx?id=17718). Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b (details here). TV show resolutions: fix to show the season-specials.tbn when selecting an episode from season 00 (before, MC would try & load season00.tbn); fix for issue #197 - new show added by 'Manually Add Path' not being picked up; also made non-visible the same thing in Root Folders...
    - FlagConsole: 1.0.1: Bugfixes: fixed a bug which caused the label not to draw a word if it had the same length as the label's length.
    - Nearforums - ASP.NET MVC forum engine: Nearforums v7.0: Version 7.0 of Nearforums, the ASP.NET MVC forum engine, containing new features: UI: flexible layout to handle both list and table-like template layouts; theming - visual choice of themes: deliver some templates on installation, export/import functionality, preview; allow site owners to choose the default list sort order for the forums; forum latest activity. Visit the project Roadmap for more details. Webdeploy packages sha1 checksum: e6bb913e591543ab292a753d1a16cdb779488c10
    - All-In-One Code Framework (Chinese version): 2011-11-02 release: http://download.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1codechs&DownloadId=216140 - the November update adds 20 Microsoft OneCode Samples: 6 Program Language Samples, 2 Windows Base Samples, 2 GDI+ Samples, 4 Internet Explorer Samples and 6 ASP.NET Samples. A further download is available at http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165. Program Language samples include CSImageFullScreenSlideShow, VBImageFullScreenSlideShow, CSDynamicallyBuildLambdaExpressionWithFie... (The remaining notes were written in Chinese and were mis-encoded in the source.)
    - Python Tools for Visual Studio: 1.1 Alpha: We're pleased to announce the release of Python Tools for Visual Studio 1.1 Alpha. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python programming language. This release includes new core IDE features, a couple of new sample libraries for interacting with Kinect and Excel, and many bug fixes for issues reported since the release of 1.0. For the core IDE features we've added many new features which improve the basic edit...
    - BExplorer (Better Explorer): Better Explorer 2.0.0.631 Alpha: Changelog: Added: some new functions in the ribbon; added: possibility to choose displayed columns; added: basic search; fixed: some bugs after navigation; fixed: attempt to fix slow navigation and slow start. Known issues: BreadcrumbBar fails in some situations; basic search does not work quite well in some situations. Please, if anyone finds bugs, be kind and report them at the Issue Tracker! Thanks!
    - DotNetNuke® Community Edition: 05.06.04: Major highlights: fixed issue with upgrades on systems that had upgraded the Telerik library to 6.0.0; fixed issue with Razor Host upgrade to 5.6.3; the logic for module administration checks contained incorrect logic in one place, opening the possibility of a user with edit permissions gaining access to functionality they should not have through a particularly crafted URL. Security fixes: browsers support the ability to remember common strings such as usernames/addresses etc. Code was adde...
    - Terminals: Version 2.0 - Beta 3 Release: Beta 3 refresh. Don't forget to back up your config files BEFORE upgrading! The team has finally put the nail into the official release date for version 2.0. As bugs are winding down on the 2.0 Roadmap we decided to push out another build - the first 2.0 Beta build. Please take time to use and abuse this release. We left logging in place, and this is a debug build, so be sure to submit your logs on each bug reported, and please do report all bugs! Check the source code page on the site, th...
    - iTuner - The iTunes Companion: iTuner 1.4.4322: Added German (unverified, apologies if incorrect); properly source invariant resources with correct resIDs; replaced obsolete lyric providers with working providers; fix Pseudolater to correctly morph every third char; fix null reference in CatalogBase.
    - BoxWorld: BoxWorld_2011.10.30: BoxWorld - 8.0.1110.30. This is the initial release of BoxWorld. I'd recommend downloading the installer as it contains the compiled code and everything all nicely contained. By default, you end up with this directory structure: C:\Program Files\ViperWorks\BoxWorld, C:\Program Files\ViperWorks\BoxWorld\Data, C:\Program Files\ViperWorks\BoxWorld\Interface, C:\Program Files\ViperWorks\BoxWorld\Source. In the root you have the compiled EXE files, one for the main release, one for the LITE release ...
    - VidCoder: 1.2.1: Fixed a couple of regressions: video encoder was blank in the queue, and crashes with the High Profile preset when opening the Settings window. Fixed a problem with auto-update introduced in 1.2.0. If you have 1.2.0 you will need to update manually to get this.
    - AssaultCube Reloaded: Release 2.3: THE RELEASE YOU'VE ALL BEEN WAITING FOR! IT CAN NOW BE CONSIDERED STABLE. Linux has Debian 64-bit precompiled binaries, but you can compile your own as the package also contains the source. If you are using Mac or other operating systems, download the Linux package. The server pack is ready for both Windows and Linux, but you might need to compile your own for Linux (source included). If you are using Windows and require the source code, download the source package!
    - A Microblog API (SINA weibo.com open API in C#): AMicroblogAPI v1.0: AMicroBlogAPI is a C# implementation, a strong-typed deep encapsulation, an easy-to-use .NET wrapper of the SINA microblog API v1.0. App developers no longer need to parse various HTTP responses - all responses are parsed into strong types; no longer need to construct the request query strings - simply give the values of parameters; no longer need to implement OAuth - a single call logs the user on. In this release (AMicroblogAPI v1.0), all basic data APIs are implemented. For ...
    New Projects
    - :: Projeto Social Portal caridArte: Creation of the CaridArte portal.
    - Async Executor: The mantra "Never block the UI thread!" results in cumbersome helpers like the BackgroundWorker with worker and callback delegates. There are other asynchronous patterns, which are actually also all a bit cumbersome. The AsyncExecutor offers an alternative: it allows switching between UI and worker thread in a single method, i.e. no cumbersome callbacks etc. This works similarly to the async keyword available with .NET 4.5.
    - Cake@Home: Cake@Home is an order-management tool for a fast-food service. Cake@Home delivers chocolate cakes on rue Colbert in Lille.
    - Chainmail: avtex r&d
    - Courier Contrib: Community-powered project adding new functionality, tools and extensions for the Umbraco deployment tool, Courier 2.
    - eSyllabus: eSyllabus is designed to be a server-based syllabus distribution, assignment tracking and course scheduling tool. Currently it is undergoing initial development, including database IO design, and the expected outcome of this project is a functional demo of eSyllabus that operates on a local SQL database. We are aiming at making syllabus distribution as easy as possible and letting students track their classes, assignments and upcoming quizzes/exams in real time. Features like announ...
    - Gestion de Proyectos: Project management.
    - Lync Christmas Tree Lights: An open-source Lync Christmas Tree Lights project using the Lync 2010 Client API, a FEZ Mini, and Adafruit 12mm Diffused Digital RGB LED Pixels. Currently an Alpha release with only the first 5 lights configured to work.
    - NopCommerce Multi Store Support: NopCommerce multi-store support.
    - nteditor: A custom user control to read and edit word-processing documents like DOC, RTF, etc. Attempts are being made to read and write, or maybe simply convert, DOCX and other documents, but the current focus is on Word documents, especially RTF, which has a different version for different software.
    - ReflectiveOrm: Simple ORM application.
    - SharePoint Audit (2007 & 2010): PowerShell audit script to create an XML blueprint of any SharePoint 2007 or 2010 farm.
    - Taak-ondernemersaward-TI3A: Project created for an award for businesses in the region of Aalst, Belgium. Developed in C#.
    - TheIncident2010: TheIncident2010 is a tech-heavy Halloween event in Minnesota. The code hosted here is primarily XAML. http://www.TheIncident2010.com
    - TrialsTower: TrialsTower
    - Umbraco 5 Contrib: Umbraco 5 is a community-driven, MIT-licensed free CMS and application framework based on .NET 4. This contrib project is for anyone wishing to develop plugins, Hive providers, or future features, or just to take a look at samples to get started with add-on development.
    - VRE Sharepoint Entity Manager: Manage SharePoint entities in a single list, using search to locate data.
    - Yahoo! Messenger (YMSG) for .NET: Library supporting the YMSG protocol. It's based on libyahoo2.
    - Your Last About Dialog: Are you tired of recreating the same about dialog logic and content for each Windows Phone app every time? "Your Last About Dialog" is a robust and generic, highly configurable implementation you can easily pull into your own app and set up for your needs. It is able to pull most data from your application automatically, supports fetching both text and XAML content from remote sources (with fallback local content), and allows easy localization of the complete dialog content to all of the lang...

    Read the article

  • MSCC: Career & IT Fair 2014

    Already a couple of weeks ago, I was approached by Ibraahim and Yunus to see whether it would be interesting to participate in the 1st Career & IT Fair organised by the UoM Computer Club. Luckily we had met at the Global Windows Azure Bootcamp, but I wasn't too sure whether it would be possible for me to attend after all. The main reasons were work demand and the fact that the Mauritius Software Craftsmanship Community currently has no advertising material at all. Here's the brief statement of the event: "The UOM Students' Computer Club in collaboration with the UOM Students' Union and UOM CSE Department is organising a 'Career & IT Fair' on the 23rd and 24th April 2014. This event has for objective to provide a platform to tertiary students, secondary students as well as vocational students, the opportunity to meet job recruiters." Luckily, I was reminded that the 23rd is a Wednesday, so I decided it might be interesting to move our weekly Code & Coffee session to the university and thereby be able to attend the career fair. As it turned out, it was a great choice, and thankfully Pritvi, Nadim as well as Ishwon volunteered to be around at the "community booth". The computer club kindly gave us - the MSCC and the LUGM - one of their spaces in the lobby area of the Paul Octave Wiéhé Auditorium.
    My impression about the event
    Very well and professionally organised. Seriously, the lads over at the UoM Computer Club did a great job in organising their two-day event, and I felt very comfortable at all times. Actually, it was kind of amusing to watch some of the members constantly running around and checking everything, even though the whole process went smoothly and easily. There were a couple of interesting pieces of information and announcements during the opening ceremony. For example, the Computer Science faculty is a very young one, founded only back in 1988 by just 4 staff members at the time. Now, after 25 years, they have achieved quite a lot, and there are currently 1,000+ active students attending the numerous lectures and courses. But there is no room to rest on previous achievements, and I was kind of surprised to hear that there are plans to extend the campus and to offer new lectures in the fields of nanotechnology, big data handling, and - fingers crossed - the introduction and establishment of a space control centre. Mauritius is already part of the Square Kilometre Array (SKA) and hopefully there will be more activities in that direction in the near future.
    Community - Awareness and collaboration
    As stated earlier, I could only spend one morning, but luckily other members of the MSCC and the LUGM stayed for the whole two days and provided answers to anyone interested. As for me, I took the opportunity to get in touch with the other companies in the lobby - mainly to create some awareness about our IT communities, but also to see whether there might be options for future engagement in common activities. So far, I was able to speak to representatives of the following companies:
    - ACCA Mauritius
    - Business at Work
    - Infomil
    - LinkByNet
    - Microsoft Indian Ocean Islands & French Pacific
    - Spherinity Training Institute
    - Spoon Consulting Ltd.
    - State Informatics Ltd.
    Unfortunately, I only had a quick chat with an HR representative of LinkByNet, but I fully count on MSCC members like Nitin or LUGM member Ronny to spread our intentions over there. So far, all of the representatives were really interested in our concepts and activities, and I'm currently preparing an introduction flyer for the MSCC that I'm going to send out to all those contacts via mail. It would be great to have more craftsmen as well as professional support on board.
    Some pictures from the event
    - MSCC: Fantastic outlook for the near future. Announcements were made on big data, nanotechnology, and a space control centre in Mauritius. Interesting!
    - MSCC: The lobby area was crammed with students. Great way to exchange and network. Good luck to all candidates!
    Passing the relay baton to...
    I recommend you continue reading about the first Career & IT Fair on Ish's blog. He has a great summary and more details on those two days of IT activities than I have. Thanks, and feel free to leave a comment (or two)...

    Read the article

  • Configure IPv6 on your Linux system (Ubuntu)

    After the presentation on IPv6 at the first event of the Emtel Knowledge Series and some recent discussion on social media networks with other geeks and Linux-interested IT people here in Mauritius, I thought that I should (finally) give it a try and tweak my local network infrastructure. Honestly, I have been too busy with contractual project work and it never really occurred to me to set up IPv6 in my LAN. Well, the following paragraphs are going to shed some light on those aspects of modern computer and network technology. This is the first article in a series on IPv6 configuration:
    - Configure IPv6 on your Linux system
    - DHCPv6: Provide IPv6 information in your local network
    - Enabling DNS for IPv6 infrastructure
    - Accessing your web server via IPv6
    Piece of advice: This is based on my findings on the internet while reading other people's helpful articles and going through a couple of man pages on my local system.
    Let's embrace IPv6
    The basic configuration on Linux is actually very simple, as the kernel, operating system, and user-space programs support the protocol natively. If your system is ready to go for IP (aka IPv4), then you are good to go for anything else. At least, I didn't have to install any additional packages on my system(s). We are going to assign a static IPv6 address to the system. Hence, we have to modify the definition of interfaces and check whether we have an inet6 entry specified. Open your favourite text editor and check the following entries (it should be at least similar to this):
    $ sudo nano /etc/network/interfaces
    auto eth0
    # IPv4 configuration
    iface eth0 inet static
      address 192.168.1.2
      network 192.168.1.0
      netmask 255.255.255.0
      broadcast 192.168.1.255
    # IPv6 configuration
    iface eth0 inet6 static
      pre-up modprobe ipv6
      address 2001:db8:bad:a55::2
      netmask 64
    Of course, you might have to adjust your interface device (eth0), or you might be interested in having multiple directives for additional devices (eth1, eth2, etc.). The auto instruction ensures that your device is enabled and configured during the boot phase. The use of the pre-up directive depends on your kernel configuration, but in most scenarios this might be an optional line. Anyway, it doesn't hurt to have it enabled after all - just to be on the safe side. Next, either restart your network subsystem like so:
    $ sudo service networking restart
    Or you might prefer to do it manually with identical parameters, like so:
    $ sudo ifconfig eth0 inet6 add 2001:db8:bad:a55::2/64
    In case you're logged in remotely to your PC (i.e. via ssh), it is highly advised to opt for the second choice and add the address manually. You can check your configuration afterwards with one of the following commands (depending on what is installed):
    $ sudo ifconfig eth0
    eth0      Link encap:Ethernet  HWaddr 00:21:5a:50:d7:94
              inet addr:192.168.160.2  Bcast:192.168.160.255  Mask:255.255.255.0
              inet6 addr: fe80::221:5aff:fe50:d794/64 Scope:Link
              inet6 addr: 2001:db8:bad:a55::2/64 Scope:Global
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
    $ sudo ip -6 address show eth0
    3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
        inet6 2001:db8:bad:a55::2/64 scope global
           valid_lft forever preferred_lft forever
        inet6 fe80::221:5aff:fe50:d794/64 scope link
           valid_lft forever preferred_lft forever
    In both cases, the output confirms that our network device has been assigned a valid IPv6 address. That's it in general for your setup on one system.
But of course, you might be interested to enable more services for IPv6, especially if you're already running a couple of them in your IP network. More details are available on the official Ubuntu Wiki. Continue to configure your network to provide IPv6 address information automatically in your local infrastructure.

    Read the article

  • Windows Azure Recipe: Software as a Service (SaaS)

    - by Clint Edmonson
    The cloud was tailor-built for aspiring companies to create innovative internet-based applications and solutions. Whether you're a garage startup with very little capital or a Fortune 1000 company, the ability to quickly set up, deliver, and iterate on new products is key to capturing market and mind share. And if you can capture that share and go viral, having resiliency and infinite scale at your fingertips is great peace of mind.
    Drivers
    - Cost avoidance
    - Time to market
    - Scalability
    Solution
    Here's a sketch of how a basic Software as a Service solution might be built out:
    Ingredients
    - Web Role – this hosts the core web application. Each web role will host an instance of the software, and as the user base grows, additional roles can be spun up to meet demand.
    - Access Control – this service is essential to managing user identity. It's backed by a full-blown implementation of Active Directory and allows the definition and management of users, groups, and roles. A pre-built ASP.NET membership provider is included in the training kit to leverage this capability, but it's also flexible enough to be combined with external identity providers including Windows LiveID, Google, Yahoo!, and Facebook. The provider model provides extensibility to hook into other industry-specific identity providers as well.
    - Databases – nearly every modern SaaS application is backed by a relational database for its core operational data. If the solution is sold to organizations, there's a good chance multi-tenancy will be needed. An emerging best practice for SaaS applications is to stand up separate SQL Azure database instances for each tenant's proprietary data to ensure isolation from other tenants.
    - Worker Role – this is the best place to handle autonomous background processing such as data aggregation, billing through external services, and other specialized tasks that can be performed asynchronously. Placing these tasks in a worker role frees the web roles to focus completely on user interaction and data input and provides finer-grained control over the system's scalability and throughput.
    - Caching (optional) – as a web site's traffic grows, caching can be leveraged to keep frequently used read-only, user-specific, and application resource data in a high-speed distributed in-memory cache for faster response times and ultimately higher scalability without spinning up more web and worker roles. It includes a token-based security model that works alongside the Access Control service.
    - Blobs (optional) – depending on the nature of the software, users may be creating or uploading large volumes of heterogeneous data such as documents or rich media. Blob storage provides a scalable, resilient way to store terabytes of user data. The storage facilities can also integrate with the Access Control service to ensure users' data is delivered securely.
    Training & Examples
    These links point to online Windows Azure training labs and examples where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.)
    - Windows Azure (16 labs): Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services that can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.
    - SQL Azure (7 labs): Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
    - Windows Azure Services (9 labs): As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.
    - Developing Applications for the Cloud, 2nd Edition (eBook): This book demonstrates how you can create from scratch a multi-tenant, Software as a Service (SaaS) application to run in the cloud using the latest versions of the Windows Azure Platform and tools. The book is intended for any architect, developer, or information technology (IT) professional who designs, builds, or operates applications and services that run on or interact with the cloud.
    - Fabrikam Shipping (SaaS reference application): This is a full end-to-end sample scenario which demonstrates how to use the Windows Azure platform for exposing an application as a service. We developed this demo just as you would: we had an existing on-premises sample, Fabrikam Shipping, and we wanted to see what it would take to transform it into a full subscription-based solution. The demo you find here is the result of that investigation.
    See my Windows Azure Resource Guide for more guidance on how to get started, including more links to web portals, training kits, samples, and blogs related to Windows Azure.
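    The "separate SQL Azure database instances for each tenant" practice mentioned under Databases usually comes down to resolving the tenant from the incoming request and mapping it to that tenant's own connection string. The following C# sketch is hypothetical (the tenant names, server name and subdomain-equals-tenant rule are placeholders, not part of the training kit or any Azure SDK), but it shows the shape of the idea.

        // Hypothetical sketch of per-tenant database isolation: resolve the tenant from
        // the request host name and hand back that tenant's own connection string.
        using System;
        using System.Collections.Generic;

        public sealed class TenantConnectionResolver
        {
            // In a real system this map would live in configuration or a catalog database.
            private readonly Dictionary<string, string> _connectionStringsByTenant =
                new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
            {
                { "contoso",  "Server=tcp:myserver.database.windows.net;Database=saas_contoso;..." },
                { "fabrikam", "Server=tcp:myserver.database.windows.net;Database=saas_fabrikam;..." }
            };

            // Resolve the tenant from a host such as "contoso.myapp.cloudapp.net".
            public string GetTenantFromHost(string host)
            {
                int firstDot = host.IndexOf('.');
                return firstDot > 0 ? host.Substring(0, firstDot) : host;
            }

            // Return the isolated database for that tenant, or fail loudly if it is unknown.
            public string GetConnectionString(string tenant)
            {
                string connectionString;
                if (!_connectionStringsByTenant.TryGetValue(tenant, out connectionString))
                    throw new InvalidOperationException("Unknown tenant: " + tenant);
                return connectionString;
            }
        }

    A web role would typically call GetTenantFromHost once per request (for example with the request's Url.Host) and open its SqlConnection with the resolved string, keeping each tenant's data physically isolated while the application code stays tenant-agnostic.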

    Read the article

  • Top ten things that don't make sense in The Walking Dead

    - by iamjames
    For those of you that don't know, The Walking Dead is a popular American TV show on AMC about a group of people trying to survive in a zombie-filled world. Here's the top ten (okay, eleven) things that don't make sense on the show (and have never been explained):
    1)  They never visit stores. No Walmarts, Kmarts, Targets, shopping malls, pawn shops, gas stations, etc. You'd think that would be the first place you'd visit for supplies, but they never have. Not once. There was a tiny corner store they visited in a small town, and while many products were already gone they did find several useful items.
    2)  They never raid houses. Why not? One would imagine that they would want to search houses for useful items, but they don't.
    3)  They don't use 2-way radios. Modern 2-way radios have a 36-mile range. That's probably the best possible range, but even if the real-world range is only 10% of that, 3.6 miles, that's still more than enough for most situations: for the occasional "hey, zombies attacking, can you give me a hand?" or "there's zombies walking by, stay inside until they leave" or "remember to pick up milk at the store, love mom". And yes, they would need batteries or recharging, but they have been using gas-powered generators on the show and I'm sure a car charger would work.
    4)  They use gas-guzzling vehicles. Every vehicle they have is from the 80s or 90s, except for the new Kia SUV there for product placement. Why? They should all be driving new small SUVs or hybrids. Visit a dealership and steal more fuel-efficient vehicles, because while the Walmarts might be empty from people raiding them for supplies, I'm sure most people weren't thinking "Gee, I should go car shopping" when the infection hit.
    5)  They drive a motorcycle. Seriously? Let's find the least protective vehicle and drive that. And while motorcycles get reasonable gas mileage, 5 people in an SUV get better gas mileage per person than 5 people all driving motorcycles, so it doesn't make economic sense either.
    6)  They drive loud vehicles. The motorcycle used is commonly referred to as a chopper and is about as loud as a motorcycle can get. The zombies are attracted to loud noise, so wouldn't it make more sense to drive vehicles that make less sound? Because as soon as you stop the bike and get off, you're surrounded by zombies that heard you coming. And it's not just the bike; the ~1980s Chevy SUV in the show is also very loud.
    7)  They never run out of food. Seems like that would be an almost daily struggle, keeping enough food available for about a dozen people, yet I've never seen them visit a grocery store or local convenience store to stock up.
    8)  They don't carry swords, machetes, clubs, etc. Let's face it, biting is not a very effective means of attack. It's good for animals because they have fangs and little else, but humans have been finding better ways of killing each other since forever. So why doesn't everyone on the show carry a sword or machete or at least a baseball bat? Anything is better than wasting valuable bullets all the time. Sure, a dozen zombies approaching? Shoot them. One zombie approaching? Save the bullet, cut off its head.
    9)  They do not wear protective clothing. Human teeth are not exactly the sharpest teeth in the animal kingdom. The leather shoes your dog ripped to shreds within minutes would probably take you days to bite through. So why do they walk around half-naked? Yes, I know it's hot in Atlanta, but you'd think they'd at least have some tough leather coats or something for protection. Maybe put a few small vent holes in the fabric if it's really hot. Or better: make your own chainmail. Chainmail was used for thousands of years for protection from swords and is still used by scuba divers for protection from sharks. If swords and sharks can't puncture it, human teeth don't stand a chance.
    10)  They don't build barricades or dig trenches around properties. In Season 2 they stayed at a farm in the middle of nowhere. While being far away from people is a great way to stay far away from zombies, it would still make sense to build some sort of defenses. Hordes of zombies would knock down almost any fence, but what about a trench or moat? Maybe something not too wide, so it can be jumped over easily but a zombie would fall into it, because I haven't seen too many jumping zombies on the show.
    11)  They don't live in a mall or tall office building. A mall would be perfect. They have large security gates designed to keep even hundreds of people from breaking in, and they offer lots of supplies and food. They're usually hundreds of thousands of square feet and fully enclosed; one could probably live their entire life happily in a mall. A tall office building with an on-site cafeteria would be another good choice. They also usually offer good security, office furniture could be pushed out of the windows to crush approaching zombies, and the cafeteria is usually stocked to provide food for hundreds or thousands of office workers, so food wouldn't be a problem for a long time.
    So there you have it, eleven things that don't make sense in The Walking Dead. Have any of your own you'd like to add, or were one of these things covered in the show? Let me know in the comments.

    Read the article
