Search Results

Search found 1192 results on 48 pages for 'grow'.


  • Public class DiscoLight help

    - by luvthug
    Hi all, if someone can point me in the right direction with this code for my assignment I would really appreciate it. I have pasted the whole class that I need to complete, but I need help with the method public void changeColour(Circle aCircle), which is meant to change the colour of the circle at random: if the random number is 0 the light should change to red, if 1 to green, and if 2 to purple.

        public class DiscoLight {
            /* instance variables */
            private Circle light; // simulates a circular disco light in the Shapes window
            private Random randomNumberGenerator;

            /**
             * Default constructor for objects of class DiscoLight
             */
            public DiscoLight() {
                super();
                this.randomNumberGenerator = new Random();
            }

            /**
             * Returns a randomly generated int between 0 (inclusive)
             * and number (exclusive). For example if number is 6, the
             * method will return one of 0, 1, 2, 3, 4, or 5.
             */
            public int getRandomInt(int number) {
                return this.randomNumberGenerator.nextInt(number);
            }

            /**
             * student to write code and comment here for setLight(Circle) for Q4(i)
             */
            public void setLight(Circle aCircle) {
                this.light = aCircle;
            }

            /**
             * student to write code and comment here for getLight() for Q4(i)
             */
            public Circle getLight() {
                return this.light;
            }

            /**
             * Sets the argument to have a diameter of 50, an xPos of 122,
             * a yPos of 162 and the colour GREEN. The method then sets the
             * receiver's instance variable light to the argument aCircle.
             */
            public void addLight(Circle aCircle) {
                // Student to write code here, Q4(ii)
                this.light = aCircle;
                this.light.setDiameter(50);
                this.light.setXPos(122);
                this.light.setYPos(162);
                this.light.setColour(OUColour.GREEN);
            }

            /**
             * Randomly sets the colour of the instance variable
             * light to red, green, or purple.
             */
            public void changeColour(Circle aCircle) {
                // student to write code here, Q4(iii)
                // draw the random number once (0, 1 or 2), then branch on it
                int colour = this.getRandomInt(3);
                if (colour == 0) {
                    this.light.setColour(OUColour.RED);
                } else if (colour == 1) {
                    this.light.setColour(OUColour.GREEN);
                } else {
                    this.light.setColour(OUColour.PURPLE);
                }
            }

            /**
             * Grows the diameter of the circle referenced by the receiver's
             * instance variable light, to the argument size. The diameter is
             * incremented in steps of 2, the xPos and yPos are decremented in
             * steps of 1 until the diameter reaches the value given by size.
             * Between each step there is a random colour change. The message
             * delay(anInt) is used to slow down the graphical interface, as required.
             */
            public void grow(int size) {
                // student to write code here, Q4(iv)
            }

            /**
             * Shrinks the diameter of the circle referenced by the receiver's
             * instance variable light, to the argument size. The diameter is
             * decremented in steps of 2, the xPos and yPos are incremented in
             * steps of 1 until the diameter reaches the value given by size.
             * Between each step there is a random colour change. The message
             * delay(anInt) is used to slow down the graphical interface, as required.
             */
            public void shrink(int size) {
                // student to write code here, Q4(v)
            }

            /**
             * Expands the diameter of the light by the amount given by
             * sizeIncrease (changing colour as it grows). The method then
             * contracts the light until it reaches its original size
             * (changing colour as it shrinks).
             */
            public void lightCycle(int sizeIncrease) {
                // student to write code here, Q4(vi)
            }

            /**
             * Prompts the user for number of growing and shrinking cycles.
             * Then prompts the user for the number of units by which to
             * increase the diameter of light. Method then performs the
             * requested growing and shrinking cycles.
             */
            public void runLight() {
                // student to write code here, Q4(vii)
            }

            /**
             * Causes execution to pause by time number of milliseconds
             */
            private void delay(int time) {
                try {
                    Thread.sleep(time);
                } catch (Exception e) {
                    System.out.println(e);
                }
            }
        }
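    Not part of the assignment text, but a minimal sketch of how grow() could follow its comment, assuming the course's Circle class offers getDiameter(), getXPos() and getYPos() to match the setters used above (those getters are an assumption):

        public void grow(int size) {
            // step the diameter up by 2 and shift the centre by 1 each time,
            // changing colour and pausing between steps as the comment asks
            while (this.light.getDiameter() < size) {
                this.light.setDiameter(this.light.getDiameter() + 2);
                this.light.setXPos(this.light.getXPos() - 1);
                this.light.setYPos(this.light.getYPos() - 1);
                this.changeColour(this.light);
                this.delay(100);
            }
        }

    shrink(int size) is the mirror image: decrement the diameter by 2 and increment xPos/yPos by 1 while the diameter is still above size.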


  • Why do I get a segmentation fault while redirecting sys.stdout to Tkinter.Text widget in Python?

    - by Brent Nash
    I'm in the process of building a GUI-based application with Python/Tkinter that builds on top of the existing Python bdb module. In this application, I want to silence all stdout/stderr from the console and redirect it to my GUI. To accomplish this, I've written a specialized Tkinter.Text object (code at the end of the post). The basic idea is that when something is written to sys.stdout, it shows up as a line in the Text in black. If something is written to sys.stderr, it shows up as a line in the Text in red. As soon as something is written, the Text always scrolls down to view the most recent line. I'm using Python 2.6.1 at the moment. On Mac OS X 10.5, this seems to work great; I have had zero problems with it. On Red Hat Enterprise Linux 5, however, I pretty reliably get a segmentation fault during the run of a script. The segmentation fault doesn't always occur in the same place, but it pretty much always occurs. If I comment out the sys.stdout= and sys.stderr= lines in my code, the segmentation faults seem to go away. I'm sure there are other ways around this that I will probably have to resort to, but can anyone see anything I'm doing blatantly wrong here that could be causing these segmentation faults? It's driving me nuts. Thanks! PS – I realize redirecting sys.stderr to the GUI might not be a great idea, but I still get segmentation faults even when I only redirect sys.stdout and not sys.stderr. I also realize that I'm allowing the Text to grow indefinitely at the moment.

        class ConsoleText(tk.Text):
            '''A Tkinter Text widget that provides a scrolling display of
            console stderr and stdout.'''

            class IORedirector(object):
                '''A general class for redirecting I/O to this Text widget.'''
                def __init__(self, text_area):
                    self.text_area = text_area

            class StdoutRedirector(IORedirector):
                '''A class for redirecting stdout to this Text widget.'''
                def write(self, str):
                    self.text_area.write(str, False)

            class StderrRedirector(IORedirector):
                '''A class for redirecting stderr to this Text widget.'''
                def write(self, str):
                    self.text_area.write(str, True)

            def __init__(self, master=None, cnf={}, **kw):
                '''See the __init__ for Tkinter.Text for most of this stuff.'''
                tk.Text.__init__(self, master, cnf, **kw)
                self.started = False
                self.write_lock = threading.Lock()
                self.tag_configure('STDOUT', background='white', foreground='black')
                self.tag_configure('STDERR', background='white', foreground='red')
                self.config(state=tk.DISABLED)

            def start(self):
                if self.started:
                    return
                self.started = True
                self.original_stdout = sys.stdout
                self.original_stderr = sys.stderr
                stdout_redirector = ConsoleText.StdoutRedirector(self)
                stderr_redirector = ConsoleText.StderrRedirector(self)
                sys.stdout = stdout_redirector
                sys.stderr = stderr_redirector

            def stop(self):
                if not self.started:
                    return
                self.started = False
                sys.stdout = self.original_stdout
                sys.stderr = self.original_stderr

            def write(self, val, is_stderr=False):
                # Fun fact: if a Tkinter Text widget is disabled, you can't
                # write into it AT ALL (via the GUI or programmatically).
                # Since we want it disabled for the user, we have to set it to
                # NORMAL (a.k.a. enabled), write to it, then set its state
                # back to DISABLED.
                self.write_lock.acquire()
                self.config(state=tk.NORMAL)
                self.insert('end', val, 'STDERR' if is_stderr else 'STDOUT')
                self.see('end')
                self.config(state=tk.DISABLED)
                self.write_lock.release()
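    For what it's worth, a hedged guess at the cause rather than something from the post: Tk is not thread-safe, so if any code in the bdb-driven application prints from a worker thread, this write() ends up calling insert()/see() outside the Tk main loop – write_lock serializes writers against each other but not against the event loop, which can crash on one platform and appear fine on another. A minimal sketch of a variant that defers all widget work to the main thread, reusing the ConsoleText class above (the 100 ms poll interval is arbitrary):

        import Queue  # Python 2.x name; it is queue in Python 3

        class SafeConsoleText(ConsoleText):
            '''Defers every widget update to the Tk main thread.'''

            def __init__(self, master=None, cnf={}, **kw):
                ConsoleText.__init__(self, master, cnf, **kw)
                self.queue = Queue.Queue()
                self.after(100, self._drain)  # poll on the main thread

            def write(self, val, is_stderr=False):
                # Safe from any thread: only touches the queue, not the widget.
                self.queue.put((val, is_stderr))

            def _drain(self):
                try:
                    while True:
                        val, is_stderr = self.queue.get_nowait()
                        ConsoleText.write(self, val, is_stderr)
                except Queue.Empty:
                    pass
                self.after(100, self._drain)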


  • Smooth Div Scroll jquery not scrolling

    - by Razor
    The Smooth Div Scroll plugin is great, but for some reason the area no longer scrolls when I edit or remove the #makeMeScrollable or #makeMeScrollable div.scrollableArea * rules. When I leave the CSS as-is it works, which is a problem for customization: it won't work after I take the "*" out of div.scrollableArea *, and it stops working if I edit that part at all. It's been frustrating figuring out why the part that is supposed to be editable doesn't work. Any help with this jQuery plugin would be appreciated! Thanks in advance!

        /* You can alter this CSS in order to give SmoothDivScroll your own look'n'feel */

        /* Invisible left hotspot */
        div.scrollingHotSpotLeft {
            /* The hotspots have a minimum width of 75 pixels and if there is
               room they will grow and occupy 10% of the scrollable area
               (20% combined). Adjust it to your own taste. */
            min-width: 75px;
            width: 10%;
            height: 100%;
            /* There is a big background image and it's used to solve some
               problems I experienced in Internet Explorer 6. */
            background-image: url(../images/big_transparent.gif);
            background-repeat: repeat;
            background-position: center center;
            position: absolute;
            z-index: 200;
            left: 0;
            /* The first cursor url is for Firefox and other browsers,
               the second is for Internet Explorer */
            cursor: url(../images/cursors/cursor_arrow_left.cur), url(images/cursors/cursor_arrow_left.cur), w-resize;
        }

        /* Visible left hotspot */
        div.scrollingHotSpotLeftVisible {
            background-image: url(../images/arrow_left.gif);
            background-color: #fff;
            background-repeat: no-repeat;
            /* Standard CSS3 opacity setting */
            opacity: 0.35;
            /* Opacity for really old versions of Mozilla Firefox (0.9 or older) */
            -moz-opacity: 0.35;
            /* Opacity for Internet Explorer. */
            filter: alpha(opacity = 35);
            /* Use zoom to trigger "hasLayout" in Internet Explorer 6 or older versions */
            zoom: 1;
        }

        /* Invisible right hotspot */
        div.scrollingHotSpotRight {
            min-width: 75px;
            width: 10%;
            height: 100%;
            background-image: url(../images/big_transparent.gif);
            background-repeat: repeat;
            background-position: center center;
            position: absolute;
            z-index: 200;
            right: 0;
            cursor: url(../images/cursors/cursor_arrow_right.cur), url(images/cursors/cursor_arrow_right.cur), e-resize;
        }

        /* Visible right hotspot */
        div.scrollingHotSpotRightVisible {
            background-image: url(../images/arrow_right.gif);
            background-color: #fff;
            background-repeat: no-repeat;
            opacity: 0.35;
            filter: alpha(opacity = 35);
            -moz-opacity: 0.35;
            zoom: 1;
        }

        /* The scroll wrapper is always the same width and height as the
           containing element (div). Overflow is hidden because you don't
           want to show all of the scrollable area. */
        div.scrollWrapper {
            position: relative;
            overflow: hidden;
            width: 100%;
            height: 100%;
        }

        div.scrollableArea {
            position: relative;
            width: auto;
            height: 100%;
        }

        #makeMeScrollable {
            width: 100%;
            height: 330px;
            position: relative;
        }

        #makeMeScrollable div.scrollableArea * {
            position: relative;
            float: left;
            margin: 0;
            padding: 0;
        }

    http://www.smoothdivscroll.com/
    // ^ the above link points to the jQuery plugin I am talking about
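    A hedged note on why that wildcard rule seems to be load-bearing: the plugin lays the scrolled items out side by side by floating every child of div.scrollableArea, so removing the "*" rule removes the floats and the area stops scrolling. Any customized stylesheet would need to keep an equivalent rule; for example, if the scrolled items are divs (an assumption about the markup):

        /* illustrative replacement for the "*" selector */
        #makeMeScrollable div.scrollableArea div {
            position: relative;
            float: left;
            margin: 0;
            padding: 0;
        }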


  • How to design a C / C++ library to be usable in many client languages?

    - by Brian Schimmel
    I'm planning to code a library that should be usable by a large number of people on a wide spectrum of platforms. What do I have to consider to design it right? To make this question more specific, there are four "subquestions" at the end.

    Choice of language

    Considering all the known requirements and details, I concluded that a library written in C or C++ was the way to go. I think the primary usage of my library will be in programs written in C, C++ and Java SE, but I can also think of reasons to use it from Java ME, PHP, .NET, Objective-C, Python, Ruby, bash scripts, etc. Maybe I cannot target all of them, but if it's possible, I'll do it.

    Requirements

    It would be too much to describe the full purpose of my library here, but there are some aspects that might be important to this question:

    - The library itself will start out small, but definitely will grow to enormous complexity, so it is not an option to maintain several versions in parallel. Most of the complexity will be hidden inside the library, though.
    - The library will construct an object graph that is used heavily inside. Some clients of the library will only be interested in specific attributes of specific objects, while other clients must traverse the object graph in some way.
    - Clients may change the objects, and the library must be notified thereof.
    - The library may change the objects, and the client must be notified thereof, if it already has a handle to that object.
    - The library must be multi-threaded, because it will maintain network connections to several other hosts.
    - While some requests to the library may be handled synchronously, many of them will take too long and must be processed in the background, notifying the client on success (or failure).

    Of course, answers are welcome no matter if they address my specific requirements or answer the question in a general way that matters to a wider audience!

    My assumptions, so far

    So here are some of my assumptions and conclusions, which I gathered in the past months:

    - Internally I can use whatever I want, e.g. C++ with operator overloading, multiple inheritance, template metaprogramming... as long as there is a portable compiler which handles it (think of gcc/g++).
    - But my interface has to be a clean C interface that does not involve name mangling (a sketch follows the subquestions below).
    - Also, I think my interface should only consist of functions, with basic/primitive data types (and maybe pointers) passed as parameters and return values.
    - If I use pointers, I think I should only use them to pass them back to the library, not to operate directly on the referenced memory.
    - For usage in a C++ application, I might also offer an object-oriented interface (which is also prone to name mangling, so the app must either use the same compiler or include the library in source form). Is this also true for usage in C#?
    - For usage in Java SE / Java EE, the Java Native Interface (JNI) applies. I have some basic knowledge about it, but I should definitely double-check it.
    - Not all client languages handle multithreading well, so there should be a single thread talking to the client.
    - For usage on Java ME, there is no such thing as JNI, but I might go with NestedVM.
    - For usage in bash scripts, there must be an executable with a command line interface.
    - For the other client languages, I have no idea.
    - For most client languages, it would be nice to have a kind of adapter interface written in that language. I think there are tools to automatically generate this for Java and some others.
    - For object-oriented languages, it might be possible to create an object-oriented adapter which hides the fact that the interface to the library is function-based – but I don't know if it's worth the effort.

    Possible subquestions

    - Is this possible with manageable effort, or is it just too much portability?
    - Are there any good books / websites about this kind of design criteria?
    - Are any of my assumptions wrong?
    - Which open source libraries are worth studying to learn from their design / interface / source?

    meta: This question is rather long; do you see any way to split it into several smaller ones? (If you reply to this, do it as a comment, not as an answer.)
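    Not from the original question, but a concrete illustration of the "clean C interface" assumptions above – an opaque handle plus plain functions, wrapped in extern "C" so no C++ name mangling leaks out. All names are made up for the sketch:

        /* mylib.h - flat C facade over a C++ implementation */
        #ifdef __cplusplus
        extern "C" {
        #endif

        typedef struct mylib_graph mylib_graph;   /* opaque to clients */

        /* lifecycle */
        mylib_graph *mylib_graph_create(void);
        void         mylib_graph_destroy(mylib_graph *g);

        /* attribute access with primitive types only */
        int          mylib_graph_node_count(const mylib_graph *g);

        /* change notification as a plain function-pointer callback */
        typedef void (*mylib_change_cb)(void *user_data, int node_id);
        void         mylib_graph_set_change_cb(mylib_graph *g,
                                               mylib_change_cb cb,
                                               void *user_data);

        #ifdef __cplusplus
        }
        #endif

    Clients in C, Java (via JNI), Python (via ctypes) and so on then only ever see the opaque pointer, and the notification requirement stays expressible without any object-oriented machinery.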


  • Reference Data Management

    - by rahulkamath
    Reference Data Management

    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise MDM solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering chart of accounts to
    enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or mastering sales territories in light of rapid-fire acquisitions that require frequent sales territory refinement, equitable distribution of leads and accounts to salespersons, and alignment of budget/forecast with results to optimize sales coverage. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.

    What is reference data? Reference data is a close cousin of master data. While master data may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler and more slowly changing, but has semantic content that is used to categorize or group other information assets – including master data – and give them contextual value. Reference data types may include types and codes, business taxonomies, complex relationships & cross-domain mappings, or standards; the following list gives illustrative examples of each:

    - Types & Codes: Transaction Codes; Lookup Tables (e.g., Gender, Marital Status, etc.); Status Codes; Role Codes; Domain Values
    - Taxonomies: Industry Classification Categories and Codes, e.g., the North America Industry Classification System (NAICS); Product Categories; Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense); Market Segments; Universal Standard Products and Services Classification (UNSPSC), eCl@ss
    - Relationships / Mappings: Product / Segment and Product / Geo; City → State → Postal Codes; Customer / Market Segment and Business Unit / Channel; Country Codes / Currency Codes / Financial Accounts; International Classification of Diseases (ICD), e.g., ICD-9 → ICD-10 mappings
    - Standards: Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601); Currency Codes (e.g., ISO); Country Codes (e.g., ISO 3166, UN); Date/Time and Time Zones (e.g., ISO 8601); Tax Rates

    Why manage reference data? Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment.

    Sample Use Cases of Reference Data Management

    Healthcare: Diagnostic Codes. The reference data challenges in the healthcare industry offer a case in point. Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the ICD-9 and ICD-10 code sets offer diagnosis codes of very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much in the same way as master data management does. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to “cross-walk” mappings – either forward to ICD-10 or backwards to ICD-9, depending upon the reporting time horizon.

    Spend Management: Product, Service & Supplier Codes. Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager's time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks.

    Specialty Finance: Point of Sale Transaction Codes and Product Codes. In the specialty finance industry, enterprises are confronted with usury laws – governed at the state and local level – that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes to the appropriate product and GL codes to comply with the changing regulations.

    Multi-National Companies: Industry Classification Schemes. As companies grow and expand across geographies, a typical challenge they encounter with reference data is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assessing which applications were impacted by a given change.


  • jQuery hide/show with select tag

    - by Ozzy
    I'm relatively new to jQuery and I've been asked to create a hide/show function with a select tag. The function pretty much works like this: when you click on one of the options in the select tag, it should show the div associated with that option. To be honest I have no idea how to approach this function. I need help urgently; I have already tried many examples online but none seem to work. Find the HTML code below. Thanks.

        <div class="adwizard">
            <select id="selectdrop" name="selectdrop" class="adwizard-bullet">
                <option value="adwizard">AdWizard</option>
                <option value="collateral">Collateral Ordering Tool</option>
                <option value="ebrochure">eBrochures</option>
                <option value="brand">Brand Center</option>
                <option value="funtees">FunTees</option>
            </select>
        </div>
        <div class="panels">
            <div id="adwizard" class="sub-box showhide">
                <img src="../images/bookccl/img-adwizard.gif" width="95" height="24" alt="AdWizard" />
                <p>Let Carnival help you grow your business with our great tools! Lorem ipsum dolor sit amet. <a href="https://www.carnivaladwizard.com/home.asp">Learn More</a></p>
            </div>
            <div id="collateral" class="sub-box showhide">
                <p>The Collateral Ordering Tool makes it easy for you to order destination brochures and the sales DVD for that upcoming event. <a href="http://carnival.litorders.com/workplace.asp">Learn More</a></p>
            </div>
            <div id="ebrochure" class="sub-box showhide">
                <img src="../images/bookccl/img-ebrochure.gif" width="164" height="39" alt="Brochures" />
                <p>Show your clients that you're listening to their specific vacation needs by delivering relevant planning info quickly. <a href="http://productiontrade.carnivalbrochures.com/start.aspx">Learn More</a></p>
            </div>
            <div id="brand" class="sub-box showhide">
                <p>Carnival Brand Center is where you'll find information on our strategy, guidelines, templates and artwork. <a href="https://carnival.monigle2.net/user_info.asp?login_type=agent">Learn More</a></p>
            </div>
            <div id="funtees" class="sub-box showhide">
                <img src="../images/bookccl/img-funtees.gif" width="164" height="39" alt="Funtees" />
                <p>Create your very own Fun Design shirts to commemorate that special occasion aboard a Carnival "Fun Ship!" <a href="http://carnival.victorydyo.com/">Learn More</a></p>
            </div>
        </div><!-- ends .panel -->
        <a class="view" href="#">See All Marketing Tools</a>
        </div>
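    One straightforward approach, sketched against the markup above – it works because the option values already match the panel ids:

        $(document).ready(function () {
            // hide every panel, then show the one for the current selection
            $('.panels .showhide').hide();
            $('#' + $('#selectdrop').val()).show();

            $('#selectdrop').change(function () {
                $('.panels .showhide').hide();
                $('#' + $(this).val()).show();
            });
        });

    Swapping show()/hide() for fadeIn()/fadeOut() gives the same behaviour with a transition.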


  • Multiple data series in real time plot

    - by Gr3n
    Hi, I'm kind of new to Python and trying to create a plotting app for values read via RS232 from a sensor. I've managed (after some reading and copying of examples online) to get a plot working that updates on a timer, which is great. My only trouble is that I can't manage to get multiple data series into the same plot. Does anyone have a solution to this? This is the code that I've worked out so far:

        import os
        import pprint
        import random
        import sys
        import wx

        # The recommended way to use wx with mpl is with the WXAgg backend
        import matplotlib
        matplotlib.use('WXAgg')
        from matplotlib.figure import Figure
        from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg as FigCanvas, NavigationToolbar2WxAgg as NavigationToolbar
        import numpy as np
        import pylab

        DATA_LENGTH = 100
        REDRAW_TIMER_MS = 20

        def getData():
            return int(random.uniform(1000, 1020))

        class GraphFrame(wx.Frame):
            # the main frame of the application
            def __init__(self):
                wx.Frame.__init__(self, None, -1, "Usart plotter", size=(800, 600))
                self.Centre()
                self.data = []
                self.paused = False
                self.create_menu()
                self.create_status_bar()
                self.create_main_panel()
                self.redraw_timer = wx.Timer(self)
                self.Bind(wx.EVT_TIMER, self.on_redraw_timer, self.redraw_timer)
                self.redraw_timer.Start(REDRAW_TIMER_MS)

            def create_menu(self):
                self.menubar = wx.MenuBar()
                menu_file = wx.Menu()
                m_expt = menu_file.Append(-1, "&Save plot\tCtrl-S", "Save plot to file")
                self.Bind(wx.EVT_MENU, self.on_save_plot, m_expt)
                menu_file.AppendSeparator()
                m_exit = menu_file.Append(-1, "E&xit\tCtrl-X", "Exit")
                self.Bind(wx.EVT_MENU, self.on_exit, m_exit)
                self.menubar.Append(menu_file, "&File")
                self.SetMenuBar(self.menubar)

            def create_main_panel(self):
                self.panel = wx.Panel(self)
                self.init_plot()
                self.canvas = FigCanvas(self.panel, -1, self.fig)
                # pause button
                self.pause_button = wx.Button(self.panel, -1, "Pause")
                self.Bind(wx.EVT_BUTTON, self.on_pause_button, self.pause_button)
                self.Bind(wx.EVT_UPDATE_UI, self.on_update_pause_button, self.pause_button)
                self.hbox1 = wx.BoxSizer(wx.HORIZONTAL)
                self.hbox1.Add(self.pause_button, border=5, flag=wx.ALL | wx.ALIGN_CENTER_VERTICAL)
                self.vbox = wx.BoxSizer(wx.VERTICAL)
                self.vbox.Add(self.canvas, 1, flag=wx.LEFT | wx.TOP | wx.GROW)
                self.vbox.Add(self.hbox1, 0, flag=wx.ALIGN_LEFT | wx.TOP)
                self.panel.SetSizer(self.vbox)
                #self.vbox.Fit(self)

            def create_status_bar(self):
                self.statusbar = self.CreateStatusBar()

            def init_plot(self):
                self.dpi = 100
                self.fig = Figure((3.0, 3.0), dpi=self.dpi)
                self.axes = self.fig.add_subplot(111)
                self.axes.set_axis_bgcolor('white')
                self.axes.set_title('Usart data', size=12)
                pylab.setp(self.axes.get_xticklabels(), fontsize=8)
                pylab.setp(self.axes.get_yticklabels(), fontsize=8)
                # plot the data as a line series, and save the reference
                # to the plotted line series
                self.plot_data = self.axes.plot(
                    self.data, linewidth=1, color="blue",
                )[0]

            def draw_plot(self):
                # redraws the plot
                xmax = len(self.data) if len(self.data) > DATA_LENGTH else DATA_LENGTH
                xmin = xmax - DATA_LENGTH
                ymin = 0
                ymax = 4096
                self.axes.set_xbound(lower=xmin, upper=xmax)
                self.axes.set_ybound(lower=ymin, upper=ymax)
                # enable grid
                #self.axes.grid(True, color='gray')
                # Using setp here is convenient, because get_xticklabels
                # returns a list over which one needs to explicitly
                # iterate, and setp already handles this.
                # pylab.setp(self.axes.get_xticklabels(), visible=True)
                self.plot_data.set_xdata(np.arange(len(self.data)))
                self.plot_data.set_ydata(np.array(self.data))
                self.canvas.draw()

            def on_pause_button(self, event):
                self.paused = not self.paused

            def on_update_pause_button(self, event):
                label = "Resume" if self.paused else "Pause"
                self.pause_button.SetLabel(label)

            def on_save_plot(self, event):
                file_choices = "PNG (*.png)|*.png"
                dlg = wx.FileDialog(
                    self,
                    message="Save plot as...",
                    defaultDir=os.getcwd(),
                    defaultFile="plot.png",
                    wildcard=file_choices,
                    style=wx.SAVE)
                if dlg.ShowModal() == wx.ID_OK:
                    path = dlg.GetPath()
                    self.canvas.print_figure(path, dpi=self.dpi)
                    self.flash_status_message("Saved to %s" % path)

            def on_redraw_timer(self, event):
                if not self.paused:
                    newData = getData()
                    self.data.append(newData)
                self.draw_plot()

            def on_exit(self, event):
                self.Destroy()

            def flash_status_message(self, msg, flash_len_ms=1500):
                self.statusbar.SetStatusText(msg)
                self.timeroff = wx.Timer(self)
                self.Bind(
                    wx.EVT_TIMER,
                    self.on_flash_status_off,
                    self.timeroff)
                self.timeroff.Start(flash_len_ms, oneShot=True)

            def on_flash_status_off(self, event):
                self.statusbar.SetStatusText('')

        if __name__ == '__main__':
            app = wx.PySimpleApp()
            app.frame = GraphFrame()
            app.frame.Show()
            app.MainLoop()
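    On the actual question – several series in one plot – a hedged sketch: axes.plot() can be called once per series, keeping one line reference each. self.data2 below stands in for a second buffer that on_redraw_timer would append to just like self.data:

        # in init_plot(): one plotted line per data series
        self.plot_data = self.axes.plot(self.data, linewidth=1, color="blue")[0]
        self.plot_data2 = self.axes.plot(self.data2, linewidth=1, color="red")[0]

        # in draw_plot(): update every line, then redraw the canvas once
        self.plot_data.set_xdata(np.arange(len(self.data)))
        self.plot_data.set_ydata(np.array(self.data))
        self.plot_data2.set_xdata(np.arange(len(self.data2)))
        self.plot_data2.set_ydata(np.array(self.data2))
        self.canvas.draw()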


  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let's review each area in more detail.

    Simpler Code

    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

    Strongly Typed

    Before diving into the code: the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

        // With the SDK
        public class MyData1 : TableServiceEntity
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

        // With the Enzo Azure API
        public class MyData2 : BaseAzureTable
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

    Now that the classes representing an Azure Table entity are defined, let's review what the Azure SDK code looks like when fetching all the entities from an Azure Table (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

        // With the Azure SDK
        public List<MyData1> FetchAllEntities()
        {
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
            TableServiceContext serviceContext = tableClient.GetDataServiceContext();
            CloudTableQuery<MyData1> partitionQuery =
                (from e in serviceContext.CreateQuery<MyData1>(_tableName)
                 select new MyData1()
                 {
                     PartitionKey = e.PartitionKey,
                     RowKey = e.RowKey,
                     Timestamp = e.Timestamp,
                     Message = e.Message,
                     Level = e.Level,
                     Severity = e.Severity
                 }).AsTableServiceQuery<MyData1>();
            return partitionQuery.ToList();
        }

    This code gives you automatic retries, because AsTableServiceQuery does that for you. Also, note that this method is strongly typed because it is using LINQ. Although this doesn't look like too much code at first glance, you are actually mapping the strongly-typed object manually. So for larger entities, with dozens of properties, your code will grow.
    And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp). The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

        // With the Enzo Azure API
        public List<MyData2> FetchAllEntities()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
            return res;
        }

    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

    Fetch Strategies

    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than fetching the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters ([‘a’, ‘b’[, [‘b’, ‘c’[, [‘c’, ‘d’[, …), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes two to three times faster than the sequential methods discussed previously):

        public List<MyData2> FetchAllEntitiesGUID()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
            return res;
        }

    Faster Results

    With Sequential Fetch Methods

    Developing a faster API wasn't a primary objective, but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different. So it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless, when fetching data it seems that the Enzo Azure API delivers faster. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each): the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (39% improvement).
    With Fetch Strategies

    When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each entity), with the execution time averaged over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that this test quickly hit a limit on my network bandwidth (3.56 Mbps), so the results of the fetch strategy are significantly below what they could be with higher bandwidth.

    Additional Methods

    The API wouldn't be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:

    - Support for batch updates, deletes and inserts
    - Conversion of entities to DataRow, and List<> to a DataTable
    - Extension methods for Delete, Merge, Update, Insert
    - Support for asynchronous calls and cancellation
    - Support for fetch statistics (total bytes, total REST calls, retries…)

    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

    About Herve Roggero

    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.


  • Right-aligning button in a grid with possibly no content - stretch grid to always fill the page

    - by Peter Perhác
    Hello people, I am losing my patience with this. I am working on a Windows Phone 7 application and I can't figure out what layout manager to use to achieve the following. Basically, when I use a Grid as the layout root, I can't make the grid stretch to the size of the phone application page. When the main content area is full, all is well and the button sits where I want it to sit. However, when the page content is very short, the grid is only as wide as needed to accommodate its content, and then the button (which I am desperate to keep near the right edge of the screen) moves away from the right edge. If I replace the grid and use a vertically oriented StackPanel for the layout root, the button sits where I want it, but then the content area can grow beyond the bottom edge. So, when I place a listbox full of items into the main content area, it doesn't adjust its height to be completely in view; the majority of items in that listbox are just rendered below the bottom edge of the display area. I have tried using a third-party DockPanel layout manager, docking the button in its top section with the button's HorizontalAlignment="Right", but the result was the same as with the grid: it also shrinks in size when there isn't enough content in the content area (or when the title is short). How do I do this then?

    ==EDIT==

    I tried WPCoder's XAML, only I replaced the dummy text box with what I would have in a real page (a StackPanel) and placed a ListBox into the ContentPanel grid. I noticed that what I had before and what WPCoder suggests are very similar. Here's my current XAML; the page still doesn't grow to fit the width of the page and I get identical results to what I had before:

        <phone:PhoneApplicationPage
            x:Name="categoriesPage"
            x:Class="CatalogueBrowser.CategoriesPage"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:phone="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone"
            xmlns:shell="clr-namespace:Microsoft.Phone.Shell;assembly=Microsoft.Phone"
            xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
            xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
            FontFamily="{StaticResource PhoneFontFamilyNormal}"
            FontSize="{StaticResource PhoneFontSizeNormal}"
            Foreground="{StaticResource PhoneForegroundBrush}"
            SupportedOrientations="PortraitOrLandscape" Orientation="Portrait"
            mc:Ignorable="d" d:DesignWidth="480" d:DesignHeight="768"
            xmlns:ctrls="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone.Controls.Toolkit"
            shell:SystemTray.IsVisible="True">

            <Grid x:Name="LayoutRoot" Background="Transparent">
                <Grid.RowDefinitions>
                    <RowDefinition Height="Auto"/>
                    <RowDefinition Height="*"/>
                </Grid.RowDefinitions>
                <Grid>
                    <Grid.ColumnDefinitions>
                        <ColumnDefinition Width="*" />
                        <ColumnDefinition Width="Auto" />
                    </Grid.ColumnDefinitions>
                    <StackPanel Orientation="Horizontal" VerticalAlignment="Center">
                        <TextBlock Text="Browsing:" Margin="10,10" Style="{StaticResource PhoneTextTitle3Style}" />
                        <TextBlock x:Name="ListTitle" Text="{Binding DisplayName}" Margin="0,10" Style="{StaticResource PhoneTextTitle3Style}" />
                    </StackPanel>
                    <Button Grid.Column="1" x:Name="btnRefineSearch" Content="Refine Search" Style="{StaticResource buttonBarStyle}" FontSize="14" />
                </Grid>
                <Grid x:Name="ContentPanel" Grid.Row="1">
                    <ListBox x:Name="CategoryList" ItemsSource="{Binding Categories}" Style="{StaticResource CatalogueList}" SelectionChanged="CategoryList_SelectionChanged"/>
                </Grid>
            </Grid>
        </phone:PhoneApplicationPage>

    This is what the page with the above XAML markup looks like in the emulator:


  • Dynamically alter outer div as inner one gets bigger

    - by Razor Storm
    I have two divs, one inside the other. The outer one is called #wrapper, while the inner one is called #pad. Now #pad allows user input, and I have a JavaScript (jQuery) function that changes the content of #pad based on what the user input is. Sometimes, because of this function, #pad's content will cause the div to become more elongated than before. Obviously I would want #wrapper to grow longer as well to accommodate this change in #pad's length. However, this does not occur.

        #wrapper {
            clear: both;
            padding-top: 0.5em;
            /* padding-left: 50px; */
            height: 100%;
            background-color: rgba(255,255,255,0.4);
            -moz-border-radius: 20px 20px 0px 0px;
            -webkit-border-radius: 20px 20px 0px 0px;
            border-radius: 20px 20px 0px 0px;
        }
        #pad {
            margin-top: 25px;
            -moz-border-radius: 5px;
            border: solid 1px #DDD;
            margin-left: 25px;
            padding-left: 25px;
            margin-right: 25px;
            padding-right: 25px;
            margin-bottom: 2em;
        }

    This is the jQuery function (the five input/preview pairs are handled the same way):

        function preview() {
            for (var i = 1; i <= 5; i++) {
                var id = $("#input" + i).val();
                var img = $("#preview" + i);
                if (id != null && id != "") {
                    if (img.attr("src") != id) {
                        img.attr("src", id);
                        img.fadeIn("slow");
                    }
                } else {
                    img.attr("src", "");
                    img.fadeOut("slow");
                }
            }
            setTimeout("preview()", 1000);
            $("#wrapper").attr("height", $(document).attr("height"));
        }

    http://surveys.mylifeisberkeley.com/
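    A hedged pointer at the last line of preview(): height is a CSS property, not an HTML attribute, on a div, so the .attr() call very likely does nothing; and with height: 100% in the stylesheet, #wrapper tracks its parent rather than #pad. A minimal sketch of the two usual fixes:

        // let #wrapper size itself from its content (drop height: 100%)...
        $("#wrapper").css("height", "auto");
        // ...or keep the original idea but set the CSS height explicitly
        $("#wrapper").css("height", $(document).height());

    Which one fits depends on whether the rest of the layout relies on the 100% height.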


  • xts error - order.by requires an appropriate time-based object

    - by Samo
    I cannot work out why an error occurs in a simple creation of an xts object:

        xts(rep(0, NROW(TICK.NYSE)), order.by = index(TICK.NYSE))
        Error in xts(rep(0, NROW(TICK.NYSE)), order.by = index(TICK.NYSE)) :
          order.by requires an appropriate time-based object

    This was working perfectly 14 days ago when I last used the same code (since then, the only difference is that TICK.NYSE has grown in length as data was added). More details below:

        > Sys.getenv("TZ")
        [1] "EST"
        > tail(xts(rep(0, NROW(TICK.NYSE)), order.by = index(TICK.NYSE)))
        Error in xts(rep(0, NROW(TICK.NYSE)), order.by = index(TICK.NYSE)) :
          order.by requires an appropriate time-based object
        > class(index(TICK.NYSE["T09:30/T09:31"]))
        [1] "POSIXct"
        > tail(xts(rep(0, NROW(tail(TICK.NYSE))), order.by = index(tail(TICK.NYSE))))
        Error in xts(rep(0, NROW(tail(TICK.NYSE))), order.by = index(tail(TICK.NYSE))) :
          order.by requires an appropriate time-based object
        > tail(TICK.NYSE)
                            TICK-NYSE.Open TICK-NYSE.High TICK-NYSE.Low TICK-NYSE.Close
        2012-03-15 14:54:00           -278            -89          -299             -89
        2012-03-15 14:55:00            -89            427           -89             201
        2012-03-15 14:56:00            201            318            30             234
        2012-03-15 14:57:00            234            242          -222             -64
        2012-03-15 14:58:00            -64            346           -82             346
        2012-03-15 14:59:00            346            525            36             525
                            TICK-NYSE.Volume TICK-NYSE.WAP TICK-NYSE.hasGaps
        2012-03-15 14:54:00                0             0                 0
        2012-03-15 14:55:00                0             0                 0
        2012-03-15 14:56:00                0             0                 0
        2012-03-15 14:57:00                0             0                 0
        2012-03-15 14:58:00                0             0                 0
        2012-03-15 14:59:00                0             0                 0
                            TICK-NYSE.Count
        2012-03-15 14:54:00              31
        2012-03-15 14:55:00              31
        2012-03-15 14:56:00              31
        2012-03-15 14:57:00              31
        2012-03-15 14:58:00              29
        2012-03-15 14:59:00              30
        > str(TICK.NYSE)
        An ‘xts’ object from 2011-01-18 09:30:00 to 2012-03-15 14:59:00 containing:
          Data: num [1:114090, 1:8] -5 -144 24 -148 -184 -77 20 121 111 -60 ...
         - attr(*, "dimnames")=List of 2
          ..$ : NULL
          ..$ : chr [1:8] "TICK-NYSE.Open" "TICK-NYSE.High" "TICK-NYSE.Low" "TICK-NYSE.Close" ...
          Indexed by objects of class: [POSIXct,POSIXt] TZ: xts
          Attributes: List of 4
           $ from   : chr "20110119 23:59:59"
           $ to     : chr "20110124 23:59:59"
           $ src    : chr "IB"
           $ updated: POSIXct[1:1], format: "2012-01-19 02:34:52"
        > str(index(TICK.NYSE))
        Class 'POSIXct' atomic [1:114090] 1.3e+09 1.3e+09 1.3e+09 1.3e+09 1.3e+09 ...
          ..- attr(*, "tzone")= chr ""
          ..- attr(*, "tclass")= chr [1:2] "POSIXct" "POSIXt"
        > Sys.getenv("TZ")
        [1] "EST"
        > tail(index(TICK.NYSE))
        [1] "2012-03-15 14:54:00 EST" "2012-03-15 14:55:00 EST"
        [3] "2012-03-15 14:56:00 EST" "2012-03-15 14:57:00 EST"
        [5] "2012-03-15 14:58:00 EST" "2012-03-15 14:59:00 EST"
        > head(index(TICK.NYSE))
        [1] "2011-01-18 09:30:00 EST" "2011-01-18 09:31:00 EST"
        [3] "2011-01-18 09:32:00 EST" "2011-01-18 09:33:00 EST"
        [5] "2011-01-18 09:34:00 EST" "2011-01-18 09:35:00 EST"
        > Sys.info()
          sysname "Linux"
          release "3.0.0-16-generic"
          version "#28-Ubuntu SMP Fri Jan 27 17:44:39 UTC 2012"
        > sessionInfo()
        R version 2.14.1 (2011-12-22)
        Platform: x86_64-pc-linux-gnu (64-bit)

        locale:
         [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C
         [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8
         [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8
         [7] LC_PAPER=en_US.UTF-8       LC_NAME=en_US.UTF-8
         [9] LC_ADDRESS=en_US.UTF-8     LC_TELEPHONE=en_US.UTF-8
        [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=en_US.UTF-8

        attached base packages:
        [1] stats graphics grDevices utils datasets methods base

        other attached packages:
         [1] lattice_0.20-0               multicore_0.1-7
         [3] doSNOW_1.0.5                 snow_0.3-8
         [5] doRedis_1.0.4                rredis_1.6.3
         [7] foreach_1.3.2                codetools_0.2-8
         [9] iterators_1.0.5              PerformanceAnalytics_1.0.3.3
        [11] quantstrat_0.6.1             blotter_0.8.4
        [13] twsInstrument_1.3-3          FinancialInstrument_0.10.6
        [15] IBrokers_0.9-6               quantmod_0.3-18
        [17] TTR_0.21-0                   xts_0.8-4
        [19] Defaults_1.1-1               strucchange_1.4-6
        [21] sandwich_2.2-8               zoo_1.7-7
        [23] rj_1.0.2-5

        loaded via a namespace (and not attached):
        [1] grid_2.14.1  tools_2.14.1
        > dput(tail(TICK.NYSE))
        structure(c(-385, -213, -42, -334, -233, -111, -121, 20, -14,
        -125, -73, 265, -583, -269, -426, -520, -443, -440, -213, -42,
        -334, -233, -111, 119, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 31, 31, 30, 30, 31, 31), class = c("xts", "zoo"),
        .indexCLASS = c("POSIXct", "POSIXt"), .indexTZ = "",
        from = "20110119 23:59:59", to = "20110124 23:59:59", src = "IB",
        updated = structure(1326958492.96405, class = c("POSIXct", "POSIXt")),
        index = structure(c(1331927640, 1331927700, 1331927760, 1331927820,
        1331927880, 1331927940), tzone = "", tclass = c("POSIXct", "POSIXt")),
        .Dim = c(6L, 8L), .Dimnames = list(NULL, c("TICK-NYSE.Open",
        "TICK-NYSE.High", "TICK-NYSE.Low", "TICK-NYSE.Close",
        "TICK-NYSE.Volume", "TICK-NYSE.WAP", "TICK-NYSE.hasGaps",
        "TICK-NYSE.Count")))
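    Not part of the original post, but a hedged workaround sketch: since index(TICK.NYSE) prints fine while xts() rejects it, the index attributes may have been damaged (note the empty tzone above, and compare the xts version in sessionInfo with the one that created the object). Rebuilding a clean POSIXct index from the raw numeric index sidesteps whatever attribute is confusing xts:

        # rebuild the index from raw seconds, then order by it
        idx <- as.POSIXct(.index(TICK.NYSE), origin = "1970-01-01", tz = "EST")
        z   <- xts(rep(0, NROW(TICK.NYSE)), order.by = idx)

    If that works, re-saving TICK.NYSE with the rebuilt index (or aligning the xts package version with the one that created the object) would be the durable fix.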


  • How to safely operate on parameters in threads, using C++ & Pthreads?

    - by ChrisCphDK
    Hi. I'm having some trouble with a program using pthreads, where occasional crashes occur that could be related to how the threads operate on data. So I have some basic questions about how to program using threads and about memory layout.

    Assume that a public class function performs some operations on some strings, and returns the result as a string. The prototype of the function could be like this:

        std::string SomeClass::somefunc(const std::string &strOne, const std::string &strTwo)
        {
            // Error checking of strings has been omitted
            std::string result = strOne.substr(0, 5) + strTwo.substr(0, 5);
            return result;
        }

    1. Is it correct to assume that strings, being dynamic, are stored on the heap, but that a reference to the string is allocated on the stack at runtime?

        Stack: [some mem addr] pointer address to where the string is on the heap
        Heap:  [some mem addr] memory allocated for the initial string, which may grow or shrink

    To make the function thread-safe, it is extended with the following mutex locking (the mutex is declared as private in SomeClass):

        std::string SomeClass::somefunc(const std::string &strOne, const std::string &strTwo)
        {
            pthread_mutex_lock(&someclasslock);
            // Error checking of strings has been omitted
            std::string result = strOne.substr(0, 5) + strTwo.substr(0, 5);
            pthread_mutex_unlock(&someclasslock);
            return result;
        }

    2. Is this a safe way of locking down the operations being done on the strings (all three), or could a thread be stopped by the scheduler in the following cases, which I'd assume would mess up the intended logic:

    a. Right after the function is called, and the parameters strOne & strTwo have been set in the two reference pointers that the function has on the stack, the scheduler takes away processing time for the thread and lets a new thread in, which overwrites the reference pointers to the function, and which then again gets stopped by the scheduler, letting the first thread back in?

    b. Can the same occur with the "result" string: the first thread builds the result, unlocks the mutex, but before returning the scheduler lets in another thread which performs all of its work, overwriting the result, etc.? Or are the reference parameters / result string pushed onto the stack while another thread is performing its task?

    3. Is the safe / correct way of doing this in threads, and "returning" a result, to pass a reference to a string that will be filled with the result instead?

        void SomeClass::somefunc(const std::string &strOne, const std::string &strTwo, std::string &result)
        {
            pthread_mutex_lock(&someclasslock);
            // Error checking of strings has been omitted
            result = strOne.substr(0, 5) + strTwo.substr(0, 5);
            pthread_mutex_unlock(&someclasslock);
        }

    The intended logic is that several objects of the SomeClass class create new threads, pass objects of themselves as parameters, and then call the function somefunc:

        int SomeClass::startNewThread()
        {
            pthread_attr_t attr;
            pthread_t pThreadID;
            if (pthread_attr_init(&attr) != 0)
                return -1;
            if (pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED) != 0)
                return -2;
            if (pthread_create(&pThreadID, &attr, proxyThreadFunc, this) != 0)
                return -3;
            if (pthread_attr_destroy(&attr) != 0)
                return -4;
            return 0;
        }

        void* proxyThreadFunc(void* someClassObjPtr)
        {
            return static_cast<SomeClass*>(someClassObjPtr)->somefunc("long string", "long string");
        }

    Sorry for the long description, but I hope the questions and intended purpose are clear; if not, let me know and I'll elaborate. Best regards, Chris
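    Two things worth flagging that are not in the post itself. First, function parameters and locals such as result live on each thread's own stack, so another thread cannot overwrite them as feared in (a) and (b); only shared data needs the mutex. Second, as written, proxyThreadFunc tries to return a std::string where pthreads expects a void*, which should not even compile. A minimal sketch of one conventional shape, heap-allocating the result so it outlives the thread:

        // hand the result back through pthread's void* channel;
        // whoever joins the thread takes ownership and deletes it
        void* proxyThreadFunc(void* someClassObjPtr)
        {
            SomeClass* obj = static_cast<SomeClass*>(someClassObjPtr);
            std::string* result =
                new std::string(obj->somefunc("long string", "long string"));
            return result;
        }

    Note that the thread above is created detached, so pthread_join() is not available; either create it joinable or deliver the result through a callback or queue instead.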


  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there have been several reports of TFS databases growing too fast and too big. Notably, this has been observed when one starts to use more features of the testing system. Also, TFS 2010 handles test results differently from TFS 2008, and this leads to more data being stored in the TFS databases. As a consequence, some tools have been released to remove unneeded data from the database, along with some fixes for bugs found during this process. Further, some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this; among these are:

    - Anu's very important blog post here, which describes both the problem and solutions to handle it. She describes both the Test Attachment Cleaner tool and some QFE/CU releases that fix underlying bugs which prevented the tool from being fully effective.
    - Brian Harry's blog post here, which also describes the problem.
    - This forum thread, which describes the problem with some solution hints.
    - Ravi Shanker's blog post here, which describes best practices for solving this (TBP).
    - Grant Holiday's blog post here, which describes strategies for using the Test Attachment Cleaner both to detect space problems and to rectify them.

    The problem can be divided into the following areas:

    1. Publishing of test results from builds
    2. Publishing of manual test results and their attachments in particular
    3. Publishing of deployment binaries for use during a test run
    4. Bugs in SQL Server preventing total cleanup of data

    (All the published data above is published into the TFS database as attachments.) The test results will include all data being collected during the run. Some of this data can grow rather large, like IntelliTrace logs and video recordings. The pushing of binaries, which happens for automated test runs – including tests run during a build using code coverage, which will include all the files in the deployment folder – also contributes a lot to the size of the attached data.

    In order to handle this systematically, I have set up a 3-stage process:

    1. Find out if you have a database space issue
    2. Set up your TFS server to minimize potential database issues
    3. If you have the "problem", clean up the database and otherwise keep it clean

    Analyze the data

    Are your databases growing? Are unused test results growing out of proportion? To find out, you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain more detailed information. If you don't have too many databases, you can use the SQL Server reports from within Management Studio to analyze the database and table sizes. Or you can use a set of queries; I find queries often faster to use because I can tweak them the way I want. But be aware that these queries are non-documented and non-supported, and may change when the product team wants to change them.

    If you have multiple Project Collections, find out which might have problems.

    (Disclaimer: The queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed. I will try to update them for Dev-11 when it is released.)

    Open a SQL Management Studio session onto the SQL Server where you have your TFS databases. Use the query below to find the Project Collection databases and their sizes, in descending size order.
    use master
    select DB_NAME(database_id) AS DBName, (size/128) SizeInMB
    FROM sys.master_files
    where type=0
      and substring(db_name(database_id),1,4)='Tfs_'
      and DB_NAME(database_id)<>'Tfs_Configuration'
    order by size desc

Doing this on one of our SQL servers gives a result list from which it is pretty easy to see on which collection to start the work.

Find out which tables are possibly too large

Keep a special watch on the Tfs_Attachment table. Use the script at the bottom of Grant's blog to find the table sizes in descending size order. From Grant's blog we learnt that tbl_Content is in the Version Control category, so in our case the only major issue we have here is the tbl_AttachmentContent table.

Find out which team projects may have too large attachments

In order to use the TAC to find, and eventually delete, attachment data we need to find out which team projects have these attachments; the team project is a required parameter to the TAC. Use the following query to find this, replacing the collection database name with whatever applies in your case:

    use Tfs_DefaultCollection
    select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB
    from dbo.tbl_Attachment as a
    inner join tbl_testrun as tr on a.testrunid=tr.testrunid
    inner join tbl_project as p on p.projectid=tr.projectid
    group by p.projectname
    order by sum(a.compressedlength) desc

In our case (some names removed), out of more than 100 team projects accumulated over quite some years, it was pretty obvious that "Byggtjeneste – Projects" was the main team project to take care of, with the ones on lines 2-4 as the next candidates.

Check which attachment types take up the most space

It can be nice to know which attachment types take up the space, so run the following query:

    use Tfs_DefaultCollection
    select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB
    from dbo.tbl_Attachment as a
    inner join tbl_testrun as tr on a.testrunid=tr.testrunid
    inner join tbl_project as p on p.projectid=tr.projectid
    group by a.attachmenttype
    order by sum(a.compressedlength) desc

From the result it was pretty obvious that the problem in our case was the binary files, as also mentioned in Anu's blog.

Check which file types, by their extension, take up the most space

Run the following query:

    use Tfs_DefaultCollection
    select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) as Extension,
           sum(compressedlength)/1024 as SizeInKB
    from tbl_Attachment
    group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)
    order by sum(compressedlength) desc

Now you should have collected enough information to tell whether you have to do something, and what, as well as some of the information you need in order to set up your TAC settings file, both for a clean-up and for scheduled maintenance later.

Get your TFS server and environment properly set up

Whether or not you have the problem yet, you should ensure the TFS server is set up so that the risk of getting into this problem is minimized. To ensure this you should install the following set of updates and components. The assumption is that your TFS server is at SP1 level.

Install the QFE for KB2608743, which also contains detailed instructions on its use; download it from here. The QFE changes the default settings so that deployed binaries, which are used in automated test runs, are not uploaded.
Binaries will still be uploaded if:

Code coverage is enabled in the test settings.
You change UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user who hasn't installed this QFE.

The hotfix should be installed on:

The build servers (the build agents)
The machine hosting the Test Controller
Local development computers (Visual Studio)
Local test computers (MTM)

It is not required on the TFS server, the test agents or the build controller; it has no effect on these programs.

If you use SQL Server 2008 R2 you should also install CU 10 (or later). This CU fixes a potential problem with hanging "ghost" files. This seems to happen only in certain trigger situations, but to ensure it doesn't bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2.

Workaround: if you suspect hanging ghost files, they can be deduced, with some mental effort, from the ghost counters using the following SQL query:

    use master
    SELECT DB_NAME(database_id) as 'database', OBJECT_NAME(object_id) as 'objectname',
           index_type_desc, ghost_record_count, version_ghost_record_count,
           record_count, avg_record_size_in_bytes
    FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL, 'DETAILED')

The problem is a stalled ghost cleanup process. The remedy is to stop all components that depend on the SQL Server, like the TFS server and SPS services (that is, all applications that connect to it), then restart the SQL Server, and finally start up all dependent processes again. (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1. The R2 pre-SP1 and R2 SP1 lines have separate maintenance cycles and are maintained individually; each has its own set of CUs. When it comes, I will add the link to that CU here. The "hanging ghost file" issue came up after one had run the TAC and deleted enormous amounts of data; the SQL Server can get into this hanging state (without the CU) in certain cases because of this.

And of course, install and set up the Test Attachment Cleaner command-line power tool. This should be done following some guidelines from Ravi Shanker:

"When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed."

This rule minimizes the risk of the ghosted-hang problem occurring, and further makes it easier for the SQL Server ghosting process to work smoothly.

"Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system"

This is the last step in a 3-step process of removing SQL Server data: first the records are logically deleted, then they are cleaned out by the ghosting process, and finally the space is reclaimed using the shrink command.

Cleaning out the attachments

The TAC is run from the command line using a set of parameters and is controlled by a settings file. The parameters point out a server URI, including the team project collection, and also a specific team project. So in order to run this for multiple team projects regularly, one has to set up a script that runs the TAC multiple times, once for each team project.
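Any scripting or programming language will do for that driver. Below is a minimal sketch, not a production script: the collection URL, project names and settings-file path are placeholders you must replace, and it simply shells out to tcmpt (using the command-line syntax shown later in this article) once per team project:

    // Minimal driver sketch: runs the TAC once per team project.
    // The collection URL, project names and settings path are placeholders.
    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>
    #include <string>

    int main()
    {
        const std::string collection = "http://yourtfsserver:8080/tfs/DefaultCollection";
        const std::string settings   = "C:\\TAC\\nightly-cleanup.xml";
        const char *projects[] = { "TeamProjectA", "TeamProjectB", "TeamProjectC" };
        const std::size_t count = sizeof(projects) / sizeof(projects[0]);

        for (std::size_t i = 0; i < count; ++i)
        {
            // Build one tcmpt invocation per team project.
            std::string cmd = std::string("tcmpt attachmentcleanup")
                + " /collection:\"" + collection + "\""
                + " /teamproject:\"" + projects[i] + "\""
                + " /settingsfile:\"" + settings + "\""
                + " /outputfile:\"%temp%\\" + projects[i] + ".tcmpt.log\""
                + " /mode:delete";

            std::printf("Running: %s\n", cmd.c_str());
            if (std::system(cmd.c_str()) != 0)
                std::fprintf(stderr, "TAC reported a problem for %s\n", projects[i]);
        }
        return 0;
    }

Scheduled nightly (with the Windows Task Scheduler, for instance), this fits Ravi Shanker's advice above about deleting small chunks of data at regular intervals.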
When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means many more files than you would assume are necessary. This is a brute-force technique: it works, but you need to take care when cleaning up.

Grant has shown what their settings file looks like in his blog post, removing all attachments older than 180 days, as long as there are no active work items connected to them. That kind of setting can be useful to clean out all items, both in a one-time clean-up operation and in general scheduled maintenance.

There are two scenarios we need to consider:

Cleaning up an existing overgrown database
Maintaining a server to avoid an overgrown database, using scheduled TAC runs

1. Cleaning up a database which has grown too big due to these attachments.

This job is a "once" job. We do it once and then move on to make sure it won't happen again, by taking the actions in 2) below. In this scenario you should only consider the large files. Your goal should be simply to reduce the size; don't bother about the smaller stuff. That can be left to a scheduled TAC cleanup (see 2 below). Here you can use a very general settings file and just remove the large attachments, or you can choose to remove any old items. Grant's settings file is an example of the latter. A settings file to remove only large attachments could look like this:

    <!-- Scenario : Remove large files -->
    <DeletionCriteria>
      <TestRun />
      <Attachment>
        <SizeInMB GreaterThan="10" />
      </Attachment>
    </DeletionCriteria>

If you want to remove only dll's and pdb's above that size, add an Extensions section; without that section, attachments with any extension will be deleted:

    <!-- Scenario : Remove large files of type dll's and pdb's -->
    <DeletionCriteria>
      <TestRun />
      <Attachment>
        <SizeInMB GreaterThan="10" />
        <Extensions>
          <Include value="dll" />
          <Include value="pdb" />
        </Extensions>
      </Attachment>
    </DeletionCriteria>

Before you start up your scheduled maintenance, you should clear out all older items.

2. Scheduled maintenance using the TAC

Run a schedule every night that removes old items in small batches. It is important to run this often, like every night, in order to keep the number of deleted items low; that way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let's say 180 days. This can be combined with a restriction to keep attachments with active or resolved bugs. Doing this every night ensures that only small amounts of data are deleted.

    <!-- Scenario : Remove old items except if they have active or resolved bugs -->
    <DeletionCriteria>
      <TestRun>
        <AgeInDays OlderThan="180" />
      </TestRun>
      <Attachment />
      <LinkedBugs>
        <Exclude state="Active" />
        <Exclude state="Resolved"/>
      </LinkedBugs>
    </DeletionCriteria>

In my experience there are projects which are left with active or resolved work items although no further work is being done. It can be wise to also have a cleanup process with no restrictions on linked bugs at all; note that you then have to remove the whole LinkedBugs section. An approach which can work well here is a two-step one: use the schedule above, without the LinkedBugs section, as a sweeper task taking away all data older than you could possibly care about, and then have another scheduled TAC task that more specifically takes out the attachments you are not likely to use.
This second task can be much more specific, and based on your analysis it can clean out what you know is troublesome data:

    <!-- Scenario : Remove specific files early -->
    <DeletionCriteria>
      <TestRun>
        <AgeInDays OlderThan="30" />
      </TestRun>
      <Attachment>
        <SizeInMB GreaterThan="10" />
        <Extensions>
          <Include value="iTrace"/>
          <Include value="dll"/>
          <Include value="pdb"/>
          <Include value="wmv"/>
        </Extensions>
      </Attachment>
      <LinkedBugs>
        <Exclude state="Active" />
        <Exclude state="Resolved" />
      </LinkedBugs>
    </DeletionCriteria>

The readme document for the TAC says that it recognizes "internal" extensions, but it does in fact recognize any extension. To run the tool, use the following command:

    tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete

Shrinking the database

You can run a shrink-database command after the TAC has run in cases where a lot of data has been deleted. In that case you SHOULD do it, to free up all that space. But after the shrink operation you should rebuild the indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild the indexes; reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the space will be reused by the SQL Server when it needs to add more records. In fact, it is regarded as bad practice to shrink the database regularly, so on a daily maintenance schedule you should NOT shrink the database. To shrink the database you run a DBCC SHRINKDATABASE command, and then follow up with an index rebuild afterwards. I find the easiest way to do this is to create a SQL maintenance plan including the Shrink Database Task and the Rebuild Index Task, and just execute it when needed.

    Read the article

  • Is your team a high-performing team?

As a child I can remember looking out of the car window as my father drove along the Interstate in Florida, seeing prisoners wearing bright orange jump suits and prison guards keeping a watchful eye on them. The prisoners were taking part in a prison road gang. These road gangs were formed to help the state maintain the state highway infrastructure. The prisoners' primary responsibilities were to pick up trash and debris from the roadway. This is a prime example of a work group, or working group, as used by most prison systems in the United States. Work groups can be defined as a collection of individuals or entities working together to achieve a specific goal or accomplish a specific set of tasks. Typically these groups are only established for a short period of time and are dissolved once the desired outcome has been achieved. More often than not, group members feel as though they are expendable to the group, and some even dread being in the group at all. "A team is a small number of people with complementary skills who are committed to a common purpose, performance goals, and approach for which they are mutually accountable." (Katzenbach and Smith, 1993) So how do you determine that a team is a high-performing team? This can be determined by three baseline criteria: consistently high-quality output, the promotion of personal growth and well-being of all team members, and, most importantly, the ability to learn and grow as a unit. Initially, a team can produce high-performing output without meeting all three criteria, but this will erode over time: if team members feel detached from the group, or feel that they are not growing, the quality of the output will decline. High-performing teams are similar to work groups because both utilize a collection of individuals or entities to accomplish tasks. What distinguishes a high-performing team from a work group are its characteristics. High-performing teams contain five core characteristics, and these characteristics are what separate a group from a team: Purpose, Performance Measures, People with Tasks and Relationship Skills, Process, and Preparation and Practice. A high-performing team is much more than a work group, and typically has a lifecycle that can vary from team to team. The standard team lifecycle consists of five states and is comparable to a human life cycle. The five states of a high-performing team lifecycle are: Formulating, Storming, Normalizing, Performing, and Adjourning. The Formulating State of a team is first realized when the team members are first defined and roles are assigned to all members. This initial stage is very important because it can set the tone for the team and can ultimately determine its success or failure. In addition, this stage requires the team to have a strong leader, because team members are normally unclear about the specific roles, obstacles and goals that may lie ahead of them. Finally, this stage is where most team members initially meet one another, prior to working as a team, unless the team members already know each other. The Storming State normally arrives directly after the formulation of a new team, because there are still a lot of unknowns amongst the newly formed assembly.
As a general rule, most of the parties involved in the team are still getting used to the workload, pace of work, deadlines and the validity of the various tasks that need to be performed by the group. In this state everything is questioned because there are so many unknowns. Items commonly questioned include the credentials of others on the team, the actual validity of a project, and the leadership abilities of the team leader. This can be exemplified by looking at the interactions between animals when they first meet. Consider a scenario where two people are walking directly toward each other with their dogs: the dogs will automatically enter the Storming State because they do not know each other. Typically in this situation they attempt to establish which is more dominant, via play or fighting, depending on how the dogs interact with each other. Once dominance has been defined and accepted by both dogs, they will either want to play or leave, depending on how they interacted and other environmental variables. Once the Storming State has been worked through, the Normalizing State takes over. This state is entered by a team once all the questions of the Storming State have been answered and the team has been tested by a few tasks or projects. Typically, participants in the team are filled with energy and camaraderie, and hold a strong allegiance to team goals and objectives. A high school football team at the start of its season is a perfect example of the Normalizing State. The player positions have been assigned, the depth chart has been filled, and everyone is focused on winning each game. All of the players encourage and expect each other to perform at the best of their abilities and are united by competition from other teams. The Performing State is achieved by a team when its history, working habits, and culture solidify the team as one working unit. In this state team members can anticipate each other's specific behaviors, attitudes, and reactions, and challenges are seen as opportunities rather than problems. Additionally, each team member knows their own role in the team's success, and the roles of the others. This is the most productive state of a group and is where all the time invested in working together really pays off. If you watch an Olympic figure skating pair skate, you can easily see how the time spent working together benefits their performance. They skate as one unit, even though the team is made up of two skaters. Each skater has their own routine completely memorized, as well as their partner's. This allows them to anticipate each other's moves on the ice and makes their skating look effortless. The final state of a team is the Adjourning State. This state is where the accomplishments of the team and of each individual team member are recognized. Additionally, this state allows for reflection on the interactions between team members, the work accomplished, and the challenges that were faced. Finally, the team celebrates the challenges they have faced and overcome as a unit. Currently in the workplace, teams are divided into two different types: co-located and distributed teams. Co-located teams are defined as the traditional group of people working together in an office, according to Andy Singleton of Assembla. This traditional type of team has dominated business in the past due to inadequate technology, which forced workers to interact primarily via face-to-face meetings. Team meetings are primarily led by the person with the highest status in the company.
Having personally participated in meetings of this type, I have seen how a select few of the team members dominate the flow of communication, which reduces the input of others in group discussions. Since discussions are dominated by a select few individuals, group decisions are skewed in favor of the opinions of those who communicate the most in meetings. In addition, team members might not give their full opinions on a topic of discussion, in part so as not to offend or create controversy amongst the team, which can further push decisions made in meetings toward the opinions of the dominating team members. Distributed teams are, by definition, spread across an area or subdivided into separate sections, and that is exactly what distributed teams are when compared to a more traditional team. It is commonplace for distributed teams to have team members across town, in the next state, across the country and, with the advances in technology over the last 20 years, even across the world. These teams allow for more diversity than the other type of team because they allow for more flexibility regarding location. A team could consist of a 30-year-old male Italian project manager from New York, a 50-year-old female Hispanic from California, and a collection of programmers from India, because technology allows them to communicate as if they were standing next to one another. In addition, distributed team members consult with more team members prior to making decisions than traditional teams do, and take longer to come to decisions due to differences in time zones and cultural events. However, team members feel more empowered to speak out when they do not agree with the team, and to notify others of potential issues regarding the work that the team is doing. Virtual teams, which are a subset of the distributed team type, are changing organizational strategies, because a team can now in essence be working 24 hours a day by utilizing employees in various time zones and locations. A primary example of this is customer service departments: a company can have multiple call centers spread across multiple time zones, allowing it to appear to be open 24 hours a day while all its employees work from 9 AM to 5 PM every day. Virtual teams also allow human resources departments to go after the best talent for the company regardless of where the potential employee lives, because they will be part of a virtual team; all that is needed is the proper technology, set up to allow everyone to communicate. In addition to allowing employees to work from home, the company can save space and resources by not having to provide a desk for every team member. In fact, team members who only occasionally come into the office can share one desk amongst multiple people. This is definitely a cost-cutting plus, given the current state of the economy. One thing that can turn a team into a high-performing team is leadership. High-performing team leaders need to invest in ongoing personal development; provide team members with the direction, structure, and resources needed to accomplish their work; make the right interventions at the right time; and help the team manage the boundaries between the team and the various external parties involved in the team's work. A team leader needs to invest in ongoing personal development in order to effectively manage their team. People have said that attitude is everything; this is very true of leaders and leadership.
A team takes on the attitudes and behaviors of its leaders, and this can potentially harm the team and the team's output. Leaders must concentrate on self-awareness and on understanding their team's group dynamics to fully understand how to lead them. In addition, continually learning new leadership techniques from other effective leaders is also very beneficial. Providing team members with the direction, structure, and resources that they need to accomplish their work collectively sounds easy, but it is not. Leaders need to be able to effectively communicate to their team how their work helps the company reach its organizational vision. Conversely, the leader needs to allow the team to work autonomously, within specific guidelines, to turn the company's vision into a reality. That being said, the team must be appropriately staffed according to the size and complexity of the team's tasks. These tasks should be clear and meaningful to the company's objectives, and should allow for feedback to be exchanged between the leader and the team members, and between the leader and upper management. Now, if the team is properly staffed and has a clear and full understanding of what is to be done, the company must also supply the workers with the proper tools to achieve the tasks that they are asked to do. No one should be asked to dig a hole without being given a shovel. Finally, leaders must reward their team members for the accomplishments they achieve. Rewards could range from a simple congratulatory email, to a party marking the completion of a large project, to monetary rewards. Managing boundaries is very important for team leaders, because boundary issues can alter the attitudes of team members and add undue stress to the team, which will force them to lose focus on the tasks at hand. Team leaders should promote communication between team members so that burdens are shared amongst the team and solutions can be derived from hearing the opinions of multiple sources. This also reinforces team camaraderie and working as a unit. Team leaders must manage the type and timing of interventions so as not to create an even bigger mess within the team. Poorly timed interventions can really deflate team members and make them question themselves, which can in turn provoke further, undue interventions by the team leader. Typically, the best time for an intervention is when the team is just starting to form, so that unproductive behaviors are removed from the team and it can retain focus on its agenda. If an intervention is effectively executed, the team will feel energized about the work that they are doing, communication and interaction amongst the group will be promoted, and morale will improve overall. High-performing teams are very important to organizations because they consistently produce high-quality output and develop a collective purpose for their work. This drive to succeed allows team members to utilize their specific talents, allowing for growth in those areas. In addition, these team members usually take on a sense of ownership of their projects and feel that the other team members are irreplaceable.

References:
http://blog.assembla.com/assemblablog/tabid/12618/bid/3127/Three-ways-to-organize-your-team-co-located-outsourced-or-global.aspx
Katzenbach, J.R. & Smith, D.K. (1993). The Wisdom of Teams: Creating the High-performance Organization. Boston: Harvard Business School.

    Read the article

  • CodePlex Daily Summary for Monday, December 06, 2010

CodePlex Daily Summary for Monday, December 06, 2010

Popular Releases

Aura: Aura Preview 1: Rewritten from scratch. This release supports getting the color only from the icon of the foreground window.

myCollections: Version 1.2: New in version 1.2: big performance improvement; new design (added Outlook-style view, new detail view, new Group By); added sort by media; added Manage Movie Studio; zoom preference is now saved; media names are now editable; added Portuguese version; you can now hide the details panel; added support for FLAC tags; you can now import books from a BibTeX XML file; bug fixing.

mytrip.mvc (CMS & e-Commerce): mytrip.mvc 1.0.49.0 beta: mytrip.mvc 1.0.49.0 beta web, for installation on hosting. System requirements: .NET 4.0, MSSQL 2008 or MySql (auto creation of tables in the database; with .\SQLEXPRESS, auto creation of the database in the App_Data folder). mytrip.mvc 1.0.49.0 beta src, system requirements: Visual Studio 2010 or Web Developer 2010, MSSQL 2008 or MySql (auto creation of tables; with .\SQLEXPRESS, auto creation of the database in the App_Data folder), Connector/Net 6.3.4, MVC3 RC. WARNING: to run and debug mytrip.mvc 1.0.49.0 beta src, download and ...

Menu and Context Menu for Silverlight 4.0: Silverlight Menu and Context Menu v2.3 Beta: Added keyboard navigation support with access keys; shortcuts like Ctrl-Alt-A are now supported (where the browser permits it); the PopupMenuSeparator is now completely based on the PopupMenuItem class; moved item manipulation code to a partial class in PopupMenuItemsControl.cs; moved menu management and keyboard navigation code to the new PopupMenuManager class; simplified the layout by removing the RootGrid element (all content is now placed in OverlayCanvas and is accessed by the new ...

SubtitleTools: SubtitleTools 1.0: First public release.

MiniTwitter: 1.62: MiniTwitter 1.62 (Japanese release notes).

Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (December 2010): The release is targeted for stable daily use. With improved performance and enhanced compatibility with several of the latest PHP open source applications, this release makes a perfect replacement for your old PHP runtime. Changes made within this release include the following and much more: performance improvements based on real-world application experience; we determined the biggest bottlenecks and found and removed overheads causing performance problems in many PHP applications; reimplemented nat...

Chronos WPF: Chronos v2.0 Beta 3: Release notes: updated introduction document; updated Visual Studio 2010 extension (vsix) package; added horizontal scrolling to the main window TaskBar; added new styles for ListView, ListViewItem, GridViewColumnHeader, ...; added a new WindowViewModel class (allowing data fetching); added a new Navigate method (with several overloads) to the NavigationViewModel class (protected); reimplemented Task usage for the WorkspaceViewModel.OnDelete method; removed the reflection effect...

MDownloader: MDownloader-0.15.26.7024: Fixed updater; fixed Megaupload.

DJ - jQuery WebControls for ASP.NET: DJ 1.2: What is new? Update to support jQuery 1.4.2; update to support jQuery UI 1.8.6; update to Visual Studio 2010; new WebControls with samples added: Autocomplete WebControl, Button WebControl, ToggleButton WebControl. The example web site is included in the source code project.

LateBindingApi.Excel: LateBindingApi.Excel Release 0.7g: Differences from the previous version: additional Interior properties; Group/Ungroup methods for Range; bugfix for COM reference handling of the Application object in some classes. Release+Samples V0.7g contains the runtime DLL and sample projects. Sample projects: COMAddinExample demonstrates a version-independent COM add-in; Example01: background colors and borders for cells; Example02: font attributes and alignment for cells; Example03: number formats; Example04: Shapes, WordArts, P...

ESRI ArcGIS Silverlight Toolkit: November 2010 - v2.1: ESRI ArcGIS Silverlight Toolkit v2.1. Added Windows Phone 7 build. New controls added: InfoWindow, ChildPage (Windows Phone 7 only). See what's new here, with full details at http://help.arcgis.com/en/webapi/silverlight/help/#/What_s_new_in_2_1/016600000025000000/ Note: requires Visual Studio 2010, .NET 4.0 and Silverlight 4.0.

ASP .NET MVC CMS (Content Management System): Atomic CMS 2.1.1: Atomic CMS 2.1.1 release notes; Atomic CMS installation guide.

Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.5 beta Released: Hi, today we are releasing Visifire 3.6.5 beta with the following new feature: a new property, AutoFitToPlotArea, has been introduced in DataSeries; AutoFitToPlotArea will bring bubbles inside the PlotArea in order to avoid clipping of bubbles in a bubble chart. This release also includes a few bug fixes: AxisX labels were getting clipped if an angle was set for AxisLabels and ScrollingEnabled was not set on the chart; if the LabelStyle property was set to 'Inside', the size of the pie was not proper. Yo...

EnhSim: EnhSim 2.1.1: This release adds in the changes for 4.03a. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 Switched Searing Flames bac...

AI: Initial 0.0.1: It's simply just one code file; it simulates an AI and a machine in a simulated world. The AI has a little understanding of its body machine and parts, and is able to use its feet to do actions, just starting and stopping walking. The world is all white, with nothing but the machine on a white planet. Colors, odors and position information make no sense. I'm a former C# programmer and I'm learning F# during this project; although I'm still not a good F# programmer, in this project I am learning to prog...

NKinect: NKinect Preview: Build features: accelerometer reading; motor serial number property; realtime image update; realtime depth calculation; export to PLY (on demand); control motor LED; control Kinect tilt.

Microsoft - Domain Oriented N-Layered .NET 4.0 App Sample (Microsoft Spain): V1.0 - N-Layer DDD Sample App .NET 4.0: Required software (Microsoft base software needed for the development environment): Visual Studio 2010 RTM & .NET 4.0 RTM (final versions); Expression Blend 4; SQL Server 2008 R2 Express/Standard/Enterprise; Unity Application Block 2.0, published May 5th 2010: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=2D24F179-E0A6-49D7-89C4-5B67D939F91B&displaylang=en http://unity.codeplex.com/releases/view/31277 PEX & MOLES 0.94.51023.0, 29/Oct/2010, Visual Studio 2010 Power Tools: http://re...

Sense/Net Enterprise Portal & ECMS: SenseNet 6.0.1 Community Edition: Sense/Net 6.0.1 Community Edition. This half year we have been working quite fiercely to bring you the long-awaited release of Sense/Net 6.0. Download this Community Edition to see what we have been up to. These months we have worked on getting the WebCMS capabilities of Sense/Net 6.0 up to par. New features include: a new, powerful page and portlet editing experience; HTML and CSS cleanup; a new, powerful site skinning system; upgraded, lightning-fast indexing and querying via Lucene. Limita...

Minecraft GPS: Minecraft GPS 1.1.1: New features: compass; new style; set opacity on main window to allow overlay of Minecraft; open world in any folder. Fixes: fixed style so the listbox won't grow the window size; fixed open file dialog issue on non-Vista-kernel machines.

New Projects

AboutTime: The AboutTime WPF controls project is aimed at developing custom controls that relate to time.

aReader: aReader is free software used as an XPS document reader. It is developed in C# and uses Windows Presentation Foundation with .NET Framework 3.5, mixed with the Ribbon Controls Library for the GUI (graphical user interface) to make the application user friendly.

Battle Net Info: Battle Net Info provides information on a StarCraft 2 player from his profile page.

Bencoder: A library for encoding/decoding bencode files or strings. It is being developed in C#.

BiBongNet: BiBongNet project.

Binhnt: Binhnt.

C++ Bloom Filter Library: C++ Bloom Filter Library.

Child Sponsorship Manager: Sponsorship Manager is developed for an NPO that provides child sponsorship in developing countries. It is possible to track sponsor-child relations, gifts and payments. It is developed in Visual Basic .NET.

DocBlogger: This is a tool for automatically converting existing XML comments from your project into MSDN-style HTML for posting to the CodePlex site. It uses the MetaBlog API to post code, but can be used in a copy-paste fashion right away.

Dynamic Rdlc WebControl: "Dynamic Rdlc WebControl" is an ASP.NET WebControl to generate dynamic reports in RDLC format without generating physical files. Supports groups and totalizers. It is developed with Microsoft Visual Studio 2010, ASP.NET and C# 4.

Fake Call for Windows Phone 7: Coding4Fun Windows Phone 7 fake call application.

Flow launcher: Flow is the world's fastest application launcher, using an onscreen keyboard and mnemonics to achieve lightning-fast shortcut launching.

GaDotNet: GaDotNet is an open source library designed to make it easy to log page views, events and transactions through C# code, without using JavaScript or even needing a browser.

HackerNews for WP7: HackerNews is a WP7 client for the HackerNews website.

How much is this meeting costing us?: Coding4Fun Windows Phone 7 "How much is this meeting costing us?" application.

KLAB: KLAB.

Map Navigator: Map Navigator is a Silverlight application intended to work with maps.

MNRT: MNRT implements (demonstrates) several techniques to realize fast global illumination for dynamic scenes on Graphics Processing Units (GPUs) using CUDA. A GPU-based kd-tree was implemented to accelerate both ray tracing and photon mapping.

MVC Helpers: MVC Helper makes developing views easier. It contains extended helper classes to render view content. It is developed in C#. Extended helpers for Grid have been created so far.

MVCPets: This is a project dedicated to providing a free platform to be used by animal rescue organizations. The hope is that this project can fill the void for those rescue groups that can't afford to pay a professional web designer/developer.

MyGraphicProgram

NAI: This project is a step-by-step illustration of some numerical analysis methods.

Nemono: Nemono is an application that runs in the background and is activated by pressing a key combination like ALT+W. When activated, Nemono uses context awareness to present relevant shortcuts to the user, and mnemonics to execute shortcuts.

opojo: opojo.

OxyPlot: OxyPlot is a .NET library for making XY line plots. The focus is on simplicity and performance. The library contains custom controls for WPF and Windows Forms. The plots can also be exported to SVG, PDF and PNG.

PowerChumby: PowerChumby is a Perl CGI script and a PowerShell module that gives you a PowerShell way of controlling your Chumby.

RHoK Berlin Visio Projekt: A Random Hacks of Kindness Berlin project for the Senatsverwaltung für Gesundheit, Umwelt und Verbraucherschutz. Query, integrate and display external data in Microsoft Visio. It is developed in C#.

sc2md: starcraft.md news portal.

Slide Show: Coding4Fun Windows Phone 7 slide show application.

smartcon: smart control center.

TFS Fav source: Favourites for source locations in VS.

Twitter Followers Monitor: A free and open source tool that lets you monitor any Twitter account for its new and lost followers, even if it's not yours and you don't have its credentials. It allows you to add several Twitter accounts and be updated right from your desktop.

    Read the article

  • CodePlex Daily Summary for Wednesday, December 01, 2010

CodePlex Daily Summary for Wednesday, December 01, 2010

Popular Releases

UltimateJB: UltimateJB 2.02 PL3 KAKAROTO + CE-X-3.41 EvilSperm: Here is a version that many have been waiting for impatiently: the CEX341 version makes it possible to play games that require firmware 3.50 (some simply do not work); for now, CEX341 is only available for PS3s on firmware 3.41; the PL3 KAKAROTO version integrates his latest modifications and now supports firmware 3.30. Conclusion: UltimateJB CEX341 spoofs firmware 3.41 as 3.50 (eases the use of certain games with openManage...

Menu and Context Menu for Silverlight 4.0: Silverlight Menu and Context Menu v2.2 Beta2: Added keyboard navigation support with access keys; shortcuts like Ctrl-Alt-A are now supported (where the browser permits it); the PopupMenuSeparator is now completely based on the PopupMenuItem class; moved item manipulation code to a partial class in PopupMenuItemsControl.cs; simplified the layout by removing the RootGrid element (all content is now placed in OverlayCanvas and is accessed by the new ContentRoot property); added properties AccessKey, AccessKeyModifier, AccessKeyElemen...

EnhSim: EnhSim 2.1.1: This release adds in the changes for 4.03a. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 Switched Searing Flames bac...

AI: Initial 0.0.1: It's simply just one code file; it simulates an AI and a machine in a simulated world. The AI has a little understanding of its body machine and parts, and is able to use its feet to do actions, just starting and stopping walking. The world is all white, with nothing but the machine on a white planet. Colors, odors and position information make no sense. I'm a former C# programmer and I'm learning F# during this project; although I'm still not a good F# programmer, in this project I am learning to prog...

Microsoft - Domain Oriented N-Layered .NET 4.0 App Sample (Microsoft Spain): V1.0 - N-Layer DDD Sample App .NET 4.0: Required software (Microsoft base software needed for the development environment): Visual Studio 2010 RTM & .NET 4.0 RTM (final versions); Expression Blend 4; SQL Server 2008 R2 Express/Standard/Enterprise; Unity Application Block 2.0, published May 5th 2010: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=2D24F179-E0A6-49D7-89C4-5B67D939F91B&displaylang=en http://unity.codeplex.com/releases/view/31277 PEX & MOLES 0.94.51023.0, 29/Oct/2010, Visual Studio 2010 Power Tools: http://re...

Sense/Net Enterprise Portal & ECMS: SenseNet 6.0.1 Community Edition: Sense/Net 6.0.1 Community Edition. This half year we have been working quite fiercely to bring you the long-awaited release of Sense/Net 6.0. Download this Community Edition to see what we have been up to. These months we have worked on getting the WebCMS capabilities of Sense/Net 6.0 up to par. New features include: a new, powerful page and portlet editing experience; HTML and CSS cleanup; a new, powerful site skinning system; upgraded, lightning-fast indexing and querying via Lucene. Limita...

Minecraft GPS: Minecraft GPS 1.1.1: New features: compass; new style; set opacity on main window to allow overlay of Minecraft; open world in any folder. Fixes: fixed style so the listbox won't grow the window size; fixed open file dialog issue on non-Vista-kernel machines.

DotSpatial: DotSpatial 11-28-2001: This release introduces some exciting improvements: support for big rasters, both in display and changing the scheme; faster raster scheme creation for all rasters; caching of the "sample" values so that, once obtained, the raster symbolizer dialog loads faster; reprojection supported for raster and image classes; affine transform fully supported for images and rasters, so skewed images are now possible; projection uses better checks when loading unprojected layers; GDAL raster support f...

SuperWebSocket: SuperWebSocket(60438): This is the first release of SuperWebSocket. Because it is based on SuperSocket, most features of SuperSocket are supported in SuperWebSocket. The source code includes a LiveChat demo.

MDownloader: MDownloader-0.15.25.7002: Fixed updater; fixed FileServe; fixed LetItBit.

Cropper: 1.9.4: Mostly fixes for issues, with a few feature requests. Fixed issues 2730, 3638, 14467, 11044, 11447, 11448, 11449, 14665. Implemented features 6123, 11581.

PFC: PFC for PB 11.5: This is just a migration from the 11.0 code. No changes have been made yet (and they are needed) for it to work properly with 11.5.

PDF Rider: PDF Rider 0.5: This release does not add any new feature for PDF manipulation, but enables automatic update checking, so it is recommended to install it in order to stay updated with the next releases. Prerequisites: a Microsoft Windows operating system (XP - Vista - 7); the Microsoft .NET Framework 3.5 runtime; a PDF rendering software (i.e. Adobe Reader) that can be opened inside Internet Explorer. Installation instructions: choose one of the following methods: 1. Download and run the "pdfRider0...

BCLExtensions: BCL Extensions v1.0: The files associated with v1.0 of the BCL Extensions library.

XamlQuery/WPF - The Write Less, Do More, WPF Library: XamlQuery-WPF v1.2 (Runtime, Source): This is the first release of the popular XamlQuery library for WPF. XamlQuery has already gained recognition among Silverlight developers.

Math.NET Numerics: Beta 1: First beta of Math.NET Numerics. Only contains the managed linear algebra provider. Beta 2 will include the native linear algebra providers along with better documentation and examples.

Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-11-25: Code samples for Visual Studio 2010.

Wii Backup Fusion: Wii Backup Fusion 0.8.5 Beta: WBFS repair (default) options fixed; transfer to image fixed; settings UI widget names fixed; some little bug fixes. You need to reset the settings! Delete WiiBaFu's config file or registry entries: Linux: ~/.config/WiiBaFu/wiibafu.conf; Windows: HKEY_CURRENT_USER\Software\WiiBaFu\wiibafu; Mac OS X: ~/Library/Preferences/com.wiibafu.wiibafu.plist. Caution: this is a BETA version! Errors, crashes and data loss are not impossible! Use in test environments only, not on productive syste...

Minemapper: Minemapper v0.1.3: Added process count and world size calculation progress to the status bar. Added a View -> 'Status Bar' menu item to show/hide the status bar. The status bar is automatically shown when loading a world. Added a prompt, when loading a world, to use or clear cached images.

Sexy Select: sexy select v0.4: Changes in v0.4. Added method: elements. This returns all the option elements that are currently added to the select list. Added method: selectOption. This method accepts two values, the element to be modified and the selected state (true/false).

New Projects

Abstract SQL: An ADO.NET SQL classes wrapper; it provides a clean, fluent interface library that allows you to write very concise code and avoid the repetitiveness of ADO.NET. It can be used in all types of applications, and even supports CLR stored procedures. It is written in C# 2.0.

AI: The Artificial Intelligence program built on F#.

Another .NET wrapper for the MailChimp API: A .NET wrapper for the MailChimp API 1.3, written in F# by DK.

App-V Tool Suite: A collection of tools for Microsoft Application Virtualization (App-V). These tools were developed at Sinclair Community College in the process of setting up and then supporting its App-V implementation.

b2b: Project-01

tBham.CmuCam: A library and GUI front-end for the CMUcam series of cameras for use by .NET-based applications; the GUI can also run standalone.

BizTalk Deployment Tool: Yet another BizTalk deployment tool, to make deployment easier for BizTalk solutions that need to support orchestration versioning and multiple environments. Features include, but are not limited to: GAC verification, receive location and send port management, and orchestration state management.

Certificate Request (PKCS#10) Generator: A .NET application that can create PKCS#10 certificate requests, either by generating a new key or reusing a preexisting one. Minimum requirements: Windows Vista and above, .NET 2.0.

Cloud Billing: Cloud billing proposal.

Currency Converter: Coding4Fun Windows Phone 7 currency converter.

DiffLib: A diff implementation class library for .NET 3.5, written in C#.

expression Blend 4.0 Comment Uncomment Xaml Code Extension: The BlendShortCuts extension makes it easier for Blend users to comment and uncomment XAML code. You'll no longer have to insert the <!-- --> tag; just press Alt+C and Alt+U. It is developed in C# (.NET Framework 4.0, Microsoft Expression library).

Game Studio: Game Studio is an integrated development environment (IDE) that helps game developers design their games. The software generates code for meshes, models, pictures and sounds. It has a form designer, a code editor and a special framework for Game Studio.

Gridify for ASP.NET MVC: An easy solution for grids on top of ASP.NET MVC. Make grids from your data tables in a really lightweight manner! How lightweight? Exactly TWO line changes. You don't have to add new action parameters or anything. Really simple!

In-House Inventory System: A normal inventory management system, with normal use cases and normal functions.

In-House Money Saver: Just for a school project; it may not be useful, however it is my first project using Microsoft technology.

ITICup2009: Programs used in ITICup 2009.

libobs++: An implementation of a signal/slot system, created exclusively for developers who use Visual Studio 2010's IDE/compiler. The classes are templated and really easy to learn. The callbacks are fast and type-safe.

Longan ERP: Open source business solutions.

Lucienne - WebScripting Assignment: Lucienne - WebScripting assignment.

Mercurial.Net: A .NET wrapper class library for the Mercurial Distributed Version Control System (DVCS) (http://mercurial.selenic.com/), written in C# 3.0 for the .NET 3.5 Client Profile runtime.

Mini C++ UI Framework: Mini C++ UI framework for my work.

MinuRaamu: MeieRaamu.

Mobile Device Browser File: The Mobile Browser Definition File contains definitions for individual mobile devices and browsers. At run time, ASP.NET uses the information in the request header to determine what type of device/browser has made the request.

NKinect: A .NET 4.0 (C++/CLI) based open source implementation of Microsoft Kinect. Currently supports the CodeLaboratories NUI SDK, but will be brought to OpenKinect/libfreenect when a Windows version is stable.

Oblivion Cell - Oblivion mmo project: We are working on creating an MMORPG mod/add-on for Oblivion using C#, hooking into the actual game with OBSE and a few other mods. We also use the Cell Framework for our base server system.

Optional: Optional is a library to create options and commands from command-line arguments. It uses convention over configuration to get out of your way. Attributes can be used to set properties which differ from the convention.

Paypal adaptive payments using .NET (C#): This is a C# project to help you interface with the PayPal adaptive payments API. https://www.x.com/community/ppx/adaptive_payments

POS bd: Dynamic POS. This project is being developed with a focus on the local market only. The initial project is targeted at a single company whose main business is selling lighting-bulb instruments.

PowerEvents for Windows PowerShell: A Microsoft Windows PowerShell module to assist with managing permanent WMI event consumer registrations. You can use this module to register for, and respond to, system-level events available to WMI.

PPL Daily Report Helper: A daily reporting helper tool for Phoenix Propulsion Labs.

Random Passwd Generator: A simple program developed in C# that generates random passwords of the specified length with the specified characters. It's in beta version.

SharePoint MUI Manager: The SharePoint MUI Manager allows you to translate user-specified text, such as the title and description of the site, through the web interface. There is no need to download, edit and upload a RESX file.

Sqlite Client for Windows Phone: Sqlite client for Windows Phone 7. Supports transactions.

TouchToolkit: A toolkit to simplify the complexities of multi-touch application development and testing. It currently supports WPF and Silverlight.

TSI4: A project for an engineering faculty.

VS2010 Debugger Visualizers Contrib: This project is for hosting user-contributed debugger visualizers for Microsoft Visual Studio 2010.

Windows Shell Framework: Windows Shell Framework is a set of managed wrappers for a subset of the Windows shell. This project is a fork of the .NET Shell Namespace Extension Framework.

Work in Progress: Work in progress.

WPFtest: A simple test project for experimenting with WPF.

YingYangXonix: YingYangXonix.

ZeroUnit.net: The zero-dependency, zero-friction, sugar-free unit testing framework for .NET.

ZXing barcode for Windows Phone: Barcode support for Windows Phone 7 using ZXing.

    Read the article

  • Laissez les bons temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    "Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack. Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while. Self-Service BI Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI. This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report like formatting or an addtional data visualization to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me: PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.) Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.) One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. 
(This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.)

Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.)

Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.)

This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and it grows in complexity in proportion to the need to simplify BI for users.

It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and even an acknowledgment that users will get the data they want, even if it means they have to manually enter it into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations.

Collaborative BI

I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management system from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was able to produce my own BI from that system as well, but didn't know that's what I was doing at the time.
Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week, where I got to reminisce with him for a bit) and that began a career that I never imagined at the time. Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools would catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective of the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people."

The Microsoft BI Stack in General

A question I hear all the time from students when I'm teaching is how to know which tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services, which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years. Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?"

Expo Hall

I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught my eye, however, that I'll share here.
Pragmatic Works. Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions.

Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught my eye, which I think is a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive them in any (or all) of the following formats where possible: PDF, epub, .mobi for Kindle and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I find myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind!

Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or there were multiple sessions running concurrently that I wanted to see. Fortunately, many of the sessions are accessible for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation as shown below.

    Read the article

  • C#/.NET Fundamentals: Choosing the Right Collection Class

    - by James Michael Hare
    The .NET Base Class Library (BCL) has a wide array of collection classes at your disposal that make it easy to manage collections of objects. While it's great to have so many classes available, it can be daunting to choose the right collection to use for any given situation. As hard as it may be, choosing the right collection can be absolutely key to the performance and maintainability of your application! This post will look at breaking down any confusion between each collection and the situations in which they excel. We will be spending most of our time looking at the System.Collections.Generic namespace, which is the recommended set of collections.

The Generic Collections: System.Collections.Generic namespace

The generic collections were introduced in .NET 2.0 in the System.Collections.Generic namespace. This is the main body of collections you should tend to focus on first, as they will tend to suit 99% of your needs right up front. It is important to note that the generic collections are unsynchronized. This decision was made for performance reasons because, depending on how you are using the collections, it's completely possible that synchronization may not be required or may be needed on a higher level than simple method-level synchronization. Furthermore, concurrent read access (all writes done at the beginning and never again) is always safe, but for concurrent mixed access you should either synchronize the collection or use one of the concurrent collections. So let's look at each of the collections in turn and its various pros and cons; at the end we'll summarize with a table to help make it easier to compare and contrast the different collections.

The Associative Collection Classes

Associative collections store a value in the collection by providing a key that is used to add/remove/lookup the item. Hence, the container associates the value with the key. These collections are most useful when you need to lookup/manipulate a collection using a key value. For example, if you wanted to look up an order in a collection of orders by an order id, you might have an associative collection where the key is the order id and the value is the order. The Dictionary<TKey,TValue> is probably the most used associative container class. The Dictionary<TKey,TValue> is the fastest class for associative lookups/inserts/deletes because it uses a hash table under the covers. Because the keys are hashed, the key type should correctly implement GetHashCode() and Equals() appropriately, or you should provide an external IEqualityComparer to the dictionary on construction. The insert/delete/lookup time of items in the dictionary is amortized constant time - O(1) - which means no matter how big the dictionary gets, the time it takes to find something remains relatively constant. This is highly desirable for high-speed lookups. The only downside is that the dictionary, by nature of using a hash table, is unordered, so you cannot easily traverse the items in a Dictionary in order.
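To make the Dictionary mechanics concrete, here is a minimal sketch (my illustration, not from the original post) of looking up an order by order id; the Order type and the sample ids are invented, and StringComparer.OrdinalIgnoreCase stands in for an externally supplied IEqualityComparer:

using System;
using System.Collections.Generic;

class Order
{
    public string Id { get; set; }
    public decimal Total { get; set; }
}

class Program
{
    static void Main()
    {
        // Case-insensitive keys via an external IEqualityComparer, so
        // "ord-1" and "ORD-1" hash and compare as the same key.
        var orders = new Dictionary<string, Order>(StringComparer.OrdinalIgnoreCase)
        {
            { "ORD-1", new Order { Id = "ORD-1", Total = 12.50m } },
            { "ORD-2", new Order { Id = "ORD-2", Total = 99.99m } }
        };

        Order found;
        if (orders.TryGetValue("ord-1", out found))  // amortized O(1) lookup
        {
            Console.WriteLine(found.Total);          // prints 12.50
        }
    }
}

Note that TryGetValue avoids the KeyNotFoundException you would get from the indexer on a missing key.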
The SortedDictionary<TKey,TValue> is similar to the Dictionary<TKey,TValue> in usage but very different in implementation. The SortedDictionary<TKey,TValue> uses a binary tree under the covers to maintain the items in order by the key. As a consequence of sorting, the type used for the key must correctly implement IComparable<TKey> so that the keys can be correctly sorted. The sorted dictionary trades a little bit of lookup time for the ability to maintain the items in order; thus insert/delete/lookup times in a sorted dictionary are logarithmic - O(log n). Generally speaking, with logarithmic time, you can double the size of the collection and it only has to perform one extra comparison to find the item. Use the SortedDictionary<TKey,TValue> when you want fast lookups but also want to be able to maintain the collection in order by the key.

The SortedList<TKey,TValue> is the other ordered associative container class in the generic containers. Once again SortedList<TKey,TValue>, like SortedDictionary<TKey,TValue>, uses a key to sort key-value pairs. Unlike SortedDictionary, however, items in a SortedList are stored as an ordered array of items. This means that insertions and deletions are linear - O(n) - because deleting or adding an item may involve shifting all items up or down in the list. Lookup time, however, is O(log n) because the SortedList can use a binary search to find any item in the list by its key. So why would you ever want to do this? Well, the answer is that if you are going to load the SortedList up front, the insertions will be slower, but because array indexing is faster than following object links, lookups are marginally faster than in a SortedDictionary. Once again, I'd use this in situations where you want fast lookups and want to maintain the collection in order by the key, and where insertions and deletions are rare.

The Non-Associative Containers

The other container classes are non-associative. They don't use keys to manipulate the collection but rely on the object itself being stored, or some other means (such as an index), to manipulate the collection. The List<T> is a basic contiguous storage container. Some people may call this a vector or dynamic array. Essentially it is an array of items that grows once its current capacity is exceeded. Because the items are stored contiguously as an array, you can access items in the List<T> by index very quickly. However, inserting and removing at the beginning or middle of the List<T> is very costly because you must shift all the items up or down as you delete or insert respectively. However, adding and removing at the end of a List<T> is an amortized constant operation - O(1). Typically List<T> is the standard go-to collection when you don't have any other constraints, and typically we favor a List<T> even over arrays unless we are sure the size will remain absolutely fixed. The LinkedList<T> is a basic implementation of a doubly-linked list. This means that you can add or remove items in the middle of a linked list very quickly (because there are no items to move up or down in contiguous memory), but you also lose the ability to index items by position quickly. Most of the time we tend to favor List<T> over LinkedList<T> unless you are doing a lot of adding and removing from the collection, in which case a LinkedList<T> may make more sense. The HashSet<T> is an unordered collection of unique items. This means that the collection cannot have duplicates and no order is maintained. Logically, this is very similar to having a Dictionary<TKey,TValue> where the TKey and TValue both refer to the same object. This collection is very useful for maintaining a collection of items you wish to check membership against. For example, if you receive an order for a given vendor code, you may want to check to make sure the vendor code belongs to the set of vendor codes you handle.
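Here is a minimal sketch of exactly that membership check (my illustration; the vendor codes and variable names are invented):

using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        // The set of vendor codes we handle; duplicates are silently ignored.
        var knownVendors = new HashSet<string>(StringComparer.OrdinalIgnoreCase)
        {
            "ACME", "CONTOSO", "FABRIKAM"
        };

        string incomingCode = "contoso";

        // O(1) membership test - no ordering is maintained.
        Console.WriteLine(knownVendors.Contains(incomingCode)
            ? "Vendor accepted: " + incomingCode
            : "Unknown vendor: " + incomingCode);
    }
}

As a side note, HashSet<T>.Add returns false rather than throwing when the item is already present, which makes it handy for de-duplicating input.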
In these cases a HashSet<T> is useful for super-quick lookups where order is not important. Once again, as with Dictionary, the type T should have a valid implementation of GetHashCode() and Equals(), or you should provide an appropriate IEqualityComparer<T> to the HashSet<T> on construction. The SortedSet<T> is to HashSet<T> what the SortedDictionary<TKey,TValue> is to Dictionary<TKey,TValue>. That is, the SortedSet<T> is a binary tree where the key and value are the same object. This once again means that adding/removing/lookups are logarithmic - O(log n) - but you gain the ability to iterate over the items in order. For this collection to be effective, type T must implement IComparable<T> or you need to supply an external IComparer<T>. Finally, the Stack<T> and Queue<T> are two very specific collections that allow you to handle a sequential collection of objects in very specific ways. The Stack<T> is a last-in-first-out (LIFO) container where items are added and removed from the top of the stack. Typically this is useful in situations where you want to stack actions and then be able to undo those actions in reverse order as needed. The Queue<T>, on the other hand, is a first-in-first-out container which adds items at the end of the queue and removes items from the front. This is useful for situations where you need to process items in the order in which they came, such as a print spooler or waiting lines.
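A tiny sketch of both sequential containers (my illustration; the action and document names are invented):

using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        // Stack<T>: last-in-first-out, e.g. undoing actions in reverse order.
        var undo = new Stack<string>();
        undo.Push("typed 'hello'");
        undo.Push("deleted a word");
        Console.WriteLine("Undo: " + undo.Pop());         // "deleted a word" comes off first

        // Queue<T>: first-in-first-out, e.g. a print spooler.
        var spooler = new Queue<string>();
        spooler.Enqueue("report.pdf");
        spooler.Enqueue("invoice.pdf");
        Console.WriteLine("Print: " + spooler.Dequeue()); // "report.pdf" comes off first
    }
}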
So those are the basic collections. Let's summarize what we've learned in a quick reference list:

Dictionary - Ordered: No | Contiguous storage: Yes | Direct access: Via key | Lookup: Key O(1) | Manipulate: O(1) | Best for high performance lookups.
SortedDictionary - Ordered: Yes | Contiguous storage: No | Direct access: Via key | Lookup: Key O(log n) | Manipulate: O(log n) | Compromise of Dictionary speed and ordering; uses a binary search tree.
SortedList - Ordered: Yes | Contiguous storage: Yes | Direct access: Via key | Lookup: Key O(log n) | Manipulate: O(n) | Very similar to SortedDictionary, except the tree is implemented in an array, so it has faster lookup on preloaded data but slower loads.
List - Ordered: No | Contiguous storage: Yes | Direct access: Via index | Lookup: Index O(1), Value O(n) | Manipulate: O(n) | Best for smaller lists where direct access is required and no ordering.
LinkedList - Ordered: No | Contiguous storage: No | Direct access: No | Lookup: Value O(n) | Manipulate: O(1) | Best for lists where inserting/deleting in the middle is common and no direct access is required.
HashSet - Ordered: No | Contiguous storage: Yes | Direct access: Via key | Lookup: Key O(1) | Manipulate: O(1) | Unique unordered collection, like a Dictionary except key and value are the same object.
SortedSet - Ordered: Yes | Contiguous storage: No | Direct access: Via key | Lookup: Key O(log n) | Manipulate: O(log n) | Unique ordered collection, like SortedDictionary except key and value are the same object.
Stack - Ordered: No | Contiguous storage: Yes | Direct access: Only top | Lookup: Top O(1) | Manipulate: O(1) | Essentially the same as List<T> except only processed as LIFO.
Queue - Ordered: No | Contiguous storage: Yes | Direct access: Only front | Lookup: Front O(1) | Manipulate: O(1) | Essentially the same as List<T> except only processed as FIFO.

The Original Collections: System.Collections namespace

The original collection classes are largely considered deprecated by developers and by Microsoft itself. In fact, they indicate that for the most part you should always favor the generic or concurrent collections, and only use the original collections when you are dealing with legacy .NET code. Because these collections are out of vogue, let's just briefly mention the original collections and their generic equivalents:

ArrayList - A dynamic, contiguous collection of objects. Favor the generic collection List<T> instead.
Hashtable - Associative, unordered collection of key-value pairs of objects. Favor the generic collection Dictionary<TKey,TValue> instead.
Queue - First-in-first-out (FIFO) collection of objects. Favor the generic collection Queue<T> instead.
SortedList - Associative, ordered collection of key-value pairs of objects. Favor the generic collection SortedList<TKey,TValue> instead.
Stack - Last-in-first-out (LIFO) collection of objects. Favor the generic collection Stack<T> instead.

In general, the older collections are non-type-safe and in some cases less performant than their generic counterparts. Once again, the only reason you should fall back on these older collections is for backward compatibility with legacy code and libraries.

The Concurrent Collections: System.Collections.Concurrent namespace

The concurrent collections are new as of .NET 4.0 and are included in the System.Collections.Concurrent namespace. These collections are optimized for use in situations where multi-threaded read and write access of a collection is desired. The concurrent queue, stack, and dictionary work much as you'd expect. The bag and blocking collection are more unique. Below is a summary of each, with a link to a blog post I did on each of them:

ConcurrentQueue - Thread-safe version of a queue (FIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue
ConcurrentStack - Thread-safe version of a stack (LIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue
ConcurrentBag - Thread-safe unordered collection of objects. Optimized for situations where a thread may be both reader and writer. For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection
ConcurrentDictionary - Thread-safe version of a dictionary. Optimized for multiple readers (allows multiple readers under the same lock). For more information see: C#/.NET Little Wonders: The ConcurrentDictionary
BlockingCollection - Wrapper collection that implements the producer-consumer paradigm. Readers can block until items are available to read. Writers can block until space is available to write (if bounded). See the sketch after the summary below. For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection

Summary

The .NET BCL has lots of collections built in to help you store and manipulate collections of data. Understanding how these collections work and knowing in which situations each container is best is one of the key skills necessary to build more performant code. Choosing the wrong collection for the job can make your code much slower or even harder to maintain if you choose one that doesn't perform as well or otherwise doesn't exactly fit the situation. Remember to avoid the original collections and stick with the generic collections. If you need concurrent access, you can use the generic collections if the data is read-only, or consider the concurrent collections for mixed access if you are running on .NET 4.0 or higher.
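To make the producer-consumer behavior of BlockingCollection concrete, here is a minimal sketch (my illustration, not taken from the linked posts) using a bounded buffer so the writer blocks when the buffer is full; the item names are invented:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Bounded to 100 items: Add blocks when the buffer is full.
        using (var buffer = new BlockingCollection<string>(boundedCapacity: 100))
        {
            var producer = Task.Factory.StartNew(() =>
            {
                for (int i = 0; i < 10; i++)
                {
                    buffer.Add("order-" + i);
                }
                buffer.CompleteAdding(); // signal that no more items are coming
            });

            var consumer = Task.Factory.StartNew(() =>
            {
                // GetConsumingEnumerable blocks until an item is available and
                // completes once CompleteAdding is called and the buffer drains.
                foreach (var item in buffer.GetConsumingEnumerable())
                {
                    Console.WriteLine("Processing " + item);
                }
            });

            Task.WaitAll(producer, consumer);
        }
    }
}

The CompleteAdding call is what lets the consuming loop terminate; without it, the consumer would block forever waiting for more items.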

    Read the article

  • VS 2012 Code Review – Before Check In OR After Check In?

    - by Tarun Arora
    "Is Code Review Important and Effective?" There is a consensus across the industry that code review is an effective and practical way to catch code inconsistencies and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are:

- Bugs are found faster
- Forces developers to write readable code (code that can be read without explanation or introduction!)
- Optimization methods/tricks/productive programs spread faster
- Programmers as specialists "evolve" faster
- It's fun

"Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections." - Wikipedia

Nowhere does the definition mention whether it's better to review code before the code has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check in and after check in. Let's weigh the pros and cons of the approaches independently.

Code Review Before Check In or Code Review After Check In?

Approach 1 - Code Review before Check in

The developer completes the code and feels the code quality is appropriate for check in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, will not result in any defects due to common coding mistakes, and whether any optimizations can be made to improve the code quality. (Image 1 - code review before check in)

Pros:
- Everything that gets committed to source control is reviewed. Minimizes the chances of smelly code making its way into the code base.
- Decreases the cost of fixing bugs; remember, the earlier you find them, the less painful they are to fix.

Cons:
- Development code freeze - since the changes aren't in source control yet, further development can only be done offline.
- The changes have not been through a CI build, so it is hard to say whether the code abides by all build quality standards.
- Inconsistent! Cumbersome to track the actual code review process.
- Not every change to the code base is worth reviewing; a lot of effort is invested for very little gain.

Approach 2 - Code Review after Check in

The developer checks in, and random code reviews are performed on the checked-in code. (Image 2 - Code review after check in)

Pros:
- The code has already passed the CI build and run through any code analysis plug-ins you may have running on the build server. Instruct developers to ensure ZERO FxCop, StyleCop and static code analysis issues before check in. Code is cleaner and smell-free even before the code review.
- No offline development; developers can continue to develop against source control.

Cons:
- Bad code can easily make its way into the code base.
- Since the review takes place much later in the cycle, the cost of fixing issues can prove to be much higher.

Approach 3 - Hybrid Approach

The community advocates a more hybrid approach, a blend of tooling and human accountability. (Image 3 - Hybrid Approach)

1. Code review high impact check ins.
It is not possible to review everything; by setting up code review check-in policies you can end up slowing down your team. Moreover, the code that you are reviewing before check in hasn't even been through a green CI build either.

2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that in my opinion can't possibly be identified using human reviews. Configure the tooling to report back the top 10 issues every day. Mandate manual code review for individuals who keep making it onto this list of shame.

3. During merge. I would prefer eliminating some of the other code issues during the merge from the Main branch to the release branch. In a scrum project this is still easier because cherry-picking the merges is a possibility and the size of the code being reviewed is still limited.

Let the tooling work for you: if someone breaks the CI build often, put them on a gated check-in build course until you see improvement. If someone appears on the top 10 list of shame generated via the build, then ensure that all their code is reviewed until you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check in, you force the developer to work offline or stay put until the review is complete.

What do the experts say?

So I asked a few experts what they thought of a "Code review quality gate before checking in code?"

Terje Sandstrom | Microsoft ALM MVP

You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, not even having been through a CI build, with a green CI build being the main criterion for going further, e.g. to the review state. I would not like code lying around with no check-ins. With a requirement that code is checked in in small pieces, 4-8 hours of work max, and AT LEAST daily check-ins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release. But that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage, and static code analysis. Unfortunately it takes some time - it would be great to have this on CI builds, but... - so it runs scheduled every night. Based on this we get, among other stuff, top 10 lists of suspicious code, which is then subjected to reviews. If a person seems to be very popular on these top 10 lists, we subject every check in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every check in reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises.

David V. Corbin | Visual Studio ALM Ranger

I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having "bad" code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the "official" branch until after the review. I advocate both, depending on circumstance (especially team dynamics).

- The "pre-checkin" review is usually for elements that may impact the project as a whole. Think of it as another "gate" along with passing unit tests.
- The "post-checkin" review may very well not be at the changeset level, but correlates to a review at the "user story" level. Again, this depends on the team dynamics in play.

Robert MacLean | Microsoft ALM MVP

I do not think there is one right answer for the industry as a whole. In short, the question is: why do you do reviews? Your question implies risk mitigation, so in low risk areas you can get away with reviewing after check in, while in high risk areas you need to do it before check in. For example, those new to a team, or juniors, need it much earlier (maybe that is before check in, maybe that is soon after) than seniors who have shipped twenty sprints on the team.

Abhimanyu Singhal | Visual Studio ALM Ranger

It depends on the scenario. We recommend post check-in reviews when: 1. We don't want to block other checks and processes on manual code reviews - manual reviews take time, and some pieces may not require manual reviews at all. 2. We need to trace all changes and track history. 3. We have a code promotion strategy/process in place; for risk mitigation, post check-in code can be promoted to accepted branches, or rejected. Pre check-in reviews are used when: 1. There is a high risk factor associated. 2. Reviewers generally (most of the time) have immediate availability. 3. The team does not have strict tracking needs. Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project.

Thomas Schissler | Visual Studio ALM Ranger

This is an interesting discussion; I'm right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well I'd like to point out. 1.) If you do reviews per check in, this is not very practical as a hard rule because it will disturb the flow of the team very often, or it will lead to reduced check-in frequency by the devs, which I would not accept. 2.) If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associated with the PBI, but then you might review code which has been changed by a later check in and the dev has maybe already fixed the issue; or you review the diff of the latest changeset of the PBI against the first, but then you might also review changes of other PBIs.

Jakob Leander | Sr. Director, Avanade

In my experience, manual code review: 1. Does not get done, and at the very least does not get redone after changes (regardless of intentions at the start of the project). 2. When projects actually do it, they often do not do it right away = errors pile up. 3. Requires a lot of time discussing/defining the standard and for the team to learn it. However, code review is very important, since e.g. even small memory leaks in a high volume web solution have big consequences. In the last years I have advocated the following approach for code review:

- Architects up front do "at least one best practice example" of each type of component and tell the team: copy from this one. This should include error handling, logging, security etc.
- The dev lead on the project continuously browses code to validate that the best practices are used, especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated it is unlikely to "go bad" even during later code changes.
- Agree with the customer to rely on static code analysis from Visual Studio as the one and only coding standard.
This has HUGE benefits:

- You can easily tweak it to reach the level you desire together with the customer
- It is easy to measure for both developers and management
- It is 100% consistent across the code base
- It gets validated all the time, so you never end up getting hammered by a customer review in the end
- It is easy to tell the developer that you do not want code back unless it has zero errors = minimized communication

You need to track this at least during nightly builds and make sure the team sees the total number of issues. Do not allow the issue count to grow uncontrolled. On the projects I run I require code analysis to have been run on code before check in (a check-in rule). This means:

- You have to have a clean compile (or CA won't run), so as an extra benefit you get very few broken builds
- You can change a few of the rules to compile as errors instead of warnings. I often do this for "missing dispose" issues, which you REALLY do not want in your app

Tip: Place your custom CA rules files as part of the solution. That way it works when you do branching etc. (the path to the CA file is relative in VS). Some may argue that CA is not as good as manual inspection. But since manual inspection in reality suffers from the 3 issues listed at the start, it is IMO a MUCH better (and much cheaper) approach from a helicopter perspective.

Tirthankar Dutta | Director, Avanade

I think code review should be run both before and after check ins. There are some code metrics that are meant to be run on the entire codebase... Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code... it scales very well with the need to review less, through containment and by imposing architectural restrictions to emphasise the design.

Bruno Capuano | Microsoft ALM MVP

For code reviews (meaning peer reviews) in distributed teams I use http://www.vsanywhere.com/default.aspx

David Jobling | Global Sr. Director, Avanade

Peer review is the only way to scale, and it's a great practice for all in the team to learn to perform and accept. In my experience you soon learn whose code to watch more than others and tune the attention.

Mikkel Toudal Kristiansen | Manager, Avanade

If you have several branches in your code base, you will need to merge often. This requires manual merging when a file has been changed in both branches, and it offers a good opportunity to actually review the changed code. So my advice is: merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third-party tool exists: NDepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution.

Jesse Houwing | Visual Studio ALM Ranger

I gave a presentation on this subject at the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html

I'd like to add a few more points:

- Before/after check-in is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talks/sits together or peer reviews, there's no need to enforce a before-checkin policy. The peer programming and regular feedback during development can take care of most of the review requirements, as long as the team isn't under stress.
- Under stress, enforce pre-checkin reviews. It might sound strange if you're already under time or budgetary constraints, but it is under such conditions that most real issues are created or pile up.
- Use tools to catch the most common errors; Code Analysis/FxCop was already mentioned. HP Fortify, ReSharper, CodeRush etc. can help you there. There are also a lot of 3rd party rules you can add to Code Analysis. I've written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule. It's much easier. But more importantly, make sure you have a good help page explaining *WHY* it's wrong.
- If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It's still better to do peer reviews and peer programming, but the most important thing is that bad quality code doesn't make it into the important branch.

So my philosophy:

- Use tooling as much as possible.
- Make sure the team understands the tooling and the importance of the things it flags. It's too easy to just click "suppress all" to ignore the warnings.
- Under stress, tighten the process; it's under stress that the problems of late reviews will really surface.
- Most importantly, if you do reviews, do them as early as possible, but never later than needed. In other words, pre-checkin/post-checkin doesn't really matter, as long as the review is done before the code is released. It'll just be much more expensive to fix any review outcomes the later you find them.

I would love to hear what you think!
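As a concrete footnote to the tooling advice above (compiling selected Code Analysis rules as errors, as Jakob suggests), here is a minimal sketch of what such a Visual Studio rule set file might look like. The file name, rule selection, and severities are illustrative assumptions, not taken from the post; CA2000 is the built-in "Dispose objects before losing scope" rule:

<?xml version="1.0" encoding="utf-8"?>
<!-- TeamRules.ruleset: checked in next to the solution so the relative path survives branching -->
<RuleSet Name="Team Rules" Description="Dispose issues fail the build; the rest stay warnings." ToolsVersion="10.0">
  <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis" RuleNamespace="Microsoft.Rules.Managed">
    <!-- CA2000: Dispose objects before losing scope - promoted to a build error -->
    <Rule Id="CA2000" Action="Error" />
    <!-- CA1063: Implement IDisposable correctly - left as a warning -->
    <Rule Id="CA1063" Action="Warning" />
  </Rules>
</RuleSet>

Pointing the project's Code Analysis settings at a file like this gives the "zero errors or no check in" bar a mechanical, measurable definition.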

    Read the article

  • More Animation - Self Dismissing Dialogs

    - by Duncan Mills
    In my earlier articles on animation, I discussed various slide, grow and flip transitions for items and containers. In this article I want to discuss a fade animation, and specifically the use of fades and auto-dismissal for informational dialogs. If you use a Mac, you may be familiar with Growl as a notification system, and the nice way that messages that are informational just fade out after a few seconds. So in this blog entry I wanted to discuss how we could make an ADF popup behave in the same way. This can be an effective way of communicating information to the user without "getting in the way" with modal alerts. This, of course, has been done before, but everything I've seen previously requires something like jQuery to be in the mix when we don't really need it to be. The solution I've put together is nice and generic and will work with either <af:panelWindow> or <af:dialog> as the child of the popup. In terms of usage it's pretty simple: we just need to ensure that the popup itself has clientComponent set to true and includes the animation JavaScript (animateFadingPopup) on a popupOpened event:

<af:popup id="pop1" clientComponent="true">
  <af:panelWindow title="A Fading Message...">
   ...
  </af:panelWindow>
  <af:clientListener method="animateFadingPopup" type="popupOpened"/>
</af:popup>

The popup can be invoked in the normal way using showPopupBehavior or JavaScript; no special code is required there. As a further twist you can include an additional clientAttribute called preFadeDelay to define a delay before the fade itself starts (the default is 5 seconds). To set the delay to just 2 seconds, for example:

<af:popup ...>
  ...
  <af:clientAttribute name="preFadeDelay" value="2"/>
  <af:clientListener method="animateFadingPopup" type="popupOpened"/>
</af:popup>

The Animation Styles

As before, we have a couple of CSS styles which define the animation. I've put these into the skin in my case, and, as in the other articles, I've only defined the transitions for WebKit browsers (Chrome, Safari) at the moment. In this case, the fade is timed at 5 seconds in duration.

.popupFadeReset {
  opacity: 1;
}

.popupFadeAnimate {
  opacity: 0;
  -webkit-transition: opacity 5s ease-in-out;
}

As you can see here, we are achieving the fade by simply setting the CSS opacity property.

The JavaScript

The final part of the puzzle is, of course, the JavaScript. There are four functions, and they are generic (apart from the style names which, if you've changed them above, you'll need to reflect here):

- The initial function invoked from the popupOpened event, animateFadingPopup, which starts a timer and provides the initial delay before we start to fade the popup.
- The function that applies the fade animation to the popup - initiatePopupFade.
- The callback function - closeFadedPopup - used to reset the style class and correctly hide the popup so that it can be invoked again and again.
- A utility function - findFadeContainer - which is responsible for locating the correct child component of the popup to actually apply the style to.

Function - animateFadingPopup

This function, as stated, is the one hooked up to the popupOpened event via a clientListener. Because of when the code is called, it does not actually matter how you launch the popup, or whether the popup is re-used from multiple places. All usages will get the fade behavior.
/**
 * Client listener which will kick off the animation to fade the dialog and register
 * a callback to correctly reset the popup once the animation is complete
 * @param event
 */
function animateFadingPopup(event) {
  var fadePopup = event.getSource();
  var fadeCandidate = false;
  // Ensure that the popup is initially opaque.
  // This handles the situation where the user has dismissed
  // the popup whilst it was in the process of fading.
  var fadeContainer = findFadeContainer(fadePopup);
  if (fadeContainer != null) {
    fadeCandidate = true;
    fadeContainer.setStyleClass("popupFadeReset");
  }
  // Only continue if we can actually fade this popup
  if (fadeCandidate) {
    // See if a delay has been specified
    var waitTimeSeconds = event.getSource().getProperty('preFadeDelay');
    // Default to 5 seconds if not supplied
    if (waitTimeSeconds == undefined) {
      waitTimeSeconds = 5;
    }
    // Now call the fade after the specified time
    var fadeFunction = function () {
      initiatePopupFade(fadePopup);
    };
    var fadeDelayTimer = setTimeout(fadeFunction, (waitTimeSeconds * 1000));
  }
}

The thing to note about this function is the initial check we have to do to ensure that the container is currently visible, resetting its style to ensure that it is. This is to handle the situation where the popup has begun the fade, yet the user has explicitly dismissed the popup before it's complete and in doing so has prevented the callback function (described later) from executing. In this particular situation the initial display of the dialog will be (apparently) missing its normal animation, but at least it becomes visible to the user (and most users will probably not notice this difference in any case). You'll notice that the style that we apply to reset the opacity - popupFadeReset - is not applied to the popup component itself but rather to the dialog or panelWindow within it. More about that in the description of the next function, findFadeContainer(). Finally, assuming that we have a suitable candidate for fading, a JavaScript timer is started using the specified preFadeDelay wait time (or 5 seconds if that was not supplied). When this timer expires, the main animation style class will be applied using the initiatePopupFade() function.

Function - findFadeContainer

As a component, the <af:popup> does not support the styleClass attribute, so we can't apply the animation style directly. Instead we have to look for the container within the popup which defines the window object that can have a style attached. This is achieved by the following code:

/**
 * The thing we actually fade will be the only child
 * of the popup assuming that this is a dialog or window
 * @param popup
 * @return the component, or null if this is not valid for fading
 */
function findFadeContainer(popup) {
  var children = popup.getDescendantComponents();
  var fadeContainer = children[0];
  if (fadeContainer != undefined) {
    var compType = fadeContainer.getComponentType();
    if (compType == "oracle.adf.RichPanelWindow" ||
        compType == "oracle.adf.RichDialog") {
      return fadeContainer;
    }
  }
  return null;
}

So what we do here is grab the first child component of the popup and check its type. Here I decided to limit the fade behaviour to only <af:dialog> and <af:panelWindow>. This was deliberate. If we apply the fade to, say, an <af:noteWindow>, you would see the text inside the balloon fade, but the balloon itself would hang around until the fade animation was over and then hide.
It would of course be possible to make the code smarter, to walk up the DOM tree to find the correct <div> to apply the style to in order to hide the whole balloon. However, that means that this JavaScript would then need to have knowledge of the generated DOM structure, something which may change from release to release, and certainly something to avoid. So, all in all, I think that this is an OK restriction, and frankly it's windows and dialogs that I wanted to fade anyway, not balloons and menus. You could of course extend this technique and handle the other types should you really want to. One thing to note here is the selection of the first (children[0]) child of the popup. It does not matter if there are non-visible children such as a clientListener before the <af:dialog> or <af:panelWindow> within the popup; they are not included in this array, so picking the first element in this way seems to be fine, no matter what the underlying ordering is within the JSF source. If you wanted a super-robust version of the code you might want to iterate through the children array of the popup to check for the right type; again, it's up to you.

Function - initiatePopupFade

On to the actual fading. This is actually very simple and, at its heart, just the application of the popupFadeAnimate style to the correct component, and then registering a callback to execute once the fade is done.

/**
 * Function which will kick off the animation to fade the dialog and register
 * a callback to correctly reset the popup once the animation is complete
 * @param popup the popup we are animating
 */
function initiatePopupFade(popup) {
  // Only continue if the popup has not already been dismissed
  if (popup.isPopupVisible()) {
    // The skin styles that define the animation
    var fadeoutAnimationStyle = "popupFadeAnimate";
    var fadeAnimationResetStyle = "popupFadeReset";
    var fadeContainer = findFadeContainer(popup);
    if (fadeContainer != null) {
      var fadeContainerReal = AdfAgent.AGENT.getElementById(fadeContainer.getClientId());
      // Define the callback; this will correctly reset the popup once it's disappeared
      var fadeCallbackFunction = function (event) {
        closeFadedPopup(popup, fadeContainer, fadeAnimationResetStyle);
        event.target.removeEventListener("webkitTransitionEnd", fadeCallbackFunction);
      };
      // Initiate the fade
      fadeContainer.setStyleClass(fadeoutAnimationStyle);
      // Register the callback to execute once the fade is done
      fadeContainerReal.addEventListener("webkitTransitionEnd", fadeCallbackFunction, false);
    }
  }
}

I've added some extra checks here, though. First of all, we only start the whole process if the popup is still visible. It may be that the user has closed the popup before the delay timer has finished, so there is no need to start animating in that case. Again we use the findFadeContainer() function to locate the correct component to apply the style to, and additionally we grab the DOM id that represents that container. This physical ID is required for the registration of the callback function. The closeFadedPopup() call is then registered on the callback so as to correctly close the now transparent (but still there) popup.
Function - closeFadedPopup

The final function just cleans things up:

/**
 * Callback function to correctly cancel and reset the style in the popup
 * @param popup the popup so we can close it properly
 * @param container the window / dialog within the popup to actually style
 * @param resetStyle the style that sets the opacity back to solid
 */
function closeFadedPopup(popup, container, resetStyle) {
  container.setStyleClass(resetStyle);
  popup.cancel();
}

First of all we reset the style to make the popup contents opaque again, and then we cancel the popup. This will ensure that any of your user code that is waiting for a popup cancelled event will actually get the event; additionally, if you have done this as a modal window / dialog, it will ensure that the glasspane is dismissed and you can interact with the UI again.

What's Next?

There are several ways in which this technique could be used. I've been working on a popup here, but you could apply the same approach to in-line messages. As this code (in the popup case) is generic, it will make a pretty nice declarative component, and maybe, if I get time, I'll look at constructing a formal Growl component using a combination of this technique and active data push. Also, I'm sure the above code can be improved a little too. Specifically, things like registering a popup cancelled listener to handle the style reset, so that we don't lose the subtle animation that takes place when the popup is opened in the situation where the user has closed the dialog mid-fade.

    Read the article

  • CodePlex Daily Summary for Sunday, March 13, 2011

    CodePlex Daily Summary for Sunday, March 13, 2011Popular ReleasesImage.Viewer: 2011.2: Whats new for Image.Viewer 2011.2: New open from file New about dialog Minor Bug Fix's, improvements and speed upsIronPython: 2.7: On behalf of the IronPython team, I'm very pleased to announce the release of IronPython 2.7. This release contains all of the language features of Python 2.7, as well as several previously missing modules and numerous bug fixes. IronPython 2.7 also includes built-in Visual Studio support through IronPython Tools for Visual Studio. IronPython 2.7 requires .NET 4.0 or Silverlight 4. To download IronPython 2.7, visit http://ironpython.codeplex.com/releases/view/54498. Any bugs should be report...XML Explorer: XML Explorer 4.0.2: Changes in 4.0: This release is built on the Microsoft .NET Framework 4 Client Profile. Changed XSD validation to use the schema specified by the XML documents. Added a VS style Error List, double-clicking an error takes you to the offending node. XPathNavigator schema validation finally gives SourceObject (was fixed in .NET 4). Added Namespaces window and better support for XPath expressions in documents with a default namespace. Added ExpandAll and CollapseAll toolbar buttons (in a...Mobile Device Detection and Redirection: 1.0.0.0: Stable Release 51 Degrees.mobi Foundation has been in beta for some time now and has been used on thousands of websites worldwide. We’re now highly confident in the product and have designated this release as stable. We recommend all users update to this version. New Capabilities MappingsTo improve compatibility with other libraries some new .NET capabilities are now populated with wurfl data: “maximumRenderedPageSize” populated with “max_deck_size” “rendersBreaksAfterWmlAnchor” populated ...Composite C1 CMS: Composite C1 2.1 (2.1.4087.22991): Composite C1 is a fully featured pro open source CMS for quick/custom website creation. Modern architecture, very user friendly. Use wizards or HTML/CSS/XSLT/ASP.NET/LINQ/.NET4ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7.3: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager added interactive search for the lookupWPF Inspector: WPF Inspector 0.9.7: New Features in Version 0.9.7 - Support for .NET 3.5 and 4.0 - Multi-inspection of the same process - Property-Filtering for multiple keywords e.g. "Height Width" - Smart Element Selection - Select Controls by clicking CTRL, - Select Template-Parts by clicking CTRL+SHIFT - Possibility to hide the element adorner (over the context menu on the visual tree) - Many bugfixes??????????: All-In-One Code Framework ??? 2011-03-10: http://download.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1codechs&DownloadId=216140 ??,????。??????????All-In-One Code Framework ???,??20?Sample!!????,?????。http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 ASP.NET ??: CSASPNETBingMaps VBASPNETRemoteUploadAndDownload CS/VBASPNETSerializeJsonString CSASPNETIPtoLocation CSASPNETExcelLikeGridView ....... Winform??: FTPDownload FTPUpload MultiThreadedWebDownloader...Rawr: Rawr 4.1.0: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Release. 
    More details can be found at the following link: http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new downloadable WPF version of Rawr! This is a release of the WPF version; most of the general issues have been resolved. If you have a problem, please follow the Posting Guidelines and put it into the Issue Tracker. Whe...

    - PHP Manager for IIS: PHP Manager 1.1.2 for IIS 7: This is a localization release of PHP Manager for IIS 7. It contains all the functionality available in 56962 plus a few bug fixes (see the change list for more details). Most importantly, this release is translated into five languages: German (translation provided by Christian Graefe), Dutch (Harrie Verveer), Turkish (Yusuf Oztürk), Japanese (Kenichi Wakasa), Russian (provid...
    - TweetSharp: TweetSharp v2.0.0: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Beta changes: added user streams support; serialization is not attempted for Twitter 5xx errors; fixes based on feedback. Third-party library versions: Hammock v1.2.0 (http://hammock.codeplex.com), Json.NET 4.0 Release 1 (http://json.codeplex.com).
    - DirectQ: Release 1.8.7 (RC2): More fixes and improvements. Note for multiplayer: you may need to set r_waterwarp to 0 or 2 before connecting to a server, otherwise you will get a "Mod_PointInLeaf: bad model" error and not be able to connect. You can set it back to 1 after you connect, of course. This only came to light after releasing, and will be fixed in the next one.
    - Microsoft All-In-One Code Framework - a centralized code sample library: Visual Studio 2008 Code Samples 2011-03-09: Code samples for Visual Studio 2008.
    - Office Web.UI: Version 2.4: After having lost all modifications done for 2.3, I finally did it again... Have a look at http://www.officewebui.com/change-log Also, the documentation continues to grow... http://www.officewebui.com/category/kb Thanks
    - myCollections: Version 1.3: New in version 1.3: added Editor management for Books; added Amazon API for Books US, FR, DE; added Amazon US, FR, DE for Movies; added The MovieDB for FR and DE; added Author for Books; added Editor and Platform for Games; added Amazon US, DE for Games; added Studio for XXX; added Background for XXX; bug fixes with the Softonic API; bug fixes with IMDB; UI improvements; removed GraceNote; added Amazon US, FR, DE for Series; added TVDB FR and DE for Series; added Tracks for Musi...
    - patterns & practices : Composite Services: Composite Services Guidance - CTP2: Overview: The Composite Services guidance (codename Reykjavik) provides best practices and capabilities for applying industry-known SOA design patterns when building robust, connected, service-oriented composite enterprise applications. These capabilities are implemented as a set of reusable components for analytic tracing, service virtualization, metadata centralization and versioning, and policy centralization, as well as exception management, included in this release. Changes in this CTP ...
    - Python Tools for Visual Studio: 1.0 Beta 1: Known issues in Beta 1: You can't install IronPython Tools for Visual Studio side-by-side with Python Tools for Visual Studio. A race condition sometimes causes local MPI debugging to miss breakpoints. When MPI jobs on a cluster fail they don't get cleaned up correctly, which can cause debugging to stall because the associated MPI job is stuck in the queue. The "Threads" view has a race condition which can cause it not to display properly at times. VS2010 shortcuts that are pinned to the taskbar are so...
    - DotNetAge - a lightweight Mvc jQuery CMS: DotNetAge 2: What is new in DotNetAge 2.0? Completely updated DJME to DJME2, enhancing the user experience - more beautiful and more interactive; visit the DJME project home to learn more about DJME: http://www.dotnetage.com/sites/home/djme.html A new widget engine has come - faster and easier. Runtime performance enhanced. SEO enhanced. UI Designer enhanced. A new web resources explorer. Page manager enhanced. BlogML support added, allowing you to import/export your blog data to/from dotnetage publishi...
    - Kooboo CMS: Kooboo CMS 3.0 Beta: Files in this download - kooboo_CMS.zip: the Kooboo application files. Content_DBProvider.zip: additional content database implementations for MSSQL, SQLCE, RavenDB and MongoDB; the default is an XML-based database. To use them, copy the related dlls into the web root bin folder and remove the old content provider dlls (a content provider has a name like "Kooboo.CMS.Content.Persistence.SQLServer.dll"). View_Engines.zip: support for the Razor, webform and NVelocity view engines; copy the dlls into the web root bin folder t...
    - LINQ to Twitter: LINQ to Twitter Beta v2.0.20: Mono 2.8, Silverlight, OAuth, 100% Twitter API coverage, streaming, extensibility via Raw Queries, and added documentation.

    New Projects

    - Axvius.Testing.NUnit: Provides fluent assertion methods for NUnit.
    - Azure Conversion plugin for VS 2010: Azure Conversion Wizard is a plugin for VS 2010. The wizard converts your ASP.NET solutions for .NET 3.5 and higher to the Windows Azure Platform.
    - BlogSpam.net API: A simple C#.NET wrapper for the BlogSpam.net comment spam service.
    - Bonyad Project Managing System: This is for managing Bonyad projects.
    - Camp Araminta: This project will be used to coordinate development efforts on the Camp Araminta website.
    - code: This is a demo summary!
    - Education Fellows Meta Service: Rebranded LaEMWS.
    - Family Guide: This project is currently just for learning purposes, but it shall evolve into a fully functional solution later.
    - HL7.NET: HL7.NET groups all the necessary code for managing the HL7 standard. It makes it easy to add HL7 interoperability to your application. It is developed in C#.NET.
    - Hyper-V Monitor Gadget: A Hyper-V monitor gadget for Windows Sidebar that lists Hyper-V servers and their VMs. Supports status information and controlling them directly from the gadget.
    - jp110311: Azure 312 (the Japanese description was garbled in extraction).
    - Lugene: An index generation and distribution framework based on Lucene.net and index schemas provided by Lumen. Includes support for incremental updates of indexes, and a plugin framework for custom index providers.
    - Planet Me: ...
    - poc Dev: This is a poc project.
    - Procon 2: Procon Frostbite, rebuilt from the ground up.
    - RandomNumbersGames: A small random-numbers game, very helpful for people who want to develop their focus.
    - Sistema Creeo: A clinic management system (Sistema de Gestão de clínicas).
    - SPDiscussionBoard: SPDiscussionBoard started in 2005 while developing an ASP.NET forum, taking into consideration that I might use it on SharePoint one day. Yesterday I remembered that, and started to alter it to run as a webpart inside SharePoint 2010; it took about 4 hours to work.
    - Video Commander: VideoCommander is an external control interface for the VLC player. It enables the user to create playlists with start and stop times and to play videos on a specified display. VideoCommander was especially developed for presenting videos at events like church services or theater.
    - Windows Live Writer - Insert Tag Snippet Plugin: The Insert Tag Snippet plugin is used to select a snippet from a collection of user-defined snippets and insert it into the post. Prefix and suffix tags can also be defined. Especially useful for <code> and <pre> kinds of tags. With the Insert Tag Snippet plugin, you don't need to switch to HTML view and then back to normal view again. The code is open for all community members to see how to develop a plugin for Windows Live Writer using C#.
    - WMI Connection for ADO.NET: A lot of applications have excellent support for ADO.NET connections, but many of them weren't designed to work with WMI (Windows Management Instrumentation). This project puts them together, and adds the capability to access WMI data through the IDbConnection interface (see the sketch after this list).
    - Zugger: Zugger is an assistant application for people who are using the Zentao PMS. It provides these functions: 1. the counts of your bugs and tasks; 2. lists of bugs and tasks; 3. quick edit for bugs and tasks; 4. notification of new bugs and tasks. It is developed in C# / WPF.
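    The WMI Connection for ADO.NET project above describes bridging WQL queries to the IDbConnection interface. The project's own wrapper types aren't reproduced here; as a rough sketch of the idea, WQL already reads like SQL, so a bridge mostly has to forward query text to System.Management and surface the results as rows. The class name and query below are illustrative only:

        // Illustrative only: a WQL query issued through System.Management,
        // the API an IDbConnection-style wrapper would delegate to.
        // Requires a reference to System.Management.dll.
        using System;
        using System.Management;

        class WmiQuerySketch
        {
            static void Main()
            {
                // WQL deliberately resembles SQL, which is what makes an
                // ADO.NET-style bridge plausible: query text in, rows out.
                var searcher = new ManagementObjectSearcher(
                    @"\\.\root\cimv2",
                    "SELECT Name, ProcessId FROM Win32_Process");

                foreach (ManagementObject row in searcher.Get())
                {
                    Console.WriteLine("{0,-40} {1}", row["Name"], row["ProcessId"]);
                }
            }
        }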


  • CodePlex Daily Summary for Monday, March 14, 2011

    Popular Releases

    - ProDinner - ASP.NET MVC EF4 Code First DDD jQuery Sample App: first release: ProDinner is an ASP.NET MVC sample application. It uses DDD, EF4 Code First for data access, and jQuery and Mvc Project Awesome for the web UI, and it has a multi-language user interface. Features: CRUD and search operations for entities; multi-language user interface; upload and crop images (make thumbnails) for meals; pagination using a "more results" button; very rich and responsive UI (using Mvc Project Awesome); multiple UI themes (using jQuery UI themes).
    - Personal Activity Monitor - increase your productivity by eliminating timewasters: Personal Activity Monitor v0.1.2: Removed PreEmptive monitoring attributes (maybe they will be back in the future); added scrollbars to the list of apps :)
    - BEPUphysics: BEPUphysics v0.15.0: BEPUphysics v0.15.0!
    - LiveChat Starter Kit: LCSK v1.1: This release contains a couple of new features and bug fixes. Features: send chat transcript via email; operator can now invite a visitor to chat (pro-active chat request). Bug fixes: operator management (Save and Delete) bug fixes; small Operator Console chat fixes.
    - IronRuby: 1.1.3: IronRuby 1.1.3 is a servicing release that keeps on improving compatibility with Ruby 1.9.2 and includes IronRuby integration with Visual Studio 2010. We decided to drop 1.8.6 compatibility mode in all post-1.0 releases; we recommend using IronRuby 1.0 if you need 1.8.6 compatibility. The main purpose of this release is to sync with the IronPython 2.7 release, i.e. to keep the Dynamic Language Runtime that both these languages build on top shareable. This release also fixes a few bugs: 5763 Use...
    - SQL Server PowerShell Extensions: 2.3.2.1 Production: Release 2.3.2.1 implements SQLPSX as PowerShell version 2.0 modules. SQLPSX consists of 13 modules with 163 advanced functions, 2 cmdlets and 7 scripts for working with ADO.NET, SMO, Agent, RMO, SSIS, SQL script files, PBM, performance counters, SQLProfiler, Oracle and MySQL, and using the PowerShell ISE as a SQL and Oracle query tool. In addition, optional backend databases and SQL Server Reporting Services 2008 reports are provided with the SQLServer and PBM modules. See the readme file for details.
    - Image.Viewer: 2011.2: What's new for Image.Viewer 2011.2: new open-from-file; new about dialog; minor bug fixes, improvements and speed-ups.
    - IronPython: 2.7: On behalf of the IronPython team, I'm very pleased to announce the release of IronPython 2.7. This release contains all of the language features of Python 2.7, as well as several previously missing modules and numerous bug fixes. IronPython 2.7 also includes built-in Visual Studio support through IronPython Tools for Visual Studio. IronPython 2.7 requires .NET 4.0 or Silverlight 4. To download IronPython 2.7, visit http://ironpython.codeplex.com/releases/view/54498. Any bugs should be report...
    - XML Explorer: XML Explorer 4.0.2: Changes in 4.0: This release is built on the Microsoft .NET Framework 4 Client Profile. Changed XSD validation to use the schema specified by the XML documents. Added a VS-style Error List; double-clicking an error takes you to the offending node. XPathNavigator schema validation finally gives SourceObject (was fixed in .NET 4). Added a Namespaces window and better support for XPath expressions in documents with a default namespace. Added ExpandAll and CollapseAll toolbar buttons (in a...
    - Mobile Device Detection and Redirection: 1.0.0.0: Stable release. 51Degrees.mobi Foundation has been in beta for some time now and has been used on thousands of websites worldwide. We're now highly confident in the product and have designated this release as stable. We recommend all users update to this version. New capabilities mappings: to improve compatibility with other libraries, some new .NET capabilities are now populated with WURFL data: "maximumRenderedPageSize" populated with "max_deck_size"; "rendersBreaksAfterWmlAnchor" populated ...
    - ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7.3: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager; this release adds interactive search for the Lookup.
    - WPF Inspector: WPF Inspector 0.9.7: New features in version 0.9.7: support for .NET 3.5 and 4.0; multi-inspection of the same process; property filtering for multiple keywords, e.g. "Height Width"; smart element selection (select controls by CTRL-clicking, select template parts by CTRL+SHIFT-clicking); the possibility to hide the element adorner (via the context menu on the visual tree); many bugfixes.
    - All-In-One Code Framework (Chinese edition): 2011-03-10: http://download.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1codechs&DownloadId=216140 (The Chinese announcement text was garbled in extraction; it notes some 20 new samples: http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165) ASP.NET samples: CSASPNETBingMaps, VBASPNETRemoteUploadAndDownload, CS/VBASPNETSerializeJsonString, CSASPNETIPtoLocation, CSASPNETExcelLikeGridView, ... WinForms samples: FTPDownload, FTPUpload, MultiThreadedWebDownloader...
    - Rawr: Rawr 4.1.0: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.php This is the Cataclysm release. (The remaining notes repeat the Rawr 4.1.0 entry in the previous summary above.)
    - PHP Manager for IIS: PHP Manager 1.1.2 for IIS 7 (release notes as in the previous summary above).
    - TweetSharp: TweetSharp v2.0.0 (release notes as in the previous summary above).
    - Office Web.UI: Version 2.4 (release notes as in the previous summary above).
    - myCollections: Version 1.3 (release notes as in the previous summary above).
    - patterns & practices : Composite Services: Composite Services Guidance - CTP2 (release notes as in the previous summary above).
    - Python Tools for Visual Studio: 1.0 Beta 1 (known issues as in the previous summary above).

    New Projects

    - ASPRazorWebGrid: ASPRazorWebGrid can be used as an alternative to the built-in WebGrid in ASP.NET MVC3 using Razor. Features: server-side sorting and paging; a choice between url grid sort/paging parameters or page postback; clean pager layout.
    - Bert Automation Framework: Bert can be leveraged to create automated tests for Windows or web based applications. The framework is developed in C# using Visual Studio 2010 and Microsoft's UIAutomation and/or WatiN, and can be used to create automated tests.
    - CasinoBot - A C# IRC gambling bot: CasinoBot is an IRC bot which allows you to play eight games. You can play single- and multiplayer games like hangman or slots. Includes a currency system, multi-channel support and a huge wordlist. The bot is written in C# using the SmartIRC4Net library.
    - Cheque Management: This is a cheque management project that helps people manage their cheques.
    - CityLife: CityLife is a Silverlight game.
    - Empty Razor Generator: A stripped-down single file generator powered by Razor. Similar to other generators available, with the exception that it is an absolute minimal implementation. It's easy to tweak and very flexible. Driven in part because I would love to see it on WPF/WP7/Silverlight.
    - ExtendedWorkFlow: Umbraco Extended WorkFlow adds extra functionality to create workflow process maps that can execute assemblies via commands and that have set stages. All stages, actions and commands are logged within the Umbraco audit trail. States and actions are also locked to a user or role.
    - Fluent CodeDOM: A library which allows you to work with CodeDOM in a way that is similar to the way you write your code. The library allows you to write your generated code much faster and makes it more readable. Also, the usage of this library is very intuitive. (A sketch of the plain CodeDOM it wraps appears after this list.)
    - FT: wft
    - Google Map control for ASP.NET MVC: The control wraps the Google Maps API, simplifying the use of Google maps in ASP.NET MVC applications. Based on Telerik Extensions for ASP.NET MVC, this control shows how easy it is to extend the Telerik framework by building your own controls for ASP.NET MVC.
    - GooNews - A WP7 Application: A Windows Phone 7 application.
    - iTimeTrack IssueTracker: The iTimeTrack IssueTracker project is an (AGPL open-source) subset of the iTimeTrack time and issue tracker app that you can customize for your organization's use.
    - jcompare: jcompare
    - juwenxue: Submit some free books.
    - Kinect Finger Paint: Kinect Finger Paint is a Kinect application built on the OpenNI framework. It is meant as a tutorial and playground for experimenting with the Kinect interface.
    - MTA_XNA_11_B
    - ProgrammingPractice: VC++ programming practice.
    - Simple Backlog for Visual Studio: An enhanced task list for VS 2010, designed to make it easier to manage a backlog of features. Written in C# and WPF. Released under the Apache 2.0 licence.
    - Text Reader: Text Reader allows you to enter a text and it will read it out for you using TTS in Windows. It's developed in C#.
    - TrainTraX: TrainTraX is a system for training and certifying users with simple multiple-choice question tests.
    - Windows Charting: A Windows charting application that uses Microsoft Charting Controls for Visual Studio 2008.
    - WPFReveil: WPFReveil is a digital clock control in WPF. This project is composed of two elements: WPFReveil.App is the application; WPFReveil contains the clock control.
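    Fluent CodeDOM's own fluent API isn't quoted in the listing above, so here instead is a sketch of the problem it addresses: the stock System.CodeDom object model that such a wrapper would build up for you. Only standard .NET types are used; the generated class is a made-up example.

        // Plain System.CodeDom, for contrast with a fluent wrapper: this is the
        // verbose object model a library like Fluent CodeDOM assembles behind
        // the scenes.
        using System;
        using System.CodeDom;
        using System.CodeDom.Compiler;
        using System.IO;
        using Microsoft.CSharp;

        class CodeDomSketch
        {
            static void Main()
            {
                var unit = new CodeCompileUnit();
                var ns = new CodeNamespace("Generated");
                unit.Namespaces.Add(ns);

                var type = new CodeTypeDeclaration("Greeter") { IsClass = true };
                ns.Types.Add(type);

                // Builds: public void Greet() { System.Console.WriteLine("Hello"); }
                var method = new CodeMemberMethod
                {
                    Name = "Greet",
                    Attributes = MemberAttributes.Public | MemberAttributes.Final
                };
                method.Statements.Add(new CodeMethodInvokeExpression(
                    new CodeTypeReferenceExpression("System.Console"),
                    "WriteLine",
                    new CodePrimitiveExpression("Hello")));
                type.Members.Add(method);

                // Render the tree as C# source and print it.
                using (var writer = new StringWriter())
                {
                    new CSharpCodeProvider().GenerateCodeFromCompileUnit(
                        unit, writer, new CodeGeneratorOptions());
                    Console.WriteLine(writer);
                }
            }
        }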


  • Another Marketing Conference, part one – the best morning sessions.

    - by Roger Hart
    Yesterday I went to Another Marketing Conference. I honestly can't tell if the title is just tipping over into smug, but in the balance of things that doesn't matter, because it was a good conference. There was an enjoyable blend of theoretical and practical, and enough inter-disciplinary spread to keep my inner dilettante grinning from ear to ear. Sure, there was a bumpy bit in the middle, with two back-to-back sales pitches and a rather thin overview of the state of the web. But the signal:noise ratio at AMC2012 was impressively high. Here's the first part of my write-up of the sessions. It's a bit of a mammoth. It's also a bit of a mash-up of what was said and what I thought about it. I'll add links to the videos and slides from the sessions as they become available.

    Although it was in the morning session, I've not included Vanessa Northam's session on the power of internal comms to build brand ambassadors. It'll be in the next roundup, as this is already pushing 2.5k words.

    First, the important stuff. I was keeping a tally, and nobody said "synergy" or "leverage". I did, however, hear the term "marketeers" six times. Shame on you - you know who you are.

    1 - Branding in a post-digital world, Graham Hales

    This initially looked like being a sales presentation for Interbrand, but Graham pulled it out of the bag a few minutes in. He introduced a model for brand management that was essentially Plan >> Do >> Check >> Act, with Do and Check rolled up together, and went on to stress that this looks like an overall business management model for a reason. Brand has to be part of your overall business strategy and metrics if you're going to care about it at all. This was the first iteration of what proved to be one of the event's emergent themes: do it throughout the stack or don't bother.

    Graham went on to remind us that brands, in so far as they are owned at all, are owned by and co-created with our customers. Advertising can offer a message to customers, but they provide the expression of a brand. This was a preface to talking about an increasingly chaotic marketplace, with increasingly hard-to-manage purchase processes. Services like Amazon reviews and TripAdvisor (four presenters would make this point) saturate customers with information, and give them a kind of vigilante power to comment on and define brands. Consequently, they experience a number of "moments of deflection" in our sales funnels. Our control is lessened, and failure to engage can increasingly damage buying decisions. The clearest example given was the failure of NatWest's "caring bank" campaign, where staff in branches, customer support, and online presences didn't align. A discontinuity of experience basically made the campaign worthless, and disgruntled customers talked about it loudly on social media. This in turn presented an opportunity to engage and show caring, but that wasn't taken.

    What I took away was that brand (co)creation is ongoing and needs monitoring and metrics. But reciprocally, given you get what you measure, strategy and metrics must include brand if any kind of branding is to work at all. Campaigns and messages must permeate product and service design. What that doesn't mean (and Graham didn't say it did) is putting Marketing at the top of the pyramid, and having them bawl demands at Product Management, Support, and Development like an entitled toddler. It's going to have to be collaborative, and session 6 on internal comms handled this really well. The main thing missing here was substantiating data, and the main question I found myself chewing on was: if we're building brands collaboratively and in the open, what about the cultural politics of trolling?

    2 - Challenging our core beliefs about human behaviour, Mark Earls

    This was definitely the best show of the day. It was also some of the best content. Mark talked us through nudging, behavioural economics, and some key misconceptions around decision making. Basically, people aren't rational; they're petty, reactive, emotional sacks of meat, and they'll go where they're led. Comforting stuff. Examples given were the spread of the London riots, the "discovery" of the mountains of Kong, and the popularity of Susan Boyle, which in turn made me think about Per Mollerup's concept of "social wayshowing". Mark boiled his thoughts down into four key points which I completely failed to write down word for word:

    - People do, then think - Changing minds to change behaviour doesn't work. Post-rationalization rules the day. See also: mere exposure effects.
    - Spock < Kirk - Emotional/intuitive comes first, then we rationalize impulses. The non-thinking, emotive, reactive processes run much faster than the deliberative ones. People are not really rational decision makers, so intervening with information may not be appropriate.
    - Maximisers or satisficers? - Related to the last point. People do not consistently, rationally, maximise. When faced with an abundance of choice, they prefer to satisfice rather than evaluate, and will often follow social leads rather than think.
    - Things tend to converge - Behaviour trends to a consensus normal. When faced with choices, people overwhelmingly just do what they see others doing. Humans are extraordinarily good at mirroring behaviours and receiving influence. People "outsource the cognitive load" of choices to the crowd.

    Mark's headline quote was probably "the real influence happens at the table next to you". Reference examples, word of mouth, and social influence are tremendously important, and so talking about product experiences may be more important than talking about products. This reminded me of Kathy Sierra's "creating bad-ass users" concept of designing to make people more awesome rather than products they like. If we can expose user-awesome, and make sharing easy, we can normalise the behaviours we want. If we normalize the behaviours we want, people should make and post-rationalize the buying decisions we want.

    Where we need to be: "A bigger boy made me do it"
    Where we are: "a wizard did it and ran away"

    However, it's worth bearing in mind that some purchasing decisions are personal and informed rather than social and reactive. There's a quadrant diagram, in fact. What was really interesting, though, towards the end of the talk, was some advice for working out how social your products might be. The standard technology adoption lifecycle graph is essentially about social product diffusion. So this idea isn't really new. Geoffrey Moore's "chasm" idea may not strictly apply. However, his concepts of beachheads and reference segments are exactly what is required to normalize and thus enable purchase decisions (behaviour change).

    The final thing is that in only very few categories does a better product actually affect the purchase decision. Where the choice is personal and informed, this is true. But where it's personal and impulsive, or in any way social, "better" is trumped by popularity, endorsement, or "point of sale salience". UX, UCD, and e-commerce know this to be true. A better (and easier) experience will always beat "more features". Easy to use, and easy to observe being used, will beat "what the user says they want". This made me think about the astounding stickiness of rational fallacies, "common sense" and the pathological willful simplifications of the media. Rational fallacies seem like they're basically the heuristics we use for post-rationalization. If I were profoundly grimy and cynical, I'd suggest deploying a boat-load in our messaging, to see if they're really as sticky and appealing as they look.

    4 - Changing behaviour through communication, Stephen Donajgrodzki

    This was a fantastic follow-up to Mark's session. Stephen basically talked us through some tactics used in public information/health comms that implement the kind of behavioural theory Mark introduced. The session was largely about how to get people to do (good) things they're predisposed not to do, and how communication can (and can't) make positive interventions. A couple of things stood out, in particular "implementation intentions" and how they can be linked to goals. For example, in order to get people to check and test their smoke alarms (a goal intention, rarely actualized), an information campaign will attempt to link this activity to the clocks going back or forward (a strong implementation intention, well-actualized). The talk reinforced the idea that making behaviour changes easy and visible normalizes them and makes them more likely to succeed. To do this, they have to be embodied throughout a product and service cycle. Experiential disconnects undermine the normalization. So campaigns, products, and customer interactions must be aligned.

    This is underscored by the second section of the presentation, which talked about interventions and pre-conditions for change. Taking the examples of drug addiction and stopping smoking, Stephen showed us a framework for attempting (and succeeding or failing in) behaviour change. He noted that when the change is something people fundamentally want to do, and that is easy, this gets a lot simpler.

    - Coordinated, easily-observed environmental pressures create preconditions for change and build motivation (price, the pub smoking ban, ad campaigns, a friend quitting, declining social acceptability).
    - A triggering event leads to a change attempt (getting a cold and panicking about how bad the cough is).
    - Interventions can be made to enable an attempt (NHS services, public information, nicotine patches).
    - If it succeeds - yay. If it fails, there's strong negative reinforcement.

    Triggering events seem largely personal, but messaging can intervene in the creation of preconditions and in supporting decisions. Stephen talked more about systems of thinking and "bounded rationality". The idea being that to enable change you need to break through "automatic" thinking into "reflective" thinking. Disruption and emotion are great tools for this, but that is only the start of the process. It occurs to me that a great deal of market research is focused on determining triggers rather than analysing necessary preconditions. Although they are presumably related.

    The final section talked about setting goals. Marketing goals are often seen as deriving directly from business goals. However, marketing may be unable to deliver on these directly where decision and behaviour-change processes are involved. In those cases, marketing and communication goals should be to create preconditions. They should also consider priming and norms. Content marketing and brand awareness are good first steps here, as brands can be heuristics in decision making for choice-saturated consumers, or those seeking education.

    5 - The power of engaged communities and how to build them, Harriet Minter (the Guardian)

    The meat of this was that you need to let communities define and establish themselves, and be quick to react to their needs. Harriet had been in charge of building the Guardian's community sites, and learned a lot about how they come together, stabilize, grow, and react. Crucially, they can't be about sales or push messaging. A community is not just an audience. It's essential to start with what this particular segment or tribe are interested in, then what they want to hear. Eventually you can consider - in light of this - what they might want to buy, but you can't start with the product. A community won't cohere around one you're pushing. Her tips for community building were (again, sorry, not verbatim):

    - Set goals: Have some targets. Community building sounds vague and fluffy, but you can have (and adjust) concrete goals.
    - Think like a start-up: This is the "lean" stuff. Try things, fail quickly, respond.
    - Don't restrict platforms: Let the audience choose them, and be aware of their differences. For example, LinkedIn is very different to Twitter.
    - Track your stats: Related to the first point. Keeping an eye on the numbers lets you respond. They should be qualified, however. If you want a community of enterprise decision makers, headcount alone may be a bad metric - have you got CIOs, or just people who want to get jobs by mingling with CIOs?
    - Build brand advocates: Do things to involve people and make them awesome, and they'll cheer-lead for you.

    The last part really got my attention. Little bits of drive-by kindness go a long way. But more than that, genuinely helping people turns them into powerful advocates. Harriet gave an example of the Guardian engaging with an aspiring journalist on its Q&A forums. Through a series of serendipitous encounters he became a BBC producer, and now enthusiastically speaks up for the Guardian community sites. Cultivating many small, authentic, influential voices may have a better pay-off than schmoozing the big guys. This could be particularly important in the context of Mark and Stephen's models of social, endorsement-led, and example-led decision making. There's a lot here I haven't covered, and it may be worth some follow-up on community building.

    Thoughts

    I was quite sceptical of nudge theory and behavioural economics. First off it sounds too good to be true, and second it sounds too sinister to permit. But I haven't done the background reading. So I'm going to, and if it seems to hold real water, and if it's possible to do it ethically (Stephen's presentation suggests it may be), then it's probably worth exploring. The message seemed to be: change what people do, and they'll work out why afterwards. Moreover, the people around them will do it too. Make the things you want them to do extraordinarily easy and very, very visible. Normalize and support the decisions you want them to make, and they'll make them. In practice this means not talking about the thing, but showing the user-awesome. Glib? Perhaps. But it feels worth considering.

    Also, if I ever run a marketing conference, I'm going to ban speakers from using examples from Apple. Quite apart from not being consistently generalizable, it's becoming an irritating cliché.


  • CodePlex Daily Summary for Friday, December 03, 2010

    Popular Releases

    - Chronos WPF: Chronos v2.0 Beta 3: Release notes: updated introduction document; updated Visual Studio 2010 extension (vsix) package; added horizontal scrolling to the main window TaskBar; added new styles for ListView, ListViewItem, GridViewColumnHeader, ...; added a new WindowViewModel class (allowing data fetching); added a new Navigate method (with several overloads) to the NavigationViewModel class (protected); reimplemented Task usage for the WorkspaceViewModel.OnDelete method; removed the reflection effect...
    - MDownloader: MDownloader-0.15.26.7024: Fixed updater; fixed Megaupload.
    - DJ - jQuery WebControls for ASP.NET: DJ 1.2: What is new? Updated to support jQuery 1.4.2 and jQuery UI 1.8.6; updated to Visual Studio 2010; new WebControls with samples added: the Autocomplete, Button and ToggleButton WebControls. The example web site is included in the source code project.
    - LateBindingApi.Excel: LateBindingApi.Excel Release 0.7g: Differences from the previous version: additional Interior properties; Group/Ungroup methods for Range; bugfix to COM reference handling for the Application object in some classes. Release+Samples V0.7g contains the runtime DLL and sample projects: COMAddinExample (demonstrates a version-independent COM add-in), Example01 (background colors and borders for cells), Example02 (font attributes and alignment for cells), Example03 (number formats), Example04 (Shapes, WordArts, P...
    - ESRI ArcGIS Silverlight Toolkit: November 2010 - v2.1: ESRI ArcGIS Silverlight Toolkit v2.1 adds a Windows Phone 7 build. New controls added: InfoWindow, ChildPage (Windows Phone 7 only). See what's new in full detail here: http://help.arcgis.com/en/webapi/silverlight/help/#/What_s_new_in_2_1/016600000025000000/ Note: requires Visual Studio 2010, .NET 4.0 and Silverlight 4.0.
    - ASP .NET MVC CMS (Content Management System): Atomic CMS 2.1.1: Atomic CMS 2.1.1 release notes; Atomic CMS installation guide.
    - Winware: Winware 3.0 (.Net 4.0): Winware 3.0 is based on .NET 4.0 with C#. Please open it with Visual Studio 2010. This release contains a lab web application.
    - UltimateJB: UltimateJB 2.02 PL3 KAKAROTO + CE-X-3.41 EvilSperm: Here is a version many have been eagerly waiting for: the CEX341 version, for playing games that demand firmware 3.50 (some simply do not work otherwise). For now, CEX341 is only available for PS3s on firmware 3.41! The PL3 KAKAROTO version integrates his latest modifications and now includes firmware 3.30. Conclusion: UltimateJB CEX341 spoofs firmware 3.41 as 3.50 (easing the use of certain games with openManage...
    - EnhSim: EnhSim 2.1.1: This release adds in the changes for 4.03a. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed (http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84). To use the GUI you must have the .NET 4.0 Framework installed (http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992). Switched Searing Flames bac...
    - AI: Initial 0.0.1: It's simply just one code file; it simulates an AI and machine in a simulated world. The AI has a little understanding of its body machine and parts, and is able to use its feet for actions as simple as starting and stopping walking. The world is all white, with nothing but the machine on a white planet. Colors, odors and position information make no sense. I'm a C# programmer learning F# during this project; although I'm still not a good F# programmer, in this project I am learning to prog...
    - NKinect: NKinect Preview: Build features: accelerometer reading; motor serial number property; realtime image update; realtime depth calculation; export to PLY (on demand); control motor LED; control Kinect tilt.
    - Microsoft - Domain Oriented N-Layered .NET 4.0 App Sample (Microsoft Spain): V1.0 - N-Layer DDD Sample App .NET 4.0: Required software (Microsoft base software needed for the development environment): Visual Studio 2010 RTM & .NET 4.0 RTM (final versions); Expression Blend 4; SQL Server 2008 R2 Express/Standard/Enterprise; Unity Application Block 2.0 (published May 5th 2010): http://www.microsoft.com/downloads/en/details.aspx?FamilyID=2D24F179-E0A6-49D7-89C4-5B67D939F91B&displaylang=en http://unity.codeplex.com/releases/view/31277 PEX & MOLES 0.94.51023.0, 29/Oct/2010 - Visual Studio 2010 Power Tools http://re...
    - Sense/Net Enterprise Portal & ECMS: SenseNet 6.0.1 Community Edition: This half year we have been working quite fiercely to bring you the long-awaited release of Sense/Net 6.0. Download this Community Edition to see what we have been up to. These months we have worked on getting the WebCMS capabilities of Sense/Net 6.0 up to par. New features include: a new, powerful page and portlet editing experience; HTML and CSS cleanup; a new, powerful site skinning system; upgraded, lightning-fast indexing and query via Lucene. Limita...
    - Minecraft GPS: Minecraft GPS 1.1.1: New features: compass! New style. Set opacity on the main window to allow an overlay of Minecraft. Open World in any folder. Fixes: fixed the style so the listbox won't grow the window size; fixed an open file dialog issue on non-Vista-kernel machines.
    - DotSpatial: DotSpatial 11-28-2010: This release introduces some exciting improvements: support for big rasters, both in display and in changing the scheme; faster raster scheme creation for all rasters; caching of the "sample" values so that, once obtained, the raster symbolizer dialog loads faster; reprojection supported for raster and image classes; affine transform fully supported for images and rasters, so skewed images are now possible; projection uses better checks when loading unprojected layers; GDAL raster support f...
    - Virtu: Virtu 0.9.0: Source requirements: .NET Framework 4; Visual Studio 2010 or Visual Studio 2010 Express; Silverlight 4 Tools for Visual Studio 2010; Windows Phone 7 Developer Tools (which includes XNA Game Studio 4). Binaries requirements: Silverlight 4; .NET Framework 4; XNA Framework 4.
    - SuperWebSocket: SuperWebSocket(60438): This is the first release of SuperWebSocket. Because it is based on SuperSocket, most features of SuperSocket are supported in SuperWebSocket. The source code includes a LiveChat demo.
    - Cropper: 1.9.4: Mostly fixes for issues, with a few feature requests. Fixed issues: 2730, 3638, 14467, 11044, 11447, 11448, 11449, 14665. Implemented features: 6123, 11581.
    - PFC: PFC for PB 11.5: This is just a migration from the 11.0 code. No changes have been made yet (and they are needed) for it to work properly with 11.5.
    - PDF Rider: PDF Rider 0.5: This release does not add any new feature for PDF manipulation, but it enables automatic update checking, so it is recommended to install it in order to stay updated with the next releases. Prerequisites: Microsoft Windows operating systems (XP - Vista - 7); Microsoft .NET Framework 3.5 runtime; a PDF rendering software (i.e. Adobe Reader) that can be opened inside Internet Explorer. Installation instructions: choose one of the following methods: 1. Download and run the "pdfRider0...

    New Projects

    - .Net MVC Dialog Authentication Starter: .Net MVC Dialog Authentication Starter is a basic .NET MVC application starter template that has been modified to render the Register and Logon functionality via a modal dialog. It is developed using .NET MVC 2, jQuery 1.4.1, and jQuery UI 1.8.6.
    - .Net SQL Generator: A generation tool for random SQL queries in the Windows environment. Can produce random queries, tables, deletes, and updates, as well as generate random data for a table. Structured to allow easy adaptation for other SQL implementations.
    - 10010dshjlahfajhflkjhkjhherkjhfkja
    - Banshee: A MOSS 2007 utility to detect authored links and headings in the navigation structure of a site collection which point to a subweb's root instead of the subweb's welcome page. It's developed in C# and requires MOSS 2007 SP2 or later.
    - Counter Strike Live Level editor: What is CSLIVE? A web-browser-based online 2D shooter. It is a clone of Counter Strike. The game supports multiplayer play over the internet or LAN. And it's free to play and open source! This project is still in development. Official open beta tests will be run every day, 20:00 - 21:00 +2GT.
    - CRM 2011 Style Templates: CRM 2011 style templates.
    - DarkTimes: Having many projects at the same time? Need an easy way to count those times? Here is a Windows Phone 7 app for this ^^
    - DSN Export-o-Matic 3000: DSN Export-o-Matic 3000 is a Windows application which allows you to export ODBC DSN registry settings to a .reg file which can then be imported on another computer. It is written in C#.NET using Visual C# 2010 Express Edition.
    - DYSS Game and AGE Game Engine for XNA: Did You Slay Something? (working title) and the AGE Game Engine for XNA. Developed by Lucas Loreggia.
    - ExeCryptorNetWrapper: A .NET wrapper for the ExeCryptor product (serial number checking functionality only).
    - ffmpegYAG: ffmpegYAG is a GUI for the popular ffmpeg audio/video processing tool.
    - FluentScheduler: A task scheduler that uses a fluent interface to configure schedules. Useful for running cron jobs/automated tasks from your application.
    - General Purpose Hash Function Algorithms: The General Purpose Hash Functions Library has a mix of additive and rotative general purpose string hashing algorithms.
    - HD Web FileManager: We are trying to make a single ASP.NET file that handles file management on a web server.
    - iConverter: iConverter is a character transcoding and encoding conversion tool.
    - LHA Social Work: Source control host for http://lhasocialwork.org.
    - Mega Puzzle WP7: Mega Puzzle is a puzzle game in Silverlight that runs on Windows Phone 7. It's developed in C#.
    - MTConnect Managed Agent SDK: The MTConnect Managed SDK provides an Agent object model to facilitate exposing MTConnect data from your machine tools.
    - Nebula - Image Lock / Unlock Software: Nebula is image (jpg/bmp) file lock/unlock software designed around simply changing the file extension, so that files remain visible on the HDD but cannot be opened unless you change the extension back. Written in Perl and compiled into an exe.
    - Pushing - A Sokoban like Puzzle game: Pushing is a C# port of a little hobby project I created some years ago in C++. It's a Sokoban-like puzzle game. The old C++ version was just a 2D/pseudo-3D game; the new C# version will be a real 3D game.
    - RESTController: Provides a base class implementation for a RESTful controller model, with common functionality for the basic controller actions of List, Show, New, Edit, and Delete. It's meant to remove as much of the redundant code for MVC controllers as possible.
    - Scientific Calculator ZENO-5000: HTML 5, CSS 3, jQuery: Scientific Calculator ZENO-5000 is a lightweight web application (<40kb) utilizing the latest HTML5/CSS3 features and client-side jQuery/Java scripting. The application does not include any graphic files. It is portable, capable of running in online/offline modes on PC/mobile devices.
    - SCR-Airplane Autopilot System: An airplane autopilot - a project for the Real-Time Systems course.
    - Scriptonite: A lightweight system for scripting gameplay events for games such as RPGs. After integrating Scriptonite into your project, you create scripted events using an incredibly simple scripting language. Intended for XNA, but should work anywhere.
    - Shelf: Shelf is a .NET library of common extension methods.
    - SQLite ADO.NET for Windows Phone: Windows Phone is missing SQLite. This project fixes that :) And does so by giving the community the interfaces we've grown to know and love: IDbConnection, IDbCommand, IDbTransaction and IDbReader. Thanks so much to the pioneers before me who ported the C++ to C#!
    - Tailf: Tailf is a C# implementation of the tail -f command available on unix/linux systems. Differently from other ports, it does not lock the file in any way, so it works even if others rename the file; this is especially designed to work well with the log4net rolling file appender. (A minimal sketch of the technique appears after this list.)
    - TeamBrain: TeamBrain helps project teams to centralise all knowledge about a project into a repository that is clean and a pleasure to use.
    - TeamDev JQuery c# wrapper: A C# jQuery wrapper. Helps you to write correct jQuery functions in the C# language. The code you write will be translated into the corresponding jQuery method calls.
    - VirtualEye: Virtual Eye...
    - Win32 Registry Activity Monitor: A simple program written in Delphi that monitors registry activity on Win32 systems. In short, a registry key and class are statically added to the code, the program is then run, and as changes are made by other applications to the key and all its sub-keys, the changes are logged an...
    - XamlCode: XamlCode provides support for embedding inline C# code directly in xaml files to create simple value converters, value providers and validation rules. It's targeted mainly as a rapid WPF application development tool.
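    Tailf's actual source isn't quoted above, but the non-locking behaviour it describes comes down to how the file handle is opened: share the file for reading, writing and deletion, then poll for newly appended lines. A minimal sketch of that technique follows; the default file name and poll interval are illustrative assumptions, not taken from the project.

        // A tail -f style follower that takes no lock on the file: other
        // processes can keep writing to it, rename it, or delete it.
        using System;
        using System.IO;
        using System.Text;
        using System.Threading;

        class TailFollowSketch
        {
            static void Main(string[] args)
            {
                string path = args.Length > 0 ? args[0] : "app.log"; // hypothetical default

                using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read,
                                                   FileShare.ReadWrite | FileShare.Delete))
                using (var reader = new StreamReader(stream, Encoding.UTF8))
                {
                    stream.Seek(0, SeekOrigin.End); // start at the end, like tail -f

                    while (true)
                    {
                        string line = reader.ReadLine();
                        if (line != null)
                            Console.WriteLine(line);
                        else
                            Thread.Sleep(250); // wait for the writer to append more
                    }
                }
            }
        }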


  • CodePlex Daily Summary for Tuesday, October 25, 2011

    Popular Releases

    - Scrum Task Board Card Creator: TaskCardCreator 2.5.0.0: What's new: UX improvement: loading of work items is done in a worker thread; reports are created in memory, not on the file system, for better performance; UI improvement: better parent/child relation between the TeamProjectPicker and MainWindow; Microsoft Visual Studio Scrum 1.0: dedicated Impediment card added. Supported templates: Microsoft Visual Studio Scrum 1.0 (Product Backlog Item, Task, Impediment, and Bug); MSF for Agile Software Development v5.0 (User Story, Task, and Bug).
    - SubtitleTools: SubtitleTools 1.9: Improved: some DVD players need a mandatory UTF-8 BOM (http://en.wikipedia.org/wiki/Byteordermark) at the beginning of the file to show RTL subtitles correctly. (A sketch of writing such a BOM from .NET appears after this entry's project list.)
    - THE NVL Maker: The NVL Maker Ver 3.06: (The Chinese release notes were garbled in extraction; the recoverable fragments mention changes around CG MODE, config.tjs, KAGConfig, the Data folder, fadeoutbgm, and macro_edu.ks.)
    - Xomega Framework: Xomega.Framework 1.2: Release 1.2 of the Xomega Framework refactors projects to allow generating assemblies for different target frameworks and profiles, and allows deploying it as a NuGet package. It also includes some small fixes to the service and presentation layer classes.
    - People's Note: People's Note 0.31: Added note tag editing. Changed note edit conflict resolution to keep the latest version. To install: copy the appropriate CAB file onto your WM device and run it.
    - Windows Azure Toolkit for Windows Phone: Windows Azure Toolkit for Windows Phone v1.3.1: Upgraded Windows Azure projects to Windows Azure Tools for Microsoft Visual Studio 2010 1.5 (September 2011); upgraded the tools to support the Windows Phone Developer Tools RTW; updated SQL Azure-only scenarios to use the ASP.NET Universal Providers (through the System.Web.Providers v1.0.1 NuGet package); changed the Shared Access Signature service interface to support more operations; refactored the Blobs API to have a similar interface and usage to that provided by the Windows Azure SDK Stor...
    - xUnit.net Contrib: xunitcontrib-resharper 0.4.4 (dotCover): xunitcontrib release 0.4.4 (ReSharper runner). This release provides a test runner plugin for ReSharper 6.0 RTM, targeting all versions of xUnit.net (see the xUnit.net project to download xUnit.net itself). This release addresses the following issue: support for dotCover code coverage (4132). Note that this build works against ALL VERSIONS of xunit. The files are compiled against xunit.dll 1.8 - DO NOT REPLACE THIS FILE. Thanks to xunit's version-independent runner system, this package can r...
    - BookShop: BookShop: BookShop WP7 client.
    - Ribbon Editor for Microsoft Dynamics CRM 2011: Ribbon Editor (0.1.2122.266): Added CodePlex and PayPal links; new icon; bug fix: couldn't connect to an IFD deployment when the discovery service url had been customized.
    - DotNet.Framework.Common: DotNet.Framework.Common 4.0: (description garbled in extraction)
    - XML Explorer: XML Explorer 4.0.5: Changes in 4.0.5: Added a 'Copy Attribute XPath to Address Bar' feature. Added methods for decoding node text and value from Base64-encoded strings, and copying them to the clipboard. Added 'ChildNodeDefinitions' to the options, which allows for easier navigation of parent-child and ID-IDREF relationships. Discovery happens on demand, as nodes are expanded and child nodes are added. Nodes can now have 'virtual' child nodes, defined by an xpath to select an identifier (usually relative to ...
    - Media Companion: MC 3.419b Weekly: A couple of minor bug fixes, but the important fix in this release is to tackle the extremely long load times for users with large TV collections (issue #130). A note has been provided by developer Playos: "One final note, you will have to suffer one final long load and then it should be fixed... alternatively you can delete the TvCache.xml and rebuild your library... The fix was to include the file extension so it doesn't have to look for the video file (checking to see if a file exists is a...
    - CODE Framework: 4.0.11021.0: This build adds a lot of our WPF components, including our MVVM and MVC components, as well as a "Metro" and a "Battleship" style.
    - GridLibre para Visual FoxPro: GridLibre para Visual FoxPro v3.5: This tool helps users and programmers with data handling: filtering, multi-selection, and auto-formatting of columns, as well as assigning the ControlSource.
    - WiX Toolset: WiX v3.6 Beta: First beta release of WiX v3.6. The primary focus is on Burn, but there are also many small bug fixes to the core toolset. For more information see: http://robmensching.com/blog/posts/2011/10/24/WiX-v3.6-Beta-released
    - Umbraco CMS: Umbraco 5.0 CMS Alpha 3: Umbraco 5 (aka Jupiter) will be the next version of everyone's favourite, friendly ASP.NET CMS that already powers over 100,000 websites worldwide. Try out the Alpha of v5 today! If you're new to Umbraco and would like a low-down on our popular and easy-to-learn approach to content management, check out our intro video. What's Alpha 3? This is our third Alpha release. It's intended for developers looking to become familiar with the codebase & architecture, or for thos...
    - Vkontakte WP: Vkontakte: source code.
    - Way2Sms Applications for Android, Desktop/Laptop & Java enabled phones: Way2SMS Desktop App v2.0: 1. Fixed an issue with sending messages due to changes to the Way2Sms site. 2. Updated the character limit to 160 from 140.
    - GART - Geo Augmented Reality Toolkit: 1.0.1: About release 1.0.1: Release 1.0.1 is a service release that addresses several issues and improves performance. As always, check the Documentation tab for instructions on how to get started. If you don't have the Windows Phone SDK yet, grab it here. Breaking change - please note: there is a breaking change in this release. As noted below, the WorldCalculationMode property of ARItem has been replaced by a user-definable function. ARItem is now automatically wired up with a function that perform...
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.32: Fix for issue #16710 - string literals in "constant literal operations" which contain ASP.NET substitutions should not be considered "constant". Moved the JS1284 error (Misplaced Function Declaration) so it only fires in strict mode; I got a couple of complaints that people didn't like that error popping up in their existing code when they could verify that the location of that function, although not strict JS, still functions as expected cross-browser.

    New Projects

    - ApiDiff.Net: An API difference/reporting project written in C# using the wonderful Cecil engine (http://www.mono-project.com/Cecil) to show differences between versions of a public API. Something like the unsupported MS tool "libcheck".
    - Aspect Oriented Programming with .Net: A demo project using PostSharp for AOP in .NET.
    - CCBBAA
    - Cdf.Iris.MiniProjetCalculateur: A client/server calculator application in the C language.
    - CmjSiverlight: A Silverlight library.
    - CSharpKaartSpel: A super card game.
    - Developer Team Article System Management: Developer Team Article System Management makes it easier for authors to publish their articles. It is developed in VB.NET.
    - DirectoryMonitor: DirectoryMonitor
    - DNN Administrador de Tareas: An open-source project.
    - Domek: Just a little house.
    - EDXGraphics: This is Edward (Shiqiu) Liu's personal code repository. Projects here are primarily about graphics, AI and algorithms. For more of an introduction, please visit www.edxgraphics.com
    - Enterprise SSIS Framework: The Enterprise SSIS Framework is a project intended to simplify the management of an organization's administration and ETL assets by abstracting the coordination and control flow into metadata. This project consists of four parts: core SSIS packages (a set of packages that are responsible for workflow within the framework); a SQL Server 2008 DB (holds all the metadata necessary to drive the application); an administration app (a web-based front end to facilitate managing assets running ...
    - FIM Task Sequencer "RunJob": RunJob is a simple batch controller for FIM (MIIS and ILM) for starting and monitoring the execution of FIM run profiles that allows complex (looping and branching) sequences to be put together with a simple XML 'task' file. Key features: works with MIIS / ILM / FIM; optionally connects to the SQL database to verify that run profiles exist; can execute arbitrary tasks via a command line; based on the results of a command line or previous run history, can loop and jump ...
    - Ham Bone Soup amateur radio software and electronic log: Ham Bone Soup is open-source software written to satisfy the needs of amateur radio operators. The central feature is an especially flexible logging system for accommodating contests and general-purpose logs, but it will grow to include many other vital amateur radio features. It doesn't do anything yet; we are just getting started.
    - Hệ thống quản lý chi tiêu - windows phone: An expense management system for Windows Phone.
    - Javascript Editor for SharePoint: Javascript Editor for SharePoint is a lightweight, in-browser editor that lets you quickly prototype and test custom javascript code in SharePoint.
    - LegoWeb: Open source web CMS based on ASP.NET WebParts + MARCXML metadata: LegoWeb is an open-source web content management solution developed by combining ASP.NET 2.0 WebParts and MARCXML metadata. It is very simple and very flexible.
    - LuaXna: A simple engine that allows developing in Lua for XNA. It has a logging feature and cycles.
    - Manager Game: Student project for a game development course. Using XNA 4.0 for the Windows platform.
    - Metroed: A Metro version of the PhoneyTools (a WP7 toolkit) for use with C#/VB WinRT applications.
    - My Masters Sample Project for Sharepoint 2010: A feature-stapling example project that deploys a custom master page to personal sites on SharePoint 2010. The solution answers the following questions: How to deploy a custom master page? How to customize a master page? How to attach a custom master page to personal sites using the stapling feature?
    - NDepend TFS 2010 integration: NDepend TFS 2010 integration provides build activities, a build report customization addin, and extends the TFS Warehouse in order to leverage NDepend quality statistics for your projects.
    - Open Source Renderer: Open Source Renderer
    - PearTunes: Peartunes is a university project.
    - Plugin Framework Web: A lightweight plugin framework for web applications.
    - QTP TFS Generic Test Integration: In Microsoft Visual Studio 2010 there is a Generic Test type that allows you to integrate with automation tools like QTP. I have created a pretty simple solution that allows you to use your existing QTP automation scripts within the Microsoft Visual Studio testing framework, and here's a sample below. The key part of this solution is transforming HP's QTP format to Microsoft's Generic Test format so that you can publish the results to TFS. The added benefit of this integration is that you c...
    - RequiemDream: A project for managing workflow records and re-executing them, to help developers with debugging.
    - Rift Addon Studio 2010: Rift AddOn Studio (AOS) is an open source development environment designed to bring a Visual Studio-like experience to building Rift AddOns. For more information on exactly what AOS does, check out the list of features below.
    - Small Database Tools: This is a project for small tools I created for database-related functions.
    - SQL Azure Membership, Profile, Role provider starter kit for MVC3 project: When you use the out-of-box MVC 3 site template, the Membership, Profile and Role provider DB is created as an attached MDF file by the name AspNetDB.mdf. This project has the steps needed to easily migrate this DB into SQL Azure. This is a complete MVC3 starter site project and could be used for your next 'Big Thing' as soon as it is properly set up. Enjoy :)
    - SQL Server BareMetal Hands-on-Labs: The SQL Server BareMetal Hands-on-Labs project allows you to build a SQL Server hands-on-lab environment from scratch, using Windows Server 2008 R2 SP1 Hyper-V.
    - State Theater Website: Building a website designed to provide information about live theater throughout a state. Basically, it is a port of NJTheater.com from classic ASP to ASP.NET, making it customizable to any state along the way.
    - Static web generator: Zoltar lets you generate your website (blog, personal website, etc.) from a set of md files.
    - Ukulele Chord Finder: Ukulele Chord Finder
    - User Profile Cleaner: This application allows you to manage user profiles via a GUI or the command line, removing profiles not used for a given number of days. The application allows you to set a list of profiles to be excluded from deletion.
    - Virtual Visit: This project provides a virtual visit for your site.
    - WP7 Open source project collection by eLite: A collection of Windows Phone open-source projects (description garbled in extraction).
    - zebrawebservice: (description garbled in extraction; mentions AWB and OPS)
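    The SubtitleTools note above is about emitting a UTF-8 byte order mark. In .NET this comes down to which UTF8Encoding you hand to the writer; here is a minimal sketch (the file name and subtitle content are illustrative, not from the project):

        // Writing a subtitle file with an explicit UTF-8 BOM (EF BB BF),
        // which some DVD players require before rendering RTL subtitles.
        using System.IO;
        using System.Text;

        class BomSketch
        {
            static void Main()
            {
                // true => emit the UTF-8 byte order mark at the start of the file.
                var utf8WithBom = new UTF8Encoding(true);

                using (var writer = new StreamWriter("movie.srt", false, utf8WithBom))
                {
                    writer.WriteLine("1");
                    writer.WriteLine("00:00:01,000 --> 00:00:04,000");
                    writer.WriteLine("Example subtitle line");
                }
            }
        }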

