Search Results

Search found 4182 results on 168 pages for 'matt green'.


  • [CentOS 4.8] nslookup resolves domains to IPs, but I can't get a response to pings to external servers

    - by Beco
    I have a fresh install of CentOS 4.8 running on an internal development server. I haven't done anything to it besides setting up sudoers and SSH. I can SSH into the server and from there resolve domains to IPs and ping internal servers, but for some reason I don't get any response from pinging external servers. The software firewall is disabled, and the problem is present with both static and DHCP-assigned network configurations. The network domain controller is a Windows Server 2003 box. $ nslookup google.com Server: 10.254.2.5 Address: 10.254.2.5#53 Non-authoritative answer: Name: google.com Address: 74.125.47.147 Name: google.com Address: 74.125.47.99 <etc...> 10.254.2.5 is the Win2K3 server. $ ping google.com PING google.com (74.125.47.106) 56(84) bytes of data. It just hangs here indefinitely. $ cat /etc/resolv.conf ; generated by /sbin/dhclient-script search <...snip...>.local nameserver 10.254.2.5 nameserver 10.254.2.124 10.254.2.124 is the backup DC server, which is currently off and tombstoned by this point. The snipped section is our company name. # ifconfig eth0 Link encap:Ethernet HWaddr <snip> inet addr:10.254.2.101 Bcast:10.254.2.255 Mask:255.255.255.0 inet6 addr: <snip>/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:80066 errors:0 dropped:0 overruns:0 frame:0 TX packets:4421 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:7810133 (7.4 MiB) TX bytes:590550 (576.7 KiB) Interrupt:225 Base address:0xc000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:32 errors:0 dropped:0 overruns:0 frame:0 TX packets:32 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:8104 (7.9 KiB) TX bytes:8104 (7.9 KiB) # route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.254.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0 0.0.0.0 10.254.2.5 0.0.0.0 UG 0 0 0 eth0 And, for good measure, a snapshot of the current ethernet config via the system-config-network GUI. Edit: I don't yet have enough rep to post images, so here's a link. Sorry! system-config-network snapshot I'm pretty green when it comes to setting up *nix dev servers and network configuration in general, so please let me know if I've left out critical information, or posted information I shouldn't have posted. Thanks!
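
    A few command-line checks usually narrow a "DNS works but ping doesn't" problem down to routing, a firewall in the path, or ICMP being dropped upstream. This is an illustrative diagnostic sketch only; eth0, the gateway 10.254.2.5 and the external address are taken from the output above, so adjust them for your environment.

      # Can ICMP reach the local gateway at all?
      ping -c 3 10.254.2.5
      # Ping the external host by IP (from the nslookup answer) to keep DNS out of the picture.
      ping -c 3 74.125.47.106
      # Double-check that the host firewall really is off.
      service iptables status
      iptables -L -n
      # See how far packets get; the first hop should be 10.254.2.5.
      traceroute -n 74.125.47.106
      # In a second terminal, watch whether any ICMP replies come back on the wire.
      tcpdump -n -i eth0 icmp

    If the traceroute stops at the first hop, the problem most likely lies beyond the CentOS box (on the gateway or an upstream firewall) rather than in its own configuration.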

    Read the article

  • Why is my KeyListener not working here?

    - by swift
    i have implemented keylistener interface and implemented all the needed methods but when i press the key nothing happens here, why? package swing; import java.awt.Color; import java.awt.Dimension; import java.awt.Graphics; import java.awt.Graphics2D; import java.awt.GridLayout; import java.awt.Point; import java.awt.RenderingHints; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import java.awt.event.KeyEvent; import java.awt.event.KeyListener; import java.awt.event.MouseEvent; import java.awt.event.MouseListener; import java.awt.event.MouseMotionListener; import java.awt.event.WindowAdapter; import java.awt.event.WindowEvent; import java.awt.image.BufferedImage; import javax.swing.BorderFactory; import javax.swing.BoxLayout; import javax.swing.ImageIcon; import javax.swing.JButton; import javax.swing.JFrame; import javax.swing.JLayeredPane; import javax.swing.JPanel; import javax.swing.JTextArea; class Paper extends JPanel implements MouseListener,MouseMotionListener,ActionListener,KeyListener { static BufferedImage image; String shape; Color color=Color.black; Point start; Point end; Point mp; Button elipse=new Button("elipse"); int x[]=new int[50]; int y[]=new int[50]; Button rectangle=new Button("rect"); Button line=new Button("line"); Button roundrect=new Button("roundrect"); Button polygon=new Button("poly"); Button text=new Button("text"); ImageIcon erasericon=new ImageIcon("images/eraser.gif"); JButton erase=new JButton(erasericon); JButton[] colourbutton=new JButton[9]; String selected; Point label; String key; int ex,ey;//eraser //DatagramSocket dataSocket; JButton button = new JButton("test"); JLayeredPane layerpane; Point p=new Point(); int w,h; public Paper() { JFrame frame=new JFrame("Whiteboard"); frame.setVisible(true); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); frame.setSize(640, 480); frame.setBackground(Color.black); layerpane=frame.getLayeredPane(); setWidth(539,444); setBounds(69,0,555,444); layerpane.add(this,new Integer(2)); layerpane.add(this.addButtons(),new Integer(0)); setLayout(null); setOpaque(false); addMouseListener(this); addMouseMotionListener(this); setFocusable(true); addKeyListener(this); System.out.println(isFocusable()); setBorder(BorderFactory.createLineBorder(Color.black)); } public void paintComponent(Graphics g) { try { super.paintComponent(g); g.drawImage(image, 0, 0, this); Graphics2D g2 = (Graphics2D)g; if(color!=null) g2.setPaint(color); if(start!=null && end!=null) { if(selected==("elipse")) g2.drawOval(start.x, start.y,(end.x-start.x),(end.y-start.y)); else if(selected==("rect")) g2.drawRect(start.x, start.y, (end.x-start.x),(end.y-start.y)); else if(selected==("rrect")) g2.drawRoundRect(start.x, start.y, (end.x-start.x),(end.y-start.y),11,11); else if(selected==("line")) g2.drawLine(start.x,start.y,end.x,end.y); else if(selected==("poly")) g2.drawPolygon(x,y,2); } } catch(Exception e) {} } //Function to draw the shape on image public void draw() { Graphics2D g2 = image.createGraphics(); g2.setPaint(color); if(start!=null && end!=null) { if(selected=="line") g2.drawLine(start.x, start.y, end.x, end.y); else if(selected=="elipse") g2.drawOval(start.x, start.y, (end.x-start.x),(end.y-start.y)); else if(selected=="rect") g2.drawRect(start.x, start.y, (end.x-start.x),(end.y-start.y)); else if(selected==("rrect")) g2.drawRoundRect(start.x, start.y, (end.x-start.x),(end.y-start.y),11,11); else if(selected==("poly")) g2.drawPolygon(x,y,2); } if(label!=null) { JTextArea textarea=new JTextArea(); 
if(selected==("text")) { textarea.setBounds(label.x, label.y, 50, 50); textarea.setMaximumSize(new Dimension(100,100)); textarea.setBackground(new Color(237,237,237)); add(textarea); g2.drawString("key",label.x,label.y); } } start=null; repaint(); g2.dispose(); } public void text() { System.out.println(label); } //Function which provides the erase functionality public void erase() { Graphics2D pic=(Graphics2D) image.getGraphics(); Color erasecolor=new Color(237,237,237); pic.setPaint(erasecolor); if(start!=null) pic.fillRect(start.x, start.y, 10, 10); } //To set the size of the image public void setWidth(int x,int y) { System.out.println("("+x+","+y+")"); w=x; h=y; image = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB); } //Function to add buttons into the panel, calling this function returns a panel public JPanel addButtons() { JPanel buttonpanel=new JPanel(); buttonpanel.setMaximumSize(new Dimension(70,70)); JPanel shape=new JPanel(); JPanel colourbox=new JPanel(); shape.setLayout(new GridLayout(4,2)); shape.setMaximumSize(new Dimension(70,140)); colourbox.setLayout(new GridLayout(3,3)); colourbox.setMaximumSize(new Dimension(70,70)); buttonpanel.setLayout(new BoxLayout(buttonpanel,BoxLayout.Y_AXIS)); elipse.addActionListener(this); elipse.setToolTipText("Elipse"); rectangle.addActionListener(this); rectangle.setToolTipText("Rectangle"); line.addActionListener( this); line.setToolTipText("Line"); erase.addActionListener(this); erase.setToolTipText("Eraser"); roundrect.addActionListener(this); roundrect.setToolTipText("Round rect"); polygon.addActionListener(this); polygon.setToolTipText("Polygon"); text.addActionListener(this); text.setToolTipText("Text"); shape.add(elipse); shape.add(rectangle); shape.add(line); shape.add(erase); shape.add(roundrect); shape.add(polygon); shape.add(text); buttonpanel.add(shape); for(int i=0;i<9;i++) { colourbutton[i]=new JButton(); colourbox.add(colourbutton[i]); if(i==0) colourbutton[0].setBackground(Color.black); else if(i==1) colourbutton[1].setBackground(Color.white); else if(i==2) colourbutton[2].setBackground(Color.red); else if(i==3) colourbutton[3].setBackground(Color.orange); else if(i==4) colourbutton[4].setBackground(Color.blue); else if(i==5) colourbutton[5].setBackground(Color.green); else if(i==6) colourbutton[6].setBackground(Color.pink); else if(i==7) colourbutton[7].setBackground(Color.magenta); else if(i==8) colourbutton[8].setBackground(Color.cyan); colourbutton[i].addActionListener(this); } buttonpanel.add(colourbox); buttonpanel.setBounds(0, 0, 70, 210); return buttonpanel; } public void mouseClicked(MouseEvent e) { if(selected=="text") { label=new Point(); label=e.getPoint(); draw(); } } @Override public void mouseEntered(MouseEvent arg0) { } public void mouseExited(MouseEvent arg0) { } public void mousePressed(MouseEvent e) { if(selected=="line"||selected=="erase"||selected=="text") { start=e.getPoint(); } else if(selected=="elipse"||selected=="rect"||selected=="rrect") { mp = e.getPoint(); } else if(selected=="poly") { x[0]=e.getX(); y[0]=e.getY(); } } public void mouseReleased(MouseEvent e) { if(start!=null) { if(selected=="line") { end=e.getPoint(); } else if(selected=="elipse"||selected=="rect"||selected=="rrect") { end.x = Math.max(mp.x,e.getX()); end.y = Math.max(mp.y,e.getY()); } else if(selected=="poly") { x[1]=e.getX(); y[1]=e.getY(); } draw(); } } public void mouseDragged(MouseEvent e) { if(end==null) end = new Point(); if(start==null) start = new Point(); if(selected=="line") { end=e.getPoint(); } else 
if(selected=="erase") { start=e.getPoint(); erase(); } else if(selected=="elipse"||selected=="rect"||selected=="rrect") { start.x = Math.min(mp.x,e.getX()); start.y = Math.min(mp.y,e.getY()); end.x = Math.max(mp.x,e.getX()); end.y = Math.max(mp.y,e.getY()); } else if(selected=="poly") { x[1]=e.getX(); y[1]=e.getY(); } repaint(); } public void mouseMoved(MouseEvent arg0) {} public void actionPerformed(ActionEvent e) { if(e.getSource()==elipse) selected="elipse"; else if(e.getSource()==line) selected="line"; else if(e.getSource()==rectangle) selected="rect"; else if(e.getSource()==erase) { selected="erase"; System.out.println(selected); erase(); } else if(e.getSource()==roundrect) selected="rrect"; else if(e.getSource()==polygon) selected="poly"; else if(e.getSource()==text) selected="text"; if(e.getSource()==colourbutton[0]) color=Color.black; else if(e.getSource()==colourbutton[1]) color=Color.white; else if(e.getSource()==colourbutton[2]) color=Color.red; else if(e.getSource()==colourbutton[3]) color=Color.orange; else if(e.getSource()==colourbutton[4]) color=Color.blue; else if(e.getSource()==colourbutton[5]) color=Color.green; else if(e.getSource()==colourbutton[6]) color=Color.pink; else if(e.getSource()==colourbutton[7]) color=Color.magenta; else if(e.getSource()==colourbutton[8]) color=Color.cyan; } @Override public void keyPressed(KeyEvent e) { System.out.println("pressed"); } @Override public void keyReleased(KeyEvent e) { System.out.println("key released"); } @Override public void keyTyped(KeyEvent e) { System.out.println("Typed"); } public static void main(String[] a) { new Paper(); } } class Button extends JButton { String name; public Button(String name) { this.name=name; } public void paintComponent(Graphics g) { super.paintComponent(g); Graphics2D g2 = (Graphics2D)g; g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON); //g2.setStroke(new BasicStroke(1.2f)); if (name == "line") g.drawLine(5,5,30,30); if (name == "elipse") g.drawOval(5,7,25,20); if (name== "rect") g.drawRect(5,5,25,23); if (name== "roundrect") g.drawRoundRect(5,5,25,23,10,10); int a[]=new int[]{20,9,20,23,20}; int b[]=new int[]{9,23,25,20,9}; if (name== "poly") g.drawPolyline(a, b, 5); if (name== "text") g.drawString("Text",5, 22); } }

    Read the article

  • Google Rules for Retail

    - by David Dorf
    In the book What Would Google Do?, Jeff Jarvis outlines ten "Google Rules" that define how Google acts.  These rules help define how Web 2.0 businesses operate today and into the future.  While there's a chapter in the book on applying these rules to the retail industry, it wasn't very in-depth.  So I've decided to more directly apply the rules to retail, along with some notable examples of success.  For each of Jeff's Google Rules below, I list some industry examples and a New Retailer Rule that I created.
    New Relationship ("Your worst customer is your friend; your best customer is your partner"): Newegg.com lets manufacturers respond to customer comments that are critical of the product, and its EggXpert site lets customers help other customers. New Retailer Rule: Listen to what your customers are saying about you. Convert the critics to fans and the fans to influencers.
    New Architecture ("Join a network; be a platform"): Tesco and Best Buy released APIs for their product catalogs so third parties could create new applications. New Retailer Rule: Become a destination for information.
    New Publicness ("Life is public, so is business"): The Zappos and Whole Foods founders are prolific tweeters/bloggers, sharing their opinions and connecting to customers. It's not always pretty, but it's genuine. New Retailer Rule: Be transparent. Share both your successes and failures with your customers.
    New Society ("Elegant organization"): Wet Seal helps its customers assemble outfits and show them off to each other. Barnes & Noble has a community site that includes a book club. New Retailer Rule: Communities of your customers already exist, so help them organize better.
    New Economy ("Mass market is dead; long live the mass of niches"): lululemon found a niche for yoga-inspired athletic wear. Threadless uses crowd-sourcing to design short runs of T-shirts. New Retailer Rule: Serve small markets with niche products.
    New Business Reality ("Decide what business you're in"): When Lowe's realized catering to women brought the men along, its sales increased. New Retailer Rule: Customers want experiences to go with the products they buy.
    New Attitude ("Trust the people and listen"): In 2008 Starbucks launched MyStarbucksIdea to solicit ideas from its customers. New Retailer Rule: Use social networks as additional data points for making better merchandising decisions.
    New Ethic ("Be honest and transparent; don't be evil"): Target is giving away reusable shopping bags for Earth Day. Kohl's has outfitted 67 stores with solar arrays. New Retailer Rule: Being green earns customers' respect and lowers costs too.
    New Speed ("Life is live"): H&M and Zara keep up with fashion trends. New Retailer Rule: Be prepared to pounce on your customers' fickle interests.
    New Imperatives ("Encourage, enable and protect innovation"): 1-800-Flowers was the first to do sales on Facebook and an early adopter of mobile commerce. The Sears Personal Shopper mobile app finds products based on a photo. New Retailer Rule: Give your staff permission to fail so innovation won't be stifled.
    Jeff will be a keynote speaker at Crosstalk, our upcoming annual user conference, so I'm looking forward to hearing more of his perspective on retail and the new economy.

    Read the article

  • Blending the Sketchflow Action

    - by GeekAgilistMercenary
    Started a new Sketchflow Prototype in Expression Blend recently and documented each of the steps.  This blog entry covers some of those steps, which are the basic elements of any prototype.  I will have more information regarding design, prototype creation, and the process of the initial phases for development in the future.  For now, I hope you enjoy this short walk through.  Also, be sure to check out my last quick entry on Sketchflow. I started off with a Sketchflow Project, just like I did in my previous entry (more specifics in that entry about how to manipulate and build out the Sketchflow Map). Once I created the project I setup the following Sketchflow Map. The CoreNavigation is a ComponentScreen setup solely for the page navigation at the top of the screen.  The XAML markup in case you want to create a Component Screen with the same design is included below. <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity" xmlns:pb="clr-namespace:Microsoft.Expression.Prototyping.Behavior;assembly=Microsoft.Expression.Prototyping.Interactivity" x:Class="RapidPrototypeSketchScreens.CoreNavigation" d:DesignWidth="624" d:DesignHeight="49" Height="49" Width="624">   <Grid x:Name="LayoutRoot"> <TextBlock HorizontalAlignment="Stretch" Margin="307,3,0,0" Style="{StaticResource TitleCenter-Sketch}" Text="Aütøchart Scorecards" TextWrapping="Wrap"> <i:Interaction.Triggers> <i:EventTrigger EventName="MouseLeftButtonDown"> <pb:NavigateToScreenAction TargetScreen="RapidPrototypeSketchScreens.Screen_1"/> </i:EventTrigger> </i:Interaction.Triggers> </TextBlock> <Button HorizontalAlignment="Left" Margin="164,8,0,11" Style="{StaticResource Button-Sketch}" Width="144" Content="Scorecard"> <i:Interaction.Triggers> <i:EventTrigger EventName="Click"> <pb:NavigateToScreenAction TargetScreen="RapidPrototypeSketchScreens.Screen_1_2"/> </i:EventTrigger> </i:Interaction.Triggers> </Button> <Button HorizontalAlignment="Left" Margin="8,8,0,11" Style="{StaticResource Button-Sketch}" Width="152" Content="Standard Reports"> <i:Interaction.Triggers> <i:EventTrigger EventName="Click"> <pb:NavigateToScreenAction TargetScreen="RapidPrototypeSketchScreens.Screen_1_1"/> </i:EventTrigger> </i:Interaction.Triggers> </Button> </Grid> </UserControl> Now that the CoreNavigation Component Screen is done I built out each of the others.  In each of those screens I included the CoreNavigation Screen (all those little green lines in the image) as the top navigation.  In order to do that, as I created each of the pages I would hover over the CoreNavigation Object in the Sketchflow Map.  When the utilities drawer (the small menu that pops down under a node when you hover over it) shows click on the third little icon and drag it onto the page node you want a navigation screen on. Once I created all the screens I setup the navigation by opening up each screen and right clicking on the objects that needed to point to somewhere else in the prototype. Once I was done with the main page, my Home Navigation Page, it looked something like this in the Expression Blend Designer. I fleshed out each of the additional screens.  Once I was done I wanted to try out the deployment package.  
The way to deploy a Sketchflow Prototype is to merely click on File –> Package SketchFlow Project and a prompt will appear.  In the prompt enter what you want the package to be called. I like to see the files generated afterwards too, so I checked the box to see that.  When Expression Blend is done generating everything you’ll have a directory like the one shown below, with all the needed files for deployment. Now these files can be copied or moved to any location for viewing.  One can even copy them (such as via FTP) to a server location to share with others.  Once they are deployed and you run the "TestPage.html" the other features of the Sketchflow Package are available. In the image below I have tagged a few sections to show the Sketchflow Player Features.  To the top left is the navigation, which provides a clearly defined area of movement in a list.  To the center right is the actual prototype application.  I have placed lists of things and made edits.  On the left hand side is the highlight feature, which is available in the Feedback section of the lower left.  On the right hand list I underlined the Autochart with an orange marker, and marked out two list items with a red marker. In the lower left hand side in the Feedback section is also an area to type in your feedback.  This can be useful for time based feedback, when you post this somewhere and want people to provide subsequent follow up feedback. Overall lots of great features, that enable some fairly rapid prototyping with customers.  Once one is familiar with the steps and parts of this Sketchflow Prototype Capabilities it is easy to step through an application without even stopping.  It really is that easy.  So get hold of Expression Blend 3 and get ramped up on Sketchflow, it will pay off in the design phases to do so! Original Entry

    Read the article

  • CodePlex Daily Summary for Thursday, February 25, 2010

    CodePlex Daily Summary for Thursday, February 25, 2010New ProjectsAptusSoftware.Threading: AptusSoftware.Threading is a class library designed primarily to assist in the development of multi-threaded WinForm applications, although there i...AxiomGameDesigner: It is going to be a universal scene editor for Axiom 3D game engine. It is in pure C# and will be kept portable to MONO for compatibility with linu...Badger - Unity Productivity Extensions: A set of Microsoft Unity Extensions. Why Badger? Because I love badgers.Business & System Analysis Templates and Best Practices for Russian: http://saway.codeplex.com/Conectayas: Conectayas is an open source "Connect Four" alike game but transformable to "Tic-Tac-Toe" and to a lot of similar games that uses mouse. Written in...FastCode: .NET 3.5 Extensions set to increase coding speed.Hundiyas: Hundiyas is an open source "Battleship" alike game totally written in DHTML (JavaScript, CSS and HTML) that uses mouse. This cross-platform and cro...Icelandic Online Banking: Icelandic Online Banking is project defining a web service interface for online banking.IE8 AddOns XML Creator: Application that helps on creating the xml files for IE8 Accelerators, Search Providers and the markup for Web Slices.iKnowledge: a asp.net mvc demoLearn ASP.NET MVC: Learn ASP.NET MVC is a project for the members of the Peer Learning group in Silicon Valley. It contains the SportsStore solution from the Pro ASP...Live at Education Meta Web-Service: Live at Education Meta Web-Service is intended to abstract from several technologies that are included in Live@edu set of services. This web-ser...Low level wave sound output for VB.NET: Low level sound output class for VB.NET using platform invocation services to call winmm.dllMailQ: MailQ makes it easier for developers to send mail messages from an application. The system sends mails based on a database queue system (store, se...Managed DXGI: Managed DXGI library is Fully managed wrapper writen on C# for DXGI 1.0 and 1.1 technology. It makes easier to support DXGI in managed application....Multivalue AutoComplete WinForms TextBox in C#: This project is a sample application that demonstrates how to create a multivalue WinForms textbox in C# using .NET Framework 3.5.Nifty CSharp Tools: Nifty CSharp Tools, will contain various tools and snippets. IRCBot, splashscreens, linq, world of warcraft log parsing, screenshot uploaders, twi...PHP MPQ: A port of StormLib to PHP for handling Blizzard MPQ files.RedDevils strategy - Project Hoshimi Programming Battle: Source Code of RedDevils strategy. Imagine Cup 2008 - Project Hoshimi Programming Battle.RNUNIT: rNunit is a distributed Nunit project. Many application these days are client-server application, distributed application and regular unit testing ...Samar Solution: Samar Solutions is a business system for office automation.Silverlight OOMRPG Game Engine: Silverlight OOMRPG Game EngineSimulator: GPSSimulatorSLARToolkit - Silverlight Augmented Reality Toolkit: SLARToolkit is a flexible Augmented Reality library for Silverlight with the aim to make real time Augmented Reality applications with Silverlight ...Spiral Architecture Driven Development (SADD) for Russian: Это русская версия сайта sadd.codeplex.comSQLSnapshotManager: Easily manage SQL Server database snapshots in a easy to use visual interface.Twilio with VB.NET MVC: Twilio with VB.NET MVC is a sample application for developing with Twilio's REST based telephony API. 
It includes an XML Schema of the TwiML respon...Ultra Speed Dial: UltraSpeedDial.com - Online Speed Dial Page.Visual HTML Editor justHTML: justHTML - is simle windows-application WYSIWYG editor that allow everyone - without any knowledge of HTML - to create and edit web-pages. It supp...WinMTR.NET: .NET Clone of the popular Windows clone of the popular Linux Matt's TracerouteWPF Dialogs: "WPF Dialogs" is a library for different Dialogs in WPF (e.g. FolderBrowseDialog, SaveFileDialog, OpenFileDialog etc.). These Dialogs are written i...WPFLogin: A small Login window in WPF and C#XNA PerformanceTimers: CPU Timers for Windows and Xbox360. Can track multiple threads, and presents output as a log on-screen.New ReleasesAptusSoftware.Threading: 2.0.0: First public release. This release is in production as part of several commercial applications and is stable. The source code download includes a...BizTalk Software Factory: BizTalk Software Factory v2.1: This is a service release for the BizTalk Software Factory for BizTalk Server 2009, containing so far: Fix for x64: the SN.EXE tool is now locate...Business & System Analysis Templates and Best Practices for Russian: R00 The Place reserver: Just to reserve the place Will be filled out soonChronos WPF: Chronos v1.0 Beta 2: Added a new SplashScreen Added a new Login View and implemented Log Off Added a new PasswordBoxHelper (http://www.codeproject.com/Articles/371...dotNetTips: dotNetTips.Utility 3.5 R2: This is a new release (version 3.5.0.3) compatible with .NET 3.5. Lots of new classes/features!! Requires SP1 if using the Entity Framework extensi...fleXdoc: template-based server-side document generator (docx): fleXdoc 1.0 beta 3: The third and final beta of fleXdoc. fleXdoc consists of a webservice and a (test)client for the service. Make sure you also download the testclien...FluentPS: FluentPS v1.0: - FluentPS is moved from ASMX to WCF interface of the Project Server Interface (PSI) - Impersonation changes to work in compliance with WCF interfa...FolderSize: FolderSize.Win32.1.0.4.0: FolderSize.Win32.1.0.3.0 A simple utility intended to be used to scan harddrives for the folders that take most place and display this to the user...iTuner - The iTunes Companion: iTuner 1.1.3707 Beta 3: As promised, the iTuner Automated Librarian is now available. This automatically cleans an entire album of dead tracks and duplicates as tracks ar...Live at Education Meta Web-Service: LAEMWS v 1.0 beta: Release Candidate for LAEMWS.Macaw Reusable Code Library: LanguageConfigurationSolution: This Solution helps developing a multi language publishing web siteManaged DXGI: Initial Release.: Base declaration of interfaces, most of them untested yet.Math.NET Numerics: 2010.2.24.667 Build: Latest alpha buildMiniTwitter: 1.08.1: MiniTwitter 1.08.1 更新内容 変更 インクリメンタル検索時には大文字小文字の区別をしないように変更 クライアント名の表示を本家にあわせて from から via に変更 修正 公式 RT 時にステータスが上に表示されたり二重に表示されるバグを修正 自分が自分へ返信...Multivalue AutoComplete WinForms TextBox in C#: 1.0 First public release: Multivalue autocomplete textbox control and host application in this release are released in a single Visual Studio 2008 projects. See my related b...NMock3: NMock3 - Beta3, .NET 3.5: This release has some exciting new features. Please start providing feedback on the tutorials. The first several are complete and the rest are no...nxAjax - an asp.net ajax library using jQuery: nxAjax v3 codeplex 7: nxAjax v3 codeplex 7 binary and test website. 
Bug Fixed: ajax:Form control Add: Drag and drop Rewritten: DragnDropManager DragPanel DropPan...Office Apps: 0.8.7: whats new? Document.Editor and Document.Viewer now supports FlowDocument (.xaml) files bug fix'sPDF Rider: PDF Rider 0.3: Application PrerequisitesMicrosoft Windows Operating Systems (XP (tested) - Vista - 7) Microsoft .NET Framework 3.5 runtime A PDF rendering sof...ShellLight: ShellLight 0.1.0.1 Src: Codeplex project released. This is only a preview of the product. Until the first final release there will be many improvements.Silverlight OOMRPG Game Engine: SilverlightGameTutorialSolution v1.01: Please visit my blog for Silverlight OOMROG Game Tutorial: http://www.cnblogs.com/Jax/archive/2010/02/24/1673053.html.Simple Savant: Simple Savant v0.4: Added support for full-text indexing (See Full-Text Indexing) Added support for attribute spanning and compression for property values larger tha...Spiral Architecture Driven Development (SADD) for Russian: R00: R00 to reserve site nameTeamReview - TFS Code Review: Release 1.1.3: Release Features New expanded product positioning for capturing any targeted coding work as a trackable, assignable, reportable Work Item for any r...Text Designer Outline Text Library: 10th minor release: Version 0.3.1 (10th minor release)Fixed the gradient brush being too big for the text, resulting in not much gradient shown in the text. Gradient...TFS Workflow Control: TeamExplorer and TSWA control 1.0 for TFS 2010 RC: This is a special version for TFS 2010 RC. Use the RC version of the power tools to modify the layout of your work items (http://visualstudiogaller...thinktecture WSCF.blue: WSCF.blue V1 Update (1.0.7) - VS2010 RC Support: This update adds support for Visual Studio 2010 RC in addition to Visual Studio 2008. Please note that Visual Studio 2010 Beta 2 is NOT supported a...Tumblen3: tumblen3 Version 25Feb2010: ready for Twitter's xAuthUMD文本编辑器: UMDEditor文本编辑器V2.1.0: 2.1.0 (2010-02-24) 增加查找章节内指定文本内容的功能 2.0.4 (2010-02-06) 章节内容框增加右键菜单,包含编辑文本的基本操作 ------------------------------------------------------- 执行 reg.bat ...VCC: Latest build, v2.1.30224.0: Automatic drop of latest buildVisual HTML Editor justHTML: Latest binary: Latest buid here. Executable and mshtml.dll included in this archive. Ready to use ;)Visual HTML Editor justHTML: Source code for version 2.5: Visual studio 2008 project with full source code.VOB2MKV: vob2mkv-1.0.2: The release vob2mkv-1.0.2 is a feature update of the VOB2MKV project. It now includes a DirectShow source filter, MKVSOURCE. 
A source filter allo...WinMTR.NET: V 1.0: V 1.0WPF Dialogs: Version 0.1.0: Version 0.1.0 FolderBrowseDialog is implementet for more information look here Version 0.1.0 (german: Version 0.1.0 - Deutsch).WPF Dialogs: Version 0.1.1: Version 0.1.1 Features FolderBrowseDialog was extended / FolderBrowseDialog - Deutsch wurde erweitertXNA PerformanceTimers: XNA PerformanceTimers 0.1: Initial release.Zeta Resource Editor: Release 2010-02-24: Added HTTP proxy server support.Most Popular ProjectsASP.NET Ajax LibraryManaged Extensibility FrameworkWindows 7 USB/DVD Download ToolDotNetZip LibraryMDownloaderVirtual Router - Wifi Hot Spot for Windows 7 / 2008 R2MFCMAPIDroid ExplorerUseful Sharepoint Designer Custom Workflow ActivitiesOxiteMost Active ProjectsDinnerNow.netBlogEngine.NETRawrInfoServiceSLARToolkit - Silverlight Augmented Reality ToolkitNB_Store - Free DotNetNuke Ecommerce Catalog ModuleSharpMap - Geospatial Application Framework for the CLRjQuery Library for SharePoint Web ServicesRapid Entity Framework. (ORM). CTP 2Common Context Adapters

    Read the article

  • How to Animate Text and Objects in PowerPoint 2010

    - by DigitalGeekery
    Are you looking for an eye catching way to keep your audience interested in your PowerPoint presentations? Today we’ll take a look at how to add animation effects to objects in PowerPoint 2010. Select the object you wish to animate and then click the More button in the Animation group of the Animation tab.   Animations are grouped into four categories. Entrance effects, Exit effects, Emphasis effects, and Motion Paths. You can get a Live Preview of how the animation will look by hovering your mouse over an animation effect.   When you select a Motion Path, your object will move along the dashed path line as shown on the screen. (This path is not displayed in the final output) Certain aspects of the Motion Path effects are editable. When you apply a Motion Path animation to an object, you can select the path and drag the end to change the length or size of the path. The green marker along the motion path marks the beginning of the  path and the red marks the end. The effects can be rotated by clicking and the bar near the center of the effect.   You can display additional effects by choosing one of the options at the bottom. This will pop up a Change Effect window. If you have Preview Effect checked at the lower left you can preview the effects by single clicking.   Apply Multiple Animations to an Object Select the object and then click the Add Animation button to display the animation effects. Just as we did with the first effect, you can hover over to get a live preview. Click to apply the effect. The animation effects will happen in the order they are applied. Animation Pane You can view a list of the animations applied to a slide by opening the Animation Pane. Select the Animation Pane button from the Advanced Animation group to display the Animation Pane on the right. You’ll see that each animation effect in the animation pane has an assigned number to the left.    Timing Animation Effects You can change when your animation starts to play. By default it is On Click. To change it, select the effect in the Animation Pane and then choose one of the options from the Start dropdown list. With Previous starts at the same time as the previous animation and After Previous starts after the last animation. You can also edit the duration that the animations plays and also set a delay.   You can change the order in which the animation effects are applied by selecting the effect in the animation pane and clicking Move Earlier or Move Later from the Timing group on the Animation tab. Effect Options If the Effect Options button is available when your animation is selected, then that particular animation has some additional effect settings that can be configured. You can access the Effect Option by right-clicking on the the animation in the Animation Pane, or by selecting Effect Options on the ribbon.   The available options will vary by effect and not all animation effects will have Effect Options settings. In the example below, you can change the amount of spinning and whether the object will spin clockwise or counterclockwise.   Under Enhancements, you can add sound effects to your animation. When you’re finished click OK.   Animating Text Animating Text works the same as animating an object. Simply select your text box and choose an animation. Text does have some different Effect Options. By selecting a sequence, you decide whether the text appears as one object, all at once, or by paragraph. 
    As is the case with objects, the available Effect Options will vary depending on the animation you choose. Some animations, such as the Fly In animation, have directional options.   Testing Your Animations Click the Preview button at any time to test how your animations look. You can also select the Play button on the Animation Pane. Conclusion Animation effects are a great way to focus audience attention on important points and hold viewers' interest in your PowerPoint presentations. Another cool way to spice up your PPT 2010 presentations is to add video from the web. What tips do you guys have for making your PowerPoint presentations more interesting?

    Read the article

  • Adopting DBVCS

    - by Wes McClure
    Identify early adopters Pick a small project with a small(ish) team.  This can be a legacy application or a green-field application. Strive to find a team of early adopters that will be eager to try something new. Get the team on board! Research Research the tool(s) that you want to use.  Some tools provide all of the features you would need while some only provide a slice of the pie.  DBVCS requires the ability to manage a set of change scripts that update a database from one version to the next.  Ideally a tool can track database versions and automatically apply updates.  The change script generation process can be manual, but having diff tools available to automatically generate it can really reduce the overhead to adoption.  Finally, an automated tool to generate a script file per database object is an added bonus as your version control system can quickly identify what was changed in a commit (add/del/modify), just like with code changes. Don’t settle on just one tool, identify several.  Then work with the team to evaluate the tools.  Have the team do some tests of the following scenarios with each tool: Baseline an existing database: can the migration tool work with legacy databases?  Caution: most migration platforms do not support baselines or have poor support, especially the fad of fluent APIs. Add/drop tables Add/drop procedures/functions/views Alter tables (rename columns, add columns, remove columns) Massage data – migrations sometimes involve changing data types that cannot be implicitly casted and require you to decide how the data is explicitly cast to the new type.  This is a requirement for a migrations platform.  Think about a case where you might want to combine fields, or move a field from one table to another, you wouldn’t want to lose the data. Run the tool via the command line.  If you cannot automate the tool in Continuous Integration what is the point? Create a copy of a database on demand. Backup/restore databases locally. Let the team give feedback and decide together, what tool they would like to try out. My recommendation at this point would be to include TSqlMigrations and RoundHouse as SQL based migration platforms.  In general I would recommend staying away from the fluent platforms as they often lack baseline capabilities and add overhead to learn a new API when SQL is already a very well known DSL.  Code migrations often get messy with procedures/views/functions as these have to be created with SQL and aren’t cross platform anyways.  IMO stick to SQL based migrations. Reconciling Production If your project is a legacy application, you will need to reconcile the current state of production with your development databases.  Find changes in production and bring them down to development, even if they are old and need to be removed.  Once complete, produce a baseline of either dev or prod as they are now in sync.  Commit this to your VCS of choice. Add whatever schema changes tracking mechanism your tool requires to your development database.  This often requires adding a table to track the schema version of that database.  Your tool should support doing this for you.  You can add this table to production when you do your next release. Script out any changes currently in dev.  Remove production artifacts that you brought down during reconciliation.  Add change scripts for any outstanding changes in dev since the last production release.  Commit these to your repository.   
Say No to Shared Dev DBs Simply put, you wouldn’t dream of sharing a code checkout, why would you share a development database?  If you have a shared dev database, back it up, distribute the backups and take the shared version offline (including the dev db server once all projects are using DB VCS).  Doing DB VCS with a shared database is bound to cause problems as people won’t be able to easily script out their own changes from those that others are working on.   First prod release Copy prod to your beta/testing environment.  Add the schema changes table (or mechanism) and do a test run of your changes.  If successful you can schedule this to be run on production.   Evaluation After your first release, evaluate the pain points of the process.  Try to find tools or modifications to existing tools to help fix them.  Don’t leave stones unturned, iteratively evolve your tools and practices to make the process as seamless as possible.  This is why I suggest open source alternatives.  Nothing is set in stone, a good example was adding transactional support to TSqlMigrations.  We ran into situations where an update would break a database, so I added a feature to do transactional updates and rollback on errors!  Another good example is generating change scripts.  We have been manually making these for months now.  I found an open source project called Open DB Diff and integrated this with TSqlMigrations.  These were things we just accepted at the time when we began adopting our tool set.  Once we became comfortable with the base functionality, it was time to start automating more of the process.  Just like anything else with development, never be afraid to try to find tools to make your job easier!   Enjoy -Wes
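
    For illustration, here is what one of the SQL change scripts discussed above might look like. This is a hedged sketch rather than TSqlMigrations or RoundHouse syntax: the SchemaVersion table, column names, file name and version number are hypothetical placeholders for whatever tracking mechanism your chosen tool uses.

      -- 0002_add_customer_email.sql  (one script per schema change, committed to source control;
      -- the migration tool can wrap the whole script in a transaction and roll back on error)
      ALTER TABLE dbo.Customer ADD Email NVARCHAR(256) NULL;
      -- The new column must be in its own batch before the statements below can reference it.
      GO

      -- Explicit data massage so no data is lost in the move.
      UPDATE dbo.Customer
      SET Email = LegacyContactInfo
      WHERE LegacyContactInfo LIKE '%@%';

      -- Record that this script ran, so the tool (or a person) can tell what version the database is at.
      INSERT INTO dbo.SchemaVersion (VersionNumber, ScriptName, AppliedOn)
      VALUES (2, N'0002_add_customer_email.sql', GETUTCDATE());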

    Read the article

  • Using CMS for App Configuration - Part 1, Deploying Umbraco

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2014/06/04/using-cms-for-app-configurationndashpart-1-deploy-umbraco.aspxSince my last post on using CMS for semi-static API content, How about a new platform for your next API… a CMS?, I’ve been using the idea for centralized app configuration, and this post is the first in a series that will walk through how to do that, step-by-step. The approach gives you a platform-independent, easily configurable way to specify your application configuration for different environments, with a built-in approval workflow, change auditing and the ability to easily rollback to previous settings. It’s like Azure Web and Worker Roles where you can specify settings that change at runtime, but it's not specific to Azure - you can use it for any app that needs changeable config, provided it can access the Internet. The series breaks down into four posts: Deploying Umbraco – the CMS that will store your configurable settings and the current values; Publishing your config – create a document type that encapsulates your settings and a template to expose them as JSON; Consuming your config – in .NET, a simple client that uses dynamic objects to access settings; Config lifecycle management – how to publish, audit, and rollback settings. Let’s get started. Deploying Umbraco There’s an Umbraco package on Azure Websites, so deploying your own instance is easy – but there are a couple of things to watch out for, so this step-by-step will put you in a good place. Create From Gallery The easiest way to get started is with an Azure subscription, navigate to add a new Website and then Create From Gallery. Under CMS, you’ll see an Umbraco package (currently at version 7.1.3): Configure Your App For high availability and scale, you’ll want your CMS on separate kit from anything else you have in Azure, so in the configuration of Umbraco I’d create a new SQL Azure database – which Umbraco will use to store all its content: You can use the free 20mb database option if you don’t have demanding NFRs, or if you’re just experimenting. You’ll need to specify a password for a SQL Server account which the Umbraco service will use, and changing from the default username umbracouser is probably wise. Specify Database Settings You can create a new database on an existing server if you have one, or create new. If you create a new server *do not* use the same username for the database server login as you used for the Umbraco account. If you do, the deployment will fail later. Think of this as the SQL Admin account that you can use for managing the db, the previous account was the service account Umbraco uses to connect. Make Tea If you have a fast kettle. It takes about two minutes for Azure to create and provision the website and the database. Install Umbraco So far we’ve deployed an empty instance of Umbraco using the Azure package, and now we need to browse to the site and complete installation. My Website was called my-app-config, so to complete installation I browse to http://my-app-config.azurewebsites.net:   Enter the credentials you want to use to login – this account will have full admin rights to the Umbraco instance. Note that between deploying your new Umbraco instance and completing installation in this step, anyone can browse to your website and complete the installation themselves with their own credentials, if they know the URL. Remote possibility, but it’s there. From this page *do not* click the big green Install button. 
If you do, Umbraco will configure itself with a local SQL Server CE database (.sdf file on the Web server), and ignore the SQL Azure database you’ve carefully provisioned and may be paying for. Instead, click on the Customize link and: Configure Your Database You need to enter your SQL Azure database details here, so you’ll have to get the server name from the Azure Management Console. You don’t need to explicitly grant access to your Umbraco website for the database though. Click Continue and you’ll be offered a “starter” website to install: If you don’t know Umbraco at all (but you are familiar with ASP.NET MVC) then a starter website is worthwhile to see how it all hangs together. But after a while you’ll have a bunch of artifacts in your CMS that you don’t want and you’ll have to work out which you can safely delete. So I’d click “No thanks, I do not want to install a starter website” and give yourself a clean Umbraco install. When it completes, the installation will log you in to the welcome screen for managing Umbraco – which you can access from http://my-app-config.azurewebsites.net/umbraco: That’s It Easy. Umbraco is installed, using a dedicated SQL Azure instance that you can separately scale, sync and backup, and ready for your content. In the next post, we’ll define what our app config looks like, and publish some settings for the dev environment.
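
    To preview where the series is heading (the "Consuming your config" post), here is a minimal sketch of the .NET client side: pull the JSON that the Umbraco template will publish and read it through a dynamic object. The URL path and setting names are placeholders invented for illustration, and Json.NET is assumed for deserialization.

      using System;
      using System.Net;
      using Newtonsoft.Json;

      class ConfigClientSketch
      {
          static void Main()
          {
              // Hypothetical endpoint exposed by the config document's template (defined in part 2).
              var url = "http://my-app-config.azurewebsites.net/config/dev-settings";

              using (var client = new WebClient())
              {
                  string json = client.DownloadString(url);

                  // A dynamic object lets settings read like properties without a dedicated class.
                  dynamic settings = JsonConvert.DeserializeObject(json);

                  Console.WriteLine((string)settings.DatabaseConnectionString);
                  Console.WriteLine((int)settings.CacheTimeoutSeconds);
              }
          }
      }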

    Read the article

  • CodePlex Daily Summary for Wednesday, April 21, 2010

    CodePlex Daily Summary for Wednesday, April 21, 2010New ProjectsA WPF ImageButton that uses pixel shader for dynamically change the image status: A WPF ImageButton that uses pixel shader for dynamically change the image status.CCLI SongSelect Library: A managed code library for reading and working with CCLI SongSelect files in C#, VB.NET or any other language that can use DLLs built with Managed ...code paste: A code paste with idea, insight and funny from web. All the paster rights belong the original author.CodeBlock: CodeBlock is CLI implementation which uses LLVM as its native code generator. CodeBlock provides LLVM binding for .NET which is written in C#, a...CSS 360 Planetary Calendar: This is the Planetary Calendar for UW Bothell -- CSS 360DNN 4 Friendly Url Modification: DNN 4.6.2.0 source code modification for Friendly URL support.Event Scavenger: "Event Scavenger" was created out of a need to manage and monitor multiple servers and have a global view of what's happening in the logs of multip...FastBinaryFileSearch: General Purpose Binary Search Implementation Of BoyerMooreHorspool Algorithmgotstudentathelaticprofile: Test project Grate: Grate process photos by color-dispersion-tree It's developed in C#GZK2010: GZK Project is used for sdo.gzk 2010JpAccountingBeta: JpAccountingBetaLog4Net Viewer: Log4Net Viewer is a log4net files parser and exposes them to webservices. You can then watch them on a rich interface. This code is .NET4 with an ...MarkLogic Toolkit for Excel: The MarkLogic Toolkit for Excel allows you to quickly build content applications with MarkLogic Server that extend the functionality of Microsoft E...MarkLogic Toolkit for PowerPoint: The MarkLogic Toolkit for PowerPoint allows you to quickly build content applications with MarkLogic Server that extend the functionality of Micros...MarkLogic Toolkit for Word: The MarkLogic Toolkit for Word allows you to quickly build content applications with MarkLogic Server that extend the functionality of Microsoft Wo...MvcContrib Portable Areas: This project hosts mvccontrib portable areas.OgmoXNA: OgmoXNA is an XNA 3.1 Game Library and set of Content Pipeline Extensions for Matt Thorson's multi-platform Ogmo Editor. Its goal is to provide ne...Pdf ebook seaerch engine and viewer: PDF Search Engine is a book search engine search on sites, forums, message boards for pdf files. You can find and download a tons of e-books but p...ResizeDragBehavior: This C# Silverlight 3 ResizeDragBehavior is a simpel implementation of resizing columns left, right, above or under a workingspace. It allows you ...Roguelike school project: A simple Rogue-like game made in c# for my school project. Uses Windows forms for GUI and ADO.NET + SQL Server CE for persistency.SharePoint Service Account Password Recovery Tool: A utility to recover SharePoint service account and application pool passwords.Smart Include - a powerful & easy way to handle your CSS and Javascript includes: Smart Include provides web developers with a very easy way to have all their css and javascript files compressed and/or minified, placed in a singl...sNPCedit: Perfect World NPC Editorstatusupdates: Generic status updatesTRXtoHTML: This is a command line utility to generate html report files from VSTS 2008 result files (trx). Usage: VSTSTestReport /q <input trx file> <outpu...WawaWuzi: 网页版五子棋,欢迎大家参与哦,呵呵。WPF Alphabet: WPF Alphabet is a application that I created to help my child learn the alphabet. 
It displays each letter and pronounces it using speech synthesis....WPF AutoCompleteBox for Chinese Spell: CSAutoBox is a type of WPF AutoCompleteBox for Chinese Spell in Input,Like Google,Bing.WpfCollections: WpfCollections help in WPF MVVM scenarios, resolving some of common problems and tasks: - delayed CurrentChange in ListCollectionView, - generate...XML Log Viewer in the Cloud: Silvelright 4 application hosted on Windows Azure. Upload any log file based on xml format. View log, search log, diff log, catalog etc.New ReleasesA Guide to Parallel Programming: Drop 3 - Guide Preface, Chapters 1, 2, 5, and code: This is Drop 3 with Guide Preface, Chapters 1, 2, 5, and References, and the accompanying code samples. This drop requires Visual Studio 2010 Beta ...Artefact Animator: Artefact Animator - Silverlight 4 and WPF .Net 4: Artefact Animator Version 4.0.4.6 Silverlight 4 ArtefactSL.dll for both Debug and Release. WPF 4 Artefact.dll for both Debug and Release.ASP.NET Wiki Control: Release 1.2: Includes SyntaxHighlighter integration. The control will display the functionality if it detects that http://alexgorbatchev.com/wiki/SyntaxHighli...C# to VB.NET VB.NET To C# Code Convertor: CSharp To VB.Net Convertor VS2010 Support: !VS2010 Support Added To The Addon Visual Studio buildin VB.Net To C# , C# To VB.Net Convertor using NRefactor from icsharpcode's SharpDevelop. The...CodeBlock: LLVM - 2010-04-20: These are precompiled LLVM dynamic link libraries. One for AMD64 architecture and one for IA32. To use these DLL's you should copy them to corresp...crudwork library: crudwork 2.2.0.4: What's new: qapp (shorts for Query Analyzer Plus Plus) is a SQL query analyzer previously for SQLite only now works for all databases (SQL, Oracle...DiffPlex - a .NET Diff Generator: DiffPlex 1.1: Change listFixed performance bug caused by logging for debug build Added small performance fix in the core diff algorithm Package the release b...Event Scavenger: First release (version 3.0): This release does not include any installers yet as the system is a bit more complex than a simple MSI installer can handle. Follow the instruction...Extend SmallBasic: Teaching Extensions v.012: added archiving for screen shots (Tortoise.approve) ColorWheel exits if emptyFree Silverlight & WPF Chart Control - Visifire: Charts for Silverlight 4!: Hi, Visifire now works with Silverlight 4. Microsoft released Silverlight 4 last week. Some of the new features include more fluid animations, Web...IST435: Lab 5 - DotNetNuke Module Development: Lab 5 - DotNetNuke Module DevelopmentThis is the instructions for Lab 5 on. This lab must be completed in-class. It is based on your Lab 4.KEMET_API: Kemet API v0.2d: new platform with determiners and ideograms ... please consult the "release_note.txt" for more informations.MDT Scripts, Front Ends, Web Services, and Utilities for use with ConfigMgr/SCCM: PrettyGoodFrontEndClone (v1.0): This is a clone of the great PrettyGoodFrontEnd written by Johan Arwidmark that uses the Deployment Webservice as a backend so you don't need to ho...NMigrations: 1.0.0.3: CHG: upgraded solution/projects to Visual Studio 2010 FIX: removed precision/scale from MONEY data type (issue #8081) FIX: added support for binary...Object/Relational Mapper & Code Generator in Net 2.0 for Relational & XML Schema: 2.6: Minor release.OgmoXNA: OgmoXNA Alpha Binaries: Binaries Release build binaries for the Windows and Xbox 360 platforms. 
Includes the Content Pipeline Extensions needed to build your projects in ...Ox Game Engine for XNA: Release 70 - Fixes: Update in 2.2.3.2 Removed use of 'reflected' render state. May fix some render errors. Original Hi all! I fixed all of the major known problems...patterns & practices – Enterprise Library: Enterprise Library 5.0 - April 2010: Microsoft Enterprise Library 5.0 official binaries can be downloaded from MSDN. Please be patient as the bits get propagated through the download s...Pdf ebook seaerch engine and viewer: Codes PDf ebook: CodesPlay-kanaler (Windows Media Center Plug-in): Playkanaler 1.0.4: Playkanaler version 1.0.4 Viasatkanalerna kanske fungerar igen, tack vare AleksandarF. Pausa och spola fungerar inte ännu.PokeIn Comet Ajax Library: PokeIn v06 x64: Bug fix release of PokeIn x64 Security Bugs Fixed Encoding Bugs Fixed Performance Improvements Made New Method in BrowserHelper classPokeIn Comet Ajax Library: PokeIn v06 x86: Bug fix release of PokeIn x86 Security Bugs Fixed Encoding Bugs Fixed Performance Improvements Made New Method in BrowserHelper classPowerSlim - Acceptance Testing for Enterprise Applications: PowerSlim 0.2: We’re pleased to announce the PowerSlim 0.2. The main feature of this release is Windows Setup which installs all you need to start doing Acceptan...Rapidshare Episode Downloader: RED v0.8.5: This release fixes some bugs that mainly have to do with Next and Add Show functionality.Rawr: Rawr 2.3.15: - Improvements to Wowhead/Armory parsing. - Rawr.Mage: Fix for calculations being broken on 32bit OSes. - Rawr.Warlock: Lots more work on fleshin...ResizeDragBehavior: ResizeDragBehavior 1.0: First release of the ResizeDragBehavior. Also includes a sampleproject to see how this behavior can be implemented.RoTwee: RoTwee (11.0.0.0): 17316 Follow Visual Studio 2010/.NET Framework 4.0SharePoint Service Account Password Recovery Tool: 1.0: This is the first release of the password recovery toolSilverlightFTP: SilverlightFTP Beta RUS: SilverlightFTP with drag-n-drop support. Russian.SqlCe Viewer (SeasonStar Database Management): SqlCe Viewer(SSDM) 0.0.8.3: 1:Downgrade to .net framework 3.5 sp1 2:Fix some bugs 3:Refactor Mysql EntranceThe Ghost: DEL3SWE: DEL3SWETMap for VS2010: TMap for Visual Studio 2010: TMap for Visual Studio 2010Sogeti has developed a testing process template that integrates the TMap test approach with Visual Studio 2010 (VS2010)....TRXtoHTML: TRXtoHTML v1.1: Minor updateVisual Studio Find Results Window Tweak: Find Results Window Tweak: First stable release of the tool, which enables you to tweak the find results window.Web Service Software Factory: 15 Minute Walkthrough for WSSF2010: This walkthrough provides a very brief introduction for those who either do not have a lot of time for a full introduction, or those who are lookin...Web Service Software Factory: Hands On Lab - Building a Web Service (VS2010): This hands-on lab provides eight exercies to briefly introduce most of the experiences of building a Web service using the Service Factory 2010. Th...Web Service Software Factory: Web Service Software Factory 2010 Source Code: System Requirements • Microsoft Visual Studio 2010 (Premium, Professional or Ultimate Edition) • Guidance Automation Extensions 2010 • Visu...WPF Alphabet: Source Code plus Binaries: Compete C# and WPF source code available below. 
I have also included the binary for those that just want to run it.WPF AutoCompleteBox for Chinese Spell: CSAutoBox V1.0: This is CSAutoBox V1.0 Beta,if you have any questions,please email me.Most Popular ProjectsRawrWBFS ManagerSilverlight ToolkitAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryASP.NETMicrosoft SQL Server Community & SamplesPHPExcelMost Active ProjectsRawrpatterns & practices – Enterprise LibraryBlogEngine.NETIonics Isapi Rewrite FilterFarseer Physics EnginePHPExcelTweetSharpCaliburn: An Application Framework for WPF and SilverlightNB_Store - Free DotNetNuke Ecommerce Catalog ModulePokeIn Comet Ajax Library

    Read the article

  • SharePoint Saturday Michigan 2010 Recap, Slides, and Photos

    - by Brian Jackett
    This past weekend I attended SharePoint Saturday Michigan (SPSMI) in Ann Arbor, Michigan.  For those unfamiliar, SharePoint Saturday is a community driven event where various speakers gather to present at a FREE conference on all topics related to SharePoint.  This made my third SharePoint Saturday attended and second I’ve spoken at.  I believe today it was announced that about 210 people total attended the event.  I was very happy with the turnout, especially the ratio of male to female attendees.  Typically with computer related conferences the ratio leans towards more males attending, but both Peter Serzo (one of conference organizers) and I both commented to each other that at the end of the day it appeared to be close to 40% women in the crowd.  So here’s my recap of the weekend. Arrival     Friday afternoon I drove up from Columbus, OH to Ann Arbor, MI and arrived around 4pm.  I was attempting to avoid the rush hour traffic and construction backups.  Turned out to be a good idea because other speakers coming up Friday got stuck on a highway which literally closed down in both directions due to a bad accident.  I was talking my friend Sean McDonough through the highway closing and this was the first time I had seen a solid black traffic line on Google Maps.  Most of us are familiar with Green, Yellow, and Red, but this line was black if that tells you how bad it got. Speaker “Dinner”     Fast forward a few hours and it was time for the speaker “dinner.”  I put “dinner” in quotes because with this night alone SPSMI set a new bar for nicest and most extravagant speaker appreciation events for SharePoint Saturday.  By tapping into some very influential contacts, the conference organizers were able to provide a truck limo (yep you heard right) with refreshments, access to an underground suite at the Palace of Auburn Hills, and courtside tickets to see the Detroit Pistons play that night.  Being a Michigan native I have to say that I was absolutely floored by this experience and very thankful to our conference organizers Peter, Sebastian, and Jesse along with Trillium Teamologies. Sessions     The actual conference started Saturday morning at 9am with the keynote by Rob Collie who is the Microsoft program manager for PowerPivot.  The day continued and I attended the following sessions: Mike Watson (@mikewat) – “SharePoint 2010 Fight Night: Devs vs. Admins” Karl Swedeberg (@kswedberg) – “A Walk on the Client Side with jQuery“ [my session] Brian Jackett (@briantjackett) - “Real World Deployment of SharePoint 2007 Solutions” Jeff Willinger (@jwillie) - “Social Computing and Collaboration Inside and Outside the 4 Walls” Paul Schaeflein (@paulschaeflein) – “PowerShell for the SharePoint Developer” My Presentation     I had a great time presenting my session on Deploying SharePoint 2007 Solutions, but it wasn’t without its fair share of technical issues.  As my session was right after lunch I came in to my room 10 mins early to set up my laptop, slides, and demos.  As a quick background note, a few months ago I got an upgraded laptop from my company Sogeti and have been dual booting it between XP (factory installed) and Windows Server 2008 R2 w/ Hyper-V.  As such I had prepared all of my demo virtual machines to run under Hyper-V.  
About 3 minutes before my session was scheduled to start though it became apparent that I did not have the correct display drivers to connect Windows Server 2008 R2 to the projector…     As you can imagine this was a slight cause for concern as I was potentially going to be unable to give my presentation.  Luckily for me I usually prepare for such unforeseen issues and had my presentation and some spare VMs that would run on XP on my external hard drive.  Knowing this I rebooted my machine into XP and began my presentation without slides until about 5 mins into the session when everything was up and running on XP.  Despite this being the first time I gave this presentation I have to say it was one of my favorites I’ve given so far.  The audience was very engaged in the session and I received some great, positive feedback afterwards.  Thanks to all who attended my session, I appreciate it very much. Link to Presentation Files     For those of you who attended my session and would like my slides or demo PowerShell scripts they can be found on my SkyDrive at the link below.  Also, if you have a few minutes and wouldn’t mind rating my session I have this session posted on SpeakerRate.  As speakers we always appreciate any and all feedback attendees offer, so thank you if you are able to provide any. SkyDrive folder with session files Rate my SharePoint 2007 Solutions session   Picture Albums     For everyone else, here are my pictures from the weekend.  The first link is to my FaceBook album which will have tagging (recommend this one.)  The second is to my Live album if you care for higher resolution images. http://www.facebook.com/album.php?aid=2154482&id=21905041&l=a3fb72ee8c View Full Album Conclusion     A big thank you goes out to all of the organizers, speakers, sponsors, and attendees of SPSMI.  As I’ve said so many times, without each and every one of you these events wouldn’t be possible.  I thoroughly enjoyed this trip back to my home state and presenting a new session.  For those interested in my upcoming schedule I will be giving two sessions on PowerShell at SharePoint Saturday Charlotte in April, helping plan Stir Trek: Iron Man Edition in May, and I’m submitting sessions to Day of .Net Ann Arbor in May as well.  Beyond that I haven’t planned out any travels.  Thanks for reading my recap.  Look forward to more technical posts now that I have a short break in conferences.         -Frog Out   links: Michigan image

    Read the article

  • BizTalk host throttling &ndash; Singleton pattern and High database size

    - by S.E.R.
Originally posted on: http://geekswithblogs.net/SERivas/archive/2013/06/30/biztalk-host-throttling-ndash-singleton-pattern-and-high-database-size.aspx

I have worked for some days around the singleton pattern (for those unfamiliar with it, read this post by Victor Fehlberg) and have come across a few very interesting posts, among which one dealt with performance issues (here, also by Victor Fehlberg). Simply put: if you have an orchestration which implements the singleton pattern, then performance will continuously decrease as the orchestration receives and consumes messages, and that behavior is more obvious when the orchestration never ends (i.e. it keeps looping and never terminates or completes). As I experienced the same kind of problem (actually I was alerted by SCOM, which told me that the host was being throttled because of High database size), I thought it would be a good idea to dig a little bit and see what happens deep inside BizTalk and thus understand the reasons for this behavior.

NOTE: in this article, I will focus on this High database size throttling condition. I will try and work on the other conditions in some not too distant future…

Test conditions

The singleton orchestration
For the purpose of this study, I have created the following orchestration, which is a very basic implementation of a singleton that piles up incoming messages, then does something else when a certain timeout has been reached without receiving another message:

Throttling settings
I have two distinct hosts:
- one that hosts the receive port (basic FILE port): Ports_ReceiveHost
- one that hosts the orchestration: ProcessingHost
In order to emphasize the throttling mechanism, I have modified the throttling settings for each of these hosts as follows (all other parameters are set to the default value):
[Throttling thresholds] Message count in database: 500 (default value: 50000)

Evolution of performance counters when submitting messages
Since we are investigating the High database size throttling condition, here are the performance counters that we should take a look at (all of them are in the BizTalk:Message Agent performance object):
- Database size
- High database size
- Message delivery throttling state
- Message publishing throttling state
- Message delivery delay (ms)
- Message publishing delay (ms)
- Message delivery throttling state duration
- Message publishing throttling state duration
(If you are not used to Perfmon, I strongly recommend that you start using it right now: it is a wonderful tool that allows you to open the hood and see what is going on inside BizTalk – and other systems.)

Database size
It is quite obvious that we will start by watching the database size and high database size counters, just to see when the first reaches the configured threshold (500) and when the second rings the alarm.
NOTE: During this test I submitted 600 messages, one message at a time every 10ms, to see the evolution of the counters we have previously selected. It might not show very well on this screenshot, but here is what happened:
- From 15:46:50 to 15:47:50, the database size for the Ports_ReceiveHost host (blue line) kept growing until it reached a maximum of 504.
- At 15:47:50, the high database size alert fires.
At first I was surprised by this result: why is it the database size of the receiving host that keeps growing, since it is the processing host that piles up messages? Actually, it makes total sense. This counter measures the size of the database queue that is being filled by the host, not consumed.
Therefore, the high database size alert is raised on the host that fills the queue: Ports_ReceiveHost. More information is available on the Public MPWiki page.

Now, looking at the Message publishing throttling state for the receiving host (green line), we can see that a throttling condition has been reached at 15:47:50. We can also see that the Message publishing delay (ms) (blue line) has begun growing slowly from this point. All of this explains why performance keeps decreasing when a singleton keeps processing new messages: the database size grows, and when it has exceeded the Message count in database threshold, the host is throttled and the publishing delay keeps increasing.

Digging further
So, what happens to the database queue then? Is it flushed some day or does it keep growing and growing indefinitely? The real question being: will the host be throttled forever because of this singleton? To answer this question, I set the Message count in database threshold to 20 (this value is very low in order not to wait for too long, otherwise I certainly would have fallen asleep in front of my screen) and I submitted 30 messages. The test was started at 18:26. At 18:56 (i.e. exactly 30 min later) the throttling was stopped and the database size was divided by 2. 30 min later again, the database size had dropped to almost zero. I guess I'll have to find some documentation and do some more testing before I sort this out! My guess is that some maintenance job is at work here, though I cannot tell which one.

Digging even further
If we take a look at the Message delivery throttling state counter for the processing host, we can see that this host was also throttled during the submission of the 600 documents. The value for the counter was 1, meaning that the Message delivery incoming rate for the host instance exceeds the Message delivery outgoing rate * the specified Rate overdrive factor (percent) value. We will see this another day… :)

A last word
Let's end this article with a warning: DO NOT CHANGE THE THROTTLING SETTINGS LIGHTLY! The temptation can be great to just bypass throttling by setting very high values for each parameter (or zero in some cases, which simply disables throttling). Nevertheless, always keep in mind that this mechanism is here for a very good reason: to prevent your BizTalk infrastructure from exploding!! So whatever you do with those settings, do a lot of testing and benchmarking!
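As a side note (this sketch is added here for illustration and is not part of the original article), the same counters can be read programmatically with System.Diagnostics.PerformanceCounter, which is handy if you want to log them alongside a load test. The counter and category names are the ones listed above; the instance name ("ProcessingHost") and the one-second sampling interval are assumptions you would adapt to your own host instance.

using System;
using System.Diagnostics;
using System.Threading;

class ThrottlingMonitor
{
    static void Main()
    {
        // Category and counter names come from the article; the instance name is an assumption.
        const string category = "BizTalk:Message Agent";
        const string instance = "ProcessingHost";

        using (var dbSize = new PerformanceCounter(category, "Database size", instance))
        using (var deliveryState = new PerformanceCounter(category, "Message delivery throttling state", instance))
        using (var publishState = new PerformanceCounter(category, "Message publishing throttling state", instance))
        {
            // Sample once per second; a non-zero throttling state means the host is being throttled.
            while (true)
            {
                Console.WriteLine("{0:T}  db size={1}  delivery state={2}  publishing state={3}",
                    DateTime.Now,
                    dbSize.NextValue(),
                    deliveryState.NextValue(),
                    publishState.NextValue());
                Thread.Sleep(1000);
            }
        }
    }
}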

    Read the article

  • Test-Drive ASP.NET MVC Review

    - by Ben Griswold
    A few years back I started dallying with test-driven development, but I never fully committed to the practice. This wasn’t because I didn’t believe in the value of TDD; it was more a matter of not completely understanding how to incorporate “test first” into my everyday development. Back in my web forms days, I could point fingers at the framework for my ignorance and laziness. After all, web forms weren’t exactly designed for testability so who could blame me for not embracing TDD in those conditions, right? But when I switched to ASP.NET MVC and quickly found myself fresh out of excuses and it became instantly clear that it was time to get my head around red-green-refactor once and for all or I would regretfully miss out on one of the biggest selling points the new framework had to offer. I have previously written about how I learned ASP.NET MVC. It was primarily hands on learning but I did read a couple of ASP.NET MVC books along the way. The books I read dedicated a chapter or two to TDD and they certainly addressed the benefits of TDD and how MVC was designed with testability in mind, but TDD was merely an afterthought compared to, well, teaching one how to code the model, view and controller. This approach made some sense, and I learned a bunch about MVC from those books, but when it came to TDD the books were just a teaser and an opportunity missed.  But then I got lucky – Jonathan McCracken contacted me and asked if I’d review his book, Test-Drive ASP.NET MVC, and it was just what I needed to get over the TDD hump. As the title suggests, Test-Drive ASP.NET MVC takes a different approach to learning MVC as it focuses on testing right from the very start. McCracken wastes no time and swiftly familiarizes us with the framework by building out a trivial Quote-O-Matic application and then dedicates the better part of his book to testing first – first by explaining TDD and then coding a full-featured Getting Organized application inspired by David Allen’s popular book, Getting Things Done. If you are a learn-by-example kind of coder (like me), you will instantly appreciate and enjoy McCracken’s style – its fast-moving, pragmatic and focused on only the most relevant information required to get you going with ASP.NET MVC and TDD. The book continues with the test-first theme but McCracken moves away from the sample application and incorporates other practical skills like persisting models with NHibernate, leveraging Inversion of Control with the IControllerFactory and building a RESTful web service. What I most appreciated about this section was McCracken’s use of and praise for open source libraries like Rhino Mocks, SQLite and StructureMap (to name just a few) and productivity tools like ReSharper, Web Platform Installer and ASP.NET SQL Server Setup Wizard.  McCracken’s emphasis on real world, pragmatic development was clearly demonstrated in every tool choice, straight-forward code block and developer tip. Whether one is already familiar with the tools/tips or not, McCracken’s thought process is easily understood and appreciated. The final section of the book walks the reader through security and deployment – everything from error handling and logging with ELMAH, to ASP.NET Health Monitoring, to using MSBuild with automated builds, to the deployment  of ASP.NET MVC to various web environments. These chapters, like those prior, offer enough information and explanation to simply help you get the job done.  
Do I believe Test-Drive ASP.NET MVC will turn you into an expert MVC developer overnight?  Well, no.  I don’t think any book can make that claim.  If that were possible, I think book list prices would skyrocket!  That said, Test-Drive ASP.NET MVC provides a solid foundation and a unique (and dare I say necessary) approach to learning ASP.NET MVC.  Along the way McCracken shares loads of very practical software development tips and references numerous tools and libraries. The bottom line is it’s a great ASP.NET MVC primer – if you’re new to ASP.NET MVC it’s just what you need to get started.  Do I believe Test-Drive ASP.NET MVC will give you everything you need to start employing TDD in your everyday development?  Well, I used to think that learning TDD required a lot of practice and, if you’re lucky enough, the guidance of a mentor or coach.  I used to think that one couldn’t learn TDD from a book alone. Well, I’m still no pro, but I’m testing first now and Jonathan McCracken and his book, Test-Drive ASP.NET MVC, played a big part in making this happen.  If you are an MVC developer and a TDD newb, Test-Drive ASP.NET MVC is just the book for you.

    Read the article

  • CodePlex Daily Summary for Monday, April 26, 2010

    CodePlex Daily Summary for Monday, April 26, 2010New Projects.Net 4.0 WPF Twitter Client: A .Net 4.0 Twitter client application built in WPF using PrismAop 权限管理 license授权: 授权管理 Aop应用asp.mvc example with ajax: asp.mvc example with ajaxBojinx: Flex 4 Framework and Management utilities IOC Container Flexible and powerful event management and handling Create your own Processors and Annotat...Browser Gallese: Il Browser Gallese serve per andare in internet quasi SICURO e GratiseaaCaletti: 2. Semester project @ ÅrhusEnki Char 2 BIN: A program that converts characters to binary and vice versaFsGame: A wrapper over xna to improve its usage from F#jouba Images Converter: Convert piles of images to all key formats at one go! Make quick adjustments - resize, rotate. Get your pictures ready to be printed or uploaded to...MVC2 RESTful Library: A RESTFul Library using ASP.NET MVC 2.0My Schedule Tool: Schedule can be used to create simple or complex schedules for executing tens. NAntDefineTasks: NAnt Define Tasks allows you to define NAnt tasks in terms of other NAnt tasks, instead of having to write any C# code. PowerShell Admin Modules: PAM supplies a number of PowerShell modules satisfying the needs of Windows administrators. By pulling together functions for adminsitering files a...Search Youtube Direct: Another project By URPROB This is a sample application that take search videos from youtube. The videos are shown with different attributes catagor...Simple Site Template: Simple Site Template 一个为了方便网站项目框架模板~Slaff's demo project: this is my demo projectSSIS Credit Card Number Validator 08 (CCNV08): Credit Card Number Validator 08 (CCNV08) is a Custom SSIS Data Flow Transformation Component for SQL Server 2008 that determines whether the given ...Su-Lekhan: Su-Lekhan is lite weight source code editor with refactoring support which is extensible to multiple languagesV-Data: V-Data es una herramienta de calidad de datos (Data Quality) que mejora considerablemente la calidad de la información disponible en cualquier sist...zy26: zy26 was here...New ReleasesActive Directory User Properties Change: Source Code v 1.0: Full Source Code + Database Creation SQL Script You need to change some values in web.config and DataAccess.vb. Hope you will get the most benefit!!Bojinx: Bojinx Core V4.5: V 4.5 Stable buildBrowser Gallese: Browser 1.0.0.9: Il Browser Gallese va su internet veloce,gratis e Sicuro dalla nascita Su questo sito ci sono i fle d'istallazione e il codice sorgente scritti con...CSS 360 Planetary Calendar: LCO: Documents for the LCO MilestoneGArphics: Backgrounds: GArphics examples converted into PNG-images with a resolution of 1680x1050.GArphics: Beta v0.8: Beta v0.8. Includes built-in examples and a lot of new settings.Helium Frog Animator: Helium Frog Flow Diagram: This file contains a flow diagram showing how the program functions. The image is in .bmp format.Highlighterr for Visual C++ 2010: Highlighterr for Visual C++ 2010 Test Release: Seems I'm updating this on a daily basis now, I keep finding things to add/fix. So stay tuned for updates! Current Version: 1.02 Now picks up chan...HTML Ruby: 6.22.2: Slightly improved performance on inserted ruby Cleaned up the code someiTuner - The iTunes Companion: iTuner 1.2.3767 Beta 3: Beta 3 is requires iTunes 9.1.0.79 or later A Librarian status panel showing active and queued Librarian scanners. 
This will be hidden behind the ...Jet Login Tool (JetLoginTool): Logout - 1.5.3765.28026: Changed: Will logout from Jet if the "Stop" button is hit Fixed: Bug where A notification is received that the engine has stopped, but it hasn't ...jouba Images Converter: Release: The first beta release for Images ConverterKeep Focused - an enhanced tool for Time Management using Pomodoro Technique: Release 0.3 Alpha: Release Notes (Release 0.3 Alpha) The alpha preview provides the basic functionality used by the Pomodoro Technique. Files 1. Keep Focused Exe...Mercurial to Team Foundation Server Work Item Hook: Version 0.2: Changes Include: Eliminates the need for the Suds package by hand-rolling its own soap implementation Ability to specify columns and computed col...Mews: Mews.Application V0.8: Installation InstuctionsNew Features16569 16571 Fixed Issues16668 Missing Features16567 Known Remaining IssuesLarge text-only articles require la...MiniTwitter: 1.12: MiniTwitter 1.12 更新内容 注意 このバージョンは開発中につき、正式リリースまでに複数回更新されることがあります。 また不安定かもしれませんので、新機能を試したいという方以外には 1.11 をお勧めします。 追加 List 機能に仮対応 タイムラインの振り分け条件として...Multiwfn: multiwfn1.3.1_binary: multiwfn1.3.1_binaryMultiwfn: multiwfn1.3.1_source: multiwfn1.3.1_sourceMVC2 RESTful Library: Source Code: VS2010 Source CodeMy Schedule Tool: MyScheduleTool 0.1: Initial version for MyScheduleTool. If you have any comments about the project, please let me know. My email is cmicmobile@gmail.comNestoria.NET: Nestoria.NET 0.8.1: Provides access to the Nestoria API. Documentation contains a basic getting started guide. Please visit Darren Edge's blog for ongoing developmen...OgmoXNA: OgmoXNA Binaries: Binary release libraries for OgmoXNA and OgmoXNA content pipeline extensions. Includes both Windows and Xbox360 binaries in separate directories.OgmoXNA: OgmoXNA Source: NOTE: THE ASSETS PROVIDED IN THE DEMO GAME DISTRIBUTED WITH THIS SOURCE ARE PROPERTY OF MATT THORSON, AND ARE NOT TO BE USED FOR ANY OTHER PROJECT...PokeIn Comet Ajax Library: PokeIn v07 x64: New FeaturesInstant Client Disconnected Detection Disable Server Push Technology OptionPokeIn Comet Ajax Library: PokeIn v07 x86: New FeaturesInstant Client Disconnected Detection Disable Server Push Technology OptionPowerShell Admin Modules: PAM 0.1: Version 0.1 includes share moduleRehost Image: 1.3.8: Added option to remove resolution/size text from the thumbnails of uploaded images in ImageShack. 
Fixed issue with FTP servers that give back jun...Site Directory for SharePoint 2010 (from Microsoft Consulting Services, UK): v1.3: Same as v1.2 with the following changes: Source Code and WSP updated to SharePoint 2010 and Visual Studio 2010 RTM releases Fields seperated into...SSIS Credit Card Number Validator 08 (CCNV08): CCNV08-1004Apr-R1: Version Released in Apr 2010SSIS Twitter Suite: v2.0: Upgraded Twitter Task to SSIS 2008sTASKedit: sTASKedit v0.7 Alpha: Features:Import of v56 (CN 1.3.6 v101) and v79 (PWI 1.4.2 v312) tasks.data files Export to v56 (Client) and v55 (Server) Clone & Delete Tasks ...StyleCop for ReSharper: StyleCop for ReSharper 5.0.14724.1: StyleCop for ReSharper 5.0.14724.1 =============== StyleCop 4.3.3.0 ReSharper 5.0.1659.36 Visual Studio 2008 / Visual Studio 2010 Fixes === G...Windows Live ID SSO Luminis Integration: Version 1.2: This new version has been updated for accomodating the new release of the Windows Live SSO Toolkit v4.2.WPF Application Framework (WAF): WPF Application Framework (WAF) 1.0.0.11: Version: 1.0.0.11 (Milestone 11): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requi...XAML Code Snippets addin for Visual Studio 2010: Release for VS 2010 RTM: Release built for Visual Studio 2010 RTMXP-More: 1.0: Release Notes Improved VM folder recognition Added About window Added input validation and exception handling If exist, vud and vpcbackupfile...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitSilverlight ToolkitMicrosoft SQL Server Product Samples: Databasepatterns & practices – Enterprise LibraryWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesPHPExcelMost Active Projectspatterns & practices – Enterprise LibraryRawrGMap.NET - Great Maps for Windows Forms & PresentationBlogEngine.NETParticle Plot PivotNB_Store - Free DotNetNuke Ecommerce Catalog ModuleDotNetZip LibraryFarseer Physics EngineN2 CMSpatterns & practices: Composite WPF and Silverlight

    Read the article

  • Silverlight Firestarter Wrap Up and WCF RIA Services Talk Sample Code

    - by dwahlin
    I had a great time attending and speaking at the Silverlight Firestarter event up in Redmond on December 2, 2010. In addition to getting a chance to hang out with a lot of cool people from Microsoft such as Scott Guthrie, John Papa, Tim Heuer, Brian Goldfarb, John Allwright, David Pugmire, Jesse Liberty, Jeff Handley, Yavor Georgiev, Jossef Goldberg, Mike Cook and many others, I also had a chance to chat with a lot of people attending the event and hear about what projects they’re working on which was awesome. If you didn’t get a chance to look through all of the new features coming in Silverlight 5 check out John Papa’s post on the subject. While at the Silverlight Firestarter event I gave a presentation on WCF RIA Services and wanted to get the code posted since several people have asked when it’d be available. The talk can be viewed by clicking the image below. Code from the talk follows as well as additional links. I had a few people ask about the green bracelet on my left hand since it looks like something you’d get from a waterpark. It was used to get us access down a little hall that led backstage and allowed us to go backstage during the event. I thought it looked kind of dorky but it was required to get through security. Sample Code from My WCF RIA Services Talk (To login to the 2 apps use “user” and “P@ssw0rd”. Make sure to do a rebuild of the projects in Visual Studio before running them.) View All Silverlight Firestarter Talks and Scott Guthrie’s Keynote WCF RIA Services SP1 Beta for Silverlight 4 WCF RIA Services Code Samples (including some SP1 samples) Improved binding support in EntitySet and EntityCollection with SP1 (Kyle McClellan’s Blog) Introducing an MVVM-Friendly DomainDataSource: The DomainCollectionView (Kyle McClellan’s Blog) I’ve had the chance to speak at a lot of conferences but never with as many cameras, streaming capabilities, people watching live and overall hype involved. Over 1000 people registered to attend the conference in person at the Microsoft campus and well over 15,000 to watch it through the live stream.  The event started for me on Tuesday afternoon with a flight up to Seattle from Phoenix. My flight was delayed 1 1/2 hours (I seem to be good at booking delayed flights) so I didn’t get up there until almost 8 PM. John Papa did a tech check at 9 PM that night and I was scheduled for 9:30 PM. We basically plugged in my laptop backstage (amazing number of servers, racks and audio devices back there) and made sure everything showed up properly on the projector and the machines recording the presentation. In addition to a dedicated show director, there were at least 5 tech people back stage and at least that many up in the booth running lights, audio, cameras, and other aspects of the show. I wish I would’ve taken a picture of the backstage setup since it was pretty massive – servers all over the place. I definitely gained a new appreciation for how much work goes into these types of events. Here’s what the room looked like right before my tech check– not real exciting at this point. That’s Yavor Georgiev (who spoke on WCF Services at the Firestarter) in the background. We had plenty of monitors to reference during the presentation. Two monitors for slides (right and left side) and a notes monitor. The 4th monitor showed the time and they’d type in notes to us as we talked (such as “You’re over time!” in my case since I went around 4 minutes over :-)). 
Wednesday morning I went back on campus at Microsoft and watched John Papa film a few Silverlight TV episodes with Dave Campbell and Ryan Plemons.   Next I had the chance to watch the dry run of the keynote with Scott Guthrie and John Papa. We were all blown away by the demos shown since they were even better than expected. Starting at 1 PM on Wednesday I went over to Building 35 and listened to Yavor Georgiev (WCF Services), Jaime Rodriguez (Windows Phone 7), Jesse Liberty (Data Binding) and Jossef Goldberg and Mike Cook (Silverlight Performance) give their different talks and we all shared feedback with each other which was a lot of fun. Jeff Handley from the RIA Services team came afterwards and listened to me give a dry run of my WCF RIA Services talk. He had some great feedback that I really appreciated getting. That night I hung out with John Papa and Ward Bell and listened to John walk through his keynote demos. I also got a sneak peak of the gift given to Dave Campbell for all his work with Silverlight Cream over the years. It’s a poster signed by all of the key people involved with Silverlight: Thursday morning I got up fairly early to get to the event center by 8 AM for speaker pictures. It was nice and quiet at that point although outside the room there was a huge line of people waiting to get in.     At around 8:30 AM everyone was let in and the main room was filled quickly. Two other overflow rooms in the Microsoft conference center (Building 33) were also filled to capacity. At around 9 AM Scott Guthrie kicked off the event and all the excitement started! From there it was all a blur but it was definitely a lot of fun. All of the sessions for the Silverlight Firestarter were recorded and can be watched here (including the keynote). Corey Schuman, John Papa and I also released 11 lab exercises and associated videos to help people get started with Silverlight. Definitely check them out if you’re interested in learning more! Level 100: Getting Started Lab 01 - WinForms and Silverlight Lab 02 - ASP.NET and Silverlight Lab 03 - XAML and Controls Lab 04 - Data Binding Level 200: Ready for More Lab 05 - Migrating Apps to Out-of-Browser Lab 06 - Great UX with Blend Lab 07 - Web Services and Silverlight Lab 08 - Using WCF RIA Services Level 300: Take me Further Lab 09 - Deep Dive into Out-of-Browser Lab 10 - Silverlight Patterns: Using MVVM Lab 11 - Silverlight and Windows Phone 7

    Read the article

  • PHP OCI8 and Oracle 11g DRCP Connection Pooling in Pictures

    - by christopher.jones
    Here is a screen shot from a PHP OCI8 connection pooling demo that I like to run. It graphically shows how little database host memory is needed when using DRCP connection pooling with Oracle Database 11g. Migrating to DRCP can be as simple as starting the pool and changing the connection string in your PHP application. The script that generated the data for this graph was a simple "Parts" query application being run under various simulated user loads. I was running the database on a small Oracle Linux server with just 2G of memory. I used PHP OCI8 1.4. Apache is in pre-fork mode, as needed for PHP. Each graph has time on the horizontal access in arbitrary 'tick' time units. Click the image to see it full sized. Pooled connections Beginning with the top left graph, At tick time 65 I used Apache's 'ab' tool to start 100 concurrent 'users' running the application. These users connected to the database using DRCP: $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl:pooled'); A second hundred DRCP users were added to the system at tick 80 and a final hundred users added at tick 100. At about tick 110 I stopped the test and restarted Apache. This closed all the connections. The bottom left graph shows the number of statements being executed by the database per second, with some spikes for background database activity and some variability for this small test. Each extra batch of users adds another 'step' of load to the system. Looking at the top right Server Process graph shows the database server processes doing the query work for each web user. As user load is added, the DRCP server pool increases (in green). The pool is initially at its default size 4 and quickly ramps up to about (I'm guessing) 35. At tick time 100 the pool increases to my configured maximum of 40 processes. Those 40 processes are doing the query work for all 300 web users. When I stopped the test at tick 110, the pooled processes remained open waiting for more users to connect. If I had left the test quiet for the DRCP 'inactivity_timeout' period (300 seconds by default), the pool would have shrunk back to 4 processes. Looking at the bottom right, you can see the amount of memory being consumed by the database. During the initial quiet period about 500M of memory was in use. The absolute number is just an indication of my particular DB configuration. As the number of pooled processes increases, each process needs more memory. You can see the shape of the memory graph echoes the Server Process graph above it. Each of the 300 web users will also need a few kilobytes but this is almost too small to see on the graph. Non-pooled connections Compare the DRCP case with using 'dedicated server' processes. At tick 140 I started 100 web users who did not use pooled connections: $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl'); This connection string change is the only difference between the two tests. At ticks 155 and 165 I started two more batches of 100 simulated users each. At about tick 195 I stopped the user load but left Apache running. Apache then gradually returned to its quiescent state, killing idle httpd processes and producing the downward slope at the right of the graphs as the persistent database connection in each Apache process was closed. The Executions per Second graph on the bottom left shows the same step increases as for the earlier DRCP case. The database is handling this load. But look at the number of Server processes on the top right graph. 
There is now a one-to-one correspondence between Apache/PHP processes and DB server processes. Each PHP process has one DB server process dedicated to it. Hence the term 'dedicated server'. The memory required on the database is proportional to all those database server processes started. Almost all my system's memory was consumed. I doubt it would have coped with any more user load.

Summary
Oracle Database 11g DRCP connection pooling significantly reduces database host memory requirements, allowing more system memory to be allocated to the SGA and allowing the system to scale to handle thousands of concurrent PHP users. Even for small systems, using DRCP allows more web users to be active. More information about PHP and DRCP can be found in the PHP Scalability and High Availability chapter of The Underground PHP and Oracle Manual.

    Read the article

  • My History with Agile

    - by Robert May
    I’m going to write my history with Agile here.  That way, in future posts, I can refer back to it, instead of typing it out in the post that contains information you may actually want to read.  Note that I’m actually a pretty senior developer, and do lots of technical interviews.  I’m an Agile fan because of the difference it makes in peoples lives and the improvement in quality it brings, and I’ll sacrifice my technological advance to help teams. Management History I started management pretty early in my career, starting with the first job that I ever had.  I actually do NOT have a CS or similar degree.  I have a Bachelor’s of Business Administration with an emphasis in Computer Information Systems. My first management gigs were around call center work and were very schedule oriented.  I didn’t understand the true value of teams, and I’m ashamed to admit, I actually installed a fingerprint scanner as a time clock in this job.  I shudder to think of the impact that I had on the team spirit.  I didn’t even trust them enough to fill out their time cards correctly.  How sad. I was managing nearly 100 people in this position, with the help of a great set of subordinates. I did try to come up with reward programs for the team, but again, didn’t understand the concept of team, so instead of letting the team determine how the rewards should work, I mandated from on high, which isn’t a good thing. I was told that I wasn’t the type that would be a good manager by people whom I respected a lot.  They said it because I was a computer geek, since they don’t understand good management either, but in retrospect, they were right about me then.  I was too green. After my first job, I went on to other jobs and with the exception of one job, I’ve managed people at them all.  The rest of the management story is important for understanding agile, so I’ll save it for my next post. Technical History I’ve been in software development for many, many years.  I technically started programming on a commodore 64 in basic.  I didn’t know that I was programming, but I was sure having fun.  That was followed by batch files, Gorilla hacking (I always had to win), WordPerfect Macro programming and other things that taught me the basics. My first “real” job was with a telephone company, and that’s where I made my first database application in DataEase, wrote my first VBA app and started using real programming tools, like turbo pascal, vb3-vb5, and semi-real tools like RPG and VisualRPG.  I wrote my first web page in 1994, and built my first data driven web page in 1995 using perlDB.  You really can do anything with Perl.  At this time, I also started a Linux based internet service provider that is still in operation today.  One of the people I worked with is now a Microsoft employee building and designing frameworks you probably know well.  Smart guy.  I also built my first ASP applications connecting to Sql Server 6.5, setup Exchange 5.5 for the company, and many other system administration stuff.  I’m a programmer by choice, mostly because I don’t really like PC support. From there, I went on to a large state agency.  I got to see and maintain true waterfall projects.  5 years of maintaining the 200 VB COM+ (MTS, actually) dlls that were used to calculate a single number is a long time.  That was all Microsoft DNS technologies.  SQL Server and VB6 were the tools of choice, although .net started to be a factor near the end of employment.  
I did some heavy XML work at this job and even wrote an XSD parser and validator in VB6 that was a shim until MSXML 3.0 came out.  Prior to 3.0, XSD’s weren’t supported, and I didn’t want to write DTDs. Ironically, jobs after this were more generic.  I pretty much settled in on the .net framework and revisions of it.  Lots of WPF, some silverlight, lots of ASP.NET, some SQL Azure, lots of SQL Server, some Oracle, but I don’t think that I was as passionate about development and technologies.  I was more into the management of development.  I like people. Technorati Tags: Agile,history

    Read the article

  • What is the most efficient way to convert to binary and back in C#?

    - by Saad Imran.
I'm trying to write a general purpose socket server for a game I'm working on. I know I could very well use already built servers like SmartFox and Photon, but I want to go through the pain of creating one myself for learning purposes. I've come up with a BSON-inspired protocol to convert the basic data types, their arrays, and a special GSObject to binary and arrange them in a way so that they can be put back together into object form on the client end. At the core, the conversion methods utilize the .NET BitConverter class to convert the basic data types to binary. Anyway, the problem is performance: if I loop 50,000 times and convert my GSObject to binary each time, it takes about 5500ms (the resulting byte[] is just 192 bytes per conversion). I think this would be way too slow for an MMO that sends 5-10 position updates per second with 1000 concurrent users. Yes, I know it's unlikely that a game will have 1000 users on at the same time, but like I said earlier this is supposed to be a learning process for me; I want to go out of my way and build something that scales well and can handle at least a few thousand users. So yeah, if anyone's aware of other conversion techniques or sees where I'm losing performance I would appreciate the help.

GSBitConverter.cs
This is the main conversion class; it adds extension methods to the main data types to convert them to the binary format. It uses the BitConverter class to convert the base types. I've shown only the code to convert integers and integer arrays, but the rest of the methods are pretty much replicas of those two; they just overload the type.

using System;
using System.Collections.Generic;
using System.Linq;

public static class GSBitConverter
{
    public static byte[] ToGSBinary(this short value)
    {
        return BitConverter.GetBytes(value);
    }

    public static byte[] ToGSBinary(this IEnumerable<short> value)
    {
        List<byte> bytes = new List<byte>();
        short length = (short)value.Count();
        bytes.AddRange(length.ToGSBinary());
        for (int i = 0; i < length; i++)
            bytes.AddRange(value.ElementAt(i).ToGSBinary());
        return bytes.ToArray();
    }

    // The remaining overloads follow the same pattern (bodies omitted in the question):
    public static byte[] ToGSBinary(this bool value);
    public static byte[] ToGSBinary(this IEnumerable<bool> value);
    public static byte[] ToGSBinary(this IEnumerable<byte> value);
    public static byte[] ToGSBinary(this int value);
    public static byte[] ToGSBinary(this IEnumerable<int> value);
    public static byte[] ToGSBinary(this long value);
    public static byte[] ToGSBinary(this IEnumerable<long> value);
    public static byte[] ToGSBinary(this float value);
    public static byte[] ToGSBinary(this IEnumerable<float> value);
    public static byte[] ToGSBinary(this double value);
    public static byte[] ToGSBinary(this IEnumerable<double> value);
    public static byte[] ToGSBinary(this string value);
    public static byte[] ToGSBinary(this IEnumerable<string> value);
    public static string GetHexDump(this IEnumerable<byte> value);
}

Program.cs
Here's the object that I'm converting to binary in a loop.

using System;
using System.Diagnostics;

class Program
{
    static void Main(string[] args)
    {
        GSObject obj = new GSObject();
        obj.AttachShort("smallInt", 15);
        obj.AttachInt("medInt", 120700);
        obj.AttachLong("bigInt", 10900800700);
        obj.AttachDouble("doubleVal", Math.PI);
        obj.AttachStringArray("muppetNames",
            new string[] { "Kermit", "Fozzy", "Piggy", "Animal", "Gonzo" });

        // Each fruit object gets its own attributes.
        GSObject apple = new GSObject();
        apple.AttachString("name", "Apple");
        apple.AttachString("color", "red");
        apple.AttachBool("inStock", true);
        apple.AttachFloat("price", (float)1.5);

        GSObject lemon = new GSObject();
        lemon.AttachString("name", "Lemon");
        lemon.AttachString("color", "yellow");
        lemon.AttachBool("inStock", false);
        lemon.AttachFloat("price", (float)0.8);

        GSObject apricoat = new GSObject();
        apricoat.AttachString("name", "Apricoat");
        apricoat.AttachString("color", "orange");
        apricoat.AttachBool("inStock", true);
        apricoat.AttachFloat("price", (float)1.9);

        GSObject kiwi = new GSObject();
        kiwi.AttachString("name", "Kiwi");
        kiwi.AttachString("color", "green");
        kiwi.AttachBool("inStock", true);
        kiwi.AttachFloat("price", (float)2.3);

        GSArray fruits = new GSArray();
        fruits.AddGSObject(apple);
        fruits.AddGSObject(lemon);
        fruits.AddGSObject(apricoat);
        fruits.AddGSObject(kiwi);

        obj.AttachGSArray("fruits", fruits);

        Stopwatch w1 = Stopwatch.StartNew();
        for (int i = 0; i < 50000; i++)
        {
            byte[] b = obj.ToGSBinary();
        }
        w1.Stop();

        Console.WriteLine(BitConverter.IsLittleEndian ? "Little Endian" : "Big Endian");
        Console.WriteLine(w1.ElapsedMilliseconds + "ms");
    }
}

Here's the code for some of my other classes that are used in the code above. Most of it is repetitive: GSObject, GSArray, GSWrappedObject.
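One direction worth exploring for the performance question above (this is a sketch added here for illustration, not from the original post) is to avoid the many small allocations that BitConverter.GetBytes plus List<byte>.AddRange cause per value, and instead stream every value into one reusable buffer with a BinaryWriter over a MemoryStream. The payload below uses only primitive types, so it sidesteps the GSObject API entirely; the helper name and sample values are made up for the example.

using System;
using System.Diagnostics;
using System.IO;

class WriterBenchmark
{
    // Writes one sample payload into the shared writer instead of building
    // a new List<byte> and many small byte[] arrays for each value.
    static void WritePayload(BinaryWriter w, short[] shorts, string[] names)
    {
        w.Write((short)shorts.Length);          // length prefix, as in ToGSBinary
        for (int i = 0; i < shorts.Length; i++)
            w.Write(shorts[i]);

        w.Write((short)names.Length);
        for (int i = 0; i < names.Length; i++)
            w.Write(names[i]);                  // BinaryWriter length-prefixes strings itself

        w.Write(10900800700L);                  // long
        w.Write(Math.PI);                       // double
    }

    static void Main()
    {
        short[] shorts = { 1, 2, 3, 4, 5 };
        string[] names = { "Kermit", "Fozzy", "Piggy", "Animal", "Gonzo" };

        using (var stream = new MemoryStream(256))
        using (var writer = new BinaryWriter(stream))
        {
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < 50000; i++)
            {
                stream.SetLength(0);            // reuse the buffer instead of reallocating
                WritePayload(writer, shorts, names);
                byte[] b = stream.ToArray();    // one copy per message
            }
            sw.Stop();
            Console.WriteLine(sw.ElapsedMilliseconds + "ms");
        }
    }
}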

    Read the article

  • Recent Innovations to ILOM

    - by B.Koch
    by Josh Rosen If you are wondering how Oracle can make some of the most advanced, reliable, and fault tolerant servers on the market, look no further than Oracle Integrated Lights Out Manager or ILOM.  We build ILOM into every server we create, from Oracle x86 Systems such as X3-2 to the SPARC T-Series family. Oracle ILOM is an embedded service processor, but it's really more than that.  It's a computer within a computer.  It's smart, it's tightly integrated into all aspects of the server's operation, and it's a big reason why Oracle servers are used for some of the most mission-critical workloads out there. To understand the value of ILOM, there is no better place to start than its fault management capability.  We have taken the sophisticated fault management architecture from Solaris, developed and refined over a decade, and built it into each and every ILOM. ILOM detects a potential issue at its earliest stage, watching low-level sensors.   If the root cause of a problem is not clear from a single error reading, ILOM will look for other clues and combine multiple pieces of information to correctly identify a failing component. ILOM provides peace of mind. We tailor our fault management for each new server platform that we produce.  You can rest assured that it's always actively keeping the server healthy.  And if there is a problem, you can be confident it will let you know by sending you a notification by e-mail or trap. We also heard IT managers tell us they needed a Ph.D. in computer engineering to manage today's servers. It doesn't have to be that way.  Thanks to the latest innovations to Oracle ILOM, we present hardware inventory and status in way that makes sense – to anyone.  Green means everything is healthy and red means something is wrong.  When a component needs to be replaced a clear message indicates where the problem is and points you at a knowledge article about that problem.  It's that simple. Simpler management and simple interfaces mean reduced complexity and lower costs to manage.  And we know that's really important. ILOM does all this while also providing advanced service processor features you depend on for managing enterprise class systems.  You can remotely control the server power, interact with a virtual video console for the server, and mount media on the server remotely.  There is no need to spend money on a KVM switch to get this functionality. And when people hear how advanced ILOM is, they can't believe ILOM is free.  All features are enabled and included with each Oracle server that you buy.  There are no advanced licenses you need to purchase or features to unlock. Configuring ILOM has also never been easier.  It is now possible to configure almost all aspects of the server directly from ILOM.  This includes changing BIOS settings, persistently modifying boot order, and optimizing power settings -- all directly from ILOM. But Oracle's innovation does not stop with ILOM.  Oracle has engineered Oracle Enterprise Manager Ops Center to integrate directly with ILOM, providing centralized management across all of our servers. Ops Center will discover each of your Oracle servers over the network by searching for ILOMs.  When it finds one, it knows how to communicate with ILOM to monitoring and configure that server from application to disk. Since every server that Oracle produces, from x86 Systems to SPARC T-Series up and down the line, comes with Oracle ILOM, you can manage all Oracle servers in the same way.  
And while all of our servers may have different components on the inside, each with their specialized functions, the way you integrate them and the way you monitor and manage them is exactly the same. Oracle ILOM is state of the art.  If you are looking for a server that makes systems management simple and is easy to integrate and maintain, check out the latest advances to Oracle ILOM. Josh Rosen is a Principal Product Manager at Oracle and previously spent more than a decade as a developer and architect of system management software. Josh has worked on system management for many of Oracle's hardware products ranging from the earliest blade systems to the latest Oracle x86 servers.

    Read the article

  • Integrating Code Metrics in TFS 2010 Build

    - by Jakob Ehn
The build process template and custom activity described in this post are available here: http://cid-ee034c9f620cd58d.office.live.com/self.aspx/BlogSamples/CodeMetricsSample.zip

Running code metrics has been available since VS 2008, but only from inside the IDE. Yesterday Microsoft finally released a Visual Studio Code Metrics Power Tool 10.0, a command line tool that lets you run code metrics on your applications. This means that it is now possible to perform code metrics analysis on the build server as part of your nightly/QA builds (for example). In this post I will show how you can run the metrics command line tool, and also a custom activity that reads the output, appends the results to the build log, and fails the build if the metric values exceed certain (configurable) threshold values.

The code metrics tool analyzes all the methods in the assemblies, measuring cyclomatic complexity, class coupling, depth of inheritance and lines of code. Then it calculates a Maintainability Index from these values that is a measure of how maintainable each method is, between 0 (worst) and 100 (best). For information on how this value is calculated, see http://blogs.msdn.com/b/codeanalysis/archive/2007/11/20/maintainability-index-range-and-meaning.aspx. After this it aggregates the information and presents it at the class, namespace and module level as well.

Running Metrics.exe in a build definition
Running the actual tool is easy: just use an InvokeProcess activity last in the Compile the Project sequence, reference the metrics.exe file and pass the correct arguments, and you will end up with a result XML file in the drop directory. Here is how it is done in the attached build process template:

In the above sequence I first assign the path to the code metrics result file ([BinariesDirectory]\result.xml) to a variable called MetricsResultFile, which is then sent to the InvokeProcess activity in the Arguments property. Here are the arguments for the InvokeProcess activity:

Note that we tell metrics.exe to analyze all assemblies located in the Binaries folder. You might want to do some more intelligent filtering here; you probably don't want to analyze all 3rd party assemblies, for example. Note also the path to metrics.exe: this is the default location when you install the Code Metrics power tool. You must of course install the power tool on all build servers. Using the standard output logging (in the Handle Standard Output/Handle Error Output sections), we get the following output when running the build:

Integrating Code Metrics into the build
Having the results available next to the build result is nice, but we want to have the results integrated in the build result itself, and also to affect the outcome of the build. The point of having QA builds that measure, for example, code metrics is to make it very clear how the code being built measures up to the standards of the project/company. Just having an XML file available in the drop location will not cause the developers to improve their code, but a (partially) failing build will! To do this, we need to write a custom activity that parses the metrics result file, logs it to the build log and fails the build if the values from the metrics are below/above some predefined threshold values. The custom activity performs the following steps:
- Parses the XML. I'm using Linq 2 XSD for this; since the XML schema for the result file is available, it is very easy to generate code that lets you query the structure using standard Linq operators.
- Runs through the metric result hierarchy and logs the metrics for each level, and also verifies the maintainability index and the cyclomatic complexity against the threshold values. The threshold values are defined in the build process template and are sent in as arguments to the custom activity.
- If the threshold values are exceeded, the activity either fails or partially fails the current build.

For more information about the structure of the code metrics result file, read Cameron Skinner's post about it. It is very simple and easy to understand. I won't go through the code of the custom activity here, since there is nothing special about it and it is available for download so you can look at it and play with it yourself.

The threshold values for Maintainability Index and Cyclomatic Complexity are defined in the build process template, and can be modified per build definition:

I have taken the default values for these settings from my colleague Terje Sandström's post on Code Metrics - suggestions for appropriate limits. You'll notice that this is quite an improvement compared to using code metrics inside the IDE, where Red/Yellow/Green limits are fixed (and the default values are somewhat strange; see Terje's post for a discussion on this).

This is the first version of the code metrics integration with TFS 2010 Build. I will probably enhance the functionality and the logging (the "tree view" structure in the log becomes quite hard to read) soon. I will also consider adding it to the Community TFS Build Extensions site when it becomes a bit more mature. Another obvious improvement is to extend the data warehouse of TFS and push the metric results back to the warehouse and make it visible in the reports.
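For readers who do not want to generate Linq 2 XSD classes, a rough sketch of the same threshold check using plain LINQ to XML is shown below. This is an illustration added here, not code from the sample download; the element and attribute names (Module, Metrics, Metric, Name, Value) are assumptions based on the typical shape of a metrics.exe result file and should be verified against your own result.xml.

using System;
using System.Linq;
using System.Xml.Linq;

class MetricsChecker
{
    // Returns true if every module in the result file meets the given thresholds.
    // Element/attribute names are assumptions; check them against your result.xml.
    static bool CheckThresholds(string resultFile, int minMaintainabilityIndex, int maxCyclomaticComplexity)
    {
        XDocument doc = XDocument.Load(resultFile);
        bool ok = true;

        foreach (XElement module in doc.Descendants("Module"))
        {
            string name = (string)module.Attribute("Name");

            // Read the aggregated metrics directly under the module element.
            var metrics = module.Element("Metrics").Elements("Metric")
                .ToDictionary(m => (string)m.Attribute("Name"),
                              m => int.Parse((string)m.Attribute("Value")));

            if (metrics["MaintainabilityIndex"] < minMaintainabilityIndex)
            {
                Console.WriteLine("Module {0}: maintainability index {1} is below {2}",
                    name, metrics["MaintainabilityIndex"], minMaintainabilityIndex);
                ok = false;
            }

            if (metrics["CyclomaticComplexity"] > maxCyclomaticComplexity)
            {
                Console.WriteLine("Module {0}: cyclomatic complexity {1} exceeds {2}",
                    name, metrics["CyclomaticComplexity"], maxCyclomaticComplexity);
                ok = false;
            }
        }
        return ok;
    }

    static void Main(string[] args)
    {
        // Example: return a non-zero exit code (and thereby fail the build step) when thresholds are exceeded.
        Environment.ExitCode = CheckThresholds(args[0], 20, 10) ? 0 : 1;
    }
}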

    Read the article

  • GLSL Atmospheric Scattering Issue

    - by mtf1200
I am attempting to use Sean O'Neil's shaders to accomplish atmospheric scattering. For now I am just using SkyFromSpace and GroundFromSpace. The atmosphere works fine but the planet itself is just a giant dark sphere with a white blotch that follows the camera. I think the problem might rest in the "v3Attenuate" variable, as when this is removed the sphere is shown (albeit without scattering). Here is the vertex shader. Thanks for the time!

uniform mat4 g_WorldViewProjectionMatrix;
uniform mat4 g_WorldMatrix;
uniform vec3 m_v3CameraPos;           // The camera's current position
uniform vec3 m_v3LightPos;            // The direction vector to the light source
uniform vec3 m_v3InvWavelength;       // 1 / pow(wavelength, 4) for the red, green, and blue channels
uniform float m_fCameraHeight;        // The camera's current height
uniform float m_fCameraHeight2;       // fCameraHeight^2
uniform float m_fOuterRadius;         // The outer (atmosphere) radius
uniform float m_fOuterRadius2;        // fOuterRadius^2
uniform float m_fInnerRadius;         // The inner (planetary) radius
uniform float m_fInnerRadius2;        // fInnerRadius^2
uniform float m_fKrESun;              // Kr * ESun
uniform float m_fKmESun;              // Km * ESun
uniform float m_fKr4PI;               // Kr * 4 * PI
uniform float m_fKm4PI;               // Km * 4 * PI
uniform float m_fScale;               // 1 / (fOuterRadius - fInnerRadius)
uniform float m_fScaleDepth;          // The scale depth (i.e. the altitude at which the atmosphere's average density is found)
uniform float m_fScaleOverScaleDepth; // fScale / fScaleDepth

attribute vec4 inPosition;

vec3 v3ELightPos = vec3(g_WorldMatrix * vec4(m_v3LightPos, 1.0));
vec3 v3ECameraPos = vec3(g_WorldMatrix * vec4(m_v3CameraPos, 1.0));

const int nSamples = 2;
const float fSamples = 2.0;

varying vec4 color;

float scale(float fCos)
{
    float x = 1.0 - fCos;
    return m_fScaleDepth * exp(-0.00287 + x*(0.459 + x*(3.83 + x*(-6.80 + x*5.25))));
}

void main(void)
{
    gl_Position = g_WorldViewProjectionMatrix * inPosition;

    // Get the ray from the camera to the vertex and its length (which is the far point of the ray passing through the atmosphere)
    vec3 v3Pos = vec3(g_WorldMatrix * inPosition);
    vec3 v3Ray = v3Pos - v3ECameraPos;
    float fFar = length(v3Ray);
    v3Ray /= fFar;

    // Calculate the closest intersection of the ray with the outer atmosphere (which is the near point of the ray passing through the atmosphere)
    float B = 2.0 * dot(m_v3CameraPos, v3Ray);
    float C = m_fCameraHeight2 - m_fOuterRadius2;
    float fDet = max(0.0, B*B - 4.0 * C);
    float fNear = 0.5 * (-B - sqrt(fDet));

    // Calculate the ray's starting position, then calculate its scattering offset
    vec3 v3Start = m_v3CameraPos + v3Ray * fNear;
    fFar -= fNear;
    float fDepth = exp((m_fInnerRadius - m_fOuterRadius) / m_fScaleDepth);
    float fCameraAngle = dot(-v3Ray, v3Pos) / fFar;
    float fLightAngle = dot(v3ELightPos, v3Pos) / fFar;
    float fCameraScale = scale(fCameraAngle);
    float fLightScale = scale(fLightAngle);
    float fCameraOffset = fDepth*fCameraScale;
    float fTemp = (fLightScale + fCameraScale);

    // Initialize the scattering loop variables
    float fSampleLength = fFar / fSamples;
    float fScaledLength = fSampleLength * m_fScale;
    vec3 v3SampleRay = v3Ray * fSampleLength;
    vec3 v3SamplePoint = v3Start + v3SampleRay * 0.5;

    // Now loop through the sample rays
    vec3 v3FrontColor = vec3(0.0, 0.0, 0.0);
    vec3 v3Attenuate;
    for(int i=0; i<nSamples; i++)
    {
        float fHeight = length(v3SamplePoint);
        float fDepth = exp(m_fScaleOverScaleDepth * (m_fInnerRadius - fHeight));
        float fScatter = fDepth*fTemp - fCameraOffset;
        v3Attenuate = exp(-fScatter * (m_v3InvWavelength * m_fKr4PI + m_fKm4PI));
        v3FrontColor += v3Attenuate * (fDepth * fScaledLength);
        v3SamplePoint += v3SampleRay;
    }

    vec3 first = v3FrontColor * (m_v3InvWavelength * m_fKrESun + m_fKmESun);
    vec3 secondary = v3Attenuate;
    color = vec4((first + vec3(0.25,0.25,0.25) * secondary), 1.0);
    // ^^ that color is passed to the frag shader and is used as the gl_FragColor
}

Here is also an image of the problem: image

    Read the article

  • Check Your Spelling, Grammar, and Style in Firefox and Chrome

    - by Matthew Guay
    Are you tired of making simple writing mistakes that get past your browser’s spell-check?  Here’s how you can get advanced grammar checking and more in Firefox and Chrome with After the Deadline. Microsoft Word has spoiled us with grammar, syntax, and spell checking, but the default spell check in Firefox and Chrome still only does basic checks.  Even webapps like Google Docs don’t check more than basic spelling errors.  However, WordPress.com is an exception; it offers advanced spelling, grammar, and syntax checking with its After the Deadline proofing system.  This helps you keep from making embarrassing mistakes on your blog posts, and now, thanks to a couple of free browser plugins, it can help you keep from making these mistakes on any website or webapp.

After the Deadline in Google Chrome
Add the After the Deadline extension (link below) to Chrome as usual. As soon as it’s installed, you’re ready to start improving your online writing.  To check spelling, grammar, and more, click the ABC button that you’ll now see at the bottom of most text boxes online. After a quick scan, grammar mistakes are highlighted in green, complex expressions and other syntax problems are highlighted in blue, and spelling mistakes are highlighted in red as would be expected.  Click on an underlined word to choose one of its recommended changes or ignore the suggestion. Or, if you want more explanation about what was wrong with that word or phrase, click Explain for more info. And, if you forget to run an After the Deadline scan before submitting a text entry, it will automatically check to make sure you still want to submit it.  Click Cancel to go back and check your writing first. To change the After the Deadline settings, click its icon in the toolbar and select View Options.  Additionally, if you want to disable it on the site you’re on, you can click Disable on this site directly from the popup. From the settings page, you can choose extra things to check for, such as double negatives and redundant phrases, as well as add sites and words to ignore.

After the Deadline in Firefox
Add the After the Deadline add-on to Firefox (link below) as normal. After the Deadline works basically the same in Firefox as it does in Chrome.  Select the ABC icon in the lower right corner of textboxes to check them for problems, and After the Deadline will underline the problems as it did in Chrome.  To view a suggested change in Firefox, right-click on the underlined word and select the recommended change or ignore the suggestion. And, if you forget to check, you’ll see a friendly reminder asking if you’re sure you want to submit your text like it is. You can access the After the Deadline settings in Firefox from the menu bar.  Click Tools, then select AtD Preferences.  In Firefox, the settings are in an options dialog with three tabs, but it includes the same options as the Chrome settings page.  Here you can make After the Deadline as correction-happy as you like.

Conclusion
The web has increasingly become an interactive place, and seldom does a day go by that we aren’t entering text in forms and comments that may stay online forever.  Even our insignificant tweets are being archived in the Library of Congress.  After the Deadline can help you make sure that your permanent internet record is as grammatically correct as possible.  Even though it doesn’t catch every problem, and even misses some spelling mistakes, it’s still a great help.
Links
Download the After the Deadline extension for Google Chrome
Download the After the Deadline add-on for Firefox

    Read the article

  • [GEEK SCHOOL] Network Security 3: Windows Defender and a Malware-Free System

    - by Ciprian Rusen
    In this second lesson we are going to talk about one of the most confusing security products that are bundled with Windows: Windows Defender. In the past, this product has had a bad reputation and for good reason – it was very limited in its capacity to protect your computer from real-world malware. However, the latest version included in Windows 8.x operating systems is much different from past versions and it provides real protection to its users. The nice thing about Windows Defender in its current incarnation is that it protects your system from the start, so there are never gaps in coverage. We will start this lesson by explaining what Windows Defender is in Windows 7 and Vista versus what it is in Windows 8, and what product to use if you are using an earlier version. Next, we will explore how to use Windows Defender, how to improve its default settings, and how to deal with the alerts that it displays. As you will see, Windows Defender will have you using its list of quarantined items a lot more often than other security products. This is why we will explain in detail how to work with it and remove malware for good or restore those items that are only false alarms. Lastly, you will learn how to turn off Windows Defender if you no longer want to use it and you prefer a third-party security product in its place, and then how to enable it again if you have changed your mind about using it. Upon completion, you should have a thorough understanding of your system’s default anti-malware options, or how to protect your system expeditiously.

What is Windows Defender?
Unfortunately there is no one clear answer to this question because of the confusing way Microsoft has chosen to name its security products. Windows Defender is a different product, depending on the Windows operating system you are using. If you use Windows Vista or Windows 7, then Windows Defender is a security tool that protects your computer from spyware. This is but one form of malware, made up of tools and applications that monitor your movements on the Internet or the activities you perform on your computer. Spyware tends to send the information that is collected to a remote server and it is later used for all kinds of malicious purposes, from displaying advertising you don’t want, to using your personal data, etc. However, there are many other types of malware on the Internet and this version of Windows Defender is not able to protect users from any of them. That’s why, if you are using Windows 7 or earlier, we strongly recommend that you disable Windows Defender and install a more complete security product like Microsoft Security Essentials, or third-party security products from specialized security vendors. If you use Windows 8.x operating systems, then Windows Defender is the same thing as Microsoft Security Essentials: a decent security product that protects your computer in real time from viruses and spyware. The fact that this product also protects your computer from viruses, not just from spyware, makes a huge difference. If you don’t want to pay for security products, Windows Defender in Windows 8.x and Microsoft Security Essentials (in Windows 7 or earlier) are good alternatives. Windows Defender in Windows 8.x and Microsoft Security Essentials are the same product, only their names are different. In this lesson, we will use the Windows Defender version from Windows 8.x but our instructions apply also to Microsoft Security Essentials (MSE) in Windows 7 and Windows Vista.
If you want to download Microsoft Security Essentials and try it out, we recommend you use this page: Download Microsoft Security Essentials. There you will find both 32-bit and 64-bit editions of this product as well as versions in multiple languages.

How to Use and Configure Windows Defender
Windows Defender (MSE) is very easy to use. To start, search for “defender” on the Windows 8.x Start screen and click or tap the “Windows Defender” search result. In Windows 7, search for “security” in the Start Menu search box and click “Microsoft Security Essentials”. Windows Defender has four tabs which give you access to the following tools and options:
Home – here you can view the security status of your system. If everything is alright, then it will be colored in green. If there are some warnings to consider, then it will be colored in yellow, and if there are threats that must be dealt with, everything will be colored in red. On the right side of the “Home” tab you will find options for scanning your computer for viruses and spyware. On the bottom of the tab you will find information about when the last scan was performed and what type of scan it was.
Update – here you will find information on whether this product is up-to-date. You will learn when it was last updated and the versions of the definitions it is using. You can also trigger a manual update.
History – here you can access quarantined items, see which items you’ve allowed to run on your PC even if they were identified as malware by Windows Defender, and view a complete list of all the malicious items Windows Defender has detected on your PC. In order to access all these lists and work with them, you need to be signed in as an administrator.
Settings – this is the tab where you can turn on the real-time protection service, exclude files, file types, processes, and locations from its scans, as well as access a couple of more advanced settings. The only difference between Windows Defender in Windows 8.x and Microsoft Security Essentials (in Windows 7 or earlier) is that, in the “Settings” tab, Microsoft Security Essentials allows you to set when to run scheduled scans while Windows Defender lacks this option.

    Read the article

  • Monitoring almost anything with BizTalk 360

    - by Michael Stephenson
    When you work in an integration environment it is common that you will find yourself in a situation where you integrate with some unusual applications or have some unusual dependencies. That is the nature of integration. When you work with BizTalk, one of the common problems is that BizTalk is often the place where problems with the applications you integrate with are highlighted, and these external applications may have poor monitoring solutions. Fortunately, if you are working with a customer who uses BizTalk 360 then it contains a feature called the "Web Endpoint Manager". Typically the Web Endpoint Manager is used to monitor web services that you integrate with and will ping them at appropriate times to make sure they return the expected HTTP status code. When you have an unusual situation where you want to monitor something which is key to the success of your solution, but you find yourself having to consider a significant custom solution to monitor the external dependency, then the Web Endpoint Manager could be your friend. The endpoint manager monitors a URL and checks for a certain status code. This means that you can create your own aspx web page and then make BizTalk 360 monitor this web page. Behind the web page you could write any code you wished. An example of this architecture is shown in the below diagram.

In the custom web page you would implement some custom code to do whatever it is that you want to monitor. In the below code snippet you can see how the default Page_Load method performs some kind of check and then, depending on the result of the check, returns a certain HTTP status code.

protected void Page_Load(object sender, EventArgs e)
{
    var result = CheckSomething();

    if (result == "Success")
        Response.StatusCode = 202;
    else if (result == "DatabaseError")
        Response.StatusCode = 510;
    else if (result == "SystemError")
        Response.StatusCode = 512;
    else
        Response.StatusCode = 513;
}

In BizTalk 360 you would go into the Monitor and Notify tab and then to BizTalk Environment, which gives you access to the Web Endpoint Manager. You need an alarm setup which configures how the endpoint will be checked. I'm not going to go through the details of creating the alarm as this is already documented in the BizTalk 360 documentation. One point to note is that in the example I am using I set up a threshold alarm, which means that the URL is checked about every minute and if there is an error that persists for a period of time then the alarm will raise the alert notification. In my example I configured the alarm to fire if the error persisted for 3 minutes. The below picture shows accessing the endpoint manager. In the Web Endpoint Manager you would then configure your endpoint to monitor and the HTTP response code which indicates all is working fine. The below picture shows this. I now have my endpoint monitoring set up and BizTalk 360 should be checking my custom endpoint to see that it is available. If I wanted to manually sanity check that the endpoints I have registered are working fine then clicking the Refresh button will show if they are all good or not. If my custom ASP.net page which is checking my dependency gets a problem, you will see in the endpoint manager that the status code does not match the expected return code, your endpoints will display in red and you can see the problem. The below picture shows this. If I use specific HTTP response codes for the errors the custom ASP.net page might encounter I can easily interpret these to know what the problem is.
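The original post leaves CheckSomething() undefined, since it depends entirely on what you want to monitor. As a minimal sketch only, assuming the dependency being watched is a SQL Server database, the check behind the page might look something like this; the class name, connection string, and status strings below are illustrative, not part of BizTalk 360 or the original article.

// Hypothetical sketch: CheckSomething() is not shown in the original post.
// Assumes the monitored dependency is a SQL Server database.
using System;
using System.Data.SqlClient;

public static class DependencyCheck
{
    // Illustrative connection string only.
    private const string ConnectionString =
        "Server=myServer;Database=myDatabase;Integrated Security=true";

    public static string CheckSomething()
    {
        try
        {
            // Open a connection and run a trivial query to prove the database responds.
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand("SELECT 1", connection))
            {
                connection.Open();
                command.ExecuteScalar();
            }
            return "Success";
        }
        catch (SqlException)
        {
            // SQL-level failures map to the dedicated database status code above.
            return "DatabaseError";
        }
        catch (Exception)
        {
            // Anything else (network, configuration, etc.) is treated as a system error.
            return "SystemError";
        }
    }
}

The strings returned here line up with the status codes mapped in the Page_Load snippet above, so BizTalk 360 could distinguish a database outage from a more general failure just from the HTTP response code.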
Using the alarms and notifications in BizTalk 360, when your endpoint goes into an error state you can easily configure email or SMS notifications to tell you that your endpoint is having problems, and you can use BizTalk 360 to help correlate what the problem is so you can investigate further. Below you can see the email which tells me my endpoint is not working. When everything returns to normal you will see that the status is fixed, as in the situation below where the WebEndpoints are now green and the return code matches what is expected.

Conclusion
As you can see, it is really easy to plug your own custom ASP.net page into the BizTalk 360 web endpoint monitoring feature. This extension gives you the power to extend the monitoring to almost anything you want, as long as you can write some .net code to check that the dependency is available and working. It would be interesting to hear of any ideas people have around things they would monitor with this extension. More details on the endpoint monitor can be found at the following link: http://www.biztalk360.com/tour/monitoring_notifications

    Read the article

  • BizTalk 2009 - Installing BizTalk Server 2009 on XP for Development

    - by StuartBrierley
    At my previous employer, when developing for BizTalk Server 2004 using Visual Studio 2003, we made use of separate development and deployment environments; developing in Visual Studio on our client PCs and then deploying to a separate shared BizTalk 2004 Server from there.  This server was part of a multi-server Standard BizTalk environment comprising separate BizTalk Server 2004 and SQL Server 2000 servers.  This environment was implemented a number of years ago by an outside consulting company, and while it worked it did occasionally cause contention issues with three developers deploying to the same server to carry out unit testing! Now that I am making the design and implementation decisions about the environment that BizTalk will be developed in and deployed to, I have chosen to create a single "server" installation on my development PC, installing SQL Server 2008, Visual Studio 2008 and BizTalk Server 2009 on a single system.  The client PC in use is actually a MacBook Pro running Windows XP; not the most powerful of systems for high volume processing, but it should be powerful enough to allow development and initial unit testing to take place. I did not need to, and so chose not to, install all of the components detailed in the Microsoft guide for installing BizTalk 2009 on Windows XP, but I did follow the basics of the procedures detailed within.  Outlined below are the highlights of this process and details of the choices I made.

Install IIS
I had previously installed Windows XP, including all current service packs and critical updates.  At the time of installation this included Service Pack 3, the .Net Framework 3.5 and MS Windows Installer 3.1.  Having a running XP system, my first step was to install IIS - this is quite straightforward and posed no difficulties.

Install Visual Studio 2008
The next step for me was to install Visual Studio 2008.  Making sure to select a custom installation is crucial at this point, as you need to make sure that you deselect SQL Server 2005 Express Edition as it can cause the BizTalk installation to fail.  The installation guide suggests that you only select Visual C# when selecting features to install, but I decided that, due to some legacy systems I have code for, I would also select the VB and ASP options.

Visual Studio 2008 Service Pack 1
Following the completion of the installation of Visual Studio itself you should then install the Visual Studio 2008 Service Pack 1.

SQL Server 2008 Standard Edition
The next step before installing BizTalk Server 2009 itself is to install SQL Server 2008 Standard Edition. On the feature selection screen make sure that you select the following options:
Database Engine Services
SQL Server Replication
Full-Text Search
Analysis Services
Reporting Services
Business Intelligence Development Studio
Client Tools Connectivity
Integration Services
Management Tools Basic and Complete
Use the default instance and the same accounts for all SQL Server instances - in my case I used the Network Service and Local Service accounts for the two sets of accounts. On the database engine configuration screen I selected Windows authentication and added the current user, adding the same user again on the Analysis Services configuration screen.  All other screens were left on the default settings. The SQL Server 2008 installation also included the installation of the hotfix for XP KB942288-v3, the Windows Installer 4.5 Redistributable.
System Configuration
At this stage I took a moment to disable the SQL Server shared memory protocol and enable the Named Pipes and TCP/IP protocols.  These can be found in the SQL Server Configuration Manager > SQL Server Network Configuration > Protocols for MSSQLServer.  I also made sure that the DTC settings were configured correctly.

BizTalk Server 2009
The penultimate step is to install BizTalk Server 2009 Standard Edition. I had previously downloaded the redistributable prerequisites as a CAB file so was able to make use of this when carrying out the installation. When selecting which components to install I selected:
Server Runtime
BizTalk EDI/AS2 Runtime
WCF Adapter Runtime
Portal Components
Administrative Tools
WCF Administration Tools
Developer Tools and SDK
Enterprise SSO Administration Module
Enterprise SSO Master Secret Server
Business Rules Components
BAM Alert Provider
BAM Client
BAM Eventing
Once installation has completed, clear the launch BizTalk Server Configuration check box and select finish.

Verify the Installation
Before configuring BizTalk Server it is a good idea to check that BizTalk Server 2009 is installed and that SQL Server 2008 has started correctly.  The easiest way to verify the BizTalk installation is to check Programs and Features in Control Panel.  Check that SQL is started by looking in the SQL Server Configuration Manager.

Configure BizTalk Server 2009
Finally we are ready to configure BizTalk Server 2009.  To start this I opted for a custom configuration that allowed me to choose the settings to be used in more detail. For all databases I selected the local server and default database names. For all accounts I used a local account that had been created specifically for the BizTalk services. For all Windows groups I allowed the configuration wizard to create the default local groups. The configuration wizard then ran. Upon completion you will be presented with a screen detailing the success or failure of the configuration.  If your configuration failed you will need to sort out the issues and try again (it is possible to save the configuration settings for later use if you want to - except passwords of course!).  If you see lots of nice green ticks - congratulations, BizTalk Server 2009 on XP is now installed and configured ready for development.

    Read the article

  • Solaris: What comes next?

    - by alanc
    As you probably know by now, a few months ago, we released Solaris 11 after years of development. That of course means we now need to figure out what comes next - if Solaris 11 is “The First Cloud OS”, then what do we need to make future releases of Solaris be, to be modern and competitive when they're released? So we've been having planning and brainstorming meetings, and I've captured some notes here from just one of those we held a couple weeks ago with a number of the Silicon Valley based engineers. Now before someone sees an idea here and calls their product rep wanting to know what's up, please be warned what follows are rough ideas, and as I'll discuss later, none of them have any committment, schedule, working code, or even plan for integration in any possible future product at this time. (Please don't make me force you to read the full Oracle future product disclaimer here, you should know it by heart already from the front of every Oracle product slide deck.) To start with, we did some background research, looking at ideas from other Oracle groups, and competitive OS'es. We examined what was hot in the technology arena and where the interesting startups were heading. We then looked at Solaris to see where we could apply those ideas. Making Network Admins into Socially Networking Admins We all know an admin who has grumbled about being the only one stuck late at work to fix a problem on the server, or having to work the weekend alone to do scheduled maintenance. But admins are humans (at least most are), and crave companionship and community with their fellow humans. And even when they're alone in the server room, they're never far from a network connection, allowing access to the wide world of wonders on the Internet. Our solution here is not building a new social network - there's enough of those already, and Oracle even has its own Oracle Mix social network already. What we proposed is integrating Solaris features to help engage our system admins with these social networks, building community and bringing them recognition in the workplace, using achievement recognition systems as found in many popular gaming platforms. For instance, if you had a Facebook account, and a group of admin friends there, you could register it with our Social Network Utility For Facebook, and then your friends might see: Alan earned the achievement Critically Patched (April 2012) for patching all his servers. Matt is only at 50% - encourage him to complete this achievement today! To avoid any undue risk of advertising who has unpatched servers that are easier targets for hackers to break into, this information would be tightly protected via Facebook's world-renowned privacy settings to avoid it falling into the wrong hands. A related form of gamification we considered was replacing simple certfications with role-playing-game-style Experience Levels. Instead of just knowing an admin passed a test establishing a given level of competency, these would provide recruiters with a more detailed level of how much real-world experience an admin has. Achievements such as the one above would feed into it, but larger numbers of experience points would be gained by tougher or more critical tasks - such as recovering a down system, or migrating a service to a new platform. (As long as it was an Oracle platform of course - migrating to an HP or IBM platform would cause the admin to lose points with us.) Unfortunately, we couldn't figure out a good way to prevent (if you will) “gaming” the system. 
For instance, a disgruntled admin might decide to start ignoring warnings from FMA that a part is beginning to fail or skip preventative maintenance, in the hopes that they'd cause a catastrophic failure to earn more points for bolstering their resume as they look for a job elsewhere, and not worrying about the effect on your business of a mission critical server going down. More Z's for ZFS Our suggested new feature for ZFS was inspired by the worlds most successful Z-startup of all time: Zynga. Using the Social Network Utility For Facebook described above, we'd tie it in with ZFS monitoring to help you out when you find yourself in a jam needing more disk space than you have, and can't wait a month to get a purchase order through channels to buy more. Instead with the click of a button you could post to your group: Alan can't find any space in his server farm! Can you help? Friends could loan you some space on their connected servers for a few weeks, knowing that you'd return the favor when needed. ZFS would create a new filesystem for your use on their system, and securely share it with your system using Kerberized NFS. If none of your friends have space, then you could buy temporary use space in small increments at affordable rates right there in Facebook, using your Facebook credits, and then file an expense report later, after the urgent need has passed. Universal Single Sign On One thing all the engineers agreed on was that we still had far too many "Single" sign ons to deal with in our daily work. On the web, every web site used to have its own password database, forcing us to hope we could remember what login name was still available on each site when we signed up, and which unique password we came up with to avoid having to disclose our other passwords to a new site. In recent years, the web services world has finally been reducing the number of logins we have to manage, with many services allowing you to login using your identity from Google, Twitter or Facebook. So we proposed following their lead, introducing PAM modules for web services - no more would you have to type in whatever login name IT assigned and try to remember the password you chose the last time password aging forced you to change it - you'd simply choose which web service you wanted to authenticate against, and would login to your Solaris account upon reciept of a cookie from their identity service. Pinning notes to the cloud We also all noted that we all have our own pile of notes we keep in our daily work - in text files in our home directory, in notebooks we carry around, on white boards in offices and common areas, on sticky notes on our monitors, or on scraps of paper pinned to our bulletin boards. The contents of the notes vary, some are things just for us, some are useful for our groups, some we would share with the world. For instance, when our group moved to a new building a couple years ago, we had a white board in the hallway listing all the NIS & DNS servers, subnets, and other network configuration information we needed to set up our Solaris machines after the move. Similarly, as Solaris 11 was finishing and we were all learning the new network configuration commands, we shared notes in wikis and e-mails with our fellow engineers. Users may also remember one of the popular features of Sun's old BigAdmin site was a section for sharing scripts and tips such as these. Meanwhile, the online "pin board" at Pinterest is taking the web by storm. So we thought, why not mash those up to solve this problem? 
We proposed a new BigAddPin site where users could “pin” notes, command snippets, configuration information, and so on. For instance, once they had worked out the ideal Automated Installation manifest for their app server, they could pin it up to share with the rest of their group, or choose to make it public as an example for the world. Localized data, such as our group's notes on the servers for our subnet, could be shared only to users connecting from that subnet. And notes that they didn't want others to see at all could be marked private, such as the list of phone numbers to call for late night pizza delivery to the machine room, the birthdays and anniversaries they can never remember but would be sleeping on the couch if they forgot, or the list of automatically generated completely random, impossible to remember root passwords to all their servers. For greater integration with Solaris, we'd put support right into the command shells — redirect output to a pinned note, set your path to include pinned notes as scripts you can run, or bring up your recent shell history and pin a set of commands to save for the next time you need to remember how to do that operation. Location service for Solaris servers A longer term plan would involve convincing the hardware design groups to put GPS locators with wireless transmitters in future server designs. This would help both admins and service personnel trying to find servers in todays massive data centers, and could feed into location presence apps to help show potential customers that while they may not see many Solaris machines on the desktop any more, they are all around. For instance, while walking down Wall Street it might show “There are over 2000 Solaris computers in this block.” [Note: this proposal was made before the recent media coverage of a location service aggregrator app with less noble intentions, and in hindsight, we failed to consider what happens when such data similarly falls into the wrong hands. We certainly wouldn't want our app to be misinterpreted as “There are over $20 million dollars of SPARC servers in this building, waiting for you to steal them.” so it's probably best it was rejected.] Harnessing the power of the GPU for Security Most modern OS'es make use of the widespread availability of high powered GPU hardware in today's computers, with desktop environments requiring 3-D graphics acceleration, whether in Ubuntu Unity, GNOME Shell on Fedora, or Aero Glass on Windows, but we haven't yet made Solaris fully take advantage of this, beyond our basic offering of Compiz on the desktop. Meanwhile, more businesses are interested in increasing security by using biometric authentication, but must also comply with laws in many countries preventing discrimination against employees with physical limations such as missing eyes or fingers, not to mention the lost productivity when employees can't login due to tinted contacts throwing off a retina scan or a paper cut changing their fingerprint appearance until it heals. Fortunately, the two groups considering these problems put their heads together and found a common solution, using 3D technology to enable authentication using the one body part all users are guaranteed to have - pam_phrenology.so, a new PAM module that uses an array USB attached web cams (or just one if the user is willing to spin their chair during login) to take pictures of the users head from all angles, create a 3D model and compare it to the one in the authentication database. 
While Mythbusters has shown how easy it can be to fool common fingerprint scanners, we have not yet seen any evidence that people can impersonate the shape of another user's cranium, no matter how long they spend beating their head against the wall to reshape it. This could possibly be extended to group users, using modern versions of some of the older phrenological studies, such as giving all users with long grey beards access to the System Architect role, or automatically placing users with pointy spikes in their hair into an easy use mode. Unfortunately, there are still some unsolved technical challenges we haven't figured out how to overcome. Currently, a visit to the hair salon causes your existing authentication to expire, and some users have found that shaving their heads is the only way to avoid bad hair days becoming bad login days. Reaction to these ideas After gathering all our notes on these ideas from the engineering brainstorming meeting, we took them in to present to our management. Unfortunately, most of their reaction cannot be printed here, and they chose not to accept any of these ideas as they were, but they did have some feedback for us to consider as they sent us back to the drawing board. They strongly suggested our ideas would be better presented if we weren't trying to decipher ink blotches that had been smeared by the condensation when we put our pint glasses on the napkins we were taking notes on, and to that end let us know they would not be approving any more engineering offsites in Irish themed pubs on the Friday of a Saint Patrick's Day weekend. (Hopefully they mean that situation specifically and aren't going to deny the funding for travel to this year's X.Org Developer's Conference just because it happens to be in Bavaria and ending on the Friday of the weekend Oktoberfest starts.) They recommended our research techniques could be improved over just sitting around reading blogs and checking our Facebook, Twitter, and Pinterest accounts, such as considering input from alternate viewpoints on topics such as gamification. They also mentioned that Oracle hadn't fully adopted some of Sun's common practices and we might have to try harder to get those to be accepted now that we are one unified company. So as I said at the beginning, don't pester your sales rep just yet for any of these, since they didn't get approved, but if you have better ideas, pass them on and maybe they'll get into our next batch of planning.

    Read the article

< Previous Page | 150 151 152 153 154 155 156 157 158 159 160 161  | Next Page >