Search Results

Search found 22246 results on 890 pages for 'microsoft expression'.


  • Java Spotlight Episode 88: HTML 5 and JavaFX 2 with Gerrit Grunwald

    - by Roger Brinkley
    Interview with Gerrit Grunwald on HTML 5 and JavaFX 2. Joining us this week on the Java All Star Developer Panel is Arun Gupta, Java EE Guy. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes

    News: JavaFX 2.1.1 documentation updated on the docs.oracle.com/javafx website. Lightview: JavaFX 2 real-time visualizer for GlassFish. JavaFX Programmatic POJO Expression Bindings (Parts 1 & 2). The Enterprise Side of JavaFX - leverage the power of FX Markup Language to define the UI for enterprise applications.

    Events: June 26-28, Jazoon, Zurich, Switzerland. June 27, Houston JUG. July 5, Java Forum, Stuttgart, Germany. July 13-14, IndicThreads, Delhi. July 30-August 1, JVM Language Summit, Santa Clara.

    Feature Interview: Gerrit Grunwald is working as a software engineer at Canoo Engineering AG (Basel, Switzerland). He is responsible for visualizations of all kinds. His technical interests include Java desktop development and specifically the subareas JavaFX, Java Swing and HTML5 controls. He's a frequent blogger (http://www.harmonic-code.org) and founder and leader of the Java User Group in Muenster (Germany), where he also lives. He has been involved in the IT industry since 1996, when he began to study physics at the University of Applied Sciences Muenster (Germany).

    Also in this episode: Mail Bag, What's Cool, Tab Sweep.

    Read the article

  • Disable pasting in a textbox using jQuery

    - by Michel Grootjans
    I had fun writing this one. My current client asked me to allow users to paste text into textboxes/textareas, but the pasted text should be cleaned of '<...>' tags. Here's what we came up with:

        $(":input").bind('paste', function(e) {
            var el = $(this);
            setTimeout(function() {
                var text = $(el).val();
                $(el).val(text.replace(/<(.*?)>/gi, ''));
            }, 100);
        });

    This is so simple, I'm amazed. The first part just binds a function to the paste operation applied to any input declared on the page:

        $(":input").bind('paste', function(e) {...});

    In the first line of the handler, I just capture the element. Then I wait for 100ms:

        setTimeout(function() {....}, 100);

    Then I get the actual value from the textbox and replace it using a regular expression that basically means "replace everything that looks like '<{0}>' with ''". The gi at the end are regex flags in JavaScript:

        /<(.*?)>/gi
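    As a hedged aside (not from the original post): on browsers that expose the clipboard through the paste event, the same cleanup can be done without the setTimeout. This is only a sketch using jQuery 1.7+'s .on; it replaces the whole field value rather than inserting at the caret, and it falls back to the original approach where clipboardData is unavailable.

        $(":input").on('paste', function (e) {
            var clipboard = (e.originalEvent || e).clipboardData;
            if (!clipboard) { return; } // let the setTimeout version above handle it
            e.preventDefault();
            // Strip anything that looks like a tag before it ever reaches the field.
            var cleaned = clipboard.getData('text/plain').replace(/<(.*?)>/gi, '');
            $(this).val(cleaned);
        });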

    Read the article

  • Adventures in Windows 8: Placing items in a GridView with a ColumnSpan or RowSpan

    - by Laurent Bugnion
    Currently working on a Windows 8 app for an important client, I will be writing about small issues, tips and tricks, ideas and whatever occurs to me during the development and the integration of this app. When working with a GridView, it is quite common to use a VariableSizedWrapGrid as the ItemsPanel. This creates a nice flowing layout which will auto-adapt for various resolutions. This is ideal when you want to build views like the Windows 8 start menu. However, we immediately notice that the Start menu allows items to be placed on one column (Smaller) or two columns (Larger). This switch happens through the AppBar. So how do we implement that in our app?

    Using ColumnSpan and RowSpan

    When you use a VariableSizedWrapGrid directly in your XAML, you can attach the VariableSizedWrapGrid.ColumnSpan and VariableSizedWrapGrid.RowSpan attached properties directly to an item to create the desired effect. For instance this code creates this output (shown in Blend but it runs just the same):

        <VariableSizedWrapGrid ItemHeight="100" ItemWidth="100" Width="200" Orientation="Horizontal">
            <Rectangle Fill="Purple" />
            <Rectangle Fill="Orange" />
            <Rectangle Fill="Yellow" VariableSizedWrapGrid.ColumnSpan="2" />
            <Rectangle Fill="Red" VariableSizedWrapGrid.ColumnSpan="2" VariableSizedWrapGrid.RowSpan="2" />
            <Rectangle Fill="Green" VariableSizedWrapGrid.RowSpan="2" />
            <Rectangle Fill="Blue" />
            <Rectangle Fill="LightGray" />
        </VariableSizedWrapGrid>

    Using the VariableSizedWrapGrid as ItemsPanel

    When you use a GridView however, you typically bind the ItemsSource property to a collection, for example in a viewmodel. In that case, you want to be able to switch the ColumnSpan and RowSpan depending on properties of the item. I tried to find a way to bind the VariableSizedWrapGrid.ColumnSpan attached property in the GridView's ItemContainerStyle template to an observable property on the item, but it didn't work. Instead, I decided to use a StyleSelector to switch the GridViewItem's style. Here's how: First I added my two GridViews to my XAML as follows:

        <Page.Resources>
            <local:MainViewModel x:Key="Main" />
            <DataTemplate x:Key="DataTemplate1">
                <Grid Background="{Binding Brush}">
                    <TextBlock Text="{Binding BrushCode}" />
                </Grid>
            </DataTemplate>
        </Page.Resources>
        <Page.DataContext>
            <Binding Source="{StaticResource Main}" />
        </Page.DataContext>
        <Grid Background="{StaticResource ApplicationPageBackgroundThemeBrush}" Margin="20">
            <Grid.ColumnDefinitions>
                <ColumnDefinition Width="Auto" />
                <ColumnDefinition Width="*" />
            </Grid.ColumnDefinitions>
            <GridView ItemsSource="{Binding Items}" ItemTemplate="{StaticResource DataTemplate1}" VerticalAlignment="Top">
                <GridView.ItemsPanel>
                    <ItemsPanelTemplate>
                        <VariableSizedWrapGrid ItemHeight="150" ItemWidth="150" />
                    </ItemsPanelTemplate>
                </GridView.ItemsPanel>
            </GridView>
            <GridView Grid.Column="1" ItemsSource="{Binding Items}" ItemTemplate="{StaticResource DataTemplate1}" VerticalAlignment="Top">
                <GridView.ItemsPanel>
                    <ItemsPanelTemplate>
                        <VariableSizedWrapGrid ItemHeight="100" ItemWidth="100" />
                    </ItemsPanelTemplate>
                </GridView.ItemsPanel>
            </GridView>
        </Grid>

    The MainViewModel looks like this:

        public class MainViewModel
        {
            public IList<Item> Items { get; private set; }

            public MainViewModel()
            {
                Items = new List<Item>
                {
                    new Item { Brush = new SolidColorBrush(Colors.Red) },
                    new Item { Brush = new SolidColorBrush(Colors.Blue) },
                    new Item { Brush = new SolidColorBrush(Colors.Green), },
                    // And more...
                };
            }
        }

    As for the Item class, I am using an MVVM Light ObservableObject, but you can of course use your own simple implementation of INotifyPropertyChanged:

        public class Item : ObservableObject
        {
            public const string ColSpanPropertyName = "ColSpan";
            private int _colSpan = 1;

            public int ColSpan
            {
                get { return _colSpan; }
                set { Set(ColSpanPropertyName, ref _colSpan, value); }
            }

            public SolidColorBrush Brush { get; set; }

            public string BrushCode
            {
                get { return Brush.Color.ToString(); }
            }
        }

    Then I copied the GridViewItem's style locally. To do this, I use Expression Blend's functionality. It has the disadvantage of copying a large portion of XAML to your application, but the HUGE advantage of allowing you to change the look and feel of your GridViewItem everywhere in the application. For example, you can change the selection chrome, the item's alignments and many other properties. Actually, every time I use a ListBox, ListView or any other data control, I typically copy the item style to a resource dictionary in my application and I tweak it. Note that Blend for Windows 8 apps is automatically installed with every edition of Visual Studio 2012 (including Express), so you have no excuses anymore not to use Blend :)

    Open MainPage.xaml in Expression Blend by right clicking on the MainPage.xaml file in the Solution Explorer and selecting Open in Blend from the context menu. Note that the items do not look very nice! The reason is that the default ItemContainerStyle sets the content's alignment to "Center", which I never quite understood. Seems to me that you would rather want the content to be stretched, but anyway it is easy to change.

    Right click on the GridView on the left and select Edit Additional Templates, Edit Generated Item Container (ItemContainerStyle), Edit a Copy. In the Create Style Resource dialog, enter the name "DefaultGridViewItemStyle", select "Application" and press OK. Side note 1: You need to save in a global resource dictionary because later we will need to retrieve that Style from a global location. Side note 2: I would rather copy the style to an external resource dictionary that I link into the App.xaml file, but I want to keep things simple here.

    Blend switches into Template edit mode. The template you are editing now is inside the ItemContainerStyle and will govern the appearance of your items. This is where, for instance, the "checked" chrome is defined, and where you can alter it if you need to. Note that you can reuse this style for all your GridViews even if you use a different DataTemplate for your items. Makes sense? I probably need to think about writing another blog post dedicated to the ItemContainerStyle :)

    In the breadcrumb bar on top of the page, click on the style icon. The property we want to change now can be changed in the Style instead of the Template, which is a better idea. Blend is now in Style edit mode, as you can see in the Objects and Timeline pane. In the Properties pane, in the Search box, enter the word "content". This will filter all the properties containing that partial string, including the two we are interested in: HorizontalContentAlignment and VerticalContentAlignment. Set these two values to "Stretch" instead of the default "Center". Using the breadcrumb bar again, set the scope back to the Page (by clicking on the first crumb on the left). Notice how the items are now showing as squares in the first GridView. We will now use the same ItemContainerStyle for the second GridView.

    To do this, right click on the second GridView and select Edit Additional Templates, Edit Generated Item Container, Apply Resource, DefaultGridViewItemStyle. The page now looks nicer:

    And now for the ColumnSpan!

    So now, let's change the ColumnSpan property. First, let's define a new Style that inherits the ItemContainerStyle we created before. Make sure that you save everything in Blend by pressing Ctrl-Shift-S. Open App.xaml in Visual Studio. Below the newly created DefaultGridViewItemStyle resource, add the following style:

        <Style x:Key="WideGridViewItemStyle" TargetType="GridViewItem" BasedOn="{StaticResource DefaultGridViewItemStyle}">
            <Setter Property="VariableSizedWrapGrid.ColumnSpan" Value="2" />
        </Style>

    Add a new class to the project, and name it MainItemStyleSelector. Implement the class as follows:

        public class MainItemStyleSelector : StyleSelector
        {
            protected override Style SelectStyleCore(object item, DependencyObject container)
            {
                var i = (Item)item;

                if (i.ColSpan == 2)
                {
                    return Application.Current.Resources["WideGridViewItemStyle"] as Style;
                }

                return Application.Current.Resources["DefaultGridViewItemStyle"] as Style;
            }
        }

    In MainPage.xaml, add a resource to the Page.Resources section:

        <local:MainItemStyleSelector x:Key="MainItemStyleSelector" />

    In MainPage.xaml, replace the ItemContainerStyle property on the first GridView with the ItemContainerStyleSelector property, pointing to the StaticResource we just defined:

        <GridView ItemsSource="{Binding Items}" ItemTemplate="{StaticResource DataTemplate1}" VerticalAlignment="Top" ItemContainerStyleSelector="{StaticResource MainItemStyleSelector}">
            <GridView.ItemsPanel>
                <ItemsPanelTemplate>
                    <VariableSizedWrapGrid ItemHeight="150" ItemWidth="150" />
                </ItemsPanelTemplate>
            </GridView.ItemsPanel>
        </GridView>

    Do the same for the second GridView as well. Finally, in the MainViewModel, change the ColumnSpan property on the 3rd Item to 2:

        new Item { Brush = new SolidColorBrush(Colors.Green), ColSpan = 2 },

    Running the application now creates the following image, which is what we wanted. Notice how the green item is now a "wide tile". You can also experiment by creating different Styles, all inheriting from DefaultGridViewItemStyle and using different values of RowSpan, for instance. This will allow you to create any layout you want, while leaving the heavy lifting of "flowing the layout" to the GridView control.

    What about changing these values dynamically?

    Of course, as we can see in the Start menu, it would be nice to be able to change the ColumnSpan and maybe even the RowSpan values at runtime. Unfortunately at this time I have not found a good way to do that. I am investigating however and will make sure to post a follow up when I find what I am looking for!

    Laurent Bugnion (GalaSoft)
    Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • saving and retrieving a text file in java?

    - by user3319432
    import java.sql.*;
    import java.awt.*;
    import javax.swing.*;
    import java.awt.event.*;

    public class Saving extends JFrame implements ActionListener {
        JTextField edpno = new JTextField(10);
        JLabel l0 = new JLabel("EDP Number: ");
        JComboBox fname = new JComboBox();
        JLabel l1 = new JLabel("First Name: ");
        JTextField lname = new JTextField(20);
        JLabel l2 = new JLabel("Last Name: ");
        // JTextField contno = new JTextField(20);
        // JLabel l3 = new JLabel("Contact Number: ");
        JComboBox contno = new JComboBox();
        JLabel l3 = new JLabel("Contact Number: ");
        JButton bOK = new JButton("Save");
        JButton bRetrieve = new JButton("Retrieve");
        private ImageIcon icon;
        JPanel C = new JPanel() {
            protected void paintComponent(Graphics g) {
                g.drawImage(icon.getImage(), 0, 0, null);
                super.paintComponent(g);
            }
        };

        // Constructor: builds the "Search Record" form.
        public Saving() {
            icon = new ImageIcon("images/canres.png");
            C.setOpaque(false);
            C.setLayout(new GridLayout(5, 2, 4, 4));
            setTitle("Search Record");
            C.add(l0);
            C.add(edpno);
            edpno.addActionListener(this);
            C.add(l1);
            C.add(fname);
            fname.setForeground(Color.BLUE);
            fname.setFont(new Font(" ", Font.BOLD, 15));
            C.add(l2);
            C.add(lname);
            C.add(l3);
            C.add(contno);
            contno.setForeground(Color.BLUE);
            contno.setFont(new Font(" ", Font.BOLD, 15));
            C.add(bOK);
            bOK.addActionListener(this);
            C.add(bRetrieve);
            bRetrieve.addActionListener(this);
            bOK.setBackground(Color.white);
            bRetrieve.setBackground(Color.white);
        }

        // Despite its name, this method retrieves a record by EDP number and fills the form.
        public void saverecord() {
            try {
                // Connect to the database
                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
                String path = "jdbc:odbc:;DRIVER=Microsoft Access Driver (*.mdb);DBQ=Database/roomassign.mdb";
                String DBPassword = "";
                String DBUserName = "";
                Connection con = DriverManager.getConnection(path, "", "");
                Statement s = con.createStatement();
                s.executeQuery("select firstname, Lastname, contactnumber from name WHERE edpno ='" + edpno.getText() + "'");
                ResultSet rs = s.getResultSet();
                ResultSetMetaData md = rs.getMetaData();
                while (rs.next()) {
                    fname.setSelectedItem(rs.getString(1));
                    lname.setText(rs.getString(2));
                    contno.setSelectedItem(rs.getString(3));
                    // crs.setSelectedItem(rs.getString(4));
                }
                s.close();
                con.close();
            } catch (Exception Q) {
                JOptionPane.showMessageDialog(this, Q);
            }
        }

        // Updates the record with the form's current values.
        public void SaveRecord() {
            try {
                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
                String path = "jdbc:odbc:;DRIVER=Microsoft Access Driver (*.mdb);DBQ=Database/roomassign.mdb";
                String DBPassword = "";
                String DBUsername = "";
                Connection con = DriverManager.getConnection(path, "", "");
                Statement s = con.createStatement();
                String sql = "UPDATE rooms SET Firstname='" + fname.getSelectedItem() + "',Lastname='" + lname.getText() + "',Contactnumber='" + contno.getSelectedItem() + "' WHERE '" + edpno.getText() + "'=edpno";
                s.executeUpdate(sql);
                JOptionPane.showMessageDialog(this, "New room Record has been successfully saved");
                dispose();
                s.close();
                con.close();
            } catch (Exception Mismatch) {
                JOptionPane.showMessageDialog(this, Mismatch);
            }
        }

        public void actionPerformed(ActionEvent ako) {
            if (ako.getSource() == bRetrieve) {
                dispose();
            } else if (ako.getSource() == bOK) {
                SaveRecord();
            }
        }

        public static void main(String[] awtsave) {
            new Saving();
        }
    }

    Read the article

  • What is the difference between Callback<T> and Java 8's Supplier<T>?

    - by Dan Pantry
    I've been switching over to Java from C# after some recommendations from folks over at CodeReview. So, when I was looking into LWJGL, one thing I remembered was that every call to Display must be executed on the same thread that the Display.create() method was invoked on. Remembering this, I whipped up a class that looks a bit like this.

        public class LwjglDisplayWindow implements DisplayWindow {
            private final static int TargetFramesPerSecond = 60;
            private final Scheduler _scheduler;

            public LwjglDisplayWindow(Scheduler displayScheduler, DisplayMode displayMode) throws LWJGLException {
                _scheduler = displayScheduler;
                Display.setDisplayMode(displayMode);
                Display.create();
            }

            public void dispose() {
                Display.destroy();
            }

            @Override
            public int getTargetFramesPerSecond() {
                return TargetFramesPerSecond;
            }

            @Override
            public Future<Boolean> isClosed() {
                return _scheduler.schedule(() -> Display.isCloseRequested());
            }
        }

    While writing this class you'll notice that I created a method called isClosed() that returns a Future<Boolean>. This dispatches a function to my Scheduler interface (which is nothing more than a wrapper around a ScheduledExecutorService). While writing the schedule method on the Scheduler I noticed that I could either use a Supplier<T> argument or a Callable<T> argument to represent the function that is passed in. ScheduledExecutorService didn't contain an overload for Supplier<T>, but I noticed that the lambda expression () -> Display.isCloseRequested() is actually type-compatible with both Callable<Boolean> and Supplier<Boolean>. My question is: is there a difference between those two, semantically or otherwise - and if so, what is it, so I can adhere to it?
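    As an illustrative aside (not part of the original question): one semantic difference that can be checked directly is that Callable.call() is declared with throws Exception, while Supplier.get() declares no checked exceptions. A minimal sketch, with a hypothetical class name:

        import java.util.concurrent.Callable;
        import java.util.function.Supplier;

        public class CallableVsSupplier {
            public static void main(String[] args) {
                // Compiles: Callable.call() is allowed to throw checked exceptions.
                Callable<Boolean> callable = () -> { throw new java.io.IOException("checked exceptions are fine here"); };

                // The same body would NOT compile as a Supplier, because get() declares no checked exceptions.
                Supplier<Boolean> supplier = () -> Boolean.TRUE;

                System.out.println(supplier.get()); // prints: true
            }
        }

    So a lambda like () -> Display.isCloseRequested() fits both shapes, but the two interfaces signal different intents: Callable is a task that may fail, Supplier is a plain value source.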

    Read the article

  • Asynchrony in C# 5 (Part I)

    - by javarg
    I've been playing around with the new Async CTP preview available for download from Microsoft. It's amazing how language trends are influencing the evolution of Microsoft's development platform. Much more effort is being put in at the language level today than in previous versions of .NET. In this post series I'll review some major features contained in this release: asynchronous functions, TPL Dataflow, and the task-based asynchronous pattern.

    Part I: Asynchronous Functions

    This is a means of expressing asynchronous operations. This kind of function must return void or Task/Task<> (functions returning void let us implement fire-and-forget asynchronous operations). The two new keywords introduced are async and await.

    async: marks a function as asynchronous, indicating that some part of its execution may take place some time later (after the method call has returned). Thus, all async functions must include some kind of asynchronous operation. This keyword on its own does not make a function asynchronous though; its nature depends on its implementation.

    await: allows us to define operations inside a function that will be awaited for continuation (more on this later).

    Async/await sample:

        async void ShowDateTimeAsync()
        {
            while (true)
            {
                var client = new ServiceReference1.Service1Client();
                var dt = await client.GetDateTimeTaskAsync();
                Console.WriteLine("Current DateTime is: {0}", dt);
                await TaskEx.Delay(1000);
            }
        }

    The previous sample is a typical usage scenario for these new features. Suppose we query some external web service to get data (in this case the current DateTime) and we do so at regular intervals in order to refresh the user's UI. Note the async and await keywords working together. The ShowDateTimeAsync method indicates its asynchronous nature to the caller using the keyword async (meaning it may complete after returning control to its caller). The await keyword indicates that the flow control of the method will continue executing asynchronously after client.GetDateTimeTaskAsync returns. The latter is the most important thing to understand about the behavior of this method and how this actually works. The flow control of the method will be reconstructed after any asynchronous operation completes (specified with the keyword await). This reconstruction of flow control is the real magic behind the scenes, and it is done by the C#/VB compilers. Note how we didn't use any of the regular existing async patterns and we've defined the method very much like a synchronous one.

    Now, compare the following traditional UI-async snippet to the previous async/await version:

        void ComplicatedShowDateTime()
        {
            var client = new ServiceReference1.Service1Client();
            client.GetDateTimeCompleted += (s, e) =>
            {
                Console.WriteLine("Current DateTime is: {0}", e.Result);
                client.GetDateTimeAsync();
            };
            client.GetDateTimeAsync();
        }

    The previous implementation is somewhat similar to the first shown, but more complicated. Note how the while loop is implemented as a chained callback to the same method (client.GetDateTimeAsync) inside the event handler (please, do not do this in your own application, this is just an example).

    How does it work? Using a state workflow (or jump table, actually), the compiler expands our code and creates the necessary steps to execute it, resuming pending operations after any asynchronous one. The intention of the new async/await pattern is to let us think and code as we normally do when designing an algorithm. It also allows us to preserve the logical flow control of the program (without using any tricky coding patterns to accomplish this). The compiler will then create the necessary workflow to execute operations as they happen in time.
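    For reference, the CTP helper TaskEx.Delay later shipped in .NET 4.5 as Task.Delay, so on current compilers the same loop can be written as below. This is a hedged sketch; GetDateTimeAsync here is a hypothetical stand-in for the article's generated service proxy call.

        static async Task ShowDateTimeAsync()
        {
            while (true)
            {
                var dt = await GetDateTimeAsync(); // hypothetical call returning Task<DateTime>
                Console.WriteLine("Current DateTime is: {0}", dt);
                await Task.Delay(1000);            // Task.Delay replaced the CTP's TaskEx.Delay
            }
        }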

    Read the article

  • PASS Summit 2012: keynote and Mobile BI announcements #sqlpass

    - by Marco Russo (SQLBI)
    Today at PASS Summit 2012 there were several announcements during the keynote. Moreover, other news was not highlighted in the keynote but is equally if not more important for the BI community. Let's start with the big news in the keynote (other details on the SQL Server Blog):

    Hekaton: this is the codename for the in-memory OLTP technology that will appear (I suppose) in the next release of the SQL Server relational engine. The improvement in performance and scalability is impressive and it enables new scenarios. I'm curious to see whether it can also be used to improve ETL performance and how it differs from using SSD technology.

    Updates on Columnstore: in the next major release of SQL Server, columnstore indexes will be updatable and it will be possible to create a clustered columnstore index. This is really great news for near real-time reporting needs!

    Polybase: in 2013, SQL Server 2012 Parallel Data Warehouse (PDW) will debut, and it will include the Polybase technology. By using Polybase, a single T-SQL query will run across relational data and Hadoop data. A single query language for both. Sounds really interesting for using Big Data in a more integrated way with existing relational databases. And, of course, for loading a data warehouse using Big Data, which is the ultimate goal that we BI pros all have, right?

    SQL Server 2012 SP1: Service Pack 1 for SQL Server 2012 is available now and it enables the use of PowerPivot for SharePoint and Power View on a SharePoint 2013 installation with Excel 2013.

    Power View works with Multidimensional cubes: the long-awaited feature of being able to use Power View with Multidimensional cubes was shown by Amir Netz in an amazing demonstration during the keynote. The interesting thing is that the data model behind it was based on a many-to-many relationship (something that is not fully supported by Power View with Tabular models). Another interesting aspect is that it is Analysis Services 2012 that supports DAX queries run against a Multidimensional model, enabling the use of any future tool generating DAX queries on top of a Multidimensional model. There is still no info about availability by now, but this is *not* included in SQL Server 2012 SP1.

    So what about Mobile BI? Well, even if not announced during the keynote, there is a dedicated session on this topic and there is very important news in this area:

    iOS, Android and Microsoft mobile platforms: the commitment is to get data exploration and visualization capabilities working by June 2013. This should impact at least Power View and SharePoint/Excel Services. This is the type of UI experience we have all been waiting for, in order to satisfy the requests coming from users and customers. The important news here is that native applications will be available for both iOS and Windows 8, so it seems that Android will initially be supported only through the web. Unfortunately we haven't seen any demo, so it's not clear what the offline navigation experience will be (or whether there will be one). But at least we know that Microsoft is working on native applications in this area. I'm not too surprised that HTML5 is not the magic bullet for all the platforms. The next PASS Business Analytics conference in 2013 seems a good place to see this in action, even if I hope we don't have to wait another six months before seeing some demo of native BI applications on mobile platforms!

    Viewing Reporting Services reports on iPad is supported starting with SQL Server 2012 SP1, which was released today. This is another good reason to install SP1 on SQL Server 2012. If you are at PASS Summit 2012, come and join me, Alberto Ferrari and Chris Webb at our book signing event tomorrow, Thursday, November 8, at the bookstore between 12:00pm and 12:30pm, or follow one of our sessions!

    Read the article

  • Give a session on C++ AMP – here is how

    - by Daniel Moth
    Ever since presenting on C++ AMP at the AMD Fusion conference in June, then the Gamefest conference in August, and the BUILD conference in September, I've had numerous requests about my material from folks that want to re-deliver the same session. The C++ AMP session I put together has evolved over the 3 presentations to its final form that I used at BUILD, so that is the one I recommend you base yours on. Please get the slides and the recording from Channel 9 (I'll refer to slide numbers below). This is how I've been presenting the C++ AMP session:

    Context
    (slide 3, 04:18-08:18) Start with a demo, on my dual-GPU machine. I've been using the N-Body sample (for VS 11 Developer Preview). (slide 4) Use an NVIDIA slide that has additional examples of performance improvements that customers enjoy with heterogeneous computing. (slide 5) Talk a bit about the differences today between CPU and GPU hardware, leading to the fact that these will continue to co-exist and that GPUs are great for data parallel algorithms, but not much else today. One is a jack of all trades and the other is a number cruncher. (slide 6) Use the APU example from AMD, as one indication that the hardware space is still in motion, emphasizing that the C++ AMP solution is a data parallel API, not a GPU API. It has a future-proof design for hardware we have yet to see. (slide 7) Provide more metadata, as blogged about when I first introduced C++ AMP.

    Code
    (slides 9-11) Introduce C++ AMP coding with a simplistic array-addition algorithm - the slides speak for themselves. (slides 12-13) index<N>, extent<N>, and grid<N>. (slides 14-16) array<T,N>, array_view<T,N> and a comparison between them. (slide 17) parallel_for_each. (slides 18, 21) restrict. (slides 19-20) actual restrictions of restrict(direct3d) - the slides speak for themselves. (slide 22) Bring it all together with a matrix multiplication example. (slides 23-24) accelerator and accelerator_view. (slides 26-29) Introduce tiling, incl. tiled matrix multiplication [tiling probably deserves a whole session instead of 6 minutes!].

    IDE
    (slides 34, 37) Briefly touch on the Concurrency Visualizer. It supports GPU profiling, but enhancements specific to C++ AMP we hope will come in the Beta timeframe, which is when I'll be spending more time talking about it. (slides 35-36, 51:54-59:16) Demonstrate the GPU debugging experience in VS 11.

    Summary
    (slide 39) Re-iterate some of the points of slide 7, and add the point that the C++ AMP spec will be open for other compiler vendors to implement, even on other platforms (in fact, Microsoft is actively working on that). (slide 40) Links to content - see slide - including where all your questions should go: http://social.msdn.microsoft.com/Forums/en/parallelcppnative/threads.

    "But I don't have time for a full blown session, I only need 2 (or just 1, or 3) C++ AMP slides to use in my session on related topic X." If all you want is a small number of slides, you can take some from the session above and customize them. But because I am so nice, I have created some slides for you, including talking points in the notes section. Download them here.

    Comments about this post by Daniel Moth welcome at the original blog.
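    For readers who want a feel for the code the "Code" section walks through, here is a hedged sketch of the kind of array-addition kernel slides 9-11 describe. It uses the restrict(amp) spelling that shipped in Visual Studio 2012 (the BUILD-era CTP used restrict(direct3d)); this is an illustration, not the session's actual slide code.

        #include <amp.h>
        #include <vector>
        using namespace concurrency;

        void add_arrays(const std::vector<int>& a, const std::vector<int>& b, std::vector<int>& sum) {
            const int n = static_cast<int>(a.size());
            array_view<const int, 1> av(n, a);   // wrap CPU data for the accelerator
            array_view<const int, 1> bv(n, b);
            array_view<int, 1> sv(n, sum);
            sv.discard_data();                   // we only write sv, so skip the copy-in

            parallel_for_each(sv.extent, [=](index<1> i) restrict(amp) {
                sv[i] = av[i] + bv[i];           // runs once per element, on the GPU if available
            });
            sv.synchronize();                    // copy results back into 'sum'
        }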

    Read the article

  • Switch or a Dictionary when assigning to new object

    - by KChaloux
    Recently, I've come to prefer mapping 1-1 relationships using Dictionaries instead of switch statements. I find it to be a little faster to write and easier to mentally process. Unfortunately, when mapping to a new instance of an object, I don't want to define it like this:

        var fooDict = new Dictionary<int, IBigObject>()
        {
            { 0, new Foo() }, // Creates an instance of Foo
            { 1, new Bar() }, // Creates an instance of Bar
            { 2, new Baz() }  // Creates an instance of Baz
        };
        var quux = fooDict[0]; // quux references Foo

    Given that construct, I've wasted CPU cycles and memory creating 3 objects, doing whatever their constructors might contain, and only ended up using one of them. I also believe that mapping other objects to fooDict[0] in this case will cause them to reference the same thing, rather than creating a new instance of Foo as intended. A solution would be to use a lambda instead:

        var fooDict = new Dictionary<int, Func<IBigObject>>()
        {
            { 0, () => new Foo() }, // Returns a new instance of Foo when invoked
            { 1, () => new Bar() }, // Ditto Bar
            { 2, () => new Baz() }  // Ditto Baz
        };
        var quux = fooDict[0](); // equivalent to saying 'var quux = new Foo();'

    Is this getting to a point where it's too confusing? It's easy to miss that () on the end. Or is mapping to a function/expression a fairly common practice? The alternative would be to use a switch:

        IBigObject quux;
        switch (someInt)
        {
            case 0: quux = new Foo(); break;
            case 1: quux = new Bar(); break;
            case 2: quux = new Baz(); break;
        }

    Which approach is more acceptable?

    Dictionary: faster lookups and fewer keywords (case and break).
    Switch: more commonly found in code; doesn't require the use of a Func<> object for indirection.
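    One further variation worth sketching (an aside, not from the original question): if the concerns are both eager construction and accidental sharing, Lazy<T> splits the difference by deferring each constructor but then caching a single instance per key. Whether that caching is desirable depends on whether the objects are safe to share.

        // Hedged sketch: defers each constructor until first use, then reuses that one instance.
        var fooDict = new Dictionary<int, Lazy<IBigObject>>()
        {
            { 0, new Lazy<IBigObject>(() => new Foo()) },
            { 1, new Lazy<IBigObject>(() => new Bar()) },
            { 2, new Lazy<IBigObject>(() => new Baz()) }
        };
        var quux = fooDict[0].Value; // Foo is constructed here, once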

    Read the article

  • Stacks in C++

    - by MarkPearl
    So some more basics… One of the things you will be taught at any college after conquering arrays is the different derivatives of collections. The stack is one of the simplest of those, and very useful. A stack is a LIFO (last in, first out) data structure and has at least two basic method calls - push and pop. Push "pushes" an item onto the top of the stack. Pop removes the topmost item off the stack. Because all elements on a stack are of the same type, one can implement a stack using either an array or a linked list. With the array-based approach, the first element on the stack would be the first element in the array, the second on the stack would be the second in the array, etc. One limitation of an array implementation is that unless the array is dynamic, one has to have a preset maximum stack size (based on the bounds of the array). A linked list is another approach that gets past this boundary by allowing you to dynamically grow or shrink the collection of data. Stacks have many applications… a typical computer science example would be a postfix expression calculator, where the LIFO principle is maintained. A minimal array-based sketch follows.
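    A hedged illustration (not from the original post) of the array-based approach described above, with a fixed capacity and simple bounds checks:

        #include <cstddef>
        #include <stdexcept>

        template <typename T, std::size_t Capacity>
        class Stack {
            T data_[Capacity];      // backing array
            std::size_t top_;       // index of the next free slot
        public:
            Stack() : top_(0) {}

            void push(const T& value) {            // add to the top
                if (top_ == Capacity) throw std::overflow_error("stack full");
                data_[top_++] = value;
            }

            T pop() {                              // remove from the top (LIFO)
                if (top_ == 0) throw std::underflow_error("stack empty");
                return data_[--top_];
            }

            bool empty() const { return top_ == 0; }
        };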

    Read the article

  • Unrealscript splitting a string

    - by burntsugar
    Note: this is a repost from Stack Overflow - I have only just discovered this site :) I need to split a string in UnrealScript, in the same way that Java's split function works. For instance - return the string "foo" as an array of chars. I have tried to use the SplitString function:

        array<string> SplitString( string Source, optional string Delimiter=",", optional bool bCullEmpty )

    "Wrapper for splitting a string into an array of strings using a single expression." as found at http://udn.epicgames.com/Three/UnrealScriptFunctions.html - but it returns the entire string.

        simulated function wordDraw()
        {
            local String inputString;
            local string whatwillitbe;
            local int b;
            local int x;
            local array<String> letterArray;

            inputString = "trolls";
            letterArray = SplitString(inputString,, false);

            for (x = 0; x < letterArray.Length; x++)
            {
                whatwillitbe = letterArray[x];
                `log('it will be '@whatwillitbe);
                b = letterArray.Length;
                `log('letterarray length is '@b);
                `log('letter number '@x);
            }
        }

    Output is: b returns 1, and whatwillitbe returns "trolls". However, I would like b to return 6 and whatwillitbe to return each character individually. I have had a few answers proposed; however, I would still like to properly understand how the SplitString function works. For instance, if the Delimiter parameter is optional, what does the function use as a delimiter by default?
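    A hedged aside on the behaviour observed above: per the signature quoted in the question, the default Delimiter is ","; since "trolls" contains no commas, SplitString returns the whole string as a single element. One illustrative way to split into individual characters instead is with Len and Mid (a sketch, not tested against any particular UDK build):

        // Illustrative only: returns each character of s as its own array element.
        simulated function array<string> ToChars(string s)
        {
            local array<string> chars;
            local int i;

            for (i = 0; i < Len(s); i++)
            {
                chars.AddItem(Mid(s, i, 1)); // Mid(s, i, 1) = the single character at index i
            }
            return chars;
        }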

    Read the article

  • Looking into the jQuery LazyLoad Plugin

    - by nikolaosk
    I have been using jQuery for a couple of years now and it has helped me to solve many problems on the client side of web development. You can find all my posts about jQuery in this link. In this post I will be providing you with a hands-on example of the jQuery LazyLoad plugin. If you want, you can have a look at this post, where I describe the jQuery Cycle plugin. You can find another post of mine talking about the jQuery Carousel Lite plugin here. Another post of mine regarding the jQuery Image Zoom plugin can be found here. You can have a look at the jQuery Overlays plugin here.

    There are times when I am asked to create a very long page with lots of images. My first thought is to enable paging on the proposed page. Imagine that we have 60 images on a page; there are performance concerns when we have so many images on one page. Paging can solve that problem if I am allowed to place only 5 images on a page. Sometimes the customer does not like the idea of paging. Believe it or not, some people find the idea of paging not attractive at all. In that case I need a way to load only the initial set of images and, as the user scrolls down the page, to load the rest. So as someone scrolls down, new requests are made to the server and more images are fetched. I can accomplish that with the jQuery LazyLoad plugin. This is simply a plugin that delays the loading of images on long web pages: images outside the viewport (the visible part of the web page) won't be loaded before the user scrolls to them. Using the jQuery LazyLoad plugin on long web pages containing many large images makes the page load faster.

    In this hands-on example I will be using Expression Web 4.0. This application is not free, but you can use any HTML editor you like - for example Visual Studio 2012 Express edition, which you can download here. You can download the plugin from this link.

    I launch Expression Web 4.0 and then I type the following HTML markup (I am using HTML 5):

        <!DOCTYPE html>
        <html lang="en">
        <head>
            <title>Liverpool Legends</title>
            <script type="text/javascript" src="jquery-1.8.3.min.js"></script>
            <script type="text/javascript" src="jquery.lazyload.min.js"></script>
        </head>
        <body>
            <header>
                <h1>Liverpool Legends</h1>
            </header>
            <div id="main">
                <img src="barnes.JPG" width="800" height="1100" /><p />
                <img src="dalglish.JPG" width="800" height="1100" /><p />
                <img class="LiverpoolImage" src="loader.gif" data-original="fans.JPG" width="1200" height="900" /><p />
                <img class="LiverpoolImage" src="loader.gif" data-original="lfc.JPG" width="1000" height="700" /><p />
                <img class="LiverpoolImage" src="loader.gif" data-original="Liverpool-players.JPG" width="1100" height="900" /><p />
                <img class="LiverpoolImage" src="loader.gif" data-original="steven_gerrard.JPG" width="1110" height="1000" /><p />
                <img class="LiverpoolImage" src="loader.gif" data-original="robbie.JPG" width="1200" height="1000" /><p />
            </div>
            <footer>
                <p>All Rights Reserved</p>
            </footer>
            <script type="text/javascript">
                $(function () {
                    $("img.LiverpoolImage").lazyload();
                });
            </script>
        </body>
        </html>

    This is very simple markup. I have added references to the jQuery library (current version is 1.8.3) and the jQuery LazyLoad plugin.

    First, I add two images that will load immediately as soon as the page loads:

        <img src="barnes.JPG" width="800" height="1100" /><p />
        <img src="dalglish.JPG" width="800" height="1100" /><p />

    Then I add the images that will not load until they enter the viewport. All of those img tags point their src attribute at a placeholder image - I'm using a blank 1x1 px grey image, loader.gif - while the full image URL goes into the data-original attribute. The five images that will load as the user scrolls down the page follow:

        <img class="LiverpoolImage" src="loader.gif" data-original="fans.JPG" width="1200" height="900" /><p />
        <img class="LiverpoolImage" src="loader.gif" data-original="lfc.JPG" width="1000" height="700" /><p />
        <img class="LiverpoolImage" src="loader.gif" data-original="Liverpool-players.JPG" width="1100" height="900" /><p />
        <img class="LiverpoolImage" src="loader.gif" data-original="steven_gerrard.JPG" width="1110" height="1000" /><p />
        <img class="LiverpoolImage" src="loader.gif" data-original="robbie.JPG" width="1200" height="1000" /><p />

    The JavaScript that makes it all happen follows; we make a call to the jQuery LazyLoad plugin just before we close the body element:

        <script type="text/javascript">
            $(function () {
                $("img.LiverpoolImage").lazyload();
            });
        </script>

    We can change the code above to incorporate some effects:

        <script type="text/javascript">
            $("img.LiverpoolImage").lazyload({
                effect: "fadeIn"
            });
        </script>

    That is all I need to write to achieve lazy loading. It is true that you can do so much with less!! I view my simple page in Internet Explorer 10 and it works as expected. I have tested this simple solution in all major browsers and it works fine. You can test it yourself and see the results in your favorite browser. Hope it helps!!!

    Read the article

  • Running Powershell from within SharePoint

    - by Norgean
    Just because something is a daft idea doesn't mean it can't be done. We sometimes need to do some housekeeping - like delete old files or list items or… yes, well, whatever you use PowerShell for in a SharePoint world. Or it could be that your solution has "issues" for which you have PowerShell solutions, but not the budget to transform into proper bug fixes. So you create a "how to" for the ITPro guys.

    Idea: What if we keep the scripts in a list, and have SharePoint execute the scripts on demand? An announcements list (because of the multiline body field). Warning! Let us be clear: this list needs to be locked down; if somebody creates a malicious script and you run it, I cannot help you.

    First, we need to figure out how to start PowerShell scripts from C#. Hit teh interwebs and the Googlie, and you may find jpmik's post: http://www.codeproject.com/Articles/18229/How-to-run-PowerShell-scripts-from-C (or MS' official answer at http://msdn.microsoft.com/en-us/library/ee706563(v=vs.85).aspx).

        public string RunPowershell(string powershellText, SPWeb web, string param1, string param2)
        {
            // Powershell ~= RunspaceFactory - i.e. create a PowerShell context
            var runspace = RunspaceFactory.CreateRunspace();
            var resultString = new StringBuilder();
            try
            {
                // load the SharePoint snapin - note: you cannot do this in the script itself
                // (i.e. add-pssnapin etc. does not work)
                PSSnapInException snapInError;
                runspace.RunspaceConfiguration.AddPSSnapIn("Microsoft.SharePoint.PowerShell", out snapInError);
                runspace.Open();
                // set a web variable.
                runspace.SessionStateProxy.SetVariable("webContext", web);
                // and some user defined parameters
                runspace.SessionStateProxy.SetVariable("param1", param1);
                runspace.SessionStateProxy.SetVariable("param2", param2);
                var pipeline = runspace.CreatePipeline();
                pipeline.Commands.AddScript(powershellText);
                // add a "return" variable
                pipeline.Commands.Add("Out-String");
                // execute!
                var results = pipeline.Invoke();
                // convert the script result into a single string
                foreach (PSObject obj in results)
                {
                    resultString.AppendLine(obj.ToString());
                }
            }
            finally
            {
                // close the runspace
                runspace.Close();
            }
            // consider logging the result. Or something.
            return resultString.ToString();
        }

    Ok. We've written some code. Let us test it.

        var runner = new PowershellRunner();
        runner.RunPowershell(@"
            $web = Get-SPWeb 'http://server/web' # or $webContext
            $web.Title = $param1
            $web.Update()
            $web.Dispose()
        ", null, "New title", "not used");

    Next step: connect the code to the list, or more specifically, have the code execute on one (or several) list items. As there are more options than readers, I'll leave this as an exercise for the reader. Some alternatives:

    - Create a ribbon button that calls RunPowershell with the body of the selected items
    - Add a layout page
    - Specify the list item from the query string (possibly coupled with a content editor webpart with html that links directly to this page with the querystring)
    - A webpart
    - A listing with an "execute" column
    - A list with multiselect and an execute button
    - Etc!

    Now that you have the code for executing PowerShell scripts, you can easily expand this into a timer job, which executes scripts at regular intervals. But if the previous solution was dangerous, this is even worse - the scripts will usually be run with one of the admin accounts, and can do pretty much anything...

    One more thing... Note that as this is running console-less, calls to Write-Host will fail. Two solutions: remove all output, or check whether the script is run in a console window or not:

        if ($host.Name -eq "ConsoleHost")
        {
            Write-Host "If I agreed with you we'd both be wrong"
        }

    Read the article

  • How to capture a Header or Trailer Count Value in a Flat File and Assign to a Variable

    - by Compudicted
    Recently I had several questions concerning how to process files that carry a header and trailer in them. Typically those files are the product of a data extract from non-Microsoft products, e.g. an Oracle database, encompassing various tables' data where every row starts with an identifier. For example, such a file's data records could look like:

        HDR,INTF_01,OUT,TEST,3/9/2011 11:23
        B1,121156789,DATA TEST DATA,2011-03-09 10:00:00,Y,TEST 18 10:00:44,2011-07-18 10:00:44,Y
        B2,TEST DATA,2011-03-18 10:00:44,Y
        B3,LEG 1 TEST DATA,TRAN TEST,N
        B4,LEG 2 TEST DATA,TRAN TEST,Y
        FTR,4,TEST END,3/9/2011 11:27

    A developer is normally able to break the records apart using a Conditional Split transformation component, by employing an expression similar to Output1 -- SUBSTRING(Output1,1,2) == "B1" and so on, but often a verification is required after this step to check whether the number of data records read corresponds to the number specified in the trailer record of the file. This portion sometimes stumps some people, so I decided to share what I came up with. As an aside, I want to mention that the approach I use is slightly more portable than some others I saw, because I use a separate DFT that can be copied and pasted into a new SSIS package designer surface or re-used within the same package, and it can survive several trailer/footer records (!). See how a ready DFT can look:

    The first step is to create a Flat File Connection Manager and make sure you get the row split into columns like this:

    After you are done with the Flat File connection, move on to adding an Aggregate, which is used simply to assign a value to a variable (here the aggregate handles the possibility of multiple footers/headers):

    The next step is adding a Script Transformation as destination, which requires very little coding. First, some variable setup:

    and finally the code:

    As you can see, it is important to place your code into the appropriate routine in the script; otherwise the end result may not be as expected. As the last step, you would use a regular Script Component to compare the variable value obtained from the DFT above to a package variable value (obtained, say, via a Row Count component) to determine whether the file being processed has the right number of rows.
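    Since the script itself is shown only as screenshots in the original, here is a hedged sketch of the kind of logic that routine typically contains. The column and variable names (TrailerCount, TrailerRowCount) are assumptions for illustration, not the article's actual names:

        // Inside the SSIS Script Component (C#), processing each incoming row:
        private int trailerCount;

        public override void Input0_ProcessInputRow(Input0Buffer Row)
        {
            // Assumes the trailer's record count was parsed into a column named TrailerCount.
            if (!Row.TrailerCount_IsNull)
                trailerCount = Row.TrailerCount;
        }

        public override void PostExecute()
        {
            base.PostExecute();
            // Hand the captured value back to the package for the later comparison.
            Variables.TrailerRowCount = trailerCount;
        }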

    Read the article

  • XPath & XML EDI B2B

    - by PearlFactory
    GoodToGo :) The best XML editor is Altova XMLSpy 2011: http://www.torrenthound.com/hash/bfdbf55baa4ca6f8e93464c9a42cbd66450bb950/torrent-info/Altova-XMLSpy-Enterprise-Edition-SP1-2011-v13-0-1-0-h33t-com-Full - for whatever reason Piratebay has trojans and other nasties... search on torrent.eu for Altova XMLSpy Enterprise Edition SP1 2011 v13.0.1.0. Also, if you like the product, purchase it in a commercial environment.

    Any well structured/complex XML can be parsed @ the speed of light using XPath queries, rather than C# objects like XPathNodeIterator and others. Never do loops or generics or whatever high-level language technology to get at the data. Use the power of XPath. I'll use a simple "do while" as an example (we could have many different techs all achieving the same result). Instead of

        xmlNI2 = xmlNav.Select("/p:BookShop");
        if (xmlNI2.Count != 0)
        {
            while (xmlNI2.MoveNext())
            {
                string aNode = xmlNI2.Current.SelectSingleNode("Book", nsmgr).Value;
                if (aNode == "The Book I am after")
                    Console.WriteLine("Found My Book");
            }
        }

    this lengthy, cumbersome task can be achieved with a simple XPath query:

        Console.WriteLine((xmlNav.SelectSingleNode("/p:BookShop/Book[.='The Book I am after']", nsmgr)).Value.ToString());

    Use the power of the parser and eliminate the middleman (C#/MSIL/JIT etc.).

    Get started fast and use the parser as outlined:
    1) Open the XML and go to Grid mode.
    2) Select the XPath tab on the bottom viewer/window as shown.

    From here you get IntelliSense and can quickly learn how to navigate/find the data using XPath. A key component of navigation with XPath is the "../" command. This basically says: from where I am now, go up one level. With XPath all commands are cumulative, i.e. you can search for a book title @ the 2nd level of the XML and from there traverse 15 layers to paragraphs or words on a page, with expression validation occurring throughout this process (so in essence you may have arrived @ a node within the XML and have met 15 conditions along the way).

    Given 1-2 days with XMLSpy and XPath, you unlock a technology that is super fast and simple to use. XML is a core component of what lies under the hood of so many techs, so it is no wonder that you want to be able to go to the atomic level to achieve the result you want.

    Justin

    P.S. For a long time I saw XML as slow and a bit boring, but now I'm converted.

    Read the article

  • PowerShell Script To Find Where SharePoint 2007 Features Are Activated

    - by Brian T. Jackett
    Recently I posted a script to find where SharePoint 2010 features are activated. I built the original version to use SharePoint 2010 PowerShell commandlets, as that saved me a number of steps for filtering and gathering features at each level. If there was ever demand for a 2007 version, I could modify the script to handle that by using the object model instead of commandlets. Just the other week a fellow SharePoint PFE, Jason Gallicchio, had a customer asking about a version for SharePoint 2007. With a little bit of work I was able to convert the script to work against SharePoint 2007.

    Solution

    Below is the converted script that works against a SharePoint 2007 farm. Note: There appears to be a bug with the 2007 version that does not give accurate results against a SharePoint 2010 farm. I ran the 2007 version against a 2010 farm and got fewer results than with my 2010 version of the script. Discussing with some fellow PFEs, I think the discrepancy may be due to sandboxed features, a new concept in SharePoint 2010. I have not had enough time to test or confirm. For the time being, only use the 2007 version script against SharePoint 2007 farms and the 2010 version against SharePoint 2010 farms.

    Note: This script is not optimized for medium to large farms. In my testing it took 1-3 minutes to recurse through my demo environment. This script is provided as-is with no warranty. Run this in a smaller dev/test environment first.

        function Get-SPFeatureActivated
        {
            # see full script for help info, removed for formatting
            [CmdletBinding()]
            param(
                [Parameter(position = 1, valueFromPipeline=$true)]
                [string]
                $Identity
            )#end param
            Begin
            {
                # load SharePoint assembly to access object model
                [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")

                # declare empty array to hold results. Will add custom member for Url to show where activated at on objects returned from Get-SPFeature.
                $results = @()

                $params = @{}
            }
            Process
            {
                if([string]::IsNullOrEmpty($Identity) -eq $false)
                {
                    $params = @{Identity = $Identity}
                }

                # create hashtable of farm features to lookup definition ids later
                $farm = [Microsoft.SharePoint.Administration.SPFarm]::Local

                # check farm features
                $results += ($farm.FeatureDefinitions | Where-Object {$_.Scope -eq "Farm"} | Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                             % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value ([string]::Empty) -PassThru} |
                             Select-Object -Property Scope, DisplayName, Id, Url)

                # check web application features
                $contentWebAppServices = $farm.services | ? {$_.typename -like "Windows SharePoint Services Web Application"}

                foreach($webApp in $contentWebAppServices.WebApplications)
                {
                    $results += ($webApp.Features | Select-Object -ExpandProperty Definition | Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                                 % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value $webApp.GetResponseUri(0).AbsoluteUri -PassThru} |
                                 Select-Object -Property Scope, DisplayName, Id, Url)

                    # check site collection features in current web app
                    foreach($site in ($webApp.Sites))
                    {
                        $results += ($site.Features | Select-Object -ExpandProperty Definition | Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                                     % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value $site.Url -PassThru} |
                                     Select-Object -Property Scope, DisplayName, Id, Url)

                        # check site features in current site collection
                        foreach($web in ($site.AllWebs))
                        {
                            $results += ($web.Features | Select-Object -ExpandProperty Definition | Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                                         % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value $web.Url -PassThru} |
                                         Select-Object -Property Scope, DisplayName, Id, Url)

                            $web.Dispose()
                        }
                        $site.Dispose()
                    }
                }
            }
            End
            {
                $results
            }
        } #end Get-SPFeatureActivated

        Get-SPFeatureActivated

    Conclusion

    I have posted this script to the TechNet Script Repository (click here). As always, I appreciate any feedback on scripts. If anyone is motivated to run this 2007 version script against a SharePoint 2010 farm to see if they find any differences in the number of features reported versus what they get with the 2010 version script, I'd love to hear from you.

    -Frog Out

    Read the article

  • We have our standards, and we need them

    - by Tony Davis
    The presenter suddenly broke off. He was midway through his section on how to apply to the relational database the Continuous Delivery techniques that allowed for rapid-fire rounds of development and refactoring, while always retaining a "production-ready" state. He sighed deeply and then launched into an astonishing diatribe against Database Administrators, with much of his frustration directed toward Oracle DBAs in particular. In broad strokes, he painted the picture of a brave new deployment philosophy being frustratingly shackled by the relational database, and especially by the attitudes of the guardians of these databases. DBAs, he said, shunned change and "still favored tools I'd have been embarrassed to use in the '80s". DBAs, Oracle DBAs especially, were more attached to their vendor than to their employer, since the former was the primary source of their career longevity and spectacular remuneration. He contended that someone could produce the best IDE or tool in the world for Oracle DBAs and yet none of them would give a stuff, unless it happened to come from the "mother ship".

    I sat blinking in astonishment at the speaker's vehemence, and glanced around nervously. Nobody in the audience disagreed, and a few nodded in assent. Although the primary target of the outburst was the Oracle DBA, it made me wonder. Are we who work with SQL Server database professionals, or merely SQL Server fanbois? Do DBAs, in general, have an image problem? Is it a good career move to be seen to be holding onto a particular product by the whites of our knuckles, to the exclusion of all else?

    If we seek a broad, open-minded knowledge of our chosen technology, the database, and are blessed with merely mortal powers of learning, then we like standards. Vendors of RDBMSs generally don't conform to standards by instinct, but by customer demand. Microsoft has made great strides to adopt the international SQL standards, where possible, thanks to considerable lobbying by the community. The implementation of window functions is a great example. There is still work to do, though. SQL Server, for example, has an unusable version of the Information Schema. One cast-iron rule of any RDBMS is that we must be able to query the metadata using the same language that we use to query the data, i.e. SQL, and we do this by running queries against the INFORMATION_SCHEMA views. Developers who've attempted to apply a standard query that works on MySQL, or some other database, but doesn't produce the expected results on SQL Server, are advised to shun the standards-based approach in favor of the vendor-specific one, using the catalog views. The argument behind this is sound and well-documented, and of course we all use those catalog views, out of necessity. And yet, as database professionals committed to supporting the best databases for the business, whatever they are now and in the future, surely our heart should sink somewhat when we advocate a vendor-specific approach to a developer struggling with something as simple as writing a guard clause. And it should sink further when we read messages in the Microsoft documentation informing us that we shouldn't rely on INFORMATION_SCHEMA to reliably identify the schema of an object - in SQL Server!
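    A hedged sketch of the guard clause in question, in both of the styles being contrasted (the object names are hypothetical, not from the original piece):

        -- Standards-based: the INFORMATION_SCHEMA views, portable in principle.
        IF EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.TABLES
                   WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'Orders')
            DROP TABLE dbo.Orders;

        -- Vendor-specific: the SQL Server catalog views.
        IF EXISTS (SELECT 1 FROM sys.tables t
                   JOIN sys.schemas s ON s.schema_id = t.schema_id
                   WHERE s.name = 'dbo' AND t.name = 'Orders')
            DROP TABLE dbo.Orders;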

    Read the article

  • Hyper-V for Developers - Part 1: Internal Networks

    Over the last year, weve been working with Microsoft to build training and demo content for the next version of Office Communications Server code-named Microsoft Communications Server 14.  This involved building multi-server demo environments in Hyper-V, getting them running on demo servers which we took to TechEd, PDC, and other training events, and sometimes connecting the demo servers to the show networks at those events.  ITPro stuff that should scare the hell out of a developer! It can get ugly when I occasionally have to venture into ITPro land.  Lets leave it at that. Having gone through this process about 10 to 15 times in the last year, I finally have it down.  This blog series is my attempt to put all that knowledge in one place if anything, so I can find it somewhere when I need it again.  Ill start with the most simple scenario and then build on top of it in future blog posts. If youre an ITPro, please resist the urge to laugh at how trivial this is. Internal Hyper-V Networks Lets start simple.  An internal network is one that intended only for the virtual machines that are going to be on that network it enables them to communicate with each other. Create an Internal Network On your host machine, fire up the Hyper-V Manager and click the Virtual Network Manager in the Actions panel. Select Internal and leave all the other default values. Give the virtual network a name, and leave all the other default values. After the virtual network is created, open the Network and Sharing Center and click Change Adapter Settings to see the list of network connections. The only thing I recommend that you do is to give this connection a friendly label, e.g. Hyper-V Internal.  When you have multiple networks and virtual networks on the host machines, this helps group the networks so you can easily differentiate them from each other.  Otherwise, dont touch it, only bad things can happen. Connect the Virtual Machines to the Internal Network Im assuming that you have more than 1 virtual machine already configured in Hyper-V, for example a Domain Controller, and Exchange Server, and a SharePoint Server. What you need to do is basically plug in the network to the virtual machine.  In order to do this, the machine needs to have a virtual network adapter.  If the VM doesnt have a network adapter, open the VMs Settings and click Add Hardware in the left pane.  Choose the virtual network to which to bind the adapter to. If you already have a virtual network adapter on the VM, simply connect it to the virtual network. Assign IP Addresses to the Virtual Machines on the Internal Network Open the Network and Sharing Center on your VM, there should only be 1 network at this time.  Open the Properties of the connection, select Internet Protocol Version 4 (TCP/IPv4) and hit Properties. In this environment, Im assigning IP addresses as 192.168.0.xxx.  This particular VM has an IP address of 192.168.0.40 with a subnet mask of 255.255.255.0, and a DNS Server of 192.168.0.18.  DNS is running on the Domain Controller VM which has an IP address of 192.168.0.18. Repeat this process on every VM in your environment, obviously assigning a unique IP address to each.  In an environment with a domain controller, you should now be able to ping the machines from each other. What Next? 
    After completing this process, here's what you still cannot do:

    - Access the internet from any of the VMs
    - Remote desktop to a VM from the host
    - Remote desktop to a VM over the network

    In the next post, we'll take a look at configuring an external network adapter on the virtual machines. We'll then build on top of that so that you can RDP into the VMs from the host machine and over the network. For a quick developer-land sanity check of the internal network before moving on, see the sketch below.
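    To confirm that the internal network is wired up correctly, a small console sketch like the following can ping each VM from another VM on the same network; the IP addresses match the sample environment above and are otherwise assumptions.

    using System;
    using System.Net.NetworkInformation;

    class InternalNetworkCheck
    {
        static void Main()
        {
            // Addresses from the sample environment above; adjust for your own VMs.
            string[] hosts = { "192.168.0.18", "192.168.0.40" };

            using (var ping = new Ping())
            {
                foreach (string host in hosts)
                {
                    PingReply reply = ping.Send(host, 1000); // 1-second timeout
                    Console.WriteLine("{0}: {1}", host, reply.Status);
                }
            }
        }
    }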

    Read the article

  • VBScript and Xpath excluding duplicates [closed]

    - by Malachi
    I am trying to pull names from an XML document using a VBScript. XML document structure:

    <Aliases>
      <Alias PartyType="DF" CaseID="000000" NameType=""> Name Name</Alias>
      <Alias PartyType="DF" CaseID="000000" NameType=""> Name Name</Alias>
      <Alias PartyType="DF" CaseID="000000" NameType=""> Name Name</Alias>
      ...
    </Aliases>

    The XML file might have 100 rows with the same name coming from several different CaseIDs, because for this part of my VBScript I am trying to pull all the different names from all cases. Here is the issue: I don't want to return duplicates. Is there a way to do this with an XPath expression, or should I try to do this with VBScript? EDIT I am pretty sure that I am going to have to do this with VBScript. Would it be faster and more efficient to solve this issue in VBScript, in XPath, or in populating the XML I am retrieving information from (this might prove more difficult than the other two options)? I am also asking a similar question on Stack Overflow.
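    For what it's worth, here is a hedged sketch of the deduplication logic in C#; XPath 1.0 has no distinct-values function, so tracking seen names in code is the usual workaround, and the same approach translates to VBScript with a Scripting.Dictionary. The file name is an assumption.

    using System;
    using System.Collections.Generic;
    using System.Xml;

    class AliasDedupe
    {
        static void Main()
        {
            var doc = new XmlDocument();
            doc.Load("aliases.xml"); // hypothetical file name

            var seen = new HashSet<string>();
            foreach (XmlNode alias in doc.SelectNodes("/Aliases/Alias"))
            {
                string name = alias.InnerText.Trim();
                if (seen.Add(name)) // Add returns false when the name was already seen
                {
                    Console.WriteLine(name);
                }
            }
        }
    }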

    Read the article

  • Perspective Is Everything

    - by juanlarios
    Sitting in a window seat on my way back from Seattle, I looked out the window and saw the large body of water. I was reminded of childhood memories of running as hard as I could through burning hot sand with the anticipation of the splash of the ocean. Looking out the window, the water appeared like a sheet draped over the land. I couldn't help but ponder how perspective changes everything. Over the last several days I had a chance to attend the MVP Summit in Redmond. I had a great time with fellow MVPs and the SharePoint Product Group. Although I can't say much about what was discussed and what is coming in the future, I want to share some realizations I had while experiencing the MVP Summit.

    The SharePoint product is ever-improving, full of innovation, but also a reactionary embodiment of MVP, client and market feedback. There are several features that come to mind that clients complain about, where I have felt helpless in informing them that the features are not as mature as they would like. Together, we figure out a way to make it work and deal with the limitations. It became clear that there are features that have taken on a different purpose in the marketplace from the original vision. The SharePoint Product Group is working hard to react to these changes in vision and make SharePoint better for real-life implementations. It is easy to think that SharePoint should be all things to all people. In reality there are products that are very detailed in specific composites: they do this one thing well but severely lack in other areas. It's easy sometimes to say, "What was Microsoft thinking with this feature?" The Product Group is doing all they can to make the moving pieces better while dealing with the challenge of having all of them work together. Sometimes the features don't fully embody the vision because of the many challenges, but trust me when I say the Product Group is really focused on delivery and innovation.

    As I was speaking with a fellow MVP throughout the session, we spoke about the iPad 2 (ironically announced this past week during the MVP Summit) and Microsoft's possible product answer; I realized the days of reactionary products from Microsoft are over. Many users will remember Vista and the painful execution of that product, but there has been a lot of success in Windows 7. There was no rush for a reactionary answer to the Nintendo Wii; as a result, a groundbreaking and game-changing product was brought to market, the Xbox Kinect! I can't say much here, but it's safe to say: expect innovation, and execution of products and technology that will change the market instead of reacting to it!

    There are many things I learned and would love to share that have to do with perspective, technology, and more, but this is as far as I can go in details. This might not be new to you or specifically the message that was shared during the summit. These are just my impressions of the event and the spirit of future vision. Great things ahead!

    Read the article

  • A fix for the design time error in MVVM Light V4.1

    - by Laurent Bugnion
    For those of you who installed V4.1 of MVVM Light and created a project for Windows Phone 8, you will have noticed an error showing up in the design surface (either in the Visual Studio designer, or in Expression Blend). The error says: "Could not load type 'System.ComponentModel.INotifyPropertyChanging' from assembly 'mscorlib.extensions'" with additional information about version numbers. The error is caused by an incompatibility between versions of System.Windows.Interactivity. Because this assembly is strongly named, any version incompatibility causes the kind of error shown here (for an interesting discussion on the strong naming issue, see this thread on Codeplex). I managed to resolve the issue for Windows Phone 8 and will publish a cleaned-up installer next week. In the meantime, in order to allow you to continue development, please follow these steps:

    1. Download the new DLLs zip package (MVVMLight_V4_1_25_WP8).
    2. Right-click on the zip file and select Properties from the context menu.
    3. Press the "Unblock" button (if available) and then OK.
    4. Right-click again on the zip package and select "Extract all…".
    5. Select a known location for the new DLLs.
    6. Open the MVVM Light project with the design time error in Visual Studio 2012.
    7. Open the References folder in the Solution Explorer.
    8. Select the following DLLs: GalaSoft.MvvmLight.dll, GalaSoft.MvvmLight.Extras.dll, Microsoft.Practices.ServiceLocation.dll and System.Windows.Interactivity.dll.
    9. Press Delete and confirm to remove the DLLs from your project.
    10. Right-click on References and select Add Reference from the context menu.
    11. Browse to the folder with the new DLLs.
    12. Select the four new DLLs and press OK.

    Rebuild your application, and open it again in Blend or in the Visual Studio designer. The error should be gone now. In the next few days, as time allows, I will publish a new MSI containing a fixed version of the DLLs as well as a few other improvements. This quick fix should however allow you to continue working on your Windows Phone 8 projects in design mode too.

    Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn
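    If you want to confirm which System.Windows.Interactivity build your project actually loads at runtime, a quick check such as the following can help when diagnosing this kind of strong-name mismatch; treat it as a hedged sketch, with the class and method names invented for the example.

    using System.Diagnostics;
    using System.Reflection;
    using System.Windows.Interactivity;

    public static class AssemblyDiagnostics
    {
        public static void LogInteractivityVersion()
        {
            // Resolve the System.Windows.Interactivity assembly that is actually
            // loaded, and log its full name including version and public key token.
            Assembly interactivity = typeof(Interaction).Assembly;
            Debug.WriteLine(interactivity.FullName);
        }
    }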

    Read the article

  • XNA 2D Board game - trouble with the cursor

    - by Adorjan
    I have just started making a simple 2D board game using XNA, but I got stuck on the movement of the cursor. This is my problem: I have a 10x10 table on which I should use a cursor to navigate. I simply made that table with the spriteBatch.Draw() function because I couldn't do it any other way. So here is what I did with the cursor:

    public override void LoadContent()
    {
        ...
        mutato.Position = new Vector2(X, Y); // X=103, Y=107;
        mutato.Sebesseg = 45;
        ...
        mutato.Initialize(content.Load<Texture2D>("cursor"), mutato.Position, mutato.Sebesseg);
        ...
    }

    public override void HandleInput(InputState input)
    {
        if (input == null)
            throw new ArgumentNullException("input");

        // Look up inputs for the active player profile.
        int playerIndex = (int)ControllingPlayer.Value;
        KeyboardState keyboardState = input.CurrentKeyboardStates[playerIndex];

        if (input.IsPauseGame(ControllingPlayer) || gamePadDisconnected)
        {
            ScreenManager.AddScreen(new PauseMenuScreen(), ControllingPlayer);
        }
        else
        {
            // Otherwise move the player position.
            if (keyboardState.IsKeyDown(Keys.Down))
            {
                Y = (int)mutato.Position.Y + mutato.Move;
            }
            if (keyboardState.IsKeyDown(Keys.Up))
            {
                Y = (int)mutato.Position.Y - mutato.Move;
            }
            if (keyboardState.IsKeyDown(Keys.Left))
            {
                X = (int)mutato.Position.X - mutato.Move;
            }
            if (keyboardState.IsKeyDown(Keys.Right))
            {
                X = (int)mutato.Position.X + mutato.Move;
            }
        }
    }

    public override void Draw(GameTime gameTime)
    {
        mutato.Draw(spriteBatch);
    }

    Here's the cursor's (mutato) class:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    namespace Battleship.Components
    {
        class Cursor
        {
            public Texture2D Cursortexture;
            public Vector2 Position;
            public int Move;

            public void Initialize(Texture2D texture, Vector2 position, int move)
            {
                Cursortexture = texture;
                Position = position;
                Move = move;
            }

            public void Update()
            {
            }

            public void Draw(SpriteBatch spriteBatch)
            {
                spriteBatch.Draw(Cursortexture, Position, Color.White);
            }
        }
    }

    And here is a part of the InputState class where I think I should change something:

    public bool IsNewKeyPress(Keys key, PlayerIndex? controllingPlayer, out PlayerIndex playerIndex)
    {
        if (controllingPlayer.HasValue)
        {
            // Read input from the specified player.
            playerIndex = controllingPlayer.Value;
            int i = (int)playerIndex;
            return (CurrentKeyboardStates[i].IsKeyDown(key) && LastKeyboardStates[i].IsKeyUp(key));
        }
    }

    If I leave the movement operation like this, it doesn't make any sense: X = (int)mutato.Position.X - mutato.Move; However, if I modify it to X = (int)mutato.Position.X--; it moves smoothly. Instead of this I need to move the cursor by fields (45 pixels), but I don't have any idea how to manage it.
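    One hedged way to get field-by-field movement (a sketch of the standard XNA edge-detection pattern, not necessarily the only fix): compare the current keyboard state against the previous frame's state, exactly as the IsNewKeyPress method above already does, so each key press moves the cursor exactly one 45-pixel field instead of moving every frame while the key is held. The method name and the previousKeyboardState field are invented for the sketch.

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Input;

    // Held by the screen class between frames.
    KeyboardState previousKeyboardState;

    public void UpdateCursor(Cursor mutato)
    {
        KeyboardState currentKeyboardState = Keyboard.GetState();

        // Move one field (mutato.Move = 45 px) only on the frame the key goes down.
        if (currentKeyboardState.IsKeyDown(Keys.Right) && previousKeyboardState.IsKeyUp(Keys.Right))
            mutato.Position = new Vector2(mutato.Position.X + mutato.Move, mutato.Position.Y);

        if (currentKeyboardState.IsKeyDown(Keys.Left) && previousKeyboardState.IsKeyUp(Keys.Left))
            mutato.Position = new Vector2(mutato.Position.X - mutato.Move, mutato.Position.Y);

        if (currentKeyboardState.IsKeyDown(Keys.Down) && previousKeyboardState.IsKeyUp(Keys.Down))
            mutato.Position = new Vector2(mutato.Position.X, mutato.Position.Y + mutato.Move);

        if (currentKeyboardState.IsKeyDown(Keys.Up) && previousKeyboardState.IsKeyUp(Keys.Up))
            mutato.Position = new Vector2(mutato.Position.X, mutato.Position.Y - mutato.Move);

        previousKeyboardState = currentKeyboardState;
    }

    Clamping Position to the 10x10 board bounds after each move would keep the cursor on the table.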

    Read the article

  • The World of SQL Database Deployment

    - by GGBlogger
    In my early development days, I used Microsoft Access for building databases. It made things easy, since I only needed to package the database with the installation package so my clients would have access to it. When we began the development of a new package in Visual Studio .NET, I decided to use SQL Server Express. It was free and provided good tools, also free. I thought it was a tremendous idea until it came time to distribute our new software! What a surprise.

    The nightmare

    Ah, the choices! Detach the database and have the client reattach it to a newly installed – oh wait. FIRST my new client needs to download and install SQL Server Express with SQL Server Management Studio. That's not a great thing, but it is one more nightmare step for users who may have other versions of SQL installed. Then the question became: do we detach and reattach, or do we do a backup? It was too late (bad planning) to revert to Microsoft Access, but we badly needed a simple way to package and distribute both the database AND sample contents.

    Red Gate to the rescue

    It took me a while to find an answer, but I did find it in a package called SQL Packager, sold by a relatively unpublicized company in England called Red Gate. They call their products "ingeniously simple" and I must agree with that description. With SQL Packager you point to the database (more in a minute) you want to distribute. A few mouse clicks and dialogs and you have an executable file that you can ship virtually anywhere, virtually any way, which, when run, installs the database on your destination SQL Server instance! It really is that simple.

    Easier to show than tell

    Let's explore a hypothetical case. Let's say you have a local SQL database of customers and you have decided you want to share it with your subsidiaries or partners. Here is the underlying screen you will see on starting SQL Packager. There are a bunch of possibilities here, but I'm going to keep this relatively simple. At this point I simply want to illustrate the simplicity of generating an executable to deliver your database. You will notice that you can set up a new package, edit an existing package, or change a bunch of options.

    Start SQL Packager

    And the following is the default dialog you get on startup. In the next dialog, I've selected the Server and Database. I've also selected Windows Authentication. Pressing Next causes SQL Packager to run a number of checks and produce a report. Now you're given a comprehensive list of what is going to be packaged, and you're allowed to change it if you desire. I've never made any changes here, so I can't really make any suggestions. That just illustrates the comprehensive nature of so many Red Gate products, including this one. Clicking Next gives you still further options. SQL Packager then works its magic and shows you a dialog with the results. Packager then gives you a dialog of the scripts it has generated. The capture above only shows one of four tabs. Finally, pressing Next gives you the option to generate a .NET executable or a C# project. I've only generated an executable, so I'm not in a position to tell you what the C# project looks like. That may be the subject of further discussions. You can rename the package and tell SQL Packager where to save it. I've skipped a lot, but this will serve to illustrate the comprehensive (and ingenious) things Red Gate does. All in all, it's a superb way to distribute populated SQL databases. Oh – we'll save running the resulting executable for later also, but believe me, it's insanely simple.
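    For contrast, the manual route the article calls a nightmare boils down to something like this hedged sketch: running a FOR ATTACH statement against the client's instance after copying the database files over. The server, file paths, and database name are all assumptions for illustration.

    using System.Data.SqlClient;

    class ManualAttachSketch
    {
        static void Main()
        {
            // Server, file paths, and database name are assumptions for this sketch.
            const string master = @"Server=.\SQLEXPRESS;Database=master;Integrated Security=true;";
            const string attachSql = @"
                CREATE DATABASE CustomersDb
                ON (FILENAME = 'C:\Data\CustomersDb.mdf'),
                   (FILENAME = 'C:\Data\CustomersDb_log.ldf')
                FOR ATTACH;";

            using (var connection = new SqlConnection(master))
            using (var command = new SqlCommand(attachSql, connection))
            {
                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }

    Every step that sketch assumes (an installed instance, correct paths, sufficient permissions) is a support call waiting to happen, which is exactly the gap a single packaged executable closes.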

    Read the article

  • StyleCop Custom Rules

    - by Aligned
    There are several blogs on how to do this (http://scottwhite.blogspot.com/2008/11/creating-custom-stylecop-rules-in-c.html, etc.). I've found a few useful things to point out.

    Debugging is difficult, but here are the steps (thanks to Tintin's answer):

    1. Delete your custom rules.
    2. Open Visual Studio (for dev) and open your custom rule solution.
    3. Build and deploy the custom rules (a post-build action to copy the rules into the StyleCop folder is handy).
    4. Open Visual Studio (for test).
    5. Use VS (dev) and Attach to Process on devenv.exe (the test VS instance), then set breakpoints in the rules you want to debug.
    6. Use VS (test), right-click on the project, and Run StyleCop.
    7. Debug.

    It worked once; now I'm having problems getting it to work again. I also get the message "Cannot evaluate expression because the code of the current method is optimized" when I try to look at properties.

    To see how the built-in rules work, I looked at the source code of the StyleCop.CSharp.Rules.dll that comes with the install, using JustDecompile from Telerik.

    Create one XML file and name it the same as the one .cs file (e.g. CodingGuidelineRules.cs and CodingGuidelineRules.xml).

    Deploy:

    1. Build in Visual Studio.
    2. Close Visual Studio (StyleCop is running, so you can't overwrite your DLL without closing).
    3. Copy the DLL from the bin folder to C:\Program Files (x86)\StyleCop 4.7\.
    4. Open the settings file or re-open Visual Studio.
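    As a reference point, the skeleton of a custom rule looks roughly like this; treat it as a hedged sketch against the StyleCop SDK, with the class name and rule name invented for the example (the rule name must also be declared in the matching XML file mentioned above).

    using StyleCop;
    using StyleCop.CSharp;

    // The attribute binds this analyzer to the C# parser.
    [SourceAnalyzer(typeof(CsParser))]
    public class CodingGuidelineRules : SourceAnalyzer
    {
        public override void AnalyzeDocument(CodeDocument document)
        {
            var csDocument = (CsDocument)document;
            if (csDocument.RootElement != null && !csDocument.RootElement.Generated)
            {
                // Visit every element; statement and expression callbacks are skipped.
                csDocument.WalkDocument(new CodeWalkerElementVisitor<object>(VisitElement), null, null);
            }
        }

        private bool VisitElement(CsElement element, CsElement parentElement, object context)
        {
            if (element.ElementType == ElementType.Class)
            {
                // Purely to demonstrate the mechanism, this flags every class.
                // "ExampleRule" must match a rule declared in the XML file.
                AddViolation(element, element.LineNumber, "ExampleRule", element.Declaration.Name);
            }
            return true; // keep walking
        }
    }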

    Read the article

  • ASP.NET Webforms developers and web designers: how to interact?

    - by just_name
    I'm an ASP.NET Webforms developer, and I face some problems when I deal with designers. Designers always complain about the ASP.NET server controls. They'd rather just have an HTML file and create CSS files along with the required images to go with those. Sometimes, if the design phase is done in advance, I get HTML files with related CSS files, but then we face many problems integrating the design with the .aspx files (server controls and Telerik controls, etc.). What I want to ask about is: how do I overcome these problems? The designers prefer PHP and MVC developers because of the problems with .NET server controls. I need to know how to interact with the designers in the correct way. Are there any tools or applications to provide the designers with the rendered HTML of the .aspx pages? By that I mean the page at runtime rather than the .aspx in Visual Studio. They do use Expression Web, but they want the rendered page as HTML as well.
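    On that last point, one low-tech option is to fetch the rendered markup straight from a running dev server and hand the file to the designers. A hedged sketch; the URL and output file name are assumptions.

    using System.IO;
    using System.Net;

    class RenderedPageSnapshot
    {
        static void Main()
        {
            using (var client = new WebClient())
            {
                // URL and file name are assumptions; point at your own dev server.
                string html = client.DownloadString("http://localhost:1234/Default.aspx");
                File.WriteAllText("Default.rendered.html", html);
            }
        }
    }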

    Read the article

< Previous Page | 496 497 498 499 500 501 502 503 504 505 506 507  | Next Page >