Search Results

Search found 1622 results on 65 pages for 'branch'.


  • Git workflow for two tight-knit projects

    - by Pioul
    Two very similar projects: I'm maintaining an online Markdown editor using Git as its RCS (and, incidentally, made available on GitHub). From this web app, I've created a Chrome app: the code is the same, aside from some Chrome technicalities. I care about open sourcing these two projects. Still, the Chrome app's code being the same as the web app's except for some dull details, I at first chose to (1) not publish the Chrome app on GitHub, and (2) not use Git to manage its code. Instead, I would manually review the web app's commits, then replicate the few changes in the Chrome app. … slightly drifting apart: However, I've decided to add a feature to the Chrome app only. So, even though both codebases will remain broadly similar, they'll be diverging enough to make me reconsider the rationale behind my initial decision not to version control or share the Chrome app's source code. Since I'm now willing to use Git to version control both apps, and since I want to share both of them on GitHub, how should I go about it? Should I use two different repositories, or one repo with two long-running branches? What would be the pros and cons of each approach in that context? What would be the easiest/fastest way to regularly "import" commits from the web app to the Chrome app, since the web app is going to remain the master branch? Is cherry-picking the only solution?
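    A minimal sketch of the one-repo, two-long-running-branches option being asked about (the branch name chrome-app is hypothetical): keep the web app on master, give the Chrome app its own branch, and regularly bring changes over by merging or cherry-picking.

        git checkout -b chrome-app master   # long-running branch for the Chrome app
        # ...work and commit on master as usual, then:
        git checkout chrome-app
        git merge master                    # import everything from the web app, or
        git cherry-pick <commit-sha>        # import a single web-app commit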

    Read the article

  • Is there any equivalent of `--depth immediates` in `git`?

    - by ???
    Currently, I'm trying to set up a git front-end to the Subversion repository. My Subversion repository is a single large repository which consists of several related projects:

    svn-root
    |-- project1
    |   |-- branches
    |   |-- tags
    |   `-- trunk
    |-- project2
    |   |-- branches
    |   |-- tags
    |   `-- trunk
    `-- project3
        |-- branches
        |-- tags
        `-- trunk

    Because files sometimes need to move between different projects, I don't want to break the repository up into separate ones. I'm going to use git-svn to set up a git front-end, but I don't see exactly how to map the svn structure to git. The two systems treat branches and tags very differently and I doubt a direct mapping is possible. To simplify the problem, I would just git svn clone the whole root directory and let the branches/tags/trunk directories just sit there. But this will definitely result in too many files in the branches and tags directories. In Subversion, it's easy to just set the depth of checkout to immediates, which will only check out the branch/tag names, without the directory contents, but I don't know if this can be done in git. The git-svn layout has me confused; I hope there's a more elegant solution.
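    For reference, the usual git-svn mapping for a multi-project layout like this is one git repository per project, with the trunk/branches/tags paths given explicitly (the URL below is a placeholder). Because git-svn turns Subversion branches and tags into git refs rather than checked-out directories, the "too many files" concern largely disappears:

        git svn clone http://svn.example.com/repo \
            --trunk=project1/trunk \
            --branches=project1/branches \
            --tags=project1/tags \
            project1-git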

    Read the article

  • Silverlight for Windows Embedded tutorial (step 4)

    - by Valter Minute
    I’m back with my Silverlight for Windows Embedded tutorial. Sorry for the long delay between step 3 and step 4; the MVP summit and some work-related issues prevented me from working on the tutorial during the last weeks. In our first, second and third tutorial steps we implemented some very simple applications, just to understand the basic structure of a Silverlight for Windows Embedded application, learn how to handle events and how to operate on images. In this fourth step our sample application will be slightly more complicated, to introduce two new topics: list boxes and custom controls. We will also learn how to create controls at runtime. I chose to explain those topics together and provide a sample a bit more complicated than usual, just to start to give a feeling of how a “real” Silverlight for Windows Embedded application is organized. As usual we can start using Expression Blend to define our main page. In this case we will have a listbox and a textblock. Here’s the XAML code: <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" x:Class="ListDemo.Page" Width="640" Height="480" x:Name="ListPage" xmlns:ListDemo="clr-namespace:ListDemo">   <Grid x:Name="LayoutRoot" Background="White"> <ListBox Margin="19,57,19,66" x:Name="FileList" SelectionChanged="Filelist_SelectionChanged"/> <TextBlock Height="35" Margin="19,8,19,0" VerticalAlignment="Top" TextWrapping="Wrap" x:Name="CurrentDir" Text="TextBlock" FontSize="20"/> </Grid> </UserControl> In our listbox we will load a list of directories, starting from the filesystem root (there are no drives in Windows CE; the filesystem has a single root named “\”). When the user clicks on an item inside the list, the corresponding directory path will be displayed in the TextBlock object and the subdirectories of the selected branch will be shown inside the list. As you can see, we declared an event handler for the SelectionChanged event of our listbox. We also used a different font size for the TextBlock, to make it more readable. XAML and Expression Blend allow you to customize your UI pretty heavily; experiment with the tools and discover how you can completely change the aspect of your application without changing a single line of code! Inside our ListBox we want to present each directory with a nice icon and its name, just like you are used to seeing them inside the Windows 7 file explorer, for example. To get this we will define a user control. This is a custom object that will behave like “regular” Silverlight for Windows Embedded objects inside our application.
First of all we have to define the look of our custom control, named DirectoryItem, using XAML: <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" x:Class="ListDemo.DirectoryItem" Width="500" Height="80">   <StackPanel x:Name="LayoutRoot" Orientation="Horizontal"> <Canvas Width="31.6667" Height="45.9583" Margin="10,10,10,10" RenderTransformOrigin="0.5,0.5"> <Canvas.RenderTransform> <TransformGroup> <ScaleTransform/> <SkewTransform/> <RotateTransform Angle="-31.27"/> <TranslateTransform/> </TransformGroup> </Canvas.RenderTransform> <Rectangle Width="31.6667" Height="45.8414" Canvas.Left="0" Canvas.Top="0.116943" Stretch="Fill"> <Rectangle.Fill> <LinearGradientBrush StartPoint="0.142631,0.75344" EndPoint="1.01886,0.75344"> <LinearGradientBrush.RelativeTransform> <TransformGroup> <SkewTransform CenterX="0.142631" CenterY="0.75344" AngleX="19.3128" AngleY="0"/> <RotateTransform CenterX="0.142631" CenterY="0.75344" Angle="-35.3436"/> </TransformGroup> </LinearGradientBrush.RelativeTransform> <LinearGradientBrush.GradientStops> <GradientStop Color="#FF7B6802" Offset="0"/> <GradientStop Color="#FFF3D42C" Offset="1"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Rectangle.Fill> </Rectangle> <Rectangle Width="29.8441" Height="43.1517" Canvas.Left="0.569519" Canvas.Top="1.05249" Stretch="Fill"> <Rectangle.Fill> <LinearGradientBrush StartPoint="0.142632,0.753441" EndPoint="1.01886,0.753441"> <LinearGradientBrush.RelativeTransform> <TransformGroup> <SkewTransform CenterX="0.142632" CenterY="0.753441" AngleX="19.3127" AngleY="0"/> <RotateTransform CenterX="0.142632" CenterY="0.753441" Angle="-35.3437"/> </TransformGroup> </LinearGradientBrush.RelativeTransform> <LinearGradientBrush.GradientStops> <GradientStop Color="#FFCDCDCD" Offset="0.0833333"/> <GradientStop Color="#FFFFFFFF" Offset="1"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Rectangle.Fill> </Rectangle> <Rectangle Width="29.8441" Height="43.1517" Canvas.Left="0.455627" Canvas.Top="2.28036" Stretch="Fill"> <Rectangle.Fill> <LinearGradientBrush StartPoint="0.142631,0.75344" EndPoint="1.01886,0.75344"> <LinearGradientBrush.RelativeTransform> <TransformGroup> <SkewTransform CenterX="0.142631" CenterY="0.75344" AngleX="19.3128" AngleY="0"/> <RotateTransform CenterX="0.142631" CenterY="0.75344" Angle="-35.3436"/> </TransformGroup> </LinearGradientBrush.RelativeTransform> <LinearGradientBrush.GradientStops> <GradientStop Color="#FFCDCDCD" Offset="0.0833333"/> <GradientStop Color="#FFFFFFFF" Offset="1"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Rectangle.Fill> </Rectangle> <Rectangle Width="29.8441" Height="43.1517" Canvas.Left="0.455627" Canvas.Top="1.34485" Stretch="Fill"> <Rectangle.Fill> <LinearGradientBrush StartPoint="0.142631,0.75344" EndPoint="1.01886,0.75344"> <LinearGradientBrush.RelativeTransform> <TransformGroup> <SkewTransform CenterX="0.142631" CenterY="0.75344" AngleX="19.3128" AngleY="0"/> <RotateTransform CenterX="0.142631" CenterY="0.75344" Angle="-35.3436"/> </TransformGroup> </LinearGradientBrush.RelativeTransform> <LinearGradientBrush.GradientStops> <GradientStop Color="#FFCDCDCD" Offset="0.0833333"/> <GradientStop Color="#FFFFFFFF" Offset="1"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Rectangle.Fill> </Rectangle> <Rectangle 
Width="26.4269" Height="45.8414" Canvas.Left="0.227798" Canvas.Top="0" Stretch="Fill"> <Rectangle.Fill> <LinearGradientBrush StartPoint="0.142631,0.75344" EndPoint="1.01886,0.75344"> <LinearGradientBrush.RelativeTransform> <TransformGroup> <SkewTransform CenterX="0.142631" CenterY="0.75344" AngleX="19.3127" AngleY="0"/> <RotateTransform CenterX="0.142631" CenterY="0.75344" Angle="-35.3436"/> </TransformGroup> </LinearGradientBrush.RelativeTransform> <LinearGradientBrush.GradientStops> <GradientStop Color="#FF7B6802" Offset="0"/> <GradientStop Color="#FFF3D42C" Offset="1"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Rectangle.Fill> </Rectangle> <Rectangle Width="1.25301" Height="45.8414" Canvas.Left="1.70862" Canvas.Top="0.116943" Stretch="Fill" Fill="#FFEBFF07"/> </Canvas> <TextBlock Height="80" x:Name="Name" Width="448" TextWrapping="Wrap" VerticalAlignment="Center" FontSize="24" Text="Directory"/> </StackPanel> </UserControl> As you can see, this XAML contains many graphic elements. Those elements are used to design the folder icon. The original drawing has been designed in Expression Design and then exported as XAML. In Silverlight for Windows Embedded you can use vector images. This means that your images will look good even when scaled or rotated. In our DirectoryItem custom control we have a TextBlock named Name, that will be used to display….(suspense)…. the directory name (I’m too lazy to invent fancy names for controls, and using “boring” intuitive names will make code more readable, I hope!). Now that we have some XAML code, we may execute XAML2CPP to generate part of the aplication code for us. We should then add references to our XAML2CPP generated resource file and include in our code and add a reference to the XAML runtime library to our sources file (you can follow the instruction of the first tutorial step to do that), To generate the code used in this tutorial you need XAML2CPP ver 1.0.1.0, that is downloadable here: http://geekswithblogs.net/WindowsEmbeddedCookbook/archive/2010/03/08/xaml2cpp-1.0.1.0.aspx We can now create our usual simple Win32 application inside Platform Builder, using the same step described in the first chapter of this tutorial (http://geekswithblogs.net/WindowsEmbeddedCookbook/archive/2009/10/01/silverlight-for-embedded-tutorial.aspx). We can declare a class for our main page, deriving it from the template that XAML2CPP generated for us: class ListPage : public TListPage<ListPage> { ... } We will see the ListPage class code in a short time, but before we will see the code of our DirectoryItem user control. This object will be used to populate our list, one item for each directory. To declare a user control things are a bit more complicated (but also in this case XAML2CPP will write most of the “boilerplate” code for use. To interact with a user control you should declare an interface. An interface defines the functions of a user control that can be called inside the application code. Our custom control is currently quite simple and we just need some member functions to store and retrieve a full pathname inside our control. The control will display just the last part of the path inside the control. An interface is declared as a C++ class that has only abstract virtual members. It should also have an UUID associated with it. UUID means Universal Unique IDentifier and it’s a 128 bit number that will identify our interface without the need of specifying its fully qualified name. 
    UUIDs are used to identify COM interfaces and, as we discovered in chapter one, Silverlight for Windows Embedded is based on COM or, at least, provides a COM-like Application Programming Interface (API). Here’s the declaration of the DirectoryItem interface: class __declspec(novtable,uuid("{D38C66E5-2725-4111-B422-D75B32AA8702}")) IDirectoryItem : public IXRCustomUserControl { public:   virtual HRESULT SetFullPath(BSTR fullpath) = 0; virtual HRESULT GetFullPath(BSTR* retval) = 0; }; The interface is derived from IXRCustomUserControl; this will allow us to add our object to a XAML tree. It declares the two functions needed to set and get the full path, but doesn’t implement them. Implementation will be done inside the control class. The interface only defines the functions of our control class that are accessible from the outside. It’s a sort of “contract” between our control and the applications that will use it. We must support what’s inside the contract and the application code should know nothing else about our own control. To reference our interface we will use the UUID; to make the code more readable we can declare a #define in this way: #define IID_IDirectoryItem __uuidof(IDirectoryItem) Silverlight for Windows Embedded objects (like COM objects) use a reference counting mechanism to handle object destruction. Every time you store a pointer to an object you should call its AddRef function and every time you no longer need that pointer you should call Release. The object keeps an internal counter, incremented for each AddRef and decremented on Release. When the counter reaches 0, the object is destroyed. Managing reference counting in our code can be quite complicated and, since we are lazy (I am, at least!), we will use a great feature of Silverlight for Windows Embedded: smart pointers. A smart pointer can be connected to a Silverlight for Windows Embedded object and manages its reference counting. To declare a smart pointer we must use the XRPtr template: typedef XRPtr<IDirectoryItem> IDirectoryItemPtr; Now that we have defined our interface, it’s time to implement our user control class. XAML2CPP has implemented a class for us, and we have only to derive our class from it, defining the main class and interface of our new custom control: class DirectoryItem : public DirectoryItemUserControlRegister<DirectoryItem,IDirectoryItem> { ... } XAML2CPP has generated some code for us to support the user control; we don’t have to worry too much about that code, since it will be generated (or written by hand, if you like) always in the same way, for every user control. But knowing how this works “under the hood” is still useful to understand the architecture of Silverlight for Windows Embedded. Our base class declaration is a bit more complex than the one we used for a simple page in the previous chapters: template <class A,class B> class DirectoryItemUserControlRegister : public XRCustomUserControlImpl<A,B>,public TDirectoryItem<A,XAML2CPPUserControl> { ... } This class derives from the XAML2CPP generated template class, like the ListPage class, but it uses XAML2CPPUserControl for the implementation of some features. This class shares the same ancestor as XAML2CPPPage (the base class for “regular” XAML pages), XAML2CPPBase; it implements binding of member variables and event handlers but, instead of loading and creating its own XAML tree, it attaches to an existing one. The XAML tree (and UI) of our custom control is created and loaded by the XRCustomUserControlImpl class.
    This class is part of the Silverlight for Windows Embedded framework and implements most of the functions needed to build up a custom control in Silverlight (the guys that developed Silverlight for Windows Embedded seem to care about lazy programmers!). We just have to initialize it, providing our class (DirectoryItem) and interface (IDirectoryItem). Our user control class has also a static member: protected:   static HINSTANCE hInstance; This is used to store the HINSTANCE of the module that contains our user control class. I don’t like this implementation, but I can’t find a better one, so if somebody has good ideas about how to handle the HINSTANCE object, I’ll be happy to hear suggestions! It also implements two static members required by XRCustomUserControlImpl. The first one is used to load the XAML UI of our custom control: static HRESULT GetXamlSource(XRXamlSource* pXamlSource) { pXamlSource->SetResource(hInstance,TEXT("XAML"),IDR_XAML_DirectoryItem); return S_OK; }   It initializes a XRXamlSource object, connecting it to the XAML resource that XAML2CPP has included in our resource script. The other method is used to register our custom control, allowing Silverlight for Windows Embedded to create it when it loads some XAML or when an application creates a new control at runtime (more about this later): static HRESULT Register() { return XRCustomUserControlImpl<A,B>::Register(__uuidof(B), L"DirectoryItem", L"clr-namespace:DirectoryItemNamespace"); } To register our control we should provide its interface UUID, the name of the corresponding element in the XAML tree and its current namespace (namespaces compatible with Silverlight must use the “clr-namespace” prefix). We may also register additional properties for our objects, allowing them to be loaded and saved inside XAML. In this case we have no permanent properties and the Register method will just register our control. An additional static method is implemented to allow easy registration of our custom control inside our application WinMain function: static HRESULT RegisterUserControl(HINSTANCE hInstance) { DirectoryItemUserControlRegister::hInstance=hInstance; return DirectoryItemUserControlRegister<A,B>::Register(); } Now our control is registered and we will be able to create it using the Silverlight for Windows Embedded runtime functions. But we need to bind our members and event handlers to have them available like we are used to doing for other XAML2CPP generated objects. To bind events and members we need to implement the OnLoaded function: virtual HRESULT OnLoaded(__in IXRDependencyObject* pRoot) { HRESULT retcode; IXRApplicationPtr app; if (FAILED(retcode=GetXRApplicationInstance(&app))) return retcode; return ((A*)this)->Init(pRoot,hInstance,app); } This function will call the XAML2CPPUserControl::Init member, which will connect the “root” member with the XAML subtree that has been created for our control and then calls BindObjects and BindEvents to bind members and events to our code.
    Now we can go back to our application code (the code that you’ll have to actually write) to see the contents of our DirectoryItem class: class DirectoryItem : public DirectoryItemUserControlRegister<DirectoryItem,IDirectoryItem> { protected:   WCHAR fullpath[_MAX_PATH+1];   public:   DirectoryItem() { *fullpath=0; }   virtual HRESULT SetFullPath(BSTR fullpath) { wcscpy_s(this->fullpath,fullpath);   WCHAR* p=fullpath;   for(WCHAR*q=wcsstr(p,L"\\");q;p=q+1,q=wcsstr(p,L"\\")) ;   Name->SetText(p); return S_OK; }   virtual HRESULT GetFullPath(BSTR* retval) { *retval=SysAllocString(fullpath); return S_OK; } }; It’s pretty easy and contains a fullpath member (used to store the path of the directory connected with the user control) and the implementation of the two interface members that can be used to set and retrieve the path. The SetFullPath member parses the full path and displays just the last branch directory name inside the “Name” TextBlock object. As you can see, implementing a user control in Silverlight for Windows Embedded is not too complex, and using XAML also for the UI of the control allows us to re-use the same mechanisms that we learnt and used in the previous steps of our tutorial. Now let’s see how the main page is managed by the ListPage class. class ListPage : public TListPage<ListPage> { protected:   // current path TCHAR curpath[_MAX_PATH+1]; It has a member named “curpath” that is used to store the current directory. It’s initialized inside the constructor: ListPage() { *curpath=0; } And its value is displayed inside the “CurrentDir” TextBlock inside the initialization function: virtual HRESULT Init(HINSTANCE hInstance,IXRApplication* app) { HRESULT retcode;   if (FAILED(retcode=TListPage<ListPage>::Init(hInstance,app))) return retcode;   CurrentDir->SetText(L"\\"); return S_OK; } The FillFileList function is used to enumerate subdirectories of the current dir and add entries for each one inside the list box that fills most of the client area of our main page: HRESULT FillFileList() { HRESULT retcode; IXRItemCollectionPtr items; IXRApplicationPtr app;   if (FAILED(retcode=GetXRApplicationInstance(&app))) return retcode; // retrieves the items contained in the listbox if (FAILED(retcode=FileList->GetItems(&items))) return retcode;   // clears the list if (FAILED(retcode=items->Clear())) return retcode;   // enumerates files and directory in the current path WCHAR filemask[_MAX_PATH+1];   wcscpy_s(filemask,curpath); wcscat_s(filemask,L"\\*.*");   WIN32_FIND_DATA finddata; HANDLE findhandle;   findhandle=FindFirstFile(filemask,&finddata);   // the directory is empty?
    if (findhandle==INVALID_HANDLE_VALUE) return S_OK;   do { if (finddata.dwFileAttributes&FILE_ATTRIBUTE_DIRECTORY) { IXRListBoxItemPtr listboxitem;   // add a new item to the listbox if (FAILED(retcode=app->CreateObject(IID_IXRListBoxItem,&listboxitem))) { FindClose(findhandle); return retcode; }   if (FAILED(retcode=items->Add(listboxitem,NULL))) { FindClose(findhandle); return retcode; }   IDirectoryItemPtr directoryitem;   if (FAILED(retcode=app->CreateObject(IID_IDirectoryItem,&directoryitem))) { FindClose(findhandle); return retcode; }   WCHAR fullpath[_MAX_PATH+1];   wcscpy_s(fullpath,curpath); wcscat_s(fullpath,L"\\"); wcscat_s(fullpath,finddata.cFileName);   if (FAILED(retcode=directoryitem->SetFullPath(fullpath))) { FindClose(findhandle); return retcode; }   XAML2CPPXRValue value((IXRDependencyObject*)directoryitem);   if (FAILED(retcode=listboxitem->SetContent(&value))) { FindClose(findhandle); return retcode; } } } while (FindNextFile(findhandle,&finddata));   FindClose(findhandle); return S_OK; } This function retrieves a pointer to the collection of the items contained in the directory listbox. The IXRItemCollection interface is used by listboxes and comboboxes and allows you to clear the list (using Clear(), as our function does at the beginning) and change its contents by adding and removing elements. This function uses the FindFirstFile/FindNextFile functions to enumerate all the objects inside our current directory and for each subdirectory creates an IXRListBoxItem object. You can insert any kind of control inside a list box; you don’t need an IXRListBoxItem, but using it will allow you to handle the selected state of an item, highlighting it inside the list. The function creates a list box item using the CreateObject function of XRApplication. The same function is then used to create an instance of our custom control. The function returns a pointer to the control’s IDirectoryItem interface and we can use it to store the directory full path inside the object and add it as content of the IXRListBoxItem object, adding it to the listbox contents. The listbox generates an event (SelectionChanged) each time the user clicks on one of the items contained in the listbox. We implement an event handler for that event and use it to change our current directory and repopulate the listbox. The current directory full path will be displayed in the TextBlock: HRESULT Filelist_SelectionChanged(IXRDependencyObject* source,XRSelectionChangedEventArgs* args) { HRESULT retcode;   IXRListBoxItemPtr listboxitem;   if (!args->pAddedItem) return S_OK;   if (FAILED(retcode=args->pAddedItem->QueryInterface(IID_IXRListBoxItem,(void**)&listboxitem))) return retcode;   XRValue content; if (FAILED(retcode=listboxitem->GetContent(&content))) return retcode;   if (content.vType!=VTYPE_OBJECT) return E_FAIL;   IDirectoryItemPtr directoryitem;   if (FAILED(retcode=content.pObjectVal->QueryInterface(IID_IDirectoryItem,(void**)&directoryitem))) return retcode;   content.pObjectVal->Release(); content.pObjectVal=NULL;   BSTR fullpath=NULL;   if (FAILED(retcode=directoryitem->GetFullPath(&fullpath))) return retcode;   CurrentDir->SetText(fullpath);   wcscpy_s(curpath,fullpath); FillFileList(); SysFreeString(fullpath);     return S_OK; } }; The function uses the pAddedItem member of the XRSelectionChangedEventArgs object to retrieve the currently selected item, converts it to an IXRListBoxItem interface using QueryInterface, and then retrieves its content (the IDirectoryItem object).
    Using the GetFullPath method we can get the full path of our selected directory and assign it to the curpath member. A call to FillFileList will update the listbox contents, displaying the list of subdirectories of the selected folder. To build our sample we just need to add code to our WinMain function: int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPTSTR lpCmdLine, int nCmdShow) { if (!XamlRuntimeInitialize()) return -1;   HRESULT retcode;   IXRApplicationPtr app; if (FAILED(retcode=GetXRApplicationInstance(&app))) return -1;   if (FAILED(retcode=DirectoryItem::RegisterUserControl(hInstance))) return retcode;   ListPage page;   if (FAILED(page.Init(hInstance,app))) return -1;   page.FillFileList();   UINT exitcode;   if (FAILED(page.GetVisualHost()->StartDialog(&exitcode))) return -1;   return 0; } This code is very similar to the WinMains of our previous samples. The main differences are that we register our custom control (you should do that as soon as you have initialized the XAML runtime) and call FillFileList after the initialization of our ListPage object to load the contents of the root folder of our device inside the listbox. As usual you can download the full sample source code from here: http://cid-9b7b0aefe3514dc5.skydrive.live.com/self.aspx/.Public/ListBoxTest.zip

    Read the article

  • Building Visual Studio Setup Projects with TFS 2010 Team Build

    - by Jakob Ehn
    One of the most common complaints from people starting to use Team Build is that it doesn’t support building Microsoft’s own Setup and Deployment project (*.vdproj). When creating a default build definition that compiles a solution containing a setup project, you’ll get the following warning: The project file "MyProject.vdproj" is not supported by MSBuild and cannot be built.   This is what the problem is all about. MSBuild, which is used for compiling your projects, does not understand the proprietary vdproj format defined by Microsoft quite some time ago. Unfortunately there is no sign that this will change in the near future; in fact the setup projects have barely changed at all since they were introduced. VS 2010 brings no new features or improvements when it comes to the setup projects. VS 2010 does include a limited version of InstallShield which promises to be more MSBuild friendly and with more or less the same features as VS setup projects. I hope to get a closer look at this installer project type soon. But, how do we go about building a Visual Studio setup project and producing an MSI as part of a Team Build process? Well, since only one application known to man understands the vdproj projects, we will have to install a copy of Visual Studio on the build server. Sad but true. After doing this, we use the Visual Studio command line interface (devenv) to perform the build. In this post I will show how to do this by using the InvokeProcess activity directly in a build workflow template. You’ll want to build your setup projects after you have successfully compiled the projects.   Install Visual Studio 2010 on the build server(s)   Open your build process template (remember to branch or copy the xaml file before modifying it!)   Locate the Try to Compile the Project activity   Drop an instance of the InvokeProcess activity from the toolbox onto the designer, after the Run MSBuild for Project activity   Drop an instance of the WriteBuildMessage activity inside the Handle Standard Output section. Set the Importance property to Microsoft.TeamFoundation.Build.Client.BuildMessageImportance.High (NB: This is necessary if you want the output from devenv to show up in the build log when running the build with the default verbosity) Set the Message property to stdOutput   Drop an instance of the WriteBuildError activity into the Handle Error Output section Set the Message property to errOutput   Select the InvokeProcess activity and set the values of the parameters (a plausible example is sketched below)   This will generate the MSI files, but they won’t be copied to the drop location. This is because we are using devenv and not MSBuild, so we have to do this explicitly.   Drop a Sequence activity somewhere after the Copy to Drop location activity.   Create a variable in the Sequence activity of type IEnumerable<String> and call it GeneratedInstallers   Drop a FindMatchingFiles activity in the sequence activity and set its properties   Drop a ForEach<String> activity after the FindMatchingFiles activity. Set the Value property to GeneratedInstallers   Drop an InvokeProcess activity inside the ForEach activity.  FileName: “xcopy.exe” Arguments: String.Format("""{0}"" ""{1}""", item, BuildDetail.DropLocation)   Save the build process template and check it in.   Run the build and verify that the MSIs are built and copied to the drop location.
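    The screenshots showing the InvokeProcess parameter values and the finished workflow did not survive in this copy. A plausible parameter configuration (the devenv path, configuration name and variable name are assumptions, not necessarily the post's exact values) points FileName at devenv.com and passes the solution plus a /Build switch as Arguments:

        FileName:  "C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\devenv.com"
        Arguments: String.Format("""{0}"" /Build ""Release|Any CPU""", localSolutionPath)

    Here localSolutionPath would be a workflow variable holding the local path of the solution (or setup project) file.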
    Note 1: One of the drawbacks of using devenv like this in a team build is that since all the output from the default compilations is placed in the Binaries folder, the outputs are not available when devenv is invoked, which causes the whole solution to rebuild again. In TFS 2008, this was pretty simple to fix by using the CustomizableOutDir property. In TFS 2010, the same feature is not available. Jim Lamb blogged about this recently; have a look at it if you have a problem with this: http://blogs.msdn.com/jimlamb/archive/2010/04/13/customizableoutdir-in-tfs-2010.aspx   Note 2: Although the above solution works, a better approach is to wrap this in a custom activity that you can use in your builds. I will come back to this in a future post.

    Read the article

  • Challenge 19 – An Explanation of a Query

    - by Dave Ballantyne
    I have received a number of requests for an explanation of my winning query of TSQL Challenge 19. This involved traversing a hierarchy of employees and rolling a count of orders from subordinates up to superiors. The first concept I shall address is the hierarchyId, which is constructed within the CTE called cteTree.  cteTree is a recursive cte that will expand the parent-child hierarchy of the personnel in the table @emp.  One useful feature of a recursive cte is that data can be ‘passed’ from the parent to the child rows.  The hierarchyId column is similar to the hierarchyId data type that was introduced in SQL Server 2008 and represents the position of the person within the organisation. Let us start with a simplistic example: Albert manages Bob and Eddie.  Bob manages Carl and Dave. The hierarchyId will represent each person’s position in this relationship in a single field.  In this simple example we could append the userIDs together into a varchar field as detailed below. This will enable us to select a branch of the tree by filtering using Where hierarchyId like ‘1,2%’ to select Bob and all his subordinates.  Naturally, this is not comprehensive enough to provide a full solution, but, as opposed to concatenating the IDs together into a varchar-datatyped column, we can apply the same theory to a varbinary.  We CAST the IDs into a datatype of varbinary(4) (4 is used as 4 bytes of data are used to store an integer) and build a hierarchyId from those.  For example: The important point to bear in mind for later in the query is that the binary data generated is 'byte order comparable', i.e. we can ORDER a dataset with it and the resulting data will be in the order required. Now would probably be a good time to download the example file and, after the cte ‘cteTree’, uncomment the line ‘select * from cteTree’.  Mark this and all prior code and execute.  This will show you how this theory directly relates to the actual challenge data.  The only deviation from the above is that instead of using the ID of an employee, I have used the row_number() ranking function to order each level by LastName, Firstname.  This enables me to order by the hierarchyId in the final result set so that the result set is in the required order. Your output should be something like the below.  Notice also the ‘Level’ column that contains the depth that the employee is at within the tree.  I would encourage you to ‘play’ with the query; change the order in the row_number() or the length of the cast in the hierarchyId to see how that affects the outcome.  The next cte, ‘cteTreeWithOrderCount’, is a join between cteTree and the @ord table, and COUNTs the number of orders per employee.  A LEFT JOIN is employed here to account for the occasion where an employee has made no sales.  Executing a ‘Select * from cteTreeWithOrderCount’ will return the result set as below.  The order here is unimportant, as this is only a staging point of the data and only the final result set in a cte chain needs an Order by clause, unless TOP is utilised. cteExplode joins the above result set to the tally table (Nums) for Level occurrences.  So, if Level is 2 then 2 rows are required.  This is done to expand the dataset to create a new column (PathInc), which is the (n+1) integers contained within the hierarchyId.  For example, with the data for Robert King as given above, the below 3 rows will be returned.
    From this you can see that the PathInc column now contains the values for Andrew Fuller and Steven Buchanan, who are Robert King’s superiors within the tree.  Finally, cteSumUp sums the orders for each person and their subordinates using the PathInc generated above, and the final select does the final simple mathematics and filters to restrict the result set to only the ‘original’ row per employee.
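    A minimal sketch of the recursive CTE technique described above (building a byte-order-comparable varbinary path), using hypothetical table and column names; the actual challenge query used row_number() per level rather than the raw ID:

        WITH cteTree AS (
            SELECT EmpID, MgrID, 1 AS [Level],
                   CAST(CAST(EmpID AS binary(4)) AS varbinary(max)) AS hierarchyId
            FROM   @emp
            WHERE  MgrID IS NULL                                -- root of the tree
            UNION ALL
            SELECT e.EmpID, e.MgrID, t.[Level] + 1,
                   t.hierarchyId + CAST(e.EmpID AS binary(4))   -- append 4 bytes per level
            FROM   @emp e
            JOIN   cteTree t ON e.MgrID = t.EmpID
        )
        SELECT * FROM cteTree ORDER BY hierarchyId;             -- sorts parents before children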

    Read the article

  • LINQ and ordering of the result set

    - by vik20000in
    After filtering and retrieving the records, most of the time (if not always) we have to sort the records in a certain order. The sort order is very important for displaying records or major calculations. In LINQ, the orderby keyword is used for sorting data. With the help of the orderby keyword we can decide on the ordering of the result set that is retrieved after the query.  Below is a simple example of the orderby keyword in LINQ.     string[] words = { "cherry", "apple", "blueberry" };     var sortedWords =         from word in words         orderby word         select word; Here we are ordering the data retrieved based on the string ordering. If required, the order can also be based on any property of the individual items, like the length of the string.     var sortedWords =         from word in words         orderby word.Length         select word; You can also make the order descending or ascending by adding the keyword after the parameter.     var sortedWords =         from word in words         orderby word descending         select word; But the best part of the orderby clause is that instead of just passing a field you can also pass it an instance of any class that implements the IComparer interface. The IComparer interface holds a method Compare that has to be implemented. In that method we can write any logic whatsoever for the comparison. In the below example we are making a string comparison ignoring the case. string[] words = { "aPPLE", "AbAcUs", "bRaNcH", "BlUeBeRrY", "cHeRry"}; var sortedWords = words.OrderBy(a => a, new CaseInsensitiveComparer());  public class CaseInsensitiveComparer : IComparer<string> {     public int Compare(string x, string y)     {         return string.Compare(x, y, StringComparison.OrdinalIgnoreCase);     } }  But while sorting the data, many times we want to provide more than one sort key so that the data is sorted based on more than one condition. This can be achieved by providing the next ordering key after a comma.     var sortedWords =         from word in words         orderby word, word.Length         select word; We can also use the Reverse() method to reverse the full order of the result set.     var sortedWords =         (from word in words          select word).Reverse();  Vikram
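    For completeness, the same multi-key ordering can be written in method syntax with OrderBy/ThenBy (a sketch, not from the original post), here sorting by length first and alphabetically within each length:

        var sortedWords = words
            .OrderBy(word => word.Length)   // primary key: length
            .ThenBy(word => word);          // secondary key: alphabetical order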

    Read the article

  • Get to Know a Candidate (16 of 25): Stewart Alexander – Socialist Party USA

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post about whom I am voting for. Information sourced from Wikipedia. Alexander is an American democratic socialist politician and a resident of California. Alexander was the Peace and Freedom Party candidate for Lieutenant Governor in 2006. He received 43,319 votes, 0.5% of the total. In August 2010, Alexander declared his candidacy for President of the United States with the Socialist Party and Green Party. In January 2011, Alexander also declared his candidacy for the presidential nomination of the Peace and Freedom Party. Stewart Alexis Alexander was born to Stewart Alexander, a brick mason and minister, and Ann E. McClenney, a nurse and housewife.  While in the Air Force Reserve, Alexander worked as a full-time retail clerk at Safeway Stores and then began attending college at California State University, Dominguez Hills. Stewart began working overtime as a stocking clerk with Safeway to support himself through school. During this period he married Freda Alexander, his first wife. They had one son. He was honorably discharged in October 1976 and married for the second time. He left Safeway in 1978 and for a brief period worked as a licensed general contractor. In 1980, he went to work for Lockheed Aircraft but quit the following year.  Returning to Los Angeles, he became involved in several civic organizations, including most notably the NAACP (he became the Labor and Industry Chairman for the Inglewood South Bay Branch of the NAACP). In 1986 he moved back to Los Angeles and hosted a weekly talk show on KTYM Radio until 1989. The show dealt with social issues affecting Los Angeles, such as gangs, drugs, and redevelopment, interviewing officials from all levels of government and community leaders throughout California. He also worked with Delores Daniels of the NAACP on the radio and in the street. The Socialist Party USA (SPUSA) is a multi-tendency democratic-socialist party in the United States. The party states that it is the rightful continuation of and successor to the tradition of the Socialist Party of America, which lasted from 1901 to 1972. The party is officially committed to left-wing democratic socialism. The Socialist Party USA, along with its predecessors, has received varying degrees of support when its candidates have competed against those from the Republican and Democratic parties. Some attribute this to the party having to compete with the financial dominance of the two major parties, as well as the limitations of the United States' legislatively and judicially entrenched two-party system. The Party supports third-party candidates, particularly socialists, and opposes the candidates of the two major parties. Opposing both capitalism and "authoritarian Communism", the Party advocates bringing big business under public ownership and democratic workers' self-management. The party opposes the unaccountable bureaucratic control of Soviet communism. Alexander has ballot access in: CO, FL, NY, OH (write-in access in: IN, TX). Learn more about Stewart Alexander and the Socialist Party USA on Wikipedia.

    Read the article

  • Knowledge Pathways Designer - Recommended Settings

    - by ted.henson
    The General page of the Options dialog box contains the application preferences for Knowledge Pathways Designer. It is recommended that you leave certain settings as they are, unless you have a specific reason for changing them. The following are a few of the settings on the General page with an explanation of the recommended setting. They are in the order they appear on the page: Allow version 2.0 style links: This option should remain disabled unless you were using content that was created using version 2.0 of Knowledge Pathways and you want the same linking functionality that existed in version 2.0. This feature enables you to reuse parts of titles that contain no AUs. However, keep in mind that this type of link is not a true link, but a cross between a copy and a link. To create a 2.0 style link, you drag and drop sections between titles. You can only create 2.0 style links to sections that belong to the Title AU. When creating a version 2.0 style link, your mouse pointer will change to indicate a 2.0 link is being created. Confirm deletion of outline items and Confirm deletion of titles: It is recommended that these options remain enabled to avoid deleting something by accident. Display tracking data loss warning when opening a published title: It is recommended that this option be enabled so you will receive the warning message when you open the development copy of a title, reminding you of the implications of your changes. Copy files when converting a Section to an Assignable Unit: This option should remain enabled unless you have a specific reason for not copying the files. If this is disabled, you will (in effect) lose your content files upon converting because they will not be copied to the new AU directory on the content root. In this case, you would need to use Windows Explorer to copy your files manually. Working with Spelling Options All of the spelling options are enabled by default. Your design team can review these options to determine if you want to make changes, depending upon your specific needs. Understanding Dictionary Options You should leave the dictionary options as they are, unless you have a specific reason for changing them. While you can delete the user (customizable) dictionary, doing so is not recommended. Setting Check In/Check Out Options The ability to check in and check out titles and AUs will impact the efficiency of your design team. Decide what your check in and check out processes are before you start developing titles. The Check In/Check Out page of the Options dialog box contains two options that affect what happens when you open a title using the Open Title dialog box. Both of these options are enabled by default and are described below: Check Out for editing enabled: This option ensures that the Check Out for editing option will be selected when you open the development copy of a title from the Open Title dialog box. If this option is disabled, you must select the Check Out for editing option every time you want to check out a title for editing. Attempt to Check Out for entire branch: When this option is enabled, Designer checks out the selected title and all AUs and sections that are part of that title, provided they are available for check out. If this option is disabled, you will only check out the Title AU and anything that belongs to that Title AU (e.g., sections, questions, etc.), but not other AUs. The Check In/Check Out page of the Options dialog box also contains options that control what happens when you close a title.
You can choose one option in the Check In when Closing a Title area. The option selected is a matter of preference and you should determine which option is most appropriate for your design team.

    Read the article

  • C# Item system design approach: should I use abstract classes, interfaces or virtuals?

    - by vexe
    I'm working on a Resident Evil 1/2/3/0/Remake type of game. Currently I've done a big part of the inventory system (here's a link if you wanna see my inventory; pretty outdated, added a lot of features and made a lot of enhancements). Now I'm thinking about how to approach the items system. If you've played any Resident Evil game or any of its likes you should be familiar with what I'm trying to achieve. Here's a very simple category I made for the items: So you have different items, with different operations you could perform on them. There are usable items that you could use, like for example herbs and first aid kits that 'using' them would affect your health, keys to unlock doors, and equipable items that you could 'equip' like weapons. Also, you can 'combine' two items together to get a new one, like for example mixing a green and red herb would give you a new type of herb, or combining a lighter with a paper would give you a burnt paper, or ammo with a gun would reload the gun or something, etc. You know the usual RE drill. Not all items are 'transformable', in that, for example: lighter + paper = burnt paper (it's the paper that 'transforms' to burnt paper and not the lighter; the lighter is not transformable, it will remain as it is) green herb + red herb = newHerb1 (both herbs will vanish and transform to this new type of herb) ammo + gun = reload gun (ammo state will remain as it is, it won't change but it will just decrease; nothing will happen to the gun, it just gets reloaded) Also a key note to remember is that you can't just combine items randomly; each item has 'mating' item(s). So to sum up: different items, and different operations on them. The question is, how to approach this, design-wise? I've been learning about interfaces, but it just doesn't quite get into my head, I mean, why not just use classes with the good old inheritance? I know the technical details of interfaces and that the cool thing about them is that they don't require an inheritance chain, but I just can't see how to use them properly, that is, if they were the right thing to use here. So should I go with just classes and inheritance, just like in the tree I showed you? Or should I think more about how to use interfaces (IUsable, IEquipable, ITransformable) - and why not just use classes UsableItem, EquipableItem, TransformableItem? I want something that won't give me headaches in the long run, something resilient/flexible to future changes. I'm OK using classes, but I smell something better here. The way I'm thinking is to possibly use both inheritance and interfaces, so that you have a branch like this: item - equipable - weapon. But then again, the weapon has methods like 'reload', 'examine', 'equip' and sometimes 'combine', so I'm thinking of making weapon implement ICombinable?... Not all items get used the same way: using herbs will increase your health, using a key will open a door, so IUsable maybe? Should I use a big database (XML for example) for all the items, item names, mates, nRowsReq, nColsReq, etc? Thanks so much for your answers in advance; note that demo 3 is coming after I'm done with items :D
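    One common shape for the mixed approach the question is circling around: a plain base class for shared state, with small capability interfaces layered on top. This is only a sketch under assumed names (Player, Herb and HealAmount are invented for illustration):

        public class Player { public int Health; }

        public abstract class Item
        {
            public string Name { get; protected set; }
        }

        public interface IUsable     { void Use(Player player); }
        public interface IEquipable  { void Equip(Player player); }
        public interface ICombinable
        {
            bool CanCombineWith(Item other);   // encodes the item's 'mates'
            Item CombineWith(Item other);      // the result of the combination
        }

        // A herb is usable and combinable, but not equipable.
        public class Herb : Item, IUsable, ICombinable
        {
            public int HealAmount { get; set; }
            public void Use(Player player) { player.Health += HealAmount; }
            public bool CanCombineWith(Item other) { return other is Herb; }
            public Item CombineWith(Item other) { return new Herb { HealAmount = HealAmount * 2 }; }
        }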

    Read the article

  • .Net Reflector 6.5 EAP now available

    - by CliveT
    With the release of CLR 4 being so close, we’ve been working hard on getting the new C# and VB language features implemented inside Reflector. The work isn’t complete yet, but we have some of the features working. Most importantly, there are going to be changes to the Reflector object model, and we thought it would be useful for people to see the changes and have an opportunity to comment on them. Before going any further, we should tell you what the EAP contains that’s different from the released version. A number of bugs have been fixed, mainly bugs that were raised via the forum. This is slightly offset by the fact that this EAP hasn’t had a whole lot of testing and there may have been new bugs introduced during the development work we’ve been doing. The C# language writer has been changed to display in and out co- and contra-variance markers on interfaces and delegates, and to display default values for optional parameters in method definitions. We also concisely display values passed by reference into COM calls. However, we do not change callsites to display calls using named parameters; this looks like hard work to get right. The forthcoming version of the C# language introduces dynamic types and dynamic calls. The new version of Reflector should display a dynamic call rather than the generated C#: dynamic target = new MyTestObject(); target.Hello("Mum"); We have a few bugs in this area where we are not casting to dynamic when necessary. These have been fixed on a branch and should make their way into the next EAP. To support the dynamic features, we’ve added the types IDynamicMethodReferenceExpression, IDynamicPropertyIndexerExpression, and IDynamicPropertyReferenceExpression to the object model. These types, based on the versions without “Dynamic” in the name, reflect the fact that we don’t have full information about the method that is going to be called, but only have its name (as a string). These interfaces are going to change – in an internal version, they have been extended to include information about which parameter positions use runtime types and which use compile time types. There’s also an interface, IDynamicVariableDeclaration, that can be used to determine if a particular variable is used at dynamic call sites as a target. A couple of these language changes have also been added to the Visual Basic language writer. The new features are exposed only when the optimization level is set to .NET 4. When the level is set this high, the other standard language writers will simply display a message to say that they do not handle such an optimization level. Reflector Pro now has 4.0 as an optional compilation target and we have done some work to get the pdb generation right for these new features. The EAP version of Reflector no longer installs the add-in on startup. The first time you run the EAP, it displays the integration options dialog. You can use the checkboxes to select the versions of Visual Studio into which you want to install the EAP version. Note that you can only have one version of Reflector Pro installed in Visual Studio; if you install into a Visual Studio that has another version installed, the previous version will be removed. Please try it out and send your feedback to the EAP forum.

    Read the article

  • Visual Studio 2010 Best Practices

    - by Etienne Tremblay
    I’d like to thank Packt for providing me with a review version of the Visual Studio 2010 Best Practices eBook. In fairness, I also know the author, Peter, having seen him speak at DevTeach on many occasions.  I started by looking at the table of contents to see what this book was about; knowing that “best practices” is a real misnomer, I wanted to see what they were.  I really like the fact that he starts the book by saying they are not really best practices but rather recommended practices.  As a Team Foundation Server user I found that chapter 2 was more for the open source crowd and I really skimmed it.  The portion on Branching was well documented, although I’m not a fan of the testing branch myself, but the rest was right on. The section on the merge remote changes (bring the outside to you) paradigm is really important and was touched on. Chapter 3 has good solid practices on low-level constructs like generics and exceptions. Chapter 4 dives into architectural practices like decoupling, distributed architecture and data-based architecture.  DTOs and ORMs are touched on briefly, as is NoSQL. Chapter 5 is about deployment and is really a great primer on all the “packaging” technologies like Visual Studio Setup and Deployment (deprecated in 2012), ClickOnce and WiX, the major player outside of commercial solutions.  There is a nice section on how to move from VSSD to WiX; this is going to be important in the coming years, since VS 2012 doesn’t support VSSD. In chapter 6 we dive into automated testing practices, including test coverage, mocking, TDD, SpecDD and Continuous Testing.  Peter covers all those concepts really nicely, albeit succinctly. For a book on recommended practices I find this really good. I really enjoyed chapter 7, which gave me a lot of great tips to enhance my Visual Studio “experience”.  Tips on organizing projects were good.  Also, even though I knew about configurations, I like that he put that in there so you can move all your settings to another machine; a lot of people don’t know about that. Quick find and Resharper are also briefly covered.  He touches on macros (deprecated in 2012).  Finally he touches on Continuous Integration, a very important concept in today’s ALM landscape. Chapter 8 is all about Parallelization: threads, Async, division of labor, reactive extensions.  All those concepts are touched on and, again, generalized approaches to those modern problems are given.       Chapter 9 goes into distributed apps, the most used and accepted practice in the industry for .NET projects. The chapter tackles concepts like Scalability, Messaging and Cloud (the flavor of the month in distributed apps, although I think this one will stick ;-)).  He also looks at protocols (TCP/UDP) and how to debug distributed apps.  He touches on logging and health monitoring. Chapter 10 tackles recommended practices for web services, starting with implementing WCF services, which goes into all sorts of goodness like how to host in IIS or self-host, how to manually test WCF services, and a section on authentication and authorization.  ASP.NET Web services are also touched on in that chapter. All in all a good read: nice tips and accepted practices.  I like the conciseness of the subjects; Peter touches on a lot of things in this book and uses a lot of current technology flavors to explain the concepts.   Cheers, ET

    Read the article

  • Create a Remote Git Repository from an Existing XCode Repository

    - by codeWithoutFear
    Introduction Distributed version control systems (VCS’s), like Git, provide a rich set of features for managing source code.  Many development tools, including XCode, provide built-in support for various VCS’s.  These tools provide simple configuration with limited customization to get you up and running quickly while still providing the safety net of basic version control. I hate losing (and re-doing) work.  I have OCD when it comes to saving and versioning source code.  Save early, save often, and commit to the VCS often.  I also hate merging code.  Smaller and more frequent commits enable me to minimize merge time and effort as well. The work flow I prefer even for personal exploratory projects is: Make small local changes to the codebase to create an incrementally improved (and working) system. Commit these changes to the local repository.  Local repositories are quick to access, function even while offline, and provides the confidence to continue making bold changes to the system.  After all, I can easily recover to a recent working state. Repeat 1 & 2 until the codebase contains “significant” functionality and I have connectivity to the remote repository. Push the accumulated changes to the remote repository.  The smaller the change set, the less likely extensive merging will be required.  Smaller is better, IMHO. The remote repository typically has a greater degree of fault tolerance and active management dedicated to it.  This can be as simple as a network share that is backed up nightly or as complex as dedicated hardware with specialized server-side processing and significant administrative monitoring. XCode’s out-of-the-box Git integration enables steps 1 and 2 above.  Time Machine backups of the local repository add an additional degree of fault tolerance, but do not support collaboration or take advantage of managed infrastructure such as on-premises or cloud-based storage. Creating a Remote Repository These are the steps I use to enable the full workflow identified above.  For simplicity the “remote” repository is created on the local file system.  This location could easily be on a mounted network volume. Create a Test Project My project is called HelloGit and is located at /Users/Don/Dev/HelloGit.  Be sure to commit all outstanding changes.  XCode always leaves a single changed file for me after the project is created and the initial commit is submitted. Clone the Local Repository We want to clone the XCode-created Git repository to the location where the remote repository will reside.  In this case it will be /Users/Don/Dev/RemoteHelloGit. Open the Terminal application. Clone the local repository to the remote repository location: git clone /Users/Don/Dev/HelloGit /Users/Don/Dev/RemoteHelloGit Convert the Remote Repository to a Bare Repository The remote repository only needs to contain the Git database.  It does not need a checked out branch or local files. Go to the remote repository folder: cd /Users/Don/Dev/RemoteHelloGit Indicate the repository is “bare”: git config --bool core.bare true Remove files, leaving the .git folder: rm -R * Remove the “origin” remote: git remote rm origin Configure the Local Repository The local repository should reference the remote repository.  The remote name “origin” is used by convention to indicate the originating repository.  This is set automatically when a repository is cloned.  We will use the “origin” name here to reflect that relationship. 
Go to the local repository folder: cd /Users/Don/Dev/HelloGit Add the remote: git remote add origin /Users/Don/Dev/RemoteHelloGit Test Connectivity Any changes made to the local Git repository can be pushed to the remote repository subject to the merging rules Git enforces. Create a new local file: date > date.txt Add the new file to the local index: git add date.txt Commit the change to the local repository: git commit -m "New file: date.txt" Push the change to the remote repository: git push origin master Now you can save, commit, and push/pull to your OCD heart’s content! Code without fear! --Don
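For convenience, here is the whole session from the steps above collected into one transcript (same example paths as the article; substitute your own project locations):
git clone /Users/Don/Dev/HelloGit /Users/Don/Dev/RemoteHelloGit    # clone the local repo to the "remote" location
cd /Users/Don/Dev/RemoteHelloGit
git config --bool core.bare true    # mark the clone as a bare repository
rm -R *    # remove the working files, leaving the .git folder
git remote rm origin    # drop the auto-created remote
cd /Users/Don/Dev/HelloGit
git remote add origin /Users/Don/Dev/RemoteHelloGit    # point the local repo at the new remote
date > date.txt    # create a test file
git add date.txt
git commit -m "New file: date.txt"
git push origin master    # push the change to the new remote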

    Read the article

  • How do you formulate the Domain Model in Domain Driven Design properly (Bounded Contexts, Domains)?

    - by lko
    Say you have a few applications which deal with a few different Core Domains. The examples are made up and it's hard to put a real example with meaningful data together (concisely). In Domain Driven Design (DDD) when you start looking at Bounded Contexts and Domains/Sub Domains, it says that a Bounded Context is a "phase" in a lifecycle. An example of Context here would be within an ecommerce system. Although you could model this as a single system, it would also warrant splitting into separate Contexts. Each of these areas within the application have their own Ubiquitous Language, their own Model, and a way to talk to other Bounded Contexts to obtain the information they need. The Core, Sub, and Generic Domains are the area of expertise and can be numerous in complex applications. Say there is a long process dealing with an Entity for example a Book in a core domain. Now looking at the Bounded Contexts there can be a number of phases in the books life-cycle. Say outline, creation, correction, publish, sale phases. Now imagine a second core domain, perhaps a store domain. The publisher has its own branch of stores to sell books. The store can have a number of Bounded Contexts (life-cycle phases) for example a "Stock" or "Inventory" context. In the first domain there is probably a Book database table with basically just an ID to track the different book Entities in the different life-cycles. Now suppose you have 10+ supporting domains e.g. Users, Catalogs, Inventory, .. (hard to think of relevant examples). For example a DomainModel for the Book Outline phase, the Creation phase, Correction phase, Publish phase, Sale phase. Then for the Store core domain it probably has a number of life-cycle phases. public class BookId : Entity { public long Id { get; set; } } In the creation phase (Bounded Context) the book could be a simple class. public class Book : BookId { public string Title { get; set; } public List<string> Chapters { get; set; } //... } Whereas in the publish phase (Bounded Context) it would have all the text, release date etc. public class Book : BookId { public DateTime ReleaseDate { get; set; } //... } The immediate benefit I can see in separating by "life-cycle phase" is that it's a great way to separate business logic so there aren't mammoth all-encompassing Entities nor Domain Services. A problem I have is figuring out how to concretely define the rules to the physical layout of the Domain Model. A. Does the Domain Model get "modeled" so there are as many bounded contexts (separate projects etc.) as there are life-cycle phases across the core domains in a complex application? Edit: Answer to A. Yes, according to the answer by Alexey Zimarev there should be an entire "Domain" for each bounded context. B. Is the Domain Model typically arranged by Bounded Contexts (or Domains, or both)? Edit: Answer to B. Each Bounded Context should have its own complete "Domain" (Service/Entities/VO's/Repositories) C. Does it mean there can easily be 10's of "segregated" Domain Models and multiple projects can use it (the Entities/Value Objects)? Edit: Answer to C. There is a complete "Domain" for each Bounded Context and the Domain Model (Entity/VO layer/project) isn't "used" by the other Bounded Contexts directly, only via chosen paths (i.e. via Domain Events). The part that I am trying to figure out is how the Domain Model is actually implemented once you start to figure out your Bounded Contexts and Core/Sub Domains, particularly in complex applications. 
The goal is to establish the definitions which can help to separate Entities between the Bounded Contexts and Domains.

    Read the article

  • Rotation of viewplatform in Java3D

    - by user29163
I have just started with Java3D programming. I thought I had built up some basic intuition about how the scene graph works, but something that should work, does not work. I made a simple program for rotating a pyramid around the y-axis. This was done just by adding a RotationInterpolator R to the TransformGroup above the pyramid. Then I thought hey, can I now remove the RotationInterpolator from this TransformGroup, then add it to the TransformGroup above my ViewPlatform leaf. This should work if I have understood how things work. Adding the RotationInterpolator to this TransformGroup, should make the children of this TransformGroup rotate, and the ViewingPlatform is a child of the TransformGroup. Any ideas on where my reasoning is flawed? Here is the code for setting up the universe, and the view branchgroup. import java.awt.*; import java.awt.event.*; import javax.media.j3d.*; import javax.vecmath.*; public class UniverseBuilder { // User-specified canvas Canvas3D canvas; // Scene graph elements to which the user may want access VirtualUniverse universe; Locale locale; TransformGroup vpTrans; View view; public UniverseBuilder(Canvas3D c) { this.canvas = c; // Establish a virtual universe that has a single // hi-res Locale universe = new VirtualUniverse(); locale = new Locale(universe); // Create a PhysicalBody and PhysicalEnvironment object PhysicalBody body = new PhysicalBody(); PhysicalEnvironment environment = new PhysicalEnvironment(); // Create a View and attach the Canvas3D and the physical // body and environment to the view. view = new View(); view.addCanvas3D(c); view.setPhysicalBody(body); view.setPhysicalEnvironment(environment); // Create a BranchGroup node for the view platform BranchGroup vpRoot = new BranchGroup(); // Create a ViewPlatform object, and its associated // TransformGroup object, and attach it to the root of the // subgraph. Attach the view to the view platform. Transform3D t = new Transform3D(); Transform3D s = new Transform3D(); t.set(new Vector3f(0.0f, 0.0f, 10.0f)); t.rotX(-Math.PI/4); s.set(new Vector3f(0.0f, 0.0f, 10.0f)); // change values here to alter the viewing position t.mul(s); ViewPlatform vp = new ViewPlatform(); vpTrans = new TransformGroup(t); vpTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE); // Rotator stuff Transform3D yAxis = new Transform3D(); //yAxis.rotY(Math.PI/2); Alpha rotationAlpha = new Alpha( -1, Alpha.INCREASING_ENABLE, 0, 0,4000, 0, 0, 0, 0, 0); RotationInterpolator rotator = new RotationInterpolator( rotationAlpha, vpTrans, yAxis, 0.0f, (float) Math.PI*2.0f); RotationInterpolator rotator2 = new RotationInterpolator( rotationAlpha, vpTrans); BoundingSphere bounds = new BoundingSphere(new Point3d(0.0,0.0,0.0), 1000.0); rotator.setSchedulingBounds(bounds); vpTrans.addChild(rotator); vpTrans.addChild(vp); vpRoot.addChild(vpTrans); view.attachViewPlatform(vp); // Attach the branch graph to the universe, via the // Locale. The scene graph is now live! locale.addBranchGraph(vpRoot); } public void addBranchGraph(BranchGroup bg) { locale.addBranchGraph(bg); } }

    Read the article

  • What common interface would be appropriate for these game object classes?

    - by Jefffrey
    Question A component based system's goal is to solve the problems that derives from inheritance: for example the fact that some parts of the code (that are called components) are reused by very different classes that, hypothetically, would lie in a very different branch of the inheritance tree. That's a very nice concept, but I've found out that CBS is often hard to accomplish without using ugly hacks. Implementations of this system are often far from clean. But I don't want to discuss this any further. My question is: how can I solve the same problems a CBS try to solve with a very clean interface? (possibly with examples, there are a lot of abstract talks about the "perfect" design already). Context Here's an example I was going for before realizing I was just reinventing inheritance again: class Human { public: Position position; Movement movement; Sprite sprite; // other human specific components }; class Zombie { Position position; Movement movement; Sprite sprite; // other zombie specific components }; After writing that I realized I needed an interface, otherwise I would have needed N containers for N different types of objects (or to use boost::variant to gather them all together). So I've thought of polymorphism (move what systems do in a CBS design into class specific functions): class Entity { public: virtual void on_event(Event) {} // not pure virtual on purpose virtual void on_update(World) {} virtual void on_draw(Window) {} }; class Human : public Entity { private: Position position; Movement movement; Sprite sprite; public: virtual void on_event(Event) { ... } virtual void on_update(World) { ... } virtual void on_draw(Window) { ... } }; class Zombie : public Entity { private: Position position; Movement movement; Sprite sprite; public: virtual void on_event(Event) { ... } virtual void on_update(World) { ... } virtual void on_draw(Window) { ... } }; Which was nice, except for the fact that now the outside world would not even be able to know where a Human is positioned (it does not have access to its position member). That would be useful to track the player position for collision detection or if on_update the Zombie would want to track down its nearest human to move towards him. So I added const Position& get_position() const; to both the Zombie and Human classes. And then I realized that both functionality were shared, so it should have gone to the common base class: Entity. Do you notice anything? Yes, with that methodology I would have a god Entity class full of common functionality (which is the thing I was trying to avoid in the first place). Meaning of "hacks" in the implementation I'm referring to I'm talking about the implementations that defines Entities as simple IDs to which components are dynamically attached. Their implementation can vary from C-stylish: int last_id; Position* positions[MAX_ENTITIES]; Movement* movements[MAX_ENTITIES]; Where positions[i], movements[i], component[i], ... make up the entity. Or to more C++-style: int last_id; std::map<int, Position> positions; std::map<int, Movement> movements; From which systems can detect if an entity/id can have attached components.

    Read the article

  • College Ratings via the Federal Government

    - by user9147039
A few weeks back you might remember news about a higher education rating system proposal from the Obama administration. As I've discussed previously, political and stakeholder pressures to improve outcomes and increase transparency are stronger than ever before. The executive branch proposal is intended to make progress in this area. Quoting from the proposal itself, “The ratings will be based upon such measures as: Access, such as percentage of students receiving Pell grants; Affordability, such as average tuition, scholarships, and loan debt; and Outcomes, such as graduation and transfer rates, graduate earnings, and advanced degrees of college graduates.” This is going to be quite complex, to say the least. Most notably, higher ed is not monolithic. From community and other 2-year colleges, to small private 4-year schools, to professional schools, to large public research institutions…the many walks of higher ed life are, well, many. Designing a ratings system that doesn't wind up with lots of unintended consequences and collateral damage will be difficult. At best you would end up potentially tarnishing the reputation of certain institutions that were actually performing well against the metrics and outcome measures that make sense in their "context" of education. At worst you could spend a lot of time and resources designing a system that would lose credibility with its "customers". A lot of institutions I work with already have in place systems like the one described above. They are tracking completion rates, completion timeframes, transfers to other institutions, job placement, and salary information. As I talk to these institutions there are several constants worth noting: • Deciding on which metrics to measure is complicated. While employment and salary data are relatively easy to track, qualitative measures are more difficult. How do you quantify the benefit to someone who studies in a field that may not compensate him or her as well as another field, but that provides huge personal fulfillment and reward? • The data is available, but the systems to transform the data into actual information that can be used in meaningful ways are not. Too often in higher ed, information is siloed. As such, much of the data that needs to be part of a comprehensive system sits in multiple organizations, oftentimes outside the reach of core IT. • Politics and culture are big barriers. One of the areas that my team and I spend a lot of time talking about with higher ed institutions all over the world is the imperative to optimize for student success. This, like the tracking of students’ achievement after graduation, requires a level of organizational capacity that does not currently exist. The primary barrier is the culture of "data islands" in higher ed, and the need for leadership to drive out the divisions between departments, schools, colleges, etc. and institute academy-wide analytics and data stewardship initiatives that will enable student success. • Data quality is a very big issue. So many disparate systems exist (some on premises, some "in the cloud") that keep data about "persons" using different means to identify them. Establishing a single source of truth about an individual and his or her data is difficult without some type of data quality policy and tools. Good tools actually exist but are seldom leveraged. Don't misunderstand - I think it's a great idea to drive additional transparency and accountability into the system of higher education.
And not just at home, but globally. Students and parents need access to key data to make informed, responsible choices. The tools exist to not only enable this kind of information to be shared but to capture the very metrics stakeholders care most about and in a way that makes sense in the context of a given institution's "place" in the overall higher ed panoply.

    Read the article

  • How can I work efficiently on a desktop sharing workflow?

    - by OSdave
I am a freelance Magento developer, based in Spain. One of my clients is a Germany-based web development company, and they're asking me for something I think is impossible. OK, maybe not impossible, but definitely not a preferred way of doing things. One of their clients has a Magento Enterprise installation, which is the paid (and I think proprietary) version of Magento. Their client has forbidden them to download the files from his server. My client is now asking me to study one particular module of the application in order to interact with it from a custom module I'll have to develop. As they have read-only SSH access to their client's server, they came up with this solution: set up a desktop/screen sharing session between one of their developers' stations and mine, alongside a Skype conversation. Their idea is that I'll say to the developer: show me file foo.php. The developer will then open this foo.php file in his IDE. I'll then have to ask him to show me the bar method, the parent class, etc... Remember that it's a read-only session, so forget about putting a Zend_Debug::log() anywhere, and don't even think about an Xdebug breakpoint (they don't use any kind of debugger, sic). Their client has also forbidden them to use any version control system... My first reaction when they explained this to me was (and I actually did say it out loud to them): well, find another client. But they took it as a joke from me. I understand that from a business point of view rejecting a client is not good practice, but I think the conditions of this assignment make it impossible to complete. At least according to my workflow. I mean, the way I work on or learn a new framework/program is: download all the files and a copy of the DB to my PC; create a git repository and a branch; run the application locally; use breakpoints; use Zend_Debug::log(); write the code and tests; commit to the git repo; upload to the server (test/staging first if there is one, production if not). I have agreed to try the desktop sharing session, although I think it will be a waste of time. On one hand I don't mind, they pay me for that time, but I know myself and I don't like the sensation of losing my time. On the other hand, I have other clients for whom I can work according to my workflow. I am about to tell them that I cannot (don't want to) do it. Well, I'll first try this desktop sharing session: maybe I'm wrong and it can actually work. But I like to consider myself a professional, and I know that I don't know everything. So I try to keep an open mind and I am always willing to learn new stuff. So my questions are: Can this desktop-sharing workflow work? What should be done in order to get the most out of it? Taking into account all the obstacles (geographic locations, no local copy, no Git), is there another way for me to work on that project?

    Read the article

  • PivotCache problem using ADO recordset in Excel

    - by bbenton
I'm having a problem with runtime error 1004 at the last line. I'm bringing an Access query into Excel 2007. I know the recordset is OK, as I can see the fields and data. I'm not sure whether the PivotCache was created in the Set ptCache line. I see the application, but the index is 0. Code is below... Private Sub cmdPivotTables_Click() Dim rs As ADODB.Recordset Dim i As Integer Dim appExcel As Excel.Application Dim wkbTo As Excel.Workbook Dim wksTo As Excel.Worksheet Dim str As String Dim strSQL As String Dim rng As Excel.Range Dim rs As DAO.Recordset Dim db As DAO.Database Dim ptCache As Excel.PivotCache Set db = CurrentDb() 'to handle case where excel is not open On Error GoTo errhandler: Set appExcel = GetObject(, "Excel.Application") 'returns to default excel error handling On Error GoTo 0 appExcel.Visible = True str = FilePathReports & "Reports SCU\SCCUExcelReports.xlsx" 'tests if the workbook is open (using the WorkbookIsOpen function) If WorkbookIsOpen("SCCUExcelReports.xlsx", appExcel) Then Set wkbTo = appExcel.Workbooks("SCCUExcelReports.xlsx") wkbTo.Save 'To ensure correct Ratios&Charts is used wkbTo.Close End If Set wkbTo = GetObject(str) wkbTo.Application.Visible = True wkbTo.Parent.Windows("SCCUExcelReports.xlsx").Visible = True Set rs = New ADODB.Recordset strSQL = "SELECT viewBalanceSheetType.AccountTypeCode AS Type, viewBalanceSheetType.AccountGroupName AS AccountGroup, " _ & "viewBalanceSheetType.AccountSubGroupName As SubGroup, qryAmountIncludingAdjustment.BranchCode AS Branch, " _ & "viewBalanceSheetType.AccountNumber, viewBalanceSheetType.AccountName, " _ & "qryAmountIncludingAdjustment.Amount, qryAmountIncludingAdjustment.MonthEndDate " _ & "FROM viewBalanceSheetType INNER JOIN qryAmountIncludingAdjustment ON " _ & "viewBalanceSheetType.AccountID = qryAmountIncludingAdjustment.AccountID " _ & "WHERE (qryAmountIncludingAdjustment.MonthEndDate = GetCurrent()) " _ & "ORDER BY viewBalanceSheetType.AccountTypeSortOrder, viewBalanceSheetType.AccountGroupSortOrder, " _ & "viewBalanceSheetType.AccountNumber;" rs.Open strSQL, CurrentProject.Connection, adOpenDynamic, adLockOptimistic ' Set rs = db.OpenRecordset("qryExcelReportsTrialBalancePT", dbOpenForwardOnly) '********** problem here Set ptCache = wkbTo.PivotCaches.Create(SourceType:=XlPivotTableSourceType.xlExternal) Set wkbTo.PivotCaches("ptCache").Recordset = rs

    Read the article

  • Using git subtree to clone a subdirectory of a project with versioning history then merge it back af

    - by D W
I am a graduate student with many scripts, bibliography data in BibTeX, a thesis draft in LaTeX, presentations in OpenOffice, posters in Scribus, and figures and result data. I would like to put everything in one project under version control. Then, when I need to work on a portion such as the bibliography data, I would like to check that subdirectory out, modify it as necessary, and merge it back. I would like the ability to check out one version to my home computer and a different one to my work computer, make changes to each independently, and eventually merge them back. I would also like to be able to check out a piece of code from this big project and import it with versioning into a separate project. If I make changes, I'd like to be able to merge them back to the original project. Based on my understanding, git subtree can do this. http://github.com/apenwarr/git-subtree There is an example that is along the lines of what I'm trying to do at: http://psionides.jogger.pl/2010/02/04/sharing-code-between-projects-with-git-subtree/ This code is from that site: git clone git://git2.kernel.org/pub/scm/git/git.git newtree=$(git subtree split --prefix=gitweb --annotate='(split) ' \ 0a8f4f0^.. --onto=1130ef3 --rejoin) git branch latest_gitweb $newtree gitk latest_gitweb Say the trunk of my project contained the directories: (bib bin cfg data fig src todo). How would I use git-subtree to split off the bib (bibliography) directory with versioning? When I use git-subtree split --prefix=bib I get 884842f6f4e9896e2e4e9402ee0ef762cd617257 as output, but I don't know where to go from there.
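One way forward from that point (a sketch, not from the question itself; the branch and repository names here are hypothetical): git subtree split prints the head commit of the newly synthesized history, so you can create a branch from that ID, or have split create it directly with -b, and then push it into a fresh repository:
git subtree split --prefix=bib -b bib-only    # rewrite bib/ and its history onto a new branch
git init --bare /path/to/bib-repo.git    # a new standalone repository for the bibliography
git push /path/to/bib-repo.git bib-only:master    # publish the split-off history
git subtree merge --prefix=bib bib-only    # later: fold changes from the split back into bib/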

    Read the article

  • git push merge error, but git pull is already up-to-date. Tried reclone, same problem.

    - by Jasie
I do: git commit . git push error: Entry 'file.php' not uptodate. Cannot merge. Then I do git pull Already up-to-date. What do I do? I just want to get the latest version from the remote copy, and overwrite anything on my local copy. Edit: I tried everything. I deleted my local repo, and git clone ssh://[email protected]/directory ... Checking out files: 100%, done. git status On branch master nothing to commit (working directory clean) All looks good, right? Pull just in case. git pull Already up-to-date. I make a one line change in a file to see if I can push it. git commit . [master 1e18af1] Rando change 1 files changed, 2 insertions(+), 0 deletions(-) git push Counting objects: 13, done. Delta compression using up to 2 threads. Compressing objects: 100% (6/6), done. Writing objects: 100% (7/7), 646 bytes, done. Total 7 (delta 3), reused 0 (delta 0) From /directory d6d61aa..1e18af1 master -> origin/master error: Entry 'someotherfile.php' not uptodate. Cannot merge. Updating b8f9a54..1e18af1 To ssh://[email protected]/directory d6d61aa..1e18af1 master -> master I have no idea what's going on! How can I commit/pull again normally? Thanks very much!
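For the stated goal, taking the remote copy wholesale and overwriting the local one, a common sequence is the sketch below; note that it discards local commits and any uncommitted changes:
git fetch origin    # update the remote-tracking branches without merging
git reset --hard origin/master    # make the working copy match the remote master exactly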

    Read the article

  • How to build AndEngine in Android Studio?

    - by marcu
    I wanted to build AndEngine and andEnginePhysicsBox2DExtension from Anchor Center branch, but build failed. FAILURE: Build failed with an exception. * What went wrong: Execution failed for task ':andEngine:compileReleaseNdk'. > com.android.ide.common.internal.LoggedErrorException: Failed to run command: /home/mariusz/android/android-ndk/ndk-build NDK_PROJECT_PATH=null APP_BUILD_SCRIPT=/home/mariusz/Downloads/AndEngineApp/andEngine/build/intermediates/ndk/release/Android.mk APP_PLATFORM=android-17 NDK_OUT=/home/mariusz/Downloads/AndEngineApp/andEngine/build/intermediates/ndk/release/obj NDK_LIBS_OUT=/home/mariusz/Downloads/AndEngineApp/andEngine/build/intermediates/ndk/release/lib APP_ABI=all Error Code: 2 Output: /home/mariusz/android/android-ndk/toolchains/arm-linux-androideabi-4.6/prebuilt/linux-x86/bin/../lib/gcc/arm-linux-androideabi/4.6/../../../../arm-linux-androideabi/bin/ld: /home/mariusz/Downloads/AndEngineApp/andEngine/build/intermediates/ndk/release/obj/local/armeabi-v7a/objs/andengine_shared//home/mariusz/Downloads/AndEngineApp/andEngine/src/main/jni/src/GLES20Fix.o: in function Java_org_andengine_opengl_GLES20Fix_glVertexAttribPointer:/home/mariusz/Downloads/AndEngineApp/andEngine/src/main/jni/src/GLES20Fix.c:9: error: undefined reference to 'glVertexAttribPointer' /home/mariusz/android/android-ndk/toolchains/arm-linux-androideabi-4.6/prebuilt/linux-x86/bin/../lib/gcc/arm-linux-androideabi/4.6/../../../../arm-linux-androideabi/bin/ld: /home/mariusz/Downloads/AndEngineApp/andEngine/build/intermediates/ndk/release/obj/local/armeabi-v7a/objs/andengine_shared//home/mariusz/Downloads/AndEngineApp/andEngine/src/main/jni/src/GLES20Fix.o: in function Java_org_andengine_opengl_GLES20Fix_glDrawElements:/home/mariusz/Downloads/AndEngineApp/andEngine/src/main/jni/src/GLES20Fix.c:13: error: undefined reference to 'glDrawElements' collect2: ld returned 1 exit status make: *** [/home/mariusz/Downloads/AndEngineApp/andEngine/build/intermediates/ndk/release/obj/local/armeabi-v7a/libandengine_shared.so] Error 1 I'm using Android Studio version 0.86.
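The undefined references to glVertexAttribPointer and glDrawElements are linker errors: the native GLES20Fix sources compile but are not being linked against the OpenGL ES 2.0 library. A possible remedy (an assumption based on the log, not a verified fix for this branch) is to make sure the module section of src/main/jni/Android.mk pulls that library in:
LOCAL_LDLIBS += -lGLESv2    # link libGLESv2.so so the GL ES 2.0 symbols resolve at NDK build time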

    Read the article

  • Heroku and Github integration (how to structure the project)

    - by Noah
    I'm creating a webservice and I want to store the source on github and run the app on heroku. I haven't seen my exact scenario addressed anywhere on the 'net so far, so I'll ask it here: I want to have the following directory structure: /project .git README <-- project readme file TODO.otl <-- project outline ... <-- other project-related stuff /my_rails_app app config ... README <-- rails' readme file In the above, project corresponds to http://github.com/myuser/project, and my_rails_app is the code that should be pushed to heroku. Do I need a separate branch for the rails app, or is there a simpler way that I'm missing? I guess my project-related non-rails files could live in my_rails_app, but the rails README already lives there and it seems inconsistent to overwrite that. However, if I leave it, my github page for the rails app will contain the rails readme, which makes no sense. Thanks, Noah P.S. I tried just setting it up as described above and running git push heroku from the main project folder. Of course, heroku doesn't know I want to deploy the subfolder: -----> Heroku receiving push ! Heroku push rejected, no Rails or Rack app detected.
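One option that fits the desired layout (a sketch, assuming a Git version that ships the subtree command and that heroku is the remote name Heroku's tooling created) is to keep the single repository and push only the Rails subfolder to Heroku:
git subtree push --prefix my_rails_app heroku master    # deploy my_rails_app/ as the app root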

    Read the article

  • Recover a folder or file in TortoiseSVN whilst also retaining all history.

    - by Topdown
In revision 1 a folder existed. In revision 2 the folder was accidentally deleted and the change committed. We wish to roll back such that the folder is present, and retain its history. The TortoiseSVN docs indicate 'how' in the section titled "Getting a deleted file or folder back". To quote: Getting a deleted file or folder back If you have deleted a file or a folder and already committed that delete operation to the repository, then a normal TortoiseSVN - Revert can't bring it back anymore. But the file or folder is not lost at all. If you know the revision the file or folder got deleted (if you don't, use the log dialog to find out) open the repository browser and switch to that revision. Then select the file or folder you deleted, right-click and select [Context Menu] - [Copy to...] as the target for that copy operation select the path to your working copy. A switch retrieves the file into my working copy as one would expect; however, there is no "Copy to" option on the context menu when I right-click this working copy. If I open the repository browser, there is a "Copy to" option, but it seems this simply takes a copy of the file. The solution, I feel, is to do a Branch/Tag, but if I try this from a prior revision to the same path in the repository, SVN throws an error that the path already exists. Therefore, how do I recover a folder/file in TortoiseSVN whilst also retaining all history? TortoiseSVN v1.6.8, Build 19260, 32-bit; Subversion 1.6.11.
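For reference, the command-line equivalent of that recovery, which keeps the folder's history, is a repository-side copy from the last revision that still had it; a sketch assuming the folder lived at ^/trunk/folder and was deleted in r2:
svn copy ^/trunk/folder@1 ^/trunk/folder -m "Restore folder deleted in r2"    # resurrect with history intact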

    Read the article

  • Segmentation fault on MPI, runs properly on OpenMP

    - by Bellman
Hi, I am trying to run a program on a computer cluster. The structure of the program is the following: PROGRAM something ... CALL subroutine1(...) ... END PROGRAM SUBROUTINE subroutine1(...) ... DO i=1,n CALL subroutine2(...) ENDDO ... END SUBROUTINE SUBROUTINE subroutine2(...) ... CALL subroutine3(...) CALL subroutine4(...) ... END SUBROUTINE The idea is to parallelize the loop that calls subroutine2. The main program basically only makes the call to subroutine1, and only its arguments are declared. I use two alternatives. On the one hand, I write OpenMP clauses around the loop. On the other hand, I add an IF conditional branch around the call and I use MPI to share the results. In the OpenMP case, I add CALL KMP_SET_STACKSIZE(402653184) at the beginning of the main program and I can run it with 8 threads on an 8-core machine. When I run it (on the same 8-core machine) with MPI (using either 8 or 1 processors) it crashes just when it makes the call to subroutine3, with a segmentation fault (signal 11) error. If I comment out subroutine4, then it doesn't crash (notice that it crashed just when calling subroutine3, yet it works when commenting out subroutine4). I compile with mpif90 using MPICH2 libraries and the following flags: -O3 -fpscomp logicals -openmp -threads -m64 -xS. The machine has EM64T architecture and I use a Debian Linux distribution. I set ulimit -s hard before running the program. Any ideas on what is going on? Does it have something to do with stack size?
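Since the OpenMP build only runs after KMP_SET_STACKSIZE raises the stack to roughly 384 MB, the MPI processes may simply be hitting the default stack limit, which they inherit from the launching shell. A hedged first check (the program name here is hypothetical) is to lift that limit before launching, and, if that cures it, to consider moving large local arrays onto the heap with ALLOCATABLE:
ulimit -s unlimited    # remove the per-process stack ceiling inherited by child processes
mpiexec -n 8 ./program    # launch the MPI build as before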

    Read the article
