Search Results

Search found 766 results on 31 pages for 'simplicity'.


  • TFS and shared projects in multiple solutions

    - by David Stratton
    Our .NET team works on projects for our company that fall into distinct categories. Some are internal web apps, some are external (publicly facing) web apps; we also have internal Windows applications for our corporate office users, and Windows Forms apps for our retail locations (stores). Of course, because we hate duplicated code, we have a ton of code that is shared among the different applications. Currently we're using SVN as our source control, and we've got our repository laid out like this: - = folder, | = Visual Studio Solution -SVN - Internet | Ourcompany.com | Oursecondcompany.com - Intranet | UniformOrdering website | MessageCenter website - Shared | ErrorLoggingModule | RegularExpressionGenerator | Anti-Xss | OrgChartModule etc... So, the OurCompany.com solution in the Internet folder would have a website project, and it would also include the ErrorLoggingModule, RegularExpressionGenerator, and Anti-Xss projects from the shared directory. Similarly, our UniformOrdering website solution would have each of these projects included in the solution as well. We prefer a project reference to a .dll reference because, first of all, if we need to add or fix a function in the ErrorLoggingModule while working on the OurCompany.com website, it's right there. Also, this allows us to build each solution and see if changes to shared code break any other applications. This should work well on a build server as well, if I'm correct. In SVN, there is no problem with this. SVN and Visual Studio aren't tied together in the way TFS's source control is. We never figured out how to make this type of structure work in TFS when we were using it, because in TFS, the TFS project was always tied to a Visual Studio Solution. The Source Code repository was a child of the TFS Project, so if we wanted to do this, we had to duplicate the Shared code in each TFS project's source code repository. As my co-worker put it, this "breaks every known best practice about code reuse and simplicity". It was enough of a deal breaker for us that we switched to SVN. Now, however, we're faced with truly fixing our development processes, and the Application Lifecycle Management of TFS is pretty close to exactly what we want, and how we want to work. Our one sticking point is the shared code issue. We're evaluating other commercial and open source solutions, but since we're already paying for TFS with our MSDN Subscriptions, and TFS is pretty much exactly what we want, we'd REALLY like to find a way around this issue. Has anybody else faced this and come up with a solution? If you've seen an article or posting on this that you can share with me, that would help as well. As always, I'm open to answers like "You're looking at it all wrong, bonehead, HERE'S the way it SHOULD be done."

    Read the article

  • Compare NSArray with NSMutableArray adding delta objects to NSMutableArray

    - by Hooligancat
    I have an NSMutableArray that is populated with objects of strings. For simplicity's sake we'll say that the objects are people and each person object contains information about that person. Thus I would have an NSMutableArray that is populated with person objects: person.firstName person.lastName person.age person.height And so on. The initial source of data comes from a web server and is populated when my application loads and completes its initialization with the server. Periodically my application polls the server for the latest list of names. Currently I am creating an NSArray of the result set, emptying the NSMutableArray and then re-populating the NSMutableArray with the NSArray results before destroying the NSArray object. This seems inefficient to me on a few levels and also presents me with a problem losing table row references, which I can work around, but I might be creating more work for myself in doing so. The inefficiency seems to be that I should be able to compare the two arrays and end up with a filtered NSArray. I could then add the filtered set to the NSMutableArray. This would mean that I can simply append new data to the NSMutableArray instead of throwing everything out and re-populating. Conversely, I would need to do the same filter in reverse to see if there are records that need removing from the NSMutableArray. Is there any method to do this in a more efficient manner? Have I overlooked something somewhere in the docs that refers to a simpler technique? I have a problem when I empty the NSMutableArray and re-populate in that any referencing tables lose their selected row state. I can track it and re-select it, but my theory is that using some form of compare, adding and removing objects instead of dealing with the whole array in one block, might mean I keep my row reference (assuming the item isn't deleted, of course). Any suggestions or help much appreciated. Update: Would it be just as fast to do a fast enumeration over each array, comparing each item as I go? It seems like an expensive operation, but with fast enumeration it might be pretty efficient...

    Read the article

  • Databinding to ObservableCollection in a different UserControl?

    - by Dave
    Question re-written on 2010-03-24 I have two UserControls, where one is a dialog that has a TabControl, and the other is one that appears within said TabControl. I'll just call them CandyDialog and CandyNameViewer for simplicity's sake. There's also a data management class called Tracker that manages information storage, which for all intents and purposes just exposes a public property that is an ObservableCollection. I display the CandyNameViewer in CandyDialog via code behind, like this: private void CandyDialog_Loaded( object sender, RoutedEventArgs e) { _candyviewer = new CandyViewer(); _candyviewer.DataContext = _tracker; candy_tab.Content = _candyviewer; } The CandyViewer's XAML looks like this (edited for kaxaml): <Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"> <Page.Resources> <DataTemplate x:Key="CandyItemTemplate"> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="120"></ColumnDefinition> <ColumnDefinition Width="150"></ColumnDefinition> </Grid.ColumnDefinitions> <TextBox Grid.Column="0" Text="{Binding CandyName}" Margin="3"></TextBox> <!-- just binding to DataContext ends up using InventoryItem as parent, so we need to get to the UserControl --> <ComboBox Grid.Column="1" SelectedItem="{Binding SelectedCandy, Mode=TwoWay}" ItemsSource="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type UserControl}}, Path=DataContext.CandyNames}" Margin="3"></ComboBox> </Grid> </DataTemplate> </Page.Resources> <Grid> <ListBox DockPanel.Dock="Top" ItemsSource="{Binding CandyBoxContents, Mode=TwoWay}" ItemTemplate="{StaticResource CandyItemTemplate}" /> </Grid> </Page> Now everything works fine when the controls are loaded. As long as CandyNames is populated first, and then the consumer UserControl is displayed, all of the names are there. I obviously don't get any errors in the Output Window or anything like that. The issue I have is that when the ObservableCollection is modified from the model, those changes are not reflected in the consumer UserControl! I've never had this problem before; all of my previous uses of ObservableCollection updated fine, although in those cases I wasn't databinding across assemblies. Although I am currently only adding and removing candy names to/from the ObservableCollection, at a later date I will likely also allow renaming from the model side. Is there something I did wrong? Is there a good way to actually debug this? Reed Copsey indicates here that inter-UserControl databinding is possible. Unfortunately, my favorite Bea Stollnitz article on WPF databinding debugging doesn't suggest anything that I could use for this particular problem.

    Read the article

  • Using MVC2 to update an Entity Framework v4 object with foreign keys fails

    - by jbjon
    With the following simple relational database structure: An Order has one or more OrderItems, and each OrderItem has one OrderItemStatus. Entity Framework v4 is used to communicate with the database and entities have been generated from this schema. The Entities connection happens to be called EnumTestEntities in the example. The trimmed down version of the Order Repository class looks like this: public class OrderRepository { private EnumTestEntities entities = new EnumTestEntities(); // Query Methods public Order Get(int id) { return entities.Orders.SingleOrDefault(d => d.OrderID == id); } // Persistence public void Save() { entities.SaveChanges(); } } An MVC2 app uses Entity Framework models to drive the views. I'm using the EditorFor feature of MVC2 to drive the Edit view. When it comes to POSTing back any changes to the model, the following code is called: [HttpPost] public ActionResult Edit(int id, FormCollection formValues) { // Get the current Order out of the database by ID Order order = orderRepository.Get(id); var orderItems = order.OrderItems; try { // Update the Order from the values posted from the View UpdateModel(order, ""); // Without the ValueProvider suffix it does not attempt to update the order items UpdateModel(order.OrderItems, "OrderItems.OrderItems"); // All the Save() does is call SaveChanges() on the database context orderRepository.Save(); return RedirectToAction("Details", new { id = order.OrderID }); } catch (Exception e) { return View(order); // Inserted while debugging } } The second call to UpdateModel has a ValueProvider suffix which matches the auto-generated HTML input name prefixes that MVC2 has generated for the foreign key collection of OrderItems within the View. The call to SaveChanges() on the database context after updating the OrderItems collection of an Order using UpdateModel generates the following exception: "The operation failed: The relationship could not be changed because one or more of the foreign-key properties is non-nullable. When a change is made to a relationship, the related foreign-key property is set to a null value. If the foreign-key does not support null values, a new relationship must be defined, the foreign-key property must be assigned another non-null value, or the unrelated object must be deleted." When debugging through this code, I can still see that the EntityKeys are not null and seem to be the same value as they should be. This still happens when you are not changing any of the extracted Order details from the database. Also the entity connection to the database doesn't change between the act of Getting and the SaveChanges so it doesn't appear to be a Context issue either. Any ideas what might be causing this problem? I know EF4 has done work on foreign key properties but can anyone shed any light on how to use EF4 and MVC2 to make things easy to update; rather than having to populate each property manually. I had hoped the simplicity of EditorFor and DisplayFor would also extend to Controllers updating data. Thanks

    Read the article

  • How to troubleshoot a 'System.Management.Automation.CmdletInvocationException'

    - by JamesD
    Does anyone know how best to determine the specific underlying cause of this exception? Consider a WCF service that is supposed to use Powershell 2.0 remoting to execute MSBuild on remote machines. In both cases the scripting environments are being called in-process (via C# for Powershell and via Powershell for MSBuild), rather than 'shelling-out' - this was a specific design decision to avoid command-line hell as well as to enable passing actual objects into the Powershell script. The Powershell script that calls MSBuild is shown below: function Run-MSBuild { [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Build.Engine") $engine = New-Object Microsoft.Build.BuildEngine.Engine $engine.BinPath = "C:\Windows\Microsoft.NET\Framework\v3.5" $project = New-Object Microsoft.Build.BuildEngine.Project($engine, "3.5") $project.Load("deploy.targets") $project.InitialTargets = "DoStuff" # # Set some initial Properties & Items # # Optionally setup some loggers (have also tried it without any loggers) $consoleLogger = New-Object Microsoft.Build.BuildEngine.ConsoleLogger $engine.RegisterLogger($consoleLogger) $fileLogger = New-Object Microsoft.Build.BuildEngine.FileLogger $fileLogger.Parameters = "verbosity=diagnostic" $engine.RegisterLogger($fileLogger) # Run the build - this is the line that throws a CmdletInvocationException $result = $project.Build() $engine.Shutdown() } When running the above script from a PS command prompt it all works fine. However, as soon as the script is executed from C# it fails with the above exception. The C# code being used to call Powershell is shown below (remoting functionality removed for simplicity's sake): // Build the DTO object that will be passed to Powershell dto = SetupDTO() RunspaceConfiguration runspaceConfig = RunspaceConfiguration.Create(); using (Runspace runspace = RunspaceFactory.CreateRunspace(runspaceConfig)) { runspace.Open(); IList errors; using (var scriptInvoker = new RunspaceInvoke(runspace)) { // The Powershell script lives in a file that gets compiled as an embedded resource TextReader tr = new StreamReader(Assembly.GetExecutingAssembly().GetManifestResourceStream("MyScriptResource")); string script = tr.ReadToEnd(); // Load the script into the Runspace scriptInvoker.Invoke(script); // Call the function defined in the script, passing the DTO as an input object var psResults = scriptInvoker.Invoke("$input | Run-MSBuild", dto, out errors); } } Assuming that the issue was related to MSBuild outputting something that the Powershell runspace can't cope with, I have also tried the following variations to the second .Invoke() call: var psResults = scriptInvoker.Invoke("$input | Run-MSBuild | Out-String", dto, out errors); var psResults = scriptInvoker.Invoke("$input | Run-MSBuild | Out-Null", dto, out errors); var psResults = scriptInvoker.Invoke("Run-MSBuild | Out-String"); var psResults = scriptInvoker.Invoke("Run-MSBuild | Out-String"); var psResults = scriptInvoker.Invoke("Run-MSBuild | Out-Null"); I've also looked at using a custom PSHost (based on this sample: http://blogs.msdn.com/daiken/archive/2007/06/22/hosting-windows-powershell-sample-code.aspx), but during debugging I was unable to see any 'interesting' calls to it being made. Do the great and the good of Stackoverflow have any insight that might save my sanity?

    Read the article

  • Shawn Wildermuth violating MVVM in MSDN article?

    - by rasx
    This may be old news, but back in March 2009 Shawn Wildermuth's article, “Model-View-ViewModel In Silverlight 2 Apps,” included a code sample with DataServiceEntityBase: // COPIED FROM SILVERLIGHTCONTRIB Project for simplicity /// <summary> /// Base class for DataService Data Contract classes to implement /// base functionality that is needed like INotifyPropertyChanged. /// Add the base class in the partial class to add the implementation. /// </summary> public abstract class DataServiceEntityBase : INotifyPropertyChanged { /// <summary> /// The handler for the registrants of the interface's event /// </summary> PropertyChangedEventHandler _propertyChangedHandler; /// <summary> /// Allow inheritors to fire the event more simply. /// </summary> /// <param name="propertyName"></param> protected void FirePropertyChanged(string propertyName) { if (_propertyChangedHandler != null) { _propertyChangedHandler(this, new PropertyChangedEventArgs(propertyName)); } } #region INotifyPropertyChanged Members /// <summary> /// The interface used to notify changes on the entity. /// </summary> event PropertyChangedEventHandler INotifyPropertyChanged.PropertyChanged { add { _propertyChangedHandler += value; } remove { _propertyChangedHandler -= value; } } #endregion What this class implies is that the developer intends to bind visuals directly to data (yes, a ViewModel is used but it defines an ObservableCollection of data objects). Is this design diverging too far from the guidance of MVVM? Now I can see some of the reasons why Shawn would go this way: what Shawn can do with DataServiceEntityBase is this sort of thing (which is intimate with the Entity Framework): // Partial Method to support the INotifyPropertyChanged interface public partial class Game : DataServiceEntityBase { #region Partial Method INotifyPropertyChanged Implementation // Override the Changed partial methods to implement the // INotifyPropertyChanged interface // This helps with the Model implementation to be a mostly // DataBound implementation partial void OnDeveloperChanged() { base.FirePropertyChanged("Developer"); } partial void OnGenreChanged() { base.FirePropertyChanged("Genre"); } partial void OnListPriceChanged() { base.FirePropertyChanged("ListPrice"); } partial void OnListPriceCurrencyChanged() { base.FirePropertyChanged("ListPriceCurrency"); } partial void OnPlayerInfoChanged() { base.FirePropertyChanged("PlayerInfo"); } partial void OnProductDescriptionChanged() { base.FirePropertyChanged("ProductDescription"); } partial void OnProductIDChanged() { base.FirePropertyChanged("ProductID"); } partial void OnProductImageUrlChanged() { base.FirePropertyChanged("ProductImageUrl"); } partial void OnProductNameChanged() { base.FirePropertyChanged("ProductName"); } partial void OnProductTypeIDChanged() { base.FirePropertyChanged("ProductTypeID"); } partial void OnPublisherChanged() { base.FirePropertyChanged("Publisher"); } partial void OnRatingChanged() { base.FirePropertyChanged("Rating"); } partial void OnRatingUrlChanged() { base.FirePropertyChanged("RatingUrl"); } partial void OnReleaseDateChanged() { base.FirePropertyChanged("ReleaseDate"); } partial void OnSystemNameChanged() { base.FirePropertyChanged("SystemName"); } #endregion } Of course MSDN code can be seen as “toy code” for educational purposes but is anyone doing anything like this in the real world of Silverlight development?

    Read the article

  • Best practice to handle Parent Form Child Form relation using Presentation Model

    - by Rajarshi
    According to the Presentation Model notes by Martin Fowler, and also the MSDN documentation about Presentation Model, the Presentation Model class should be unaware of the UI class, and similarly the Business Model class should be unaware of the Presentation Model class. The UI should databind extensively to the Presentation Model; the Presentation Model in turn will co-ordinate with one or more Domain/Business Model objects to get the job done. The Presentation Model basically presents the Domain Model data in a way that facilitates maximum data binding in the UI, allowing the UI to make as few decisions as possible and thus increasing the testability of presentation behaviours. This also makes the presentation model class generic, i.e. agnostic of any particular UI technology. Now, consider there is a List form (say CustomerList) and there is another Root form (say Customer), and there is a Use Case of allowing a Customer to be edited from the CustomerList form on a button click. For simplicity of discussion, consider that some actions took place when the Customer List was opened from the menu (i.e. the Customer menu was clicked) and the Customer List has been shown from the Menu click event. Now, as per the above Use Case, I need to open the Customer Root UI (single Customer) from the Customer List. How do I do that? Build the necessary objects (BusinessModel, PresentationModel, UI) in the click event of the Edit button and call the CustomerEdit UI from there? Build the CustomerEdit UI from the Presentation Model class and show the UI from the presentation model? This can be done in either of the two ways below - a. Create objects in the following sequence DomainModel-PresentationModel-UIForm b. Use Unity.Resolve(); Either way, the Presentation Model is violated, as the P Model now has to reference the concrete UI assembly, where CustomerEdit is located. Also, the P Model has to reference and use a WinForm directly, making it less UI-technology agnostic. Even though the violations are in theory and can be ignored, I would still seek the community's opinion about whether I am going in the wrong direction. Please suggest if there's a better way to call the Child Form from the List (Parent) Form. Rajarshi

    Read the article

  • Why not .NET-style delegates rather than closures in Java?

    - by h2g2java
    OK, this is going to be me beating a dead horse for the 3rd time. However, this question is different from my earlier two about closures/delegates, which asked about plans for delegates and what the projected specs and implementation for closures are. This question is about why the Java community is struggling to define 3 different types of closures when we could simply steal the whole concept of delegates lock, stock and barrel from our beloved and friendly neighbour - Microsoft. There are two non-technical conclusions I would be very tempted to jump to: The Java community should hold up its pride, at the cost of needing to go through convoluted efforts, by not succumbing to borrowing any Microsoft concepts or otherwise vindicating Microsoft's brilliance. Delegates are a Microsoft-patented technology. Alright, besides the above two possibilities, Q1. Is there any weakness or inadequacy in Microsoft-styled delegates that the three (or more) forms of closures would be addressing? Q2. I am asking this while shifting between Java and C#, and it intrigues me that C# delegates do exactly what I need. Are there features that would be implemented in closures that are not currently available in C# delegates? If so, what are they? I cannot see what I need beyond what C# delegates have adequately provided me. Q3. I know that one of the concerns about implementing closures/delegates in Java is the reduction of orthogonality of the language, where more than one way is exposed to perform a particular task. Is it worth the level of convolution and time spent to avoid delegates just to ensure Java retains its level of orthogonality? In SQL, we know that it is advisable to break orthogonality by frequently satisfying only the 2nd normal form. Why can't Java be subjected to a reduction of orthogonality and OO-ness for the sake of simplicity? Q4. The architecture of the JVM is technically constrained from implementing .NET-styled delegates. If this reason WERE (subjunctive to emphasize unlikelihood) true, then why can't the three closures proposals be hidden behind a simple delegate keyword or annotation: if we don't like to use @delegate, we could use @method. I cannot see how the delegate statement format is more complex than the three closure proposals.
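
    For readers comparing the two models, here is a minimal Java sketch of the pre-closures workaround the question is implicitly comparing against: a single-method interface plus an anonymous class plays the role a C# delegate would, just more verbosely. The names (Callback, Button, DelegateStyleDemo) are hypothetical, invented only for illustration.

        // Hypothetical types for illustration only -- not from the question above.
        interface Callback {
            void invoke(String message);
        }

        class Button {
            private Callback onClick;
            void setOnClick(Callback c) { this.onClick = c; }
            void click() { if (onClick != null) onClick.invoke("clicked"); }
        }

        public class DelegateStyleDemo {
            public static void main(String[] args) {
                Button b = new Button();
                // Pre-closures Java: an anonymous class stands in for a C# delegate or lambda.
                b.setOnClick(new Callback() {
                    public void invoke(String message) {
                        System.out.println("Handled: " + message);
                    }
                });
                b.click(); // prints "Handled: clicked"
            }
        }

    A C# delegate collapses the interface declaration and the anonymous class into a single delegate type plus a method-group or lambda assignment; that brevity is essentially what the competing closure proposals were trying to recover within Java's type system.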

    Read the article

  • Ninject.Web.PageBase still resulting in null reference to injected dependency

    - by Ted
    I have an ASP.NET 3.5 WebForms application using Ninject 2.0. However, attempting to use the Ninject.Web extension to provide injection into System.Web.UI.Page, I'm getting a null reference to my injected dependency even though if I switch to using a service locator to provide the reference (using Ninject), there's no issue. My configuration (dumbed down for simplicity): public partial class Default : PageBase // which is Ninject.Web.PageBase { [Inject] public IClubRepository Repository { get; set; } protected void Page_Load(object sender, EventArgs e) { var something = Repository.GetById(1); // results in null reference exception. } } ... //global.asax.cs public class Global : Ninject.Web.NinjectHttpApplication { /// <summary> /// Creates a Ninject kernel that will be used to inject objects. /// </summary> /// <returns> /// The created kernel. /// </returns> protected override IKernel CreateKernel() { IKernel kernel = new StandardKernel(new MyModule()); return kernel; } .. ... public class MyModule : NinjectModule { public override void Load() { Bind<IClubRepository>().To<ClubRepository>(); //... } } Getting the IClubRepository concrete instance via a service locator works fine (uses same "MyModule"). I.e. private readonly IClubRepository _repository = Core.Infrastructure.IoC.TypeResolver.Get<IClubRepository>(); What am I missing? [Update] Finally got back to this, and it works in Classic Pipeline mode, but not Integrated. Is the classic pipeline a requirement? [Update 2] Wiring up my OnePerRequestModule was the problem (which had removed in above example for clarity): protected override IKernel CreateKernel() { var module = new OnePerRequestModule(); module.Init(this); IKernel kernel = new StandardKernel(new MyModule()); return kernel; } ...needs to be: protected override IKernel CreateKernel() { IKernel kernel = new StandardKernel(new MyModule()); var module = new OnePerRequestModule(); module.Init(this); return kernel; } Thus explaining why I was getting a null reference exception under integrated pipeline (to a Ninject injected dependency, or just a page load for a page inheriting from Ninject.Web.PageBase - whatever came first).

    Read the article

  • Subversion vision and roadmap

    - by gbjbaanb
    Recently C. Michael Pilato of the core Subversion team posted a mail to the Subversion dev mailing list suggesting a vision and roadmap for the future of Subversion. Naturally, he wanted as much feedback and response as possible, which is why I'm posting this here - to elicit some suggestions and contributions from you, the users of Subversion. Any comments are welcome, and I shall feed back a synopsis with a link to this question to the dev mailing list. Similarly, I've created a post on ServerFault to get feedback from the administrator side of things too. So, without further ado: Vision The first thing on his "vision statement" is: Subversion has no future as a DVCS tool. Let's just get that out there. At least two very successful such tools exist already, and to squeeze another horse into that race would be a poor investment of energy and talent. There's no need to suggest distributed features for Subversion. If you want a DVCS, there should be no ill-feeling if you migrate to Git, Mercurial or Bazaar. As he says, it's pointless trying to make SVN like them when they already exist, especially when there are different usage patterns that SVN should be targeting. The vision for Subversion is: Subversion exists to be universally recognized and adopted as an open-source, centralized version control system characterized by its reliability as a safe haven for valuable data; the simplicity of its model and usage; and its ability to support the needs of a wide variety of users and projects, from individuals to large-scale enterprise operations. Roadmap Several ideas were suggested as being "very nice to have" and are offered as the starting point of a future roadmap. These are: Obliterate Shelve/Checkpoint Repository-dictated Configuration Rename Tracking Improved Merging Improved Tree Conflict Handling Enterprise Authentication Mechanisms Forward History Searching Log Message Templates If anyone has suggestions to add, or comments on these, the Subversion community would welcome all of them. Community And lastly, there was a call for more people to become involved with Subversion development. As with most OSS projects it can be daunting to join, but there is now a push for more to be done to help. If you feel like you can contribute, please do so.

    Read the article

  • Minimalistic tools for developer documentation

    - by Pekka
    I am currently working on a large PHP CMS / Framework and documenting it extensively as I go along. In addition to phpdoc-style inline comments, I need to document XML structures, details on concepts and practices, write HOWTOs and so on. At the moment, I am using simple OpenOffice documents for that, but I'm unhappy with it and looking for a "real" documentation system. So, I am looking for recommendations for robust, minimalistic, easy-to-use documentation software. I have tried a number of Wikis, most prominently Dokuwiki. I like the open-minded approach, the freedom in editing, and the simplicity, but they provide little support for structuring multi-chapter documentation, and make basic reorganisation tasks very difficult (e.g. moving pages to a different namespace). Working with the plugins is cumbersome, and they are not really easy to use. Open Source would be a plus but is not a requirement. Thanks for all the suggestions. I have not had time to look into each one in detail. I will be trying Sphinx, especially because it provides so much support for a good structure. I may update this post later when I'm done and report how it worked out. The suggestions: Trac's built-in wiki, which is great but for my taste provides too little support for keeping a structure - it's perfect though for "normal", smaller-size project documentation. Markdown, my current favourite because of its minimalism; however, I am not sure yet whether maintaining a structure will be easy enough. A Markdown-based system would of course be very easy to extend, e.g. to look up cross references from the project's code base. Of course it would be great to find something that already has that out of the box. The DocBook format and, for editing, the commercial Oxygen XML Editor - a great standard for building documentation, no doubt. Maybe too "technical" for my purposes, as I need something to open quickly, write into and go on coding. Still always worth a mention. Sphinx, an Open Source, Python-based documentation generator, promising structured documentation and extensive cross-referencing. Interesting, and I will take a look. Confluence, a commercial but very affordable Wiki. XWiki, an Open Source wiki playing in Confluence's league with numerous extensions and connectors to Eclipse and Microsoft Office. TiddlyWiki, an open-source Wiki.

    Read the article

  • Javascript childNodes does not find all children of a div when appendchild has been used

    - by yesterdayze
    Alright, I am hoping someone can help me out. I apologize up front that this one may be confusing. I have included an example to try to help ease the confusion, as this is better seen than heard. I have created a webpage that contains a group or set of groups. Each group has a subgroup. In a nutshell, what is happening is this page will allow me to combine multiple groups containing subgroups into a new group. The page will give the chance to rename the old subgroups before they are combined into new groups in order to avoid confusion. When a group is renamed it will check to make sure there is not already a group with that name. If there is, it will copy itself out of its own group and into that group and then delete the original. If the group does not already exist it will create that group, copy itself in and then delete the original. Subgroups can also be renamed, at which point they will move into the group with the same name if it exists, or create a new one if it doesn't. The page has a main div. The main div contains 'new sub group' divs. Inside each of those is another div containing the 'old sub group' divs. I use a loop through the child nodes of the 'new sub group' div when renaming a group in order to find each child node. These are then copied into a new div within the main div. The crux of the problem is this. If I loop through a DIV and copy all of the DIVs in it into a new or existing DIV, all is well. When I then try to take that DIV and copy all of its DIVs into another or new DIV, it always skips one of the moved DIVs. For simplicity I have copied the entire working code below. To recreate the issue click the spot where the image should appear next to the name ewrewrwe and rename it to something else. All is well. Now click that new group the same way and name it something else. You will see it skip one each time. I have linked the page here: http://vtbikenight.com/test.html The link is clean, it is my personal website I use for a local motorcycle group I am part of. Thanks for the help everyone!!! Please let me know if I can clarify anything. I know the code is not the best right now, it is just demo code and my intent is to get the concept working then streamline it all.

    Read the article

  • Entity Framework in n-layered application - Lazy loading vs. Eager loading patterns

    - by Marconline
    Hi all. This question doesn't let me sleep; I've been trying to find a solution for a year now, but... still nothing has come to mind. Perhaps you can help me, because I think this is a very common issue. I have an n-layered application: presentation layer, business logic layer, model layer. Suppose for simplicity that my application contains, in the presentation layer, a form that allows a user to search for a customer. Now the user fills the filters through the UI and clicks a button. Something happens and the request arrives from the presentation layer at a method like CustomerSearch(CustomerFilter myFilter). The business logic layer now keeps it simple: it creates a query on the model and gets back results. Now the question: how do you face the problem of loading data? I mean, the business logic layer doesn't know that that particular method will be invoked just by that form. So I think that it doesn't know if the requesting form needs just the Customer objects back or the Customer objects with the linked Order entities. Let me try to explain better: our form just wants to list Customers, searching by surname. It has nothing to do with orders. So the business logic query will be something like: (from c in ctx.CustomerSet where c.Name.Contains(strQry) select c).ToList(); and this is working correctly. Two days later your boss asks you to add a form that lets you search for customers like the other one, but you need to show the total count of orders created by each customer. Now I'd like to reuse that query and add the piece of logic that attaches (includes) orders and gets that back. How would you handle this request? Here is the best idea (I think) I've had so far; I'd like to hear from you: my CustomerSearch method in the BLL doesn't create the query directly but passes through private extension methods that compose the ObjectQuery, like: private ObjectQuery<Customer> SearchCustomers(this ObjectQuery<Customer> qry, CustomerFilter myFilter) and private ObjectQuery<Customer> IncludeOrders(this ObjectQuery<Customer> qry) but this doesn't convince me, as it seems too complex. Thanks, Marco

    Read the article

  • GCC, -O2, and bitfields - is this a bug or a feature?

    - by Rooke
    Today I discovered alarming behavior when experimenting with bit fields. For the sake of discussion and simplicity, here's an example program: #include <stdio.h> struct Node { int a:16 __attribute__ ((packed)); int b:16 __attribute__ ((packed)); unsigned int c:27 __attribute__ ((packed)); unsigned int d:3 __attribute__ ((packed)); unsigned int e:2 __attribute__ ((packed)); }; int main (int argc, char *argv[]) { Node n; n.a = 12345; n.b = -23456; n.c = 0x7ffffff; n.d = 0x7; n.e = 0x3; printf("3-bit field cast to int: %d\n",(int)n.d); n.d++; printf("3-bit field cast to int: %d\n",(int)n.d); } The program is purposely causing the 3-bit bit-field to overflow. Here's the (correct) output when compiled using "g++ -O0": 3-bit field cast to int: 7 3-bit field cast to int: 0 Here's the output when compiled using "g++ -O2" (and -O3): 3-bit field cast to int: 7 3-bit field cast to int: 8 Checking the assembly of the latter example, I found this: movl $7, %esi movl $.LC1, %edi xorl %eax, %eax call printf movl $8, %esi movl $.LC1, %edi xorl %eax, %eax call printf xorl %eax, %eax addq $8, %rsp The optimizations have just inserted "8", assuming 7+1=8 when in fact the number overflows and is zero. Fortunately the code I care about doesn't overflow as far as I know, but this situation scares me - is this a known bug, a feature, or is this expected behavior? When can I expect gcc to be right about this? Edit (re: signed/unsigned) : It's being treated as unsigned because it's declared as unsigned. Declaring it as int you get the output (with O0): 3-bit field cast to int: -1 3-bit field cast to int: 0 An even funnier thing happens with -O2 in this case: 3-bit field cast to int: 7 3-bit field cast to int: 8 I admit that attribute is a fishy thing to use; in this case it's a difference in optimization settings I'm concerned about.
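
    As a sanity check on the -O0 result: assuming the usual wrap-around semantics for an unsigned bit-field of width n, the increment is just arithmetic modulo 2^n, so for the 3-bit field d the expected value is (7 + 1) mod 2^3 = 8 mod 8 = 0, which is why 0 rather than 8 is the correct post-increment value.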

    Read the article

  • Is this way of storing typed objects in memory good?

    - by Pindatjuh
    This is an "is this okay, or can it be done better" question. Topic: Storing typed objects in memory. Background information: I'm building a compiler for the x86-32 platform for my language. My goal includes typed objects. Idea: Every primitive is a semi-class (it can be used as if it was a normal class, but it's stored more compact). Every class is represented by primitives and some meta-data (containing class-properties, inheritance stuff, etc.). The meta-data is complex: it doesn't use fields but instead context-switches. For primitives, the meta-data is very small, compared to a "real" class, which is alot bigger. This enables another idea that "primitives are objects", in my language, which I found nessecairy. Example: If I have an array of 32 booleans, then the pure content of this array is exactly 4 byte (32 bits of booleans). The meta-data will contain flags that the type is an array of booleans, which contains 32 entries. The meta-data is very compacted, on bit-level: using a sort of "packing" mechanism, which is read by a FSM at runtime, when doing inspection of the type (like when passing the object to methods for checking, etc.) For instance (read from left to right, top to bottom, remember vertical possition when going to the right, and check nearest column header for meaning of switch): Primitive? Array? Type-Meta 1 Byte? || Size (1 byte) 1 1 [...] 1 [...] done 0 2 Bytes? || Size (2 bytes) 1 [...] done || Size (4 bytes) 0 [...] done Integer? 1 Byte? 2 Bytes? 0 1 0 1 done 1 done 0 done Boolean? Byte? 0 1 0 done 1 done More-Primitives 0 .... Class-Stuff (Huge) 0 ... (After reaching done the data is inserted. || = byte alignement. [...] is variable sized. ... is not described here, for simplicity. And let's call them cost-based-data-structures.) For an array of 32 booleans containing all true values, the memory for this type would be (read top-down): 1 Primitive 1 Array 1 ArrayType: Primitive 0 Not-Array 0 Not-Integer 1 Boolean 0 Not-Byte (thus bit) 1 Integer Size: 1 Byte 00100000 Array size 11111111 11111111 11111111 11111111 Data Thus, 8 bytes represent 32 booleans in an array: 11100101 00100000 11111111 11111111 11111111 11111111 Is this okay, or can it be done better?

    Read the article

  • Android HttpsURLConnection and JSON for new GCM

    - by Ryan Gray
    I'm overhauling certain parts of my app to use the new GCM service to replace C2DM. I simply want to create the JSON request from a Java program for testing and then read the response. As of right now I can't find ANY formatting issues with my JSON request and the google server always return code 400, which indicates a problem with my JSON. http://developer.android.com/guide/google/gcm/gcm.html#server JSONObject obj = new JSONObject(); obj.put("collapse_key", "collapse key"); JSONObject data = new JSONObject(); data.put("info1", "info_1"); data.put("info2", "info 2"); data.put("info3", "info_3"); obj.put("data", data); JSONArray ids = new JSONArray(); ids.add(REG_ID); obj.put("registration_ids", ids); System.out.println(obj.toJSONString()); I print my request to the eclipse console to check it's formatting byte[] postData = obj.toJSONString().getBytes(); try{ URL url = new URL("https://android.googleapis.com/gcm/send"); HttpsURLConnection.setDefaultHostnameVerifier(new JServerHostnameVerifier()); HttpsURLConnection conn = (HttpsURLConnection) url.openConnection(); conn.setDoOutput(true); conn.setDoInput(true); conn.setUseCaches(false); conn.setRequestMethod("POST"); conn.setRequestProperty("Content-Type", "application/json"); conn.setRequestProperty("Authorization", "key=" + API_KEY); System.out.println(conn.toString()); OutputStream out = conn.getOutputStream(); // exception thrown right here. no InputStream to get InputStream in = conn.getInputStream(); byte[] response = null; out.write(postData); out.close(); in.read(response); JSONParser parser = new JSONParser(); String temp = new String(response); JSONObject temp1 = (JSONObject) parser.parse(temp); System.out.println(temp1.toJSONString()); int responseCode = conn.getResponseCode(); System.out.println(responseCode + ""); } catch(Exception e){ System.out.println("Exception thrown\n"+ e.getMessage()); } } I'm sure my API key is correct as that would result in error 401, so says the google documentation. This is my first time doing JSON but it's easy to understand because of its simplicity. Anyone have any ideas on why I always receive code 400? update: I've tested the google server example classes provided with gcm so the problem MUST be with my code.
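
    One thing worth ruling out, under the assumption that the 400 is caused by the request body never reaching Google: with HttpURLConnection the POST body has to be written before the response (or response code) is read, whereas the snippet above obtains the InputStream before calling out.write(). Below is a minimal, hedged sketch of just the transport part, using only the standard java.net API; the API key, registration ID and hand-built JSON payload are placeholders, not values from the question.

        import java.io.BufferedReader;
        import java.io.InputStream;
        import java.io.InputStreamReader;
        import java.io.OutputStream;
        import java.net.URL;
        import javax.net.ssl.HttpsURLConnection;

        public class GcmPostSketch {
            public static void main(String[] args) throws Exception {
                String apiKey = "PLACEHOLDER_API_KEY";         // placeholder
                String regId = "PLACEHOLDER_REGISTRATION_ID";  // placeholder
                // Built by hand here only to keep the sketch library-free.
                String json = "{\"registration_ids\":[\"" + regId + "\"],"
                        + "\"collapse_key\":\"collapse key\","
                        + "\"data\":{\"info1\":\"info_1\"}}";

                HttpsURLConnection conn = (HttpsURLConnection)
                        new URL("https://android.googleapis.com/gcm/send").openConnection();
                conn.setDoOutput(true);
                conn.setRequestMethod("POST");
                conn.setRequestProperty("Content-Type", "application/json");
                conn.setRequestProperty("Authorization", "key=" + apiKey);

                // Write the body first...
                OutputStream out = conn.getOutputStream();
                out.write(json.getBytes("UTF-8"));
                out.close();

                // ...then read the result. Use the error stream for non-2xx codes,
                // because getInputStream() throws for 4xx/5xx responses.
                int code = conn.getResponseCode();
                InputStream in = code < 400 ? conn.getInputStream() : conn.getErrorStream();
                BufferedReader reader = new BufferedReader(new InputStreamReader(in, "UTF-8"));
                StringBuilder body = new StringBuilder();
                for (String line; (line = reader.readLine()) != null; ) {
                    body.append(line);
                }
                reader.close();
                System.out.println(code + ": " + body);
            }
        }

    If the ordering is not the culprit, the error-stream body returned with the 400 usually names the offending field, which is more informative than the bare status code.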

    Read the article

  • How can I ensure my programmatic uploads are done in the correct order?

    - by ccomet
    In our application, we store two copies of a file - an approved one and an unapproved one. Both track their versions separately. When the unapproved is then approved, all of its versions are added as new versions to the approved file. To do this properly, my code has to upload each version separately into the approved folder, and update the item each time with that version's information. For some reason, though, this doesn't always work properly. In my latest scenario, the latest version was uploaded first, and then all of the remaining versions were uploaded afterwards. However, my code explicitly is supposed to upload the other versions first, that's the order I wrote it in. Why is this happening? And if it is possible, how do I ensure that the versions are uploaded in the correct order? Clarification - It's not a problem with the enumeration - I'm getting the previous versions in the correct order. What is happening is that the final version, which is written after the loop, is being uploaded before the loop. Which really doesn't make any sense to me. Here's a condensed version of the relevant code. //These three are initialized earlier in the code. SPList list; //The document library SPListItem item; //The list item in the Unapproved folder int AID; //The item id of the corresponding item in the Approved folder. byte[] contents; //Not initialized. /* These uploads are happening second when they should happen first. */ if (item.File.Versions.Count > 0) { //This loop is actually a separate method call if that matters. //For simplicity I expanded it here. foreach (SPFileVersion fVer in item.File.Versions) { if (!fVer.IsCurrentVersion) { contents = fVer.OpenBinary(); SPFile fSub = aFolder.Files.Add(fVer.File.Name, contents, u1, fVer.CreatedBy, dt1, fVer.Created); SPListItem subItem = list.GetItemById(AID); //This method updates the newly uploaded version with the field data of that version. UpdateFields(item.Versions.GetVersionFromLabel(fVer.VersionLabel), subItem); } } } /* This upload happens first when it should happen last. */ //Does the same as earlier loop, but for the final version. contents = item.File.OpenBinary(); SPFile f = aFolder.Files.Add(item.File.Name, contents, u1, u2, dt1, dt2); SPListItem finalItem = list.GetItemById(AID); UpdateFields(item.Versions[0], finalItem); item.Delete();

    Read the article

  • Does this MSDN article violate MVVM?

    - by rasx
    This may be old news, but back in March 2009 this article, “Model-View-ViewModel In Silverlight 2 Apps,” included a code sample with DataServiceEntityBase: // COPIED FROM SILVERLIGHTCONTRIB Project for simplicity /// <summary> /// Base class for DataService Data Contract classes to implement /// base functionality that is needed like INotifyPropertyChanged. /// Add the base class in the partial class to add the implementation. /// </summary> public abstract class DataServiceEntityBase : INotifyPropertyChanged { /// <summary> /// The handler for the registrants of the interface's event /// </summary> PropertyChangedEventHandler _propertyChangedHandler; /// <summary> /// Allow inheritors to fire the event more simply. /// </summary> /// <param name="propertyName"></param> protected void FirePropertyChanged(string propertyName) { if (_propertyChangedHandler != null) { _propertyChangedHandler(this, new PropertyChangedEventArgs(propertyName)); } } #region INotifyPropertyChanged Members /// <summary> /// The interface used to notify changes on the entity. /// </summary> event PropertyChangedEventHandler INotifyPropertyChanged.PropertyChanged { add { _propertyChangedHandler += value; } remove { _propertyChangedHandler -= value; } } #endregion What this class implies is that the developer intends to bind visuals directly to data (yes, a ViewModel is used but it defines an ObservableCollection of data objects). Is this design diverging too far from the guidance of MVVM? Now I can see some of the reasons why we would go this way: what we can do with DataServiceEntityBase is this sort of thing (which is intimate with the Entity Framework): // Partial Method to support the INotifyPropertyChanged interface public partial class Game : DataServiceEntityBase { #region Partial Method INotifyPropertyChanged Implementation // Override the Changed partial methods to implement the // INotifyPropertyChanged interface // This helps with the Model implementation to be a mostly // DataBound implementation partial void OnDeveloperChanged() { base.FirePropertyChanged("Developer"); } partial void OnGenreChanged() { base.FirePropertyChanged("Genre"); } partial void OnListPriceChanged() { base.FirePropertyChanged("ListPrice"); } partial void OnListPriceCurrencyChanged() { base.FirePropertyChanged("ListPriceCurrency"); } partial void OnPlayerInfoChanged() { base.FirePropertyChanged("PlayerInfo"); } partial void OnProductDescriptionChanged() { base.FirePropertyChanged("ProductDescription"); } partial void OnProductIDChanged() { base.FirePropertyChanged("ProductID"); } partial void OnProductImageUrlChanged() { base.FirePropertyChanged("ProductImageUrl"); } partial void OnProductNameChanged() { base.FirePropertyChanged("ProductName"); } partial void OnProductTypeIDChanged() { base.FirePropertyChanged("ProductTypeID"); } partial void OnPublisherChanged() { base.FirePropertyChanged("Publisher"); } partial void OnRatingChanged() { base.FirePropertyChanged("Rating"); } partial void OnRatingUrlChanged() { base.FirePropertyChanged("RatingUrl"); } partial void OnReleaseDateChanged() { base.FirePropertyChanged("ReleaseDate"); } partial void OnSystemNameChanged() { base.FirePropertyChanged("SystemName"); } #endregion } Of course MSDN code can be seen as “toy code” for educational purposes but is anyone doing anything like this in the real world of Silverlight development?

    Read the article

  • Learning to create beautiful /next-generation GUI

    - by ShaChris23
    I really want to create a stunning-looking GUI desktop application that looks like, for example: Mac OS X interface Picasa desktop client on windows IPhone apps Office 2007 I've mostly been programming GUI using Qt/Swing/WinForm and I'm tired of creating so plain looking GUI, lol. So I was thinking about diving into stuff like: jQuery WPF/C# iPhone SDK Silverlight Adobe Air/Flex Just to get some ideas on how to create really cool looking UI. Does that sound like a good list? Any developers here that could share what platform they use to create very cool looking desktop app? On a sidenote, I really wonder what developers at Apple / Microsoft use to develop their own cool-looking software. EDIT A lot of responses talk about the importance of usability over "cool-looking".. I totally agree that usability and simplicity are the most important aspects of user interface design. I've been doing GUI development for a while now ( 3 years), so that I understand. But using cool-looking UI also improves user experience + it could make big difference on whether or not your software sells. I mean, otherwise why would Microsoft/Apple try to make their OS UI look "cooler" everytime there's a new version? Why would websites like pragprog.com, or versionsapp.com. make their websites look like that? Basically you kill 2 birds with one stone: stunnning-looking UI + super usability (because it looks simple and intuitive). That is what I'm striving for. And as far as I know, I cannot achieve that using Qt/Winform. Most of the books I have read just show you how to make average-looking (read: 1990's) UI. I want to learn how to create cool-looking UI. And the only place I see cool-looking UIs these days are the technology I list above. I'm not enamored with any technology; but I just want to know how things are done in other technology to see if I could apply them to the technology I'm using, or see if I could use those technology in my line of work. An example: if I were to pick between this UI and this UI, I probably would pick the latter, if just based on looks alone. Functionally, they are just the same UI; they both allow you to keep track of your time. They both contain buttons and textboxes, etc. But the fact that they look different, also differentiate them in terms of attractiveness. So in all, I think the "ice on the cake" is very important. I would say it's the thing you strive for after you are certain you have a totally intuitive, usable UI.

    Read the article

  • UNIX: Replace Newline w/ Colon, Preserving Newline Before EOF

    - by Maarx
    I have a text file ("INPUT.txt") of the format: A<LF> B<LF> C<LF> D<LF> X<LF> Y<LF> Z<LF> <EOF> which I need to reformat to: A:B:C:D:X:Y:Z<LF> <EOF> I know you can do this with 'sed'. There are a billion Google hits for doing this with 'sed'. But I'm trying to emphasize readability, simplicity, and using the correct tool for the correct job. 'sed' is a line editor that consumes and hides newlines. Probably not the right tool for this job! I think the correct tool for this job would be 'tr'. I can replace all the newlines with colons with the command: cat INPUT.txt | tr '\n' ':' There's 99% of my work done. I have a problem now, though. By replacing all the newlines with colons, I not only get an extraneous colon at the end of the sequence, but I also lose the newline at the end of the input. It looks like this: A:B:C:D:X:Y:Z:<EOF> Now, I need to remove the colon from the end of the input. However, if I attempt to pass this processed input through 'sed' to remove the final colon (which would now, I think, be a proper use of 'sed'), I find myself with a second problem. The input is no longer terminated by a newline at all! 'sed' fails outright, for all commands, because it never finds the end of the first line of input! It seems like appending a newline to the end of some input is a very, very common task, and considering I myself was just sorely tempted to write a program to do it in C (which would take about eight lines of code), I can't imagine there's not already a very simple way to do this with the tools already available on a Linux system.

    Read the article

  • Can simple javascript inheritance be simplified even further?

    - by Will
    John Resig (of jQuery fame) provides a concise and elegant way to allow simple JavaScript inheritance. It was so short and sweet, in fact, that it inspired me to try and simplify it even further (see code below). I've modified his original function such that it still passes all his tests and has the potential advantage of: readability (50% less code) simplicity (you don't have to be a ninja to understand it) performance (no extra wrappers around super/base method calls) consistency with C#'s base keyword Because this seems almost too good to be true, I want to make sure my logic doesn't have any fundamental flaws/holes/bugs, or if anyone has additional suggestions to improve or refute the code (perhaps even John Resig could chime in here!). Does anyone see anything wrong with my approach (below) vs. John Resig's original approach? if (!window.Class) { window.Class = function() {}; window.Class.extend = function(members) { var prototype = new this(); for (var i in members) prototype[i] = members[i]; prototype.base = this.prototype; function object() { if (object.caller == null && this.initialize) this.initialize.apply(this, arguments); } object.constructor = object; object.prototype = prototype; object.extend = arguments.callee; return object; }; } And the tests (below) are nearly identical to the original ones except for the syntax around base/super method calls (for the reason enumerated above): var Person = Class.extend( { initialize: function(isDancing) { this.dancing = isDancing; }, dance: function() { return this.dancing; } }); var Ninja = Person.extend( { initialize: function() { this.base.initialize(false); }, dance: function() { return this.base.dance(); }, swingSword: function() { return true; } }); var p = new Person(true); alert("true? " + p.dance()); // => true var n = new Ninja(); alert("false? " + n.dance()); // => false alert("true? " + n.swingSword()); // => true alert("true? " + (p instanceof Person && p instanceof Class && n instanceof Ninja && n instanceof Person && n instanceof Class));

    Read the article

  • Using JUnit as an acceptance test framework

    - by Chris Knight
    OK, so I work for a company who has openly adopted agile practices for development in recent years. Our unit tests and code quality are improving. One area we still are working on is to find what works best for us in the automated acceptance test arena. We want to take our well formed user stories and use these to drive the code in a test driven manner. This will also give us acceptance level tests for each user story which we can then automate. To date, we've tried Fit, Fitnesse and Selenium. Each have their advantages, but we've also had real issues with them as well. With Fit and Fitnesse, we can't help but feel they overcomplicate things and we've had many technical issues using them. The business haven't fully bought in these tools and aren't particularly keen on maintaining the scripts all the time (and aren't big fans of the table style). Selenium is really good, but slow and relies on real time data and resources. One approach we are now considering is the use of the JUnit framework to provide similiar functionality. Rather than testing just a small unit of work using JUnit, why not use it to write a test (using the JUnit framework) to cover an acceptance level swath of the application? I.e. take a new story ("As a user I would like to see basic details of my policy...") and write a test in JUnit which starts executing application code at the point of entry for the policy details link but covers all code and logic down to the stubbed data access layer and back to the point of forwarding to the next page in the application, asserting on what data the user should see on that page. This seems to me to have the following advantages: Simplicity (no additional frameworks required) Zero effort to integrate with our Continuous Integration build server (since it already handles our JUnit tests) Full skillset already present in the team (its just a JUnit test after all) And the downsides being: Less customer involvement (though they are heavily involved in writing the user stories in the first place from which the acceptance tests will be written) Perhaps more difficult to understand (or make understood) the user story and acceptance criteria in a JUnit class verses a freetext specification ala Fit or Fitnesse So, my question is really, have you ever tried this method? Ever considered it? What are your thoughts? What do you like and dislike about this approach? Finally, please only mention alternative frameworks if you can say why you like or dislike them more than this approach.

    Read the article

  • Writing reports with Perl

    - by georgemp
    Hi, I am trying to write out multiple report files using Perl. Each file has the same structure, but with different data. So, my basic code looks something like #begin code our $log_fh; open $log_fh, ">" . $logfile; our $rep; if (multipleReports) { while (@reports) { printReports($report[0]); } } sub printReports { open $rep, ">" . $_[0]; printHeader(); printBody(); close $rep; } sub printHeader() { format HDR = @>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> $generatedLine . format HDR_TOP = . $rep->format_name("HDR"); $rep->format_top_name("HDR_TOP"); $generatedLine = "test"; write($rep); $generatedLine = "next item"; write($rep); $generatedLine = "last header item"; write($rep); } sub printBody #There are multiple such sections in my code. For simplicity, I have just shown 1 here { #declare own header and header top. Set report to use these and print items to $rep } #end code The above is just a high-level view of the code I am using and I hope I have captured all the salient points. However, for some reason, I get the first report file output correctly. The second file, instead of having "test", "next item", "last item" in the first section, reads "last item", "last item", "last item". I have tried a whole lot of options, primarily around autoflush, but, for the life of me, I can't figure out why it is doing this. I am using Perl 5.8.2. Any help/pointers much appreciated. Thanks George

    Read the article

  • Scala actors: receive vs react

    - by jqno
    Let me first say that I have quite a lot of Java experience, but have only recently become interested in functional languages. Recently I've started looking at Scala, which seems like a very nice language. However, I've been reading about Scala's Actor framework in Programming in Scala, and there's one thing I don't understand. In chapter 30.4 it says that using react instead of receive makes it possible to re-use threads, which is good for performance, since threads are expensive in the JVM. Does this mean that, as long as I remember to call react instead of receive, I can start as many Actors as I like? Before discovering Scala, I've been playing with Erlang, and the author of Programming Erlang boasts about spawning over 200,000 processes without breaking a sweat. I'd hate to do that with Java threads. What kind of limits am I looking at in Scala as compared to Erlang (and Java)? Also, how does this thread re-use work in Scala? Let's assume, for simplicity, that I have only one thread. Will all the actors that I start run sequentially in this thread, or will some sort of task-switching take place? For example, if I start two actors that ping-pong messages to each other, will I risk deadlock if they're started in the same thread? According to Programming in Scala, writing actors to use react is more difficult than with receive. This sounds plausible, since react doesn't return. However, the book goes on to show how you can put a react inside a loop using Actor.loop. As a result, you get loop { react { ... } } which, to me, seems pretty similar to while (true) { receive { ... } } which is used earlier in the book. Still, the book says that "in practice, programs will need at least a few receive's". So what am I missing here? What can receive do that react cannot, besides return? And why do I care? Finally, coming to the core of what I don't understand: the book keeps mentioning how using react makes it possible to discard the call stack to re-use the thread. How does that work? Why is it necessary to discard the call stack? And why can the call stack be discarded when a function terminates by throwing an exception (react), but not when it terminates by returning (receive)? I have the impression that Programming in Scala has been glossing over some of the key issues here, which is a shame, because otherwise it's a truly excellent book.

    Read the article

  • I've got my 2D/3D conversion working perfectly, how to do perspective

    - by user346992
    Although the context of this question is about making a 2d/3d game, the problem I have boils down to some math. Although it's a 2.5D world, let's pretend it's just 2D for this question. // xa: x-accent, the x coordinate of the projection // mapP: a coordinate on a map which needs to be projected // _Dist_ values are constants for the projection, choosing them correctly will result in, e.g., an isometric projection xa = mapP.x * xDistX + mapP.y * xDistY; ya = mapP.x * yDistX + mapP.y * yDistY; xDistX and yDistX determine the angle of the x-axis, and xDistY and yDistY determine the angle of the y-axis on the projection (and also the size of the grid, but let's assume this is 1 pixel for simplicity). x-axis-angle = atan(yDistX/xDistX) y-axis-angle = atan(yDistY/xDistY) a "normal" coordinate system like this --------------- x | | | | | y has values like this: xDistX = 1; yDistX = 0; xDistY = 0; yDistY = 1; So every step in the x direction will result, on the projection, in 1 pixel to the right and 0 pixels down. Every step in the y direction of the projection will result in 0 steps to the right and 1 pixel down. When choosing the correct xDistX, yDistX, xDistY, yDistY, you can project any trimetric or dimetric system (which is why I chose this). So far so good; when this is drawn everything turns out okay. If "my system" and mindset are clear, let's move on to perspective. I wanted to add some perspective to this grid so I added some extras like this: camera = new MapPoint(60, 60); dx = mapP.x - camera.x; // delta x dy = mapP.y - camera.y; // delta y dist = Math.sqrt(dx * dx + dy * dy); // dist is the distance to the camera, Pythagoras etc.. all objects must be in front of the camera fac = 1 - dist / 100; // this formula determines the amount of perspective xa = fac * (mapP.x * xDistX + mapP.y * xDistY) ; ya = fac * (mapP.x * yDistX + mapP.y * yDistY ); Now the real hard part... what if you have an (xa,ya) point on the projection and want to calculate the original point (x,y)? For the first case (without perspective) I did find the inverse function, but how can this be done for the formula with the perspective? My math skills are not quite up to the challenge of solving this. (I vaguely remember from a long time ago that Mathematica could create inverse functions for some special cases... could it solve this problem? Could someone maybe try?)
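
    One way to attack the inverse, kept in the notation of the question, is sketched here as a derivation (assuming the 2x2 matrix $M$ below is invertible and that dist < 100, so fac > 0); it is a sketch, not tested code. Write the forward map as $s = f\,Mp$, where $p = (x, y)$ is the map point, $s = (xa, ya)$ the projected point, $c$ the camera, $f = 1 - \lVert p - c\rVert/100$, and $M = \begin{pmatrix} xDistX & xDistY \\ yDistX & yDistY \end{pmatrix}$. Let $p_0 = M^{-1}s$. Since $s$ is a scalar multiple of $Mp$, it follows that $p = p_0/f$, so $f$ must satisfy the single scalar equation $f = 1 - \lVert p_0/f - c\rVert/100$. Rearranging to $\lVert p_0/f - c\rVert = 100(1 - f)$, squaring, and multiplying through by $f^2$ gives $\lVert p_0\rVert^2 - 2(p_0 \cdot c)\,f + \lVert c\rVert^2 f^2 = 10000\,f^2(1 - f)^2$, a quartic in $f$. In practice the easiest route is to solve this numerically for $f \in (0, 1]$ (bisection or Newton's method) and then recover the map coordinate as $p = p_0/f$; a closed form exists via the quartic formula but is unwieldy.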

    Read the article
