Search Results

Search found 12404 results on 497 pages for 'native types'.


  • wpf display staggered content

    - by Chris Cap
    I am trying to display a rather dynamic list of data in WPF. I have essentially a LineItem class that contains a list of strings and a line type. The line type separates different categories of line items. All line items with the same type should be displayed the same and their data should line up. For example, this list will contain an order summary. And there will be a line type that represents something with a width and height. The width and height must line up vertically. However, there may be other line types that don't have to line up vertically. I want to produce a table similar to what you see below: ------------------------------------------------------------------ | some content here | some more content here | last content here | |----------------------------------------------------------------| | some content here | | last content here | |----------------------------------------------------------------| | spanning content that is longer than most | last content here | |----------------------------------------------------------------| | some content that can span a really long distance | ------------------------------------------------------------------ I attempted to do this by creating a ListView with a single column that had a DataTemplate that contained a grid with a fixed number of fields and then binding to the Colspan value. Unfortunately, this didn't work. I ended up with incorrect or overlapping content anytime I tried to do a column span. Here's the XAML I was working with: <ListView ItemsSource="{Binding}" > <ListView.View> <GridView> <GridViewColumn Header="Content"> <GridViewColumn.CellTemplate> <DataTemplate> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition /> <ColumnDefinition /> <ColumnDefinition /> </Grid.ColumnDefinitions> <TextBlock Grid.Column="0" Grid.ColumnSpan="{Binding Path=Tokens[0].ColumnSpan}" Text="{Binding Path=Tokens[0].Content}" ></TextBlock> <TextBlock Grid.Column="1" Grid.ColumnSpan="{Binding Path=Tokens[1].ColumnSpan}" Text="{Binding Path=Tokens[1].Content}" ></TextBlock> <TextBlock Grid.Column="2" Text="{Binding Path=Tokens[2].Content}"></TextBlock> </Grid> </DataTemplate> </GridViewColumn.CellTemplate> </GridViewColumn> </GridView> </ListView.View> And here are the classes I was binding to: public class DisplayLine { public LineType Linetype { get; set; } public List<Token> Tokens { get; set; } public DisplayLine() { Tokens = new List<Token>(); } } public class Token { public string Content { get; set; } public bool IsEmpty { get { return string.IsNullOrEmpty(Content); } } public int ColumnSpan { get; set; } public Token() { ColumnSpan = 1; } } Does anyone have any suggestions for a way of making this work? I may be taking the wrong approach. I'm trying to avoid any solutions where I explicitly build something in the code-behind, as I'm using the MVVM pattern, so it has to be something I can bind to that is exposed through the controller. My initial plan was to create a factory and separate classes that display the data differently based on type. However, I'm struggling to come up with a strategy for this using MVVM as I really can't just build something and display it. I have toyed with the idea of making some kind of UI service class that is injected, but it would still require some pretty detailed UI information from the controller to do its work.

    Read the article

  • TSQL Shred XML - Is this right or is there a better way (newbie @ shredding XML)

    - by drachenstern
    Ok, I'm a C# ASP.NET dev following orders: The orders are to take a given dataset, shred the XML and return columns. I've argued that it's easier to do the shredding on the ASP.NET side where we already have access to things like deserializers, etc, and the entire complex of known types, but no, the boss says "shred it on the server, return a dataset, bind the dataset to the columns of the gridview" so for now, I'm doing what I was told. This is all to head off the folks who will come along and say "bad requirements". Task at hand: Here's my code that works and does what I want it to: DECLARE @table1 AS TABLE ( ProductID VARCHAR(10) , Name VARCHAR(20) , Color VARCHAR(20) , UserEntered VARCHAR(20) , XmlField XML ) INSERT INTO @table1 SELECT '12345','ball','red','john','<sizes><size name="medium"><price>10</price></size><size name="large"><price>20</price></size></sizes>' INSERT INTO @table1 SELECT '12346','ball','blue','adam','<sizes><size name="medium"><price>12</price></size><size name="large"><price>25</price></size></sizes>' INSERT INTO @table1 SELECT '12347','ring','red','john','<sizes><size name="medium"><price>5</price></size><size name="large"><price>8</price></size></sizes>' INSERT INTO @table1 SELECT '12348','ring','blue','adam','<sizes><size name="medium"><price>8</price></size><size name="large"><price>10</price></size></sizes>' INSERT INTO @table1 SELECT '23456','auto','black','ann','<auto><type>car</type><wheels>4</wheels><doors>4</doors><cylinders>3</cylinders></auto>' INSERT INTO @table1 SELECT '23457','auto','black','ann','<auto><type>truck</type><wheels>4</wheels><doors>2</doors><cylinders>8</cylinders></auto><auto><type>car</type><wheels>4</wheels><doors>4</doors><cylinders>6</cylinders></auto>' DECLARE @x XML SELECT @x = ( SELECT ProductID , Name , Color , UserEntered , XmlField.query(' for $vehicle in //auto return <auto type = "{$vehicle/type}" wheels = "{$vehicle/wheels}" doors = "{$vehicle/doors}" cylinders = "{$vehicle/cylinders}" />') FROM @table1 table1 WHERE Name = 'auto' FOR XML AUTO ) SELECT @x SELECT ProductID = T.Item.value('../@ProductID', 'varchar(10)') , Name = T.Item.value('../@Name', 'varchar(20)') , Color = T.Item.value('../@Color', 'varchar(20)') , UserEntered = T.Item.value('../@UserEntered', 'varchar(20)') , VType = T.Item.value('@type' , 'varchar(10)') , Wheels = T.Item.value('@wheels', 'varchar(2)') , Doors = T.Item.value('@doors', 'varchar(2)') , Cylinders = T.Item.value('@cylinders', 'varchar(2)') FROM @x.nodes('//table1/auto') AS T(Item) SELECT @x = ( SELECT ProductID , Name , Color , UserEntered , XmlField.query(' for $object in //sizes/size return <size name = "{$object/@name}" price = "{$object/price}" />') FROM @table1 table1 WHERE Name IN ('ring', 'ball') FOR XML AUTO ) SELECT @x SELECT ProductID = T.Item.value('../@ProductID', 'varchar(10)') , Name = T.Item.value('../@Name', 'varchar(20)') , Color = T.Item.value('../@Color', 'varchar(20)') , UserEntered = T.Item.value('../@UserEntered', 'varchar(20)') , SubName = T.Item.value('@name' , 'varchar(10)') , Price = T.Item.value('@price', 'varchar(2)') FROM @x.nodes('//table1/size') AS T(Item) So for now, I'm trying to figure out if there's a better way to write the code than what I'm doing now... (I have a part 2 I'm about to go key in)
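    For comparison only, here is a minimal sketch of the client-side shredding the poster says they argued for, written against .NET 3.5's LINQ to XML (an assumption; the question never states the framework version). The SizeRow holder and the method name are made up for illustration and are not part of the original code:

        using System.Linq;
        using System.Xml.Linq;

        // Illustrative holder for one shredded row; the fields mirror the columns
        // produced by the second T-SQL SELECT above.
        class SizeRow
        {
            public string ProductID, Name, Color, UserEntered, SubName, Price;
        }

        static class XmlShredder
        {
            // xmlField is the raw <sizes>...</sizes> fragment stored in the XML column.
            public static SizeRow[] ShredSizes(string productId, string name, string color,
                                               string userEntered, string xmlField)
            {
                return XElement.Parse(xmlField)      // <sizes>
                    .Elements("size")                // each <size name="...">
                    .Select(s => new SizeRow
                    {
                        ProductID = productId,
                        Name = name,
                        Color = color,
                        UserEntered = userEntered,
                        SubName = (string)s.Attribute("name"),
                        Price = (string)s.Element("price")
                    })
                    .ToArray();
            }
        }

    Whether to shred in T-SQL or in C# is exactly the argument described above; the sketch is only meant to show what the client-side alternative looks like.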

    Read the article

  • How can I handle parameterized queries in Drupal?

    - by Anthony Gatlin
    We have a client who is currently using Lotus Notes/Domino as their content management system and web server. For many reasons, we are recommending they sunset their Notes/Domino implementation and transition onto a more modern platform--such as Drupal. The client has several web applications which would be a natural fit for Drupal. However, I am unsure of the best way to implement one of the web applications in Drupal. I am running into a knowledge barrier and wondered if any of you could fill in the gaps. Situation The client has a Lotus Domino application which serves as a front-end for querying a large DB2 data store and returning a result set (generally in table form) to a user via the web. The web application provides access to approximately 100 pre-defined queries--50 of which are public and 50 of which are secured. Most of the queries accept some set of user selected parameters as input. The output of the queries is typically returned to users in a list (table) format. A limited number of result sets allow drill-down through the HTML table into detail records. The query parameters often involve database queries themselves. For example, a single query may pull a list of company divisions into a drop-down. Once a division is selected, second drop-down with the departments from that division is populated--but perhaps only departments which meet some special criteria--such as those having taken a loss within a specific time frame. Most queries have 2-4 parameters with the average probably being 3. The application involves no data entry. None of the back-end data is ever modified by the web application. All access is purely based around querying data and viewing results. The queries change relatively infrequently, and the current system has been in place for approximately 10 years. There may be 10-20 query additions, modifications, or other changes in a given year. The client simply desires to change the presentation platform but absolutely does not want to re-do the 100 database queries. Once the project is implemented, the client wants their staff to take over and manage future changes. The client's staff have no background in Drupal or PHP but are somewhat willing to learn as necessary. How would you transition this into Drupal? My major knowledge void relates to how we would manage the query parameters and access the queries themselves. Here are a few specific questions but feel free to chime in on any issue related to this implementation. Would we have to build 100 forms by hand--with each form containing the parameters for a given query? If so, how would we do this? Approximately how long would it take to build/configure each of these forms? Is there a better way than manually building 100 forms? (I understand using CCK to enter data into custom content types but since we aren't adding any nodes, I am a little stuck as to how this might work.) Would it be possible for the internal staff to learn to create these query parameter forms--even if they are unfamiliar with Drupal today? Would they be required to do any PHP programming? How would we take the query parameters from a form and execute a query against DB2? Would this require a custom module? If so, would it require one module total or one module per query? (Note: There is apparently a DB2 driver available for Drupal. See http://groups.drupal.org/node/5511.) 
Note: I am not looking for CMS recommendations other than Drupal as Drupal nicely fits all of the client's other requirements, and I hope to help them standardize on a single platform. Any assistance you can provide would be helpful. Thank you in advance for your help!

    Read the article

  • How to use objects as modules/functors in Scala?

    - by Jeff
    Hi. I want to use object instances as modules/functors, more or less as shown below: abstract class Lattice[E] extends Set[E] { val minimum: E val maximum: E def meet(x: E, y: E): E def join(x: E, y: E): E def neg(x: E): E } class Calculus[E](val lat: Lattice[E]) { abstract class Expr case class Var(name: String) extends Expr {...} case class Val(value: E) extends Expr {...} case class Neg(e1: Expr) extends Expr {...} case class Cnj(e1: Expr, e2: Expr) extends Expr {...} case class Dsj(e1: Expr, e2: Expr) extends Expr {...} } So that I can create a different calculus instance for each lattice (the operations I will perform need the information of which are the maximum and minimum values of the lattice). I want to be able to mix expressions of the same calculus but not be allowed to mix expressions of different ones. So far, so good. I can create my calculus instances, but problem is that I can not write functions in other classes that manipulate them. For example, I am trying to create a parser to read expressions from a file and return them; I also was trying to write an random expression generator to use in my tests with ScalaCheck. Turns out that every time a function generates an Expr object I can't use it outside the function. Even if I create the Calculus instance and pass it as an argument to the function that will in turn generate the Expr objects, the return of the function is not recognized as being of the same type of the objects created outside the function. Maybe my english is not clear enough, let me try a toy example of what I would like to do (not the real ScalaCheck generator, but close enough). def genRndExpr[E](c: Calculus[E], level: Int): Calculus[E]#Expr = { if (level > MAX_LEVEL) { val select = util.Random.nextInt(2) select match { case 0 => genRndVar(c) case 1 => genRndVal(c) } } else { val select = util.Random.nextInt(3) select match { case 0 => new c.Neg(genRndExpr(c, level+1)) case 1 => new c.Dsj(genRndExpr(c, level+1), genRndExpr(c, level+1)) case 2 => new c.Cnj(genRndExpr(c, level+1), genRndExpr(c, level+1)) } } } Now, if I try to compile the above code I get lots of error: type mismatch; found : plg.mvfml.Calculus[E]#Expr required: c.Expr case 0 = new c.Neg(genRndExpr(c, level+1)) And the same happens if I try to do something like: val boolCalc = new Calculus(Bool) val e1: boolCalc.Expr = genRndExpr(boolCalc) Please note that the generator itself is not of concern, but I will need to do similar things (i.e. create and manipulate calculus instance expressions) a lot on the rest of the system. Am I doing something wrong? Is it possible to do what I want to do? Help on this matter is highly needed and appreciated. Thanks a lot in advance. After receiving an answer from Apocalisp and trying it. Thanks a lot for the answer, but there are still some issues. The proposed solution was to change the signature of the function to: def genRndExpr[E, C <: Calculus[E]](c: C, level: Int): C#Expr I changed the signature for all the functions involved: getRndExpr, getRndVal and getRndVar. 
    Everywhere I call these functions I got the following error message: error: inferred type arguments [Nothing,C] do not conform to method genRndVar's type parameter bounds [E,C <: Calculus[E]] genRndVar(c) Since the compiler seemed to be unable to figure out the right types, I changed all the function calls to look like this: case 0 => new c.Neg(genRndExpr[E,C](c, level+1)) After this, the first 2 function calls (genRndVal and genRndVar) compiled without error, but on the following 3 calls (the recursive calls to genRndExpr), where the return value of the function is used to build a new Expr object, I got the following error: error: type mismatch; found : C#Expr required: c.Expr case 0 => new c.Neg(genRndExpr[E,C](c, level+1)) So, again, I'm stuck. Any help will be appreciated.

    Read the article

  • How to Bind a selected Item in a Listbox to a ItemsControl and ItemTemplate in WPF and C#

    - by Scott
    All, Lowdown: I am trying to create a Document Viewer in WPF. The layout is this: the left side is a full list box. On the right side is a Collection or an Items control. Inside the items control will be a collection of the "selected documents" in the list box. So a user can select multiple items in the list box and for each new item they select, they can add the item to the collection on the right. I want the collection to look like an image gallery that shows up in Google/Bing Image searches. Make sense? The problem I am having is I can't get the WPFPreviewer to bind correctly to the selected item in the list box under the ItemsControl. Side Note: The WPFPreviewer is something Microsoft puts out that allows us to preview documents. Other previewers can be built for all types of documents, but I'm going basic here until I get this working right. I have been successful in binding to the list box WITHOUT the items control here: <Window.Resources> <DataTemplate x:Key="listBoxTemplate"> <StackPanel Margin="3" > <DockPanel > <Image Source="{Binding IconURL}" Height="30"></Image> <TextBlock Text=" " /> <TextBlock x:Name="Title" Text="{Binding Title}" FontWeight="Bold" /> <TextBlock x:Name="URL" Visibility="Collapsed" Text="{Binding Url}"/> </DockPanel> </StackPanel> </DataTemplate> </Window.Resources><Grid Background="Cyan"> <ListBox HorizontalAlignment="Left" ItemTemplate="{StaticResource listBoxTemplate}" Width="200" AllowDrop="True" x:Name="lbDocuments" ItemsSource="{Binding Path=DocumentElements,ElementName=winDocument}" DragEnter="documentListBox_DragEnter" /> <l:WPFPreviewHandler Content="{Binding ElementName=lbDocuments, Path=SelectedItem.Url}"/> </Grid> Though, once I add in the ItemsControl, I can't get it to work anymore: <Window.Resources> <DataTemplate x:Key="listBoxTemplate"> <StackPanel Margin="3" > <DockPanel > <Image Source="{Binding IconURL}" Height="30"></Image> <TextBlock Text=" " /> <TextBlock x:Name="Title" Text="{Binding Title}" FontWeight="Bold" /> <TextBlock x:Name="URL" Visibility="Collapsed" Text="{Binding Url}"/> </DockPanel> </StackPanel> </DataTemplate> </Window.Resources> <Grid> <ListBox HorizontalAlignment="Left" ItemTemplate="{StaticResource listBoxTemplate}" Width="200" AllowDrop="True" x:Name="lbDocuments" ItemsSource="{Binding Path=DocumentElements,ElementName=winDocument}" DragEnter="documentListBox_DragEnter" /> <ItemsControl x:Name="DocumentViewer" ItemsSource="{Binding ElementName=lbDocuments, Path=SelectedItem.Url}" > <ItemsControl.ItemTemplate> <DataTemplate> <Grid Background="Cyan"> <l:WPFPreviewHandler Content="{Binding Url}"/> </Grid> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> </Grid> Can someone please help me out with trying to bind to the ItemsControl if I select one or even multiple items in the listbox?
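    One commonly suggested direction, offered here as a hedged sketch rather than the poster's solution: ListBox.SelectedItems is a plain CLR property rather than a dependency property, so instead of binding the ItemsControl to SelectedItem.Url, keep your own ObservableCollection in sync from code-behind and assign it to DocumentViewer.ItemsSource once. DocumentWindow and DocumentElement are hypothetical names standing in for the poster's window class and the item type inside DocumentElements:

        using System.Collections.ObjectModel;
        using System.Windows.Controls;

        // Code-behind half; DocumentViewer and lbDocuments come from the posted XAML (x:Name).
        public partial class DocumentWindow
        {
            // The ItemsControl shows exactly the documents currently selected in the ListBox.
            private readonly ObservableCollection<DocumentElement> selectedDocs =
                new ObservableCollection<DocumentElement>();

            private void WireUpViewer()
            {
                DocumentViewer.ItemsSource = selectedDocs;           // assign once
                lbDocuments.SelectionMode = SelectionMode.Extended;  // allow multi-select
                lbDocuments.SelectionChanged += OnSelectionChanged;
            }

            private void OnSelectionChanged(object sender, SelectionChangedEventArgs e)
            {
                foreach (DocumentElement doc in e.RemovedItems)
                    selectedDocs.Remove(doc);
                foreach (DocumentElement doc in e.AddedItems)
                    selectedDocs.Add(doc);
            }
        }

    The ItemsControl's DataTemplate then stays as posted, with Content="{Binding Url}" resolving against each selected document.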

    Read the article

  • Rendering a view to a string in MVC, then redirecting -- workarounds?

    - by James S
    Hi -- I can't render a view to a string and then redirect, despite this answer from Feb (after version 1.0, I think) that claims it's possible. I thought I was doing something wrong, and then I read this answer from Haack in July that claims it's not possible. If somebody has it working and can help me get it working, that's great (and I'll post code, errors). However, I'm now at the point of needing workarounds. There are a few, but nothing ideal. Has anybody solved this, or have any comments on my ideas? This is to render email. While I can surely send the email outside of the web request (store info in a db and get it later), there are many types of emails and I don't want to store the template data (user object, a few other LINQ objects) in a db to let it get rendered later. I could create a simpler, serializable POCO and save that in the db, but why? ... I just want rendered text! I can create a new RedirectToAction object that checks if the headers have been sent (can't figure out how to do this -- try/catch?) and, if so, builds out a simple page with a meta redirect, a javascript redirect, and also a "click here" link. Within my controller, I can remember if I've rendered an email and, if so, manually do #2 by displaying a view. I can manually send the redirect headers before any potential email rendering. Then, rather than using the MVC infrastructure to redirecttoaction, I just call result.end. This seems easiest, but really messy. Anything else? EDIT: I've tried Dan's code (very similar to the code from Jan/Feb that I've already tried) and I'm still getting the same error. The only substantial difference I can see is that his example uses a view while I use a partial view. I'll try testing this later with a view. Here's what I've got: Controller public ActionResult Certifications(string email_intro) { //a lot of stuff ViewData["users"] = users; if (isPost()) { //create the viewmodel var view_model = new ViewModels.Emails.Certifications.Open(userContext) { emailIntro = email_intro }; //i've tried stopping this after just one iteration, in case the problem is due to calling it multiple times foreach (var user in users) { if (user.Email_Address.IsValidEmailAddress()) { //add more stuff to the view model specific to this user view_model.user = user; view_model.certification302Summary.subProcessesOwner = new SubProcess_Certifications(RecordUpdating.Role.Owner, null, null, user.User_ID, repository); //more here.... //if i comment out the next line, everything works ok SendEmail(view_model, this.ControllerContext); } } return RedirectToAction("Certifications"); } return View(); } SendEmail() public static void SendEmail(ViewModels.Emails.Certifications.Open model, ControllerContext context) { var vd = context.Controller.ViewData; vd["model"] = model; var renderer = new CustomRenderers(); //i fixed an error in your code here var text = renderer.RenderViewToString3(context, "~/Views/Emails/Certifications/Open.ascx", "", vd, null); var a = text; } CustomRenderers public class CustomRenderers { public virtual string RenderViewToString3(ControllerContext controllerContext, string viewPath, string masterPath, ViewDataDictionary viewData, TempDataDictionary tempData) { //copy/paste of dan's code } } Error [HttpException (0x80004005): Cannot redirect after HTTP headers have been sent.] System.Web.HttpResponse.Redirect(String url, Boolean endResponse) +8707691 Thanks, James
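    For reference, a hedged sketch of the StringWriter-based helper that the RenderViewToString3 call above stands in for; the five-argument ViewContext constructor used here is the ASP.NET MVC 2 one, which is an assumption about the version in play. The relevant point for the redirect error is that nothing below writes to the live Response, only to the StringWriter, so a RedirectToAction issued afterwards can still send headers:

        using System.IO;
        using System.Web.Mvc;

        public static class ViewRenderer
        {
            // Renders a partial view (e.g. "~/Views/Emails/Certifications/Open.ascx")
            // to a string without touching the outgoing response.
            public static string RenderPartialToString(ControllerContext context,
                                                       string viewPath,
                                                       ViewDataDictionary viewData,
                                                       TempDataDictionary tempData)
            {
                ViewEngineResult result = ViewEngines.Engines.FindPartialView(context, viewPath);
                using (StringWriter writer = new StringWriter())
                {
                    ViewContext viewContext =
                        new ViewContext(context, result.View, viewData, tempData, writer);
                    result.View.Render(viewContext, writer);
                    return writer.ToString();
                }
            }
        }

    If the error persists even with a writer-only helper like this, something else in the request (a Response.Flush, writing directly to Response.Output, or an error page) has already sent headers before RedirectToAction runs.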

    Read the article

  • Looking for a better design: A readonly in-memory cache mechanism

    - by Dylan Lin
    Hi all, I have a Category entity (class), which has zero or one parent Category and many child Categories -- it's a tree structure. The Category data is stored in an RDBMS, so for better performance I want to load all categories and cache them in memory while launching the application. Our system can have plugins, and we allow the plugin authors to access the Category Tree, but they should not modify the cached items and the tree (I think a non-readonly design might cause some subtle bugs in this scenario); only the system knows when and how to refresh the tree. Here is some demo code: public interface ITreeNode<T> where T : ITreeNode<T> { // No setter T Parent { get; } IEnumerable<T> ChildNodes { get; } } // This class is generated by an O/R Mapping tool (e.g. Entity Framework) public class Category : EntityObject { public string Name { get; set; } } // Because Category is not stateless, I create a cleaner view class for Category. // And this class is the Node Type of the Category Tree public class CategoryView : ITreeNode<CategoryView> { public string Name { get; private set; } #region ITreeNode Members public CategoryView Parent { get; private set; } private List<CategoryView> _childNodes; public IEnumerable<CategoryView> ChildNodes { get { return _childNodes; } } #endregion public static CategoryView CreateFrom(Category category) { // here I can set the CategoryView.Name property } } So far so good. However, I want to make the ITreeNode interface reusable, and for some other types the tree should not be readonly. We are not able to do this with the above readonly ITreeNode, so I want the ITreeNode to be like this: public interface ITreeNode<T> { // has setter T Parent { get; set; } // use ICollection<T> instead of IEnumerable<T> ICollection<T> ChildNodes { get; } } But if we make the ITreeNode writable, then we cannot make the Category Tree readonly, which is not good. So I wonder if we can do it like this: public interface ITreeNode<T> { T Parent { get; } IEnumerable<T> ChildNodes { get; } } public interface IWritableTreeNode<T> : ITreeNode<T> { new T Parent { get; set; } new ICollection<T> ChildNodes { get; } } Is this good or bad? Are there better designs? Thanks a lot! :)
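    To make the last variant concrete, here is a minimal sketch (illustrative names, not the poster's real classes) of a node type that implements the writable interface internally while plugins are only handed the read-only ITreeNode view:

        using System.Collections.Generic;

        public interface ITreeNode<T>
        {
            T Parent { get; }
            IEnumerable<T> ChildNodes { get; }
        }

        public interface IWritableTreeNode<T> : ITreeNode<T>
        {
            new T Parent { get; set; }
            new ICollection<T> ChildNodes { get; }
        }

        // The system builds and refreshes the tree through IWritableTreeNode<CategoryNode>;
        // plugins only ever see ITreeNode<CategoryNode>.
        public class CategoryNode : IWritableTreeNode<CategoryNode>
        {
            private readonly List<CategoryNode> children = new List<CategoryNode>();

            public string Name { get; set; }

            // Satisfies both Parent members: the writable one and the read-only base one.
            public CategoryNode Parent { get; set; }

            // Writable child view, used while (re)building the cache.
            public ICollection<CategoryNode> ChildNodes { get { return children; } }

            // Read-only child view exposed through ITreeNode<CategoryNode>.
            IEnumerable<CategoryNode> ITreeNode<CategoryNode>.ChildNodes { get { return children; } }
        }

    This is a convention rather than a hard guarantee (a plugin can still down-cast), but it keeps the mutable surface out of the interface the plugins are given, which seems to be the goal.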

    Read the article

  • Getting the constructor of an Interface Type through reflection, is there a better approach than loo

    - by Will Marcouiller
    I have written a generic type: IDirectorySource<T> where T : IDirectoryEntry, which I'm using to manage Active Directory entries through my interfaces objects: IGroup, IOrganizationalUnit, IUser. So that I can write the following: IDirectorySource<IGroup> groups = new DirectorySource<IGroup>(); // Where IGroup implements `IDirectoryEntry`, of course.` foreach (IGroup g in groups.ToList()) { listView1.Items.Add(g.Name).SubItems.Add(g.Description); } From the IDirectorySource<T>.ToList() methods, I use reflection to find out the appropriate constructor for the type parameter T. However, since T is given an interface type, it cannot find any constructor at all! Of course, I have an internal class Group : IGroup which implements the IGroup interface. No matter how hard I have tried, I can't figure out how to get the constructor out of my interface through my implementing class. [DirectorySchemaAttribute("group")] public interface IGroup { } internal class Group : IGroup { internal Group(DirectoryEntry entry) { NativeEntry = entry; Domain = NativeEntry.Path; } // Implementing IGroup interface... } Within the ToList() method of my IDirectorySource<T> interface implementation, I look for the constructor of T as follows: internal class DirectorySource<T> : IDirectorySource<T> { // Implementing properties... // Methods implementations... public IList<T> ToList() { Type t = typeof(T) // Let's assume we're always working with the IGroup interface as T here to keep it simple. // So, my `DirectorySchema` property is already set to "group". // My `DirectorySearcher` is already instantiated here, as I do it within the DirectorySource<T> constructor. Searcher.Filter = string.Format("(&(objectClass={0}))", DirectorySchema) ConstructorInfo ctor = null; ParameterInfo[] params = null; // This is where I get stuck for now... Please see the helper method. GetConstructor(out ctor, out params, new Type() { DirectoryEntry }); SearchResultCollection results = null; try { results = Searcher.FindAll(); } catch (DirectoryServicesCOMException ex) { // Handling exception here... } foreach (SearchResult entry in results) entities.Add(ctor.Invoke(new object() { entry.GetDirectoryEntry() })); return entities; } } private void GetConstructor(out ConstructorInfo constructor, out ParameterInfo[] parameters, Type paramsTypes) { Type t = typeof(T); ConstructorInfo[] ctors = t.GetConstructors(BindingFlags.CreateInstance | BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.InvokeMethod); bool found = true; foreach (ContructorInfo c in ctors) { parameters = c.GetParameters(); if (parameters.GetLength(0) == paramsTypes.GetLength(0)) { for (int index = 0; index < parameters.GetLength(0); ++index) { if (!(parameters[index].GetType() is paramsTypes[index].GetType())) found = false; } if (found) { constructor = c; return; } } } // Processing constructor not found message here... } My problem is that T will always be an interface, so it never finds a constructor. Is there a better way than looping through all of my assembly types for implementations of my interface? I don't care about rewriting a piece of my code, I want to do it right on the first place so that I won't need to come back again and again and again. EDIT #1 Following Sam's advice, I will for now go with the IName and Name convention. However, is it me or there's some way to improve my code? Thanks! =)
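    One alternative to scanning assembly types, offered as a hedged sketch rather than a drop-in fix: keep an explicit interface-to-implementation registry next to the internal classes and resolve the DirectoryEntry constructor from the concrete type. The registry class and its method name are illustrative additions, not existing code:

        using System;
        using System.Collections.Generic;
        using System.DirectoryServices;
        using System.Reflection;

        internal static class DirectoryTypeRegistry
        {
            // Maps each public directory interface to its internal implementation.
            private static readonly Dictionary<Type, Type> map = new Dictionary<Type, Type>
            {
                { typeof(IGroup), typeof(Group) }
                // { typeof(IUser), typeof(User) }, { typeof(IOrganizationalUnit), typeof(OrganizationalUnit) }, ...
            };

            // Returns the internal constructor taking a single DirectoryEntry.
            public static ConstructorInfo GetEntryConstructor<T>()
            {
                Type concrete = map[typeof(T)];
                return concrete.GetConstructor(
                    BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic,
                    null,
                    new[] { typeof(DirectoryEntry) },
                    null);
            }
        }

    Inside ToList() the lookup would then replace the constructor search entirely: ConstructorInfo ctor = DirectoryTypeRegistry.GetEntryConstructor<T>(); followed by the existing ctor.Invoke(new object[] { entry.GetDirectoryEntry() }).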

    Read the article

  • JPA IndirectSet changes not reflected in Spring frontend

    - by Jon
    I'm having an issue with Spring JPA and IndirectSets. I have two entities, Parent and Child, defined below. I have a Spring form in which I'm trying to create a new Child and link it to an existing Parent, then have everything reflected in the database and in the web interface. What's happening is that it gets put into the database, but the UI doesn't seem to agree. The two entities that are linked to each other in a OneToMany relationship like so: @Entity @Table(name = "parent", catalog = "myschema", uniqueConstraints = @UniqueConstraint(columnNames = "ChildLinkID")) public class Parent { private Integer id; private String childLinkID; private Set<Child> children = new HashSet<Child>(0); @Id @GeneratedValue(strategy = IDENTITY) @Column(name = "id", unique = true, nullable = false) public Integer getId() { return this.id; } public void setId(Integer id) { this.id = id; } @Column(name = "ChildLinkID", unique = true, nullable = false, length = 6) public String getChildLinkID() { return this.childLinkID; } public void setChildLinkID(String childLinkID) { this.childLinkID = childLinkID; } @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.LAZY, mappedBy = "parent") public Set<Child> getChildren() { return this.children; } public void setChildren(Set<Child> children) { this.children = children; } } @Entity @Table(name = "child", catalog = "myschema") public class Child extends private Integer id; private Parent parent; @Id @GeneratedValue(strategy = IDENTITY) @Column(name = "id", unique = true, nullable = false) public Integer getId() { return this.id; } public void setId(Integer id) { this.id = id; } @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "ChildLinkID", referencedColumnName = "ChildLinkID", nullable = false) public Parent getParent() { return this.parent; } public void setParent(Parent parent) { this.parent = parent; } } And of course, assorted simple properties on each of them. Now, the problem is that when I edit those simple properties from my Spring interface, everything works beautifully. I can persist new entities of these types and they'll appear when using the JPATemplate to do a find on, say, all Parents (getJpaTemplate().find("select p from Parent p")) or on individual entities by ID or another property. The problem I'm running into is that now, I'm trying to create a new Child linked to an existing Parent through a link from the Parent's page. Here's the important bits of the Controller (note that I've placed the JPA foo in the controller here to make it clearer; the actual JpaDaoSupport is actually in another class, appropriately tiered): protected Object formBackingObject(HttpServletRequest request) throws Exception { String parentArg = request.getParameter("parent"); int parentId = Integer.parseInt(parentArg); Parent parent = getJpaTemplate().find(Parent.class, parentId); Child child = new Child(); child.setParent(parent); NewChildCommand command = new NewChildCommand(); command.setChild(child); return command; } protected ModelAndView onSubmit(Object cmd) throws Exception { NewChildCommand command = (NewChildCommand)cmd; Child child = command.getChild(); child.getParent().getChildren().add(child); getJpaTemplate().merge(child); return new ModelAndView(new RedirectView(getSuccessView())); } Like I said, I can run through the form and fill in the new values for the Child -- the Parent's details aren't even displayed. When it gets back to the controller, it goes through and saves it to the underlying database, but the interface never reflects it. 
Once I restart the app, it's all there and populated appropriately. What can I do to clear this up? I've tried to call extra merges, tried refreshes (which gave a transaction exception), everything short of just writing my own database access code. I've made sure that every class has an appropriate equals() and hashCode(), have full JPA debugging on to see that it's making appropriate SQL calls (it doesn't seem to make any new calls to the Child table) and stepped through in the debugger (it's all in IndirectSets, as expected, and between saving and displaying the Parent the object takes on a new memory address). What's my next step?

    Read the article

  • Does this inheritance design belong in the database?

    - by Berryl
    === CLARIFICATION ==== The 'answers' older than March are not answers to the question in this post! Hello In my domain I need to track allocations of time spent on Activities by resources. There are two general types of Activities of interest - ones base on a Project and ones based on an Account. The notion of Project and Account have other features totally unrelated to both each other and capturing allocations of time, and each is modeled as a table in the database. For a given Allocation of time however, it makes sense to not care whether the allocation was made to either a Project or an Account, so an ActivityBase class abstracts away the difference. An ActivityBase is either a ProjectActivity or an AccountingActivity (object model is below). Back to the database though, there is no direct value in having tables for ProjectActivity and AccountingActivity. BUT the Allocation table needs to store something in the column for it's ActivityBase. Should that something be the Id of the Project / Account or a reference to tables for ProjectActivity / Accounting? How would the mapping look? === Current Db Mapping (Fluent) ==== Below is how the mapping currently looks: public class ActivityBaseMap : IAutoMappingOverride<ActivityBase> { public void Override(AutoMapping<ActivityBase> mapping) { //mapping.IgnoreProperty(x => x.BusinessId); //mapping.IgnoreProperty(x => x.Description); //mapping.IgnoreProperty(x => x.TotalTime); mapping.IgnoreProperty(x => x.UniqueId); } } public class AccountingActivityMap : SubclassMap<AccountingActivity> { public void Override(AutoMapping<AccountingActivity> mapping) { mapping.References(x => x.Account); } } public class ProjectActivityMap : SubclassMap<ProjectActivity> { public void Override(AutoMapping<ProjectActivity> mapping) { mapping.References(x => x.Project); } } There are two odd smells here. Firstly, the inheritance chain adds nothing in the way of properties - it simply adapts Projects and Accounts into a common interface so that either can be used in an Allocation. Secondly, the properties in the ActivityBase interface are redundant to keep in the db, since that information is available in Projects and Accounts. Cheers, Berryl ==== Domain ===== public class Allocation : Entity { ... public virtual ActivityBase Activity { get; private set; } ... } public abstract class ActivityBase : Entity { public virtual string BusinessId { get; protected set; } public virtual string Description { get; protected set; } public virtual ICollection<Allocation> Allocations { get { return _allocations.Values; } } public virtual TimeQuantity TotalTime { get { return TimeQuantity.Hours(Allocations.Sum(x => x.TimeSpent.Amount)); } } } public class ProjectActivity : ActivityBase { public virtual Project Project { get; private set; } public ProjectActivity(Project project) { BusinessId = project.Code.ToString(); Description = project.Description; Project = project; } }

    Read the article

  • DataTable to JSON

    - by Joel Coehoorn
    I recently needed to serialize a datatable to JSON. Where I'm at we're still on .Net 2.0, so I can't use the JSON serializer in .Net 3.5. I figured this must have been done before, so I went looking online and found a number of different options. Some of them depend on an additional library, which I would have a hard time pushing through here. Others require first converting to List<Dictionary<>>, which seemed a little awkward and needless. Another treated all values like a string. For one reason or another I couldn't really get behind any of them, so I decided to roll my own, which is posted below. As you can see from reading the //TODO comments, it's incomplete in a few places. This code is already in production here, so it does "work" in the basic sense. The places where it's incomplete are places where we know our production data won't currently hit it (no timespans or byte arrays in the db). The reason I'm posting here is that I feel like this can be a little better, and I'd like help finishing and improving this code. Any input welcome. public static class JSONHelper { public static string FromDataTable(DataTable dt) { string rowDelimiter = ""; StringBuilder result = new StringBuilder("["); foreach (DataRow row in dt.Rows) { result.Append(rowDelimiter); result.Append(FromDataRow(row)); rowDelimiter = ","; } result.Append("]"); return result.ToString(); } public static string FromDataRow(DataRow row) { DataColumnCollection cols = row.Table.Columns; string colDelimiter = ""; StringBuilder result = new StringBuilder("{"); for (int i = 0; i < cols.Count; i++) { // use index rather than foreach, so we can use the index for both the row and cols collection result.Append(colDelimiter).Append("\"") .Append(cols[i].ColumnName).Append("\":") .Append(JSONValueFromDataRowObject(row[i], cols[i].DataType)); colDelimiter = ","; } result.Append("}"); return result.ToString(); } // possible types: // http://msdn.microsoft.com/en-us/library/system.data.datacolumn.datatype(VS.80).aspx private static Type[] numeric = new Type[] {typeof(byte), typeof(decimal), typeof(double), typeof(Int16), typeof(Int32), typeof(SByte), typeof(Single), typeof(UInt16), typeof(UInt32), typeof(UInt64)}; // I don't want to rebuild this value for every date cell in the table private static long EpochTicks = new DateTime(1970, 1, 1).Ticks; private static string JSONValueFromDataRowObject(object value, Type DataType) { // null if (value == DBNull.Value) return "null"; // numeric if (Array.IndexOf(numeric, DataType) > -1) return value.ToString(); // TODO: eventually want to use a stricter format // boolean if (DataType == typeof(bool)) return ((bool)value) ? "true" : "false"; // date -- see http://weblogs.asp.net/bleroy/archive/2008/01/18/dates-and-json.aspx if (DataType == typeof(DateTime)) return "\"\\/Date(" + new TimeSpan(((DateTime)value).ToUniversalTime().Ticks - EpochTicks).TotalMilliseconds.ToString() + ")\\/\""; // TODO: add Timespan support // TODO: add Byte[] support //TODO: this would be _much_ faster with a state machine // string/char return "\"" + value.ToString().Replace(@"\", @"\\").Replace(Environment.NewLine, @"\n").Replace("\"", @"\""") + "\""; } }
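    Since the post explicitly asks for help finishing the helper, here is a hedged sketch of how the two TODO branches might look, in the same style and still .NET 2.0-friendly; treating a TimeSpan as a quoted string and a Byte[] as base64 are assumptions, not decisions taken from the original code:

        using System;

        static class JSONHelperExtras
        {
            // Candidate bodies for the TimeSpan and Byte[] TODOs in JSONValueFromDataRowObject.
            // Returns null for anything it does not handle so the existing code can fall through.
            public static string TryFormatExtra(object value, Type dataType)
            {
                // TimeSpan: emit as a quoted string using its default [d.]hh:mm:ss formatting.
                if (dataType == typeof(TimeSpan))
                    return "\"" + value.ToString() + "\"";

                // Byte[]: emit as a quoted base64 string.
                if (dataType == typeof(byte[]))
                    return "\"" + Convert.ToBase64String((byte[])value) + "\"";

                return null;
            }
        }

    Neither branch needs the escaping applied to ordinary strings, since base64 output and the default TimeSpan text contain no quotes, backslashes, or newlines.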

    Read the article

  • How to show form in front in C#

    - by corlettk
    Folks, please does anyone know how to show a Form from an otherwise invisible application, and have it get the focus (i.e. appear on top of other windows)? I'm working in C# .NET 3.5. I suspect I've taken "completely the wrong approach"... I do not Application.Run(new TheForm ()) instead I (new TheForm()).ShowModal()... The Form is basically a modal dialogue, with a few check-boxes, a text-box, and OK and Cancel buttons. The user ticks a checkbox and types in a description (or whatever) then presses OK, the form disappears and the process reads the user-input from the Form, Disposes it, and continues processing. This works, except when the form is shown it doesn't get the focus; instead it appears behind the "host" application, until you click on it in the taskbar (or whatever). This is a most annoying behaviour, which I predict will cause many "support calls", and the existing VB6 version doesn't have this problem, so I'm going backwards in usability... and users won't accept that (and nor should they). So... I'm starting to think I need to rethink the whole shebang... I should show the form up front, as a "normal application", and attach the remainder of the processing to the OK-button-click event. It should work, but that will take time which I don't have (I'm already over time/budget)... so first I really need to try to make the current approach work... even by quick-and-dirty methods. So please does anyone know how to "force" a .NET 3.5 Form (by fair means or fowl) to get the focus? I'm thinking "magic" windows API calls (I know Twilight Zone: This only appears to be an issue at work, where I'm using Visual Studio 2008 on Windows XP SP3... I've just failed to reproduce the problem with an SSCCE (see below) at home on Visual C# 2008 on Vista Ultimate... This works fine. Huh? WTF? Also, I'd swear that at work yesterday it showed the form when I ran the EXE, but not when F5'ed (or Ctrl-F5'ed) straight from the IDE (which I just put up with)... At home the form shows fine either way. Totally confusterpating! It may or may not be relevant, but Visual Studio crashed-and-burned this morning when the project was running in debug mode and I was editing the code "on the fly"... it got stuck in what I presumed was an endless loop of error messages. The error message was something about "can't debug this project because it is not the current project", or something... So I just killed it off with Process Explorer. It started up again fine, and even offered to recover the "lost" file, an offer which I accepted. using System; using System.Windows.Forms; namespace ShowFormOnTop { static class Program { [STAThread] static void Main() { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); //Application.Run(new Form1()); Form1 frm = new Form1(); frm.ShowDialog(); } } } Background: I'm porting an existing VB6 implementation to .NET... It's a "plugin" for a "client" GIS application called MapInfo. The existing client "worked invisibly" and my instructions are "to keep the new version as close as possible to the old version", which works well enough (after years of patching); it's just written in an unsupported language, so we need to port it. About me: I'm pretty much a noob to C# and .NET generally, though I've got a bottoms wiping certificate, I have been a professional programmer for 10 years; so I sort of "know some stuff". Any insights would be most welcome... and thank you all for taking the time to read this far. Conciseness isn't (apparently) my forte. Cheers. Keith.
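    For what it's worth, a hedged sketch of the usual quick-and-dirty options combined: the TopMost toggle and Activate are plain WinForms, and the SetForegroundWindow P/Invoke is the blunter Win32 hammer. None of this is from the original post, and Windows may still refuse to give foreground status to a process that has not recently received input:

        using System;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        static class ForegroundHelper
        {
            [DllImport("user32.dll")]
            private static extern bool SetForegroundWindow(IntPtr hWnd);

            // Usage: ForegroundHelper.ShowInFront(new TheForm());
            public static void ShowInFront(Form form)
            {
                form.TopMost = true;                   // start above other windows
                form.Shown += delegate                 // once the handle exists...
                {
                    form.Activate();                   // ...ask WinForms for focus
                    SetForegroundWindow(form.Handle);  // ...and ask Win32 directly
                    form.TopMost = false;              // don't stay always-on-top
                };
                form.ShowDialog();
            }
        }

    If the dialog still opens behind the MapInfo window, passing an owner handle via the ShowDialog(IWin32Window) overload is another avenue worth trying.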

    Read the article

  • Ruby Gem LoadError mysql2/mysql2 required

    - by Kalli Dalli
    Im trying to setup my rails server on OSX 10.8 but I can't get my rails server to run. - Currently Im using a Zend Server with mysql 5.1. - I also have istalled brew and brew mysql. - And I used: gem install mysql2 -- --srcdir=/usr/local/mysql/include --with-opt-include=/usr/local/mysql/include the server worked already but now, I always get this loadError below. This is what my Gemfile says: ralphs-macbook-pro:admin-mockup zero$ bundle install Using rake (10.0.2) Using i18n (0.6.1) Using multi_json (1.3.7) Using activesupport (3.2.7) Using builder (3.0.4) Using activemodel (3.2.7) Using erubis (2.7.0) Using journey (1.0.4) Using rack (1.4.1) Using rack-cache (1.2) Using rack-test (0.6.2) Using hike (1.2.1) Using tilt (1.3.3) Using sprockets (2.1.3) Using actionpack (3.2.7) Using mime-types (1.19) Using polyglot (0.3.3) Using treetop (1.4.12) Using mail (2.4.4) Using actionmailer (3.2.7) Using arel (3.0.2) Using tzinfo (0.3.35) Using activerecord (3.2.7) Using activeresource (3.2.7) Using annotate (2.5.0) Using coffee-script-source (1.4.0) Using execjs (1.4.0) Using coffee-script (2.2.0) Using rack-ssl (1.3.2) Using json (1.7.5) Using rdoc (3.12) Using thor (0.16.0) Using railties (3.2.7) Using coffee-rails (3.2.2) Using columnize (0.3.6) Using debugger-ruby_core_source (1.1.5) Using debugger-linecache (1.1.2) Using debugger (1.2.2) Using formtastic (2.2.1) Using haml (3.1.7) Using haml-rails (0.3.5) Using hirb (0.7.0) Using hpricot (0.8.6) Using jquery-rails (2.1.4) Using kgio (2.7.4) Using mysql2 (0.3.11) Using php_serialize (1.2) Using polyamorous (0.5.0) Using rabl (0.7.8) Using railroady (1.1.0) Using bundler (1.2.3) Using rails (3.2.7) Using raindrops (0.10.0) Using randumb (0.3.0) Using sass (3.2.3) Using sass-rails (3.2.5) Using squeel (1.0.13) Using uglifier (1.3.0) Using unicorn (4.4.0) Your bundle is complete! Use `bundle show [gemname]` to see where a bundled gem is installed. And after starting rails s /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/mysql2-0.3.11/lib/mysql2.rb:9:in `require': cannot load such file -- mysql2/mysql2 (LoadError) from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/mysql2-0.3.11/lib/mysql2.rb:9:in `<top (required)>' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler/runtime.rb:68:in `require' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler/runtime.rb:68:in `block (2 levels) in require' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler/runtime.rb:66:in `each' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler/runtime.rb:66:in `block in require' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler/runtime.rb:55:in `each' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler/runtime.rb:55:in `require' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/bundler-1.2.3/lib/bundler.rb:128:in `require' from /Users/zero/GitHub/admin-mockup/config/application.rb:7:in `<top (required)>' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/railties-3.2.7/lib/rails/commands.rb:53:in `require' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/railties-3.2.7/lib/rails/commands.rb:53:in `block in <top (required)>' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/railties-3.2.7/lib/rails/commands.rb:50:in `tap' from /Users/zero/.rvm/gems/ruby-1.9.3-p327/gems/railties-3.2.7/lib/rails/commands.rb:50:in `<top (required)>' from script/rails:6:in `require' from script/rails:6:in `<main>' Thx for any help!

    Read the article

  • Socket in C: recv overwrite a char[]

    - by Possa
    Hi all, I'm trying to make a little client-server script like many others that I've done in the past. But in this one I have a problem. It is better if I post the code and the output it give me. Code: #include <mysql.h> //not important now #include <stdlib.h> #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <arpa/inet.h> #include <netdb.h> #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <signal.h> #include <string.h> //constant definition #define SERVER_PORT 2121 #define LINESIZE 21 //global var definition char victim_ip[LINESIZE], file_write[LINESIZE], hacker_ip[LINESIZE]; //function void leggi (int); //not use now for debugging purpose //void scriviDB (); //not important now main () { int sock, client_len, fd; struct sockaddr_in server, client; // transport end point if((sock = socket(AF_INET, SOCK_STREAM, 0)) == -1) { perror("system call socket fail"); exit(1); } server.sin_family = AF_INET; server.sin_addr.s_addr = inet_addr("10.10.10.1"); server.sin_port = htons(SERVER_PORT); // binding address at transport end point if (bind(sock, (struct sockaddr *)&server, sizeof server) == -1) { perror("system call bind fail"); exit(1); } //fprintf(stderr, "Server open: listening.\n"); listen(sock, 5); /* managae client connection */ while (1) { client_len = sizeof(client); if ((fd = accept(sock, (struct sockaddr *)&client, &client_len)) < 0) { perror("accepting connection"); exit(1); } strcpy(hacker_ip, inet_ntoa(client.sin_addr)); printf("1 %s\n", hacker_ip); //debugging purpose //leggi(fd); ////////////////////////// //receive client recv(fd, victim_ip, LINESIZE, 0); victim_ip[sizeof(victim_ip)] = '\0'; printf("2 %s\n", hacker_ip); //debugging purpose recv(fd, file_write, LINESIZE, 0); file_write[sizeof(file_write)] = '\0'; printf("3 %s\n", hacker_ip); //debugging purpose printf("%s@%s for %s\n", file_write, victim_ip, hacker_ip); //send to client send(fd, hacker_ip, 40, 0); //now is hacker_ip for debug ///////////////////////// close(fd); }//end while exit(0); } //end main Client send string: ./send -i 10.10.10.4 -f filename.ext so the script send -i (IP) and -f (FILE) at the server. Here's my output server side: 1 10.10.10.6 2 10.10.10.6 3 [email protected] for As you can see the printf(3) and the printf(ip,file,ip) fail. I don't know how and where but someone overwrite my hacker_ip string. Thanks for your help! :)

    Read the article

  • Java Refuses to Start - Could not reserve enough space for object heap

    - by Randyaa
    Background We have a pool of aproximately 20 linux blades. Some are running Suse, some are running Redhat. ALL share NAS space which contains the following 3 folders: /NAS/app/java - a symlink that points to an installation of a Java JDK. Currently version 1.5.0_10 /NAS/app/lib - a symlink that points to a version of our application. /NAS/data - directory where our output is written All our machines have 2 processors (hyperthreaded) with 4gb of physical memory and 4gb of swap space. We limit the number of 'jobs' each machine can process at a given time to 6 (this number likely needs to change, but that does not enter into the current problem so please ignore it for the time being). Some of our jobs set a Max Heap size of 512mb, some others reserve a Max Heap size of 2048mb. Again, we realize we could go over our available memory if 6 jobs started on the same machine with the heap size set to 2048, but to our knowledge this has not yet occurred. The Problem Once and a while a Job will fail immediately with the following message: Error occurred during initialization of VM Could not reserve enough space for object heap Could not create the Java virtual machine. We used to chalk this up to too many jobs running at the same time on the same machine. The problem happened infrequently enough (MAYBE once a month) that we'd just restart it and everything would be fine. The problem has recently gotten much worse. All of our jobs which request a max heap size of 2048m fail immediately almost every time and need to get restarted several times before completing. We've gone out to individual machines and tried executing them manually with the same result. Debugging It turns out that the problem only exists for our SuSE boxes. The reason it has been happening more frequently is becuase we've been adding more machines, and the new ones are SuSE. 'cat /proc/version' on the SuSE boxes give us: Linux version 2.6.5-7.244-bigsmp (geeko@buildhost) (gcc version 3.3.3 (SuSE Linux)) #1 SMP Mon Dec 12 18:32:25 UTC 2005 'cat /proc/version' on the RedHat boxes give us: Linux version 2.4.21-32.0.1.ELsmp ([email protected]) (gcc version 3.2.3 20030502 (Red Hat Linux 3.2.3-52)) #1 SMP Tue May 17 17:52:23 EDT 2005 'uname -a' gives us the following on BOTH types of machines: UTC 2005 i686 i686 i386 GNU/Linux No jobs are running on the machine, and no other processes are utilizing much memory. All of the processes currently running might be using 100mb total. 'top' currently shows the following: Mem: 4146528k total, 3536360k used, 610168k free, 132136k buffers Swap: 4194288k total, 0k used, 4194288k free, 3283908k cached 'vmstat' currently shows the following: procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 0 610292 132136 3283908 0 0 0 2 26 15 0 0 100 0 If we kick off a job with the following command line (Max Heap of 1850mb) it starts fine: java/bin/java -Xmx1850M -cp helloworld.jar HelloWorld Hello World If we bump up the max heap size to 1875mb it fails: java/bin/java -Xmx1875M -cp helloworld.jar HelloWorld Error occurred during initialization of VM Could not reserve enough space for object heap Could not create the Java virtual machine. It's quite clear that the memory currently being used is for Buffering/Caching and that's why so little is being displayed as 'free'. What isn't clear is why there is a magical 1850mb line where anything higher means Java can't start. Any explanations would be greatly appreciated.

    Read the article

  • JAXB: Unmarshalling does not always populate certain classes?

    - by user278458
    Hello, I have a JAXB class generation problem I was hoping to get some help with. Here's the part of the XML that is the source of my problem... Code: <xs:complexType name="IDType"> <xs:choice minOccurs="0" maxOccurs="2"> <xs:element name="DriversLicense" minOccurs="0" maxOccurs="1" type="an..35" /> <xs:element name="SSN" minOccurs="0" maxOccurs="1" type="an..35" /> <xs:element name="CompanyID" minOccurs="0" maxOccurs="1" type="an..35" /> </xs:choice> </xs:complexType> <xs:simpleType name="an..35"> <xs:restriction base="an"> <xs:maxLength value="35" /> </xs:restriction> </xs:simpleType> <xs:simpleType name="an"> <xs:restriction base="xs:string"> <xs:pattern value="[ !-~]*" /> </xs:restriction> </xs:simpleType> ...now this will generate JAXBElement types due the the "choice" with a "maxOccurs 1" . I want to avoid those, so I did that by modifying the code to use a "Wrapper" element and move the maxOccurs up to a sequence tag as follows... Code: <xs:complexType name="IDType"> <xs:sequence maxOccurs="2"> <xs:element name=Wrapper> <xs:complexType> <xs:choice> <xs:element name="DriversLicense" minOccurs="0" maxOccurs="1" type="an..35" /> <xs:element name="SSN" minOccurs="0" maxOccurs="1" type="an..35" /> <xs:element name="CompanyID" minOccurs="0" maxOccurs="1" type="an..35" /> </xs:choice> </xs:complexType> </xs:element> </xs:sequence> </xs:complexType> <xs:simpleType name="an..35"> <xs:restriction base="an"> <xs:maxLength value="35" /> </xs:restriction> </xs:simpleType> <xs:simpleType name="an"> <xs:restriction base="xs:string"> <xs:pattern value="[ !-~]*" /> </xs:restriction> </xs:simpleType> For class generating, looks like it works great - the JAXB element is replaced with a list of wrappers as String (i.e. List ) and compiles fine. However, when I unmarshall the actual XML data into the generated classes the data in the wrapper class is not populated - yet JAXB does not throw an exception. My question is: Do I need to change the schema a different way to make this work? Or is there something I can add/change/delete to the generated code or annotations? Appreciate any help you can offer! Thanks.

    Read the article

  • Explicitly instantiating a generic member function of a generic structure

    - by Dennis Zickefoose
    I have a structure with a template parameter, Stream. Within that structure, there is a function with its own template parameter, Type. If I try to force a specific instance of the function to be generated and called, it works fine, if I am in a context where the exact type of the structure is known. If not, I get a compile error. This feels like a situation where I'm missing a typename, but there are no nested types. I suspect I'm missing something fundamental, but I've been staring at this code for so long all I see are redheads, and frankly writing code that uses templates has never been my forte. The following is the simplest example I could come up with that illustrates the issue. #include <iostream> template<typename Stream> struct Printer { Stream& str; Printer(Stream& str_) : str(str_) { } template<typename Type> Stream& Exec(const Type& t) { return str << t << std::endl; } }; template<typename Stream, typename Type> void Test1(Stream& str, const Type& t) { Printer<Stream> out = Printer<Stream>(str); /****** vvv This is the line the compiler doesn't like vvv ******/ out.Exec<bool>(t); /****** ^^^ That is the line the compiler doesn't like ^^^ ******/ } template<typename Type> void Test2(const Type& t) { Printer<std::ostream> out = Printer<std::ostream>(std::cout); out.Exec<bool>(t); } template<typename Stream, typename Type> void Test3(Stream& str, const Type& t) { Printer<Stream> out = Printer<Stream>(str); out.Exec(t); } int main() { Test2(5); Test3(std::cout, 5); return 0; } As it is written, gcc-4.4 gives the following: test.cpp: In function 'void Test1(Stream&, const Type&)': test.cpp:22: error: expected primary-expression before 'bool' test.cpp:22: error: expected ';' before 'bool' Test2 and Test3 both compile cleanly, and if I comment out Test1 the program executes, and I get "1 5" as I expect. So it looks like there's nothing wrong with the idea of what I want to do, but I've botched something in the implementation. If anybody could shed some light on what I'm overlooking, it would be greatly appreciated.

    Read the article

  • Setting value for autocomplete search field linked to Google Places API

    - by user1653350
    I have a web page where people will be able to enter multiple destinations. When they state they want to enter a new destination, the current field values are stored in arrays. If they choose to go back to a previous destination, the relevant values are reinserted into the form fields. I am using the search field linked to autocomplete as the visible display of the destination. When I attempt to put a value into the linked search field, the value is presented as if it is a placeholder instead of a value. Enter the field and the value is removed by the onFocus() event of the Google Places autocomplete add-in. How can I reinsert the value and have it recognised as a value instead of placeholder. field definition in the form <label for="GoogleDestSrch" class="inputText">Destination: <span id="DestinationDisplay2">1</span> <span class="required"><font size="5"> * </font></span></label> <input id="GoogleDestSrch" type="text" size="50" placeholder="Please enter your destination" /> initialise code for Google Places API listener var input = document.getElementById('GoogleDestSrch'); var autocomplete = new google.maps.places.Autocomplete(input); google.maps.event.addListener(autocomplete, 'place_changed', function() { fillInAddress(); }); attempting to reinsert value into search field when prior destination reloaded form.GoogleDestSrch.value = GoogleDestSrch[index]; Issue With Google Places <script language="JavaScript" type="text/javascript"> function GotoDestination(index) { var domove = true; if (index == 0) { index = lastIndex + 1; } else { if (index == -1) { index = lastIndex - 1; if (index == 0) { index = 1; domove = false; } } } if (domove) { if (index != lastIndex) { var doc = window.document; var pdbutton = doc.getElementById("pdbutton"); var pdbutton1 = doc.getElementById("pdbutton1"); if ((index > lastIndex)) { // move to next destination saveDataF(lastIndex); loadDataF(index); lastIndex = index; } else if (index <= lastIndex) { // move to previous destination saveDataF(lastIndex); loadDataF(index); lastIndex = index; } } } } var input; var autocomplete; // fill in the Google metadata when a destination is selected function fillInAddress() { var strFullValue = ''; var strFullGeoValue = ''; var place = autocomplete.getPlace(); document.getElementById("GoogleType").value = place.types[0]; } function saveDataF(index) { var fieldValue; var blankSearch = "Please enter"; // placeholder text for Google Places fieldValue = document.getElementById("GoogleDestSrch").value; if (fieldValue.indexOf(blankSearch) > -1) { fieldValue = ""; } GoogleDestSrch[index] = fieldValue; } function loadDataF(index) { if ((GoogleDestSrch[index] + "") == "undefined") { document.getElementById("GoogleDestSrch").value = ""; } else { document.getElementById("GoogleDestSrch").value = GoogleDestSrch[index]; } } // -- Destination: 1 * Type of place // input = document.getElementById('GoogleDestSrch'); autocomplete = new google.maps.places.Autocomplete(input); google.maps.event.addListener(autocomplete, 'place_changed', function () { fillInAddress(); }); //]]

    Read the article

  • Function signature-like expressions as C++ template arguments

    - by Jeff Lee
    I was looking at Don Clugston's FastDelegate mini-library and noticed a weird syntactical trick with the following structure: TemplateClass< void( int, int ) > Object; It almost appears as if a function signature is being used as an argument to a template instance declaration. This technique (whose presence in FastDelegate is apparently due to one Jody Hagins) was used to simplify the declaration of template instances with a semi-arbitrary number of template parameters. To wit, it allowed something like the following: // A template with one parameter template<typename _T1> struct Object1 { _T1 m_member1; }; // A template with two parameters template<typename _T1, typename _T2> struct Object2 { _T1 m_member1; _T2 m_member2; }; // A forward declaration template<typename _Signature> struct Object; // Some derived types using "function signature"-style template parameters template<typename _Dummy, typename _T1> struct Object<_Dummy(_T1)> : public Object1<_T1> {}; template<typename _Dummy, typename _T1, typename _T2> struct Object<_Dummy(_T1, _T2)> : public Object2<_T1, _T2> {}; // A. "Vanilla" object declarations Object1<int> IntObjectA; Object2<int, char> IntCharObjectA; // B. Nifty, but equivalent, object declarations typedef void UnusedType; Object< UnusedType(int) > IntObjectB; Object< UnusedType(int, char) > IntCharObjectB; // C. Even niftier, and still equivalent, object declarations #define DeclareObject( ... ) Object< UnusedType( __VA_ARGS__ ) > DeclareObject( int ) IntObjectC; DeclareObject( int, char ) IntCharObjectC; Despite the real whiff of hackiness, I find this kind of spoofy emulation of variadic template arguments to be pretty mind-blowing. The real meat of this trick seems to be the fact that I can pass textual constructs like "Type1(Type2, Type3)" as arguments to templates. So here are my questions: How exactly does the compiler interpret this construct? Is it a function signature? Or, is it just a text pattern with parentheses in it? If the former, then does this imply that any arbitrary function signature is a valid type as far as the template processor is concerned? A follow-up question would be that since the above code sample is valid code, why doesn't the C++ standard just allow you to do something like the following, which does not compile? template<typename _T1> struct Object { _T1 m_member1; }; // Note the class identifier is also "Object" template<typename _T1, typename _T2> struct Object { _T1 m_member1; _T2 m_member2; }; Object<int> IntObject; Object<int, char> IntCharObject;
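
    To make the first question concrete, here is a standalone sketch (mine, not part of FastDelegate; it assumes a C++11 compiler for static_assert and <type_traits>) showing that void(int, int) really is an ordinary function type, which is exactly what lets partial specialization pick it apart:

        #include <type_traits>

        // Primary template, deliberately left undefined; only the
        // function-type specializations below can be instantiated.
        template <typename Signature>
        struct arity;

        template <typename R, typename A1>
        struct arity<R(A1)>     { static const int value = 1; };

        template <typename R, typename A1, typename A2>
        struct arity<R(A1, A2)> { static const int value = 2; };

        // 'void(int, int)' names a function type, not a text pattern.
        static_assert(std::is_function<void(int, int)>::value, "function type");
        static_assert(arity<void(int)>::value == 1, "one parameter");
        static_assert(arity<void(int, int)>::value == 2, "two parameters");

        int main() {}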

    Read the article

  • Policy-based template design: How to access certain policies of the class?

    - by dehmann
    I have a class that uses several policies that are templated. It is called Dish in the following example. I store many of these Dishes in a vector (using a pointer to simple base class), but then I'd like to extract and use them. But I don't know their exact types. Here is the code; it's a bit long, but really simple: #include <iostream> #include <vector> struct DishBase { int id; DishBase(int i) : id(i) {} }; std::ostream& operator<<(std::ostream& out, const DishBase& d) { out << d.id; return out; } // Policy-based class: template<class Appetizer, class Main, class Dessert> class Dish : public DishBase { Appetizer appetizer_; Main main_; Dessert dessert_; public: Dish(int id) : DishBase(id) {} const Appetizer& get_appetizer() { return appetizer_; } const Main& get_main() { return main_; } const Dessert& get_dessert() { return dessert_; } }; struct Storage { typedef DishBase* value_type; typedef std::vector<value_type> Container; typedef Container::const_iterator const_iterator; Container container; Storage() { container.push_back(new Dish<int,double,float>(0)); container.push_back(new Dish<double,int,double>(1)); container.push_back(new Dish<int,int,int>(2)); } ~Storage() { // delete objects } const_iterator begin() { return container.begin(); } const_iterator end() { return container.end(); } }; int main() { Storage s; for(Storage::const_iterator it = s.begin(); it != s.end(); ++it){ std::cout << **it << std::endl; std::cout << "Dessert: " << *it->get_dessert() << std::endl; // ?? } return 0; } The tricky part is here, in the main() function: std::cout << "Dessert: " << *it->get_dessert() << std::endl; // ?? How can I access the dessert? I don't even know the Dessert type (it is templated), let alone the complete type of the object that I'm getting from the storage. This is just a toy example, but I think my code reduces to this. I'd just like to pass those Dish classes around, and different parts of the code will access different parts of it (in the example: its appetizer, main dish, or dessert).
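
    One possible direction, shown only as a minimal standalone sketch under stated assumptions (the base class gets a virtual destructor so dynamic_cast works, and the class names are simplified stand-ins for the ones above): where the calling code knows, or wants to test for, a concrete policy combination, it can cast the stored base pointer back down and then call get_dessert().

        #include <iostream>
        #include <vector>

        struct Base { virtual ~Base() {} };   // polymorphic, unlike the posted DishBase

        template <typename Dessert>
        struct Dish : Base {
            Dessert dessert_;
            explicit Dish(const Dessert& d) : dessert_(d) {}
            const Dessert& get_dessert() const { return dessert_; }
        };

        int main()
        {
            std::vector<Base*> dishes;
            dishes.push_back(new Dish<float>(3.5f));
            dishes.push_back(new Dish<int>(42));

            for (std::vector<Base*>::const_iterator it = dishes.begin(); it != dishes.end(); ++it) {
                if (Dish<float>* d = dynamic_cast<Dish<float>*>(*it))
                    std::cout << "float dessert: " << d->get_dessert() << std::endl;
                else if (Dish<int>* d = dynamic_cast<Dish<int>*>(*it))
                    std::cout << "int dessert: " << d->get_dessert() << std::endl;
            }

            for (std::vector<Base*>::size_type i = 0; i < dishes.size(); ++i)
                delete dishes[i];
            return 0;
        }

    The obvious cost of this approach is that every caller has to enumerate the combinations it cares about; a virtual function or a visitor on the base class avoids that, at the price of fixing the set of operations up front.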

    Read the article

  • ublas::bounded_vector<> being resized?

    - by n2liquid
    Now, seriously... I'll refrain from using bad words here because we're talking about the Boost fellows. It MUST be my mistake to see things this way, but I can't understand why, so I'll ask it here; maybe someone can enlighten me in this matter. Here it goes: uBLAS has this nice class template called bounded_vector<> that's used to create fixed-size vectors (or so I thought). From the Effective uBLAS wiki (http://www.crystalclearsoftware.com/cgi-bin/boost_wiki/wiki.pl?Effective_UBLAS): The default uBLAS vector and matrix types are of variable size. Many linear algebra problems involve vectors with fixed size. 2 and 3 elements are common in geometry! Fixed size storage (akin to C arrays) can be implemented efficiently as it does not involve the overheads (heap management) associated with dynamic storage. uBLAS implements fixed sizes by changing the underlying storage of a vector/matrix to a "bounded_array" from the default "unbounded_array". Alright, this bounded_vector<> thing is used to free you from specifying the underlying storage of the vector to a bounded_array<> of the specified size. Here I ask you: doesn't it look like this bounded vector thing has fixed size to you? Well, it doesn't. At first I felt betrayed by the wiki, but then I reconsidered the meaning of "bounded" and I think I can let it pass. But in case you, like me (I'm still uncertain), are still wondering if this makes sense, what I found out is that the bounded_vector<> actually can be resized; it just may not grow larger than the size specified as the template parameter. So, first off, do you think they had a good reason not to make a real fixed-size vector or matrix type? Do you think it's okay to "sell" this bounded -- as opposed to fixed-size -- vector to the users of my library as a "fixed-size" vector replacement, even named "Vector3" or "Vector2", like the Effective uBLAS wiki did? Do you think I should somehow implement a vector with fixed size for this purpose? If so, how? (Sorry, but I'm really new to uBLAS; I just tried it today.) I am developing a 3D game. Should uBLAS be used for the calculations involved in this ("hey, geometry!", per the Effective uBLAS wiki)? What replacement would you suggest, if not? -- edit And just in case, yes, I've read this warning: It should be noted that this only changes the storage uBLAS uses for the vector3. uBLAS will still use all the same algorithms (which assume a variable size) to manipulate the vector3. In practice this seems to have no negative impact on speed. The above runs just as quickly as a hand crafted vector3 which does not use uBLAS. The only negative impact is that the vector3 always stores a "size" member which in this case is redundant [or isn't it? I mean......]. I see it uses the same algorithms, assuming a variable size, but if an operation were to actually change its size, shouldn't it be stopped (assertion)? ublas::bounded_vector<float,3> v3; ublas::bounded_vector<float,2> v2; v3 = v2; std::cout << v3.size() << '\n'; // prints 2 Oh, come on, isn't this just plain betrayal?
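
    For what it's worth, a minimal sketch (my own, assuming Boost.uBLAS is available and that staying inside uBLAS is acceptable) of one way to get genuinely fixed-size behaviour on top of bounded_vector<>: route assignments through a helper whose single size parameter forces both operands to share the same static capacity, so the mismatched assignment above stops compiling.

        #include <cstddef>
        #include <boost/numeric/ublas/vector.hpp>

        namespace ublas = boost::numeric::ublas;

        // Only instantiable when source and destination share the capacity N,
        // so a size-changing assignment is rejected at compile time.
        template <typename T, std::size_t N>
        void assign_fixed(ublas::bounded_vector<T, N>& dst,
                          const ublas::bounded_vector<T, N>& src)
        {
            dst = src;
        }

        int main()
        {
            ublas::bounded_vector<float, 3> a, b;
            assign_fixed(a, b);                 // fine: both have capacity 3
            // ublas::bounded_vector<float, 2> c;
            // assign_fixed(a, c);              // would not compile: capacities differ
        }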

    Read the article

  • When should I use indexed arrays of OpenGL vertices?

    - by Tartley
    I'm trying to get a clear idea of when I should be using indexed arrays of OpenGL vertices, drawn with gl[Multi]DrawElements and the like, versus when I should simply use contiguous arrays of vertices, drawn with gl[Multi]DrawArrays. (Update: The consensus in the replies I got is that one should always be using indexed vertices.) I have gone back and forth on this issue several times, so I'm going to outline my current understanding, in the hopes someone can either tell me I'm now finally more or less correct, or else point out where my remaining misunderstandings are. Specifically, I have three conclusions, in bold. Please correct them if they are wrong. One simple case is if my geometry consists of meshes to form curved surfaces. In this case, the vertices in the middle of the mesh will have identical attributes (position, normal, color, texture coord, etc) for every triangle which uses the vertex. This leads me to conclude that: 1. For geometry with few seams, indexed arrays are a big win. Follow rule 1 always, except: For geometry that is very 'blocky', in which every edge represents a seam, the benefit of indexed arrays is less obvious. To take a simple cube as an example, although each vertex is used in three different faces, we can't share vertices between them, because for a single vertex, the surface normals (and possibly other things, like color and texture co-ord) will differ on each face. Hence we need to explicitly introduce redundant vertex positions into our array, so that the same position can be used several times with different normals, etc. This means that indexed arrays are of less use. e.g. When rendering a single face of a cube: 0 1 o---o |\ | | \ | | \| o---o 3 2 (this can be considered in isolation, because the seams between this face and all adjacent faces mean that none of these vertices can be shared between faces) if rendering using GL_TRIANGLE_FAN (or _STRIP), then each face of the cube can be rendered thus: verts = [v0, v1, v2, v3] colors = [c0, c0, c0, c0] normal = [n0, n0, n0, n0] Adding indices does not allow us to simplify this. From this I conclude that: 2. When rendering geometry which is all seams or mostly seams, when using GL_TRIANGLE_STRIP or _FAN, then I should never use indexed arrays, and should instead always use gl[Multi]DrawArrays. (Update: Replies indicate that this conclusion is wrong. Even though indices don't allow us to reduce the size of the arrays here, they should still be used because of other performance benefits, as discussed in the comments) The only exception to rule 2 is: When using GL_TRIANGLES (instead of strips or fans), then half of the vertices can still be re-used twice, with identical normals and colors, etc, because each cube face is rendered as two separate triangles. Again, for the same single cube face: 0 1 o---o |\ | | \ | | \| o---o 3 2 Without indices, using GL_TRIANGLES, the arrays would be something like: verts = [v0, v1, v2, v2, v3, v0] normals = [n0, n0, n0, n0, n0, n0] colors = [c0, c0, c0, c0, c0, c0] Since a vertex and a normal are often 3 floats each, and a color is often 3 bytes, that gives, for each cube face, about: verts = 6 * 3 floats = 18 floats normals = 6 * 3 floats = 18 floats colors = 6 * 3 bytes = 18 bytes = 36 floats and 18 bytes per cube face. (I understand the number of bytes might change if different types are used, the exact figures are just for illustration.) 
With indices, we can simplify this a little, giving: verts = [v0, v1, v2, v3] (4 * 3 = 12 floats) normals = [n0, n0, n0, n0] (4 * 3 = 12 floats) colors = [c0, c0, c0, c0] (4 * 3 = 12 bytes) indices = [0, 1, 2, 2, 3, 0] (6 shorts) = 24 floats + 12 bytes, and maybe 6 shorts, per cube face. See how in the latter case, vertices 0 and 2 are used twice, but only represented once in each of the verts, normals and colors arrays. This sounds like a small win for using indices, even in the extreme case of every single geometry edge being a seam. This leads me to conclude that: 3. When using GL_TRIANGLES, one should always use indexed arrays, even for geometry which is all seams. Please correct my conclusions in bold if they are wrong.
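
    As a concrete illustration of that indexed layout, here is a fragment (mine, assuming an existing legacy OpenGL context with client-side vertex arrays, and made-up coordinate values) that submits the single cube face above with glDrawElements; the four unique vertices are stored once and indices 0 and 2 are simply referenced twice.

        #include <GL/gl.h>

        // Four unique vertices for one cube face (placeholder positions).
        static const GLfloat verts[] = { -1.f,  1.f, 1.f,     // v0
                                          1.f,  1.f, 1.f,     // v1
                                          1.f, -1.f, 1.f,     // v2
                                         -1.f, -1.f, 1.f };   // v3
        static const GLfloat normals[] = { 0.f, 0.f, 1.f,  0.f, 0.f, 1.f,
                                           0.f, 0.f, 1.f,  0.f, 0.f, 1.f };
        static const GLushort indices[] = { 0, 1, 2,  2, 3, 0 };   // v0 and v2 reused

        void draw_face()
        {
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_NORMAL_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, verts);
            glNormalPointer(GL_FLOAT, 0, normals);
            glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, indices);
        }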

    Read the article

  • Better, simpler example of 'semantic conflict'?

    - by rhubbarb
    I like to distinguish three different types of conflict from a version control system (VCS): textual syntactic semantic A textual conflict is one that is detected by the merge or update process. This is flagged by the system. A commit of the result is not permitted by the VCS until the conflict is resolved. A syntactic conflict is not flagged by the VCS, but the result will not compile. Therefore this should also be picked up by even a slightly careful programmer. (A simple example might be a variable rename by Left and some added lines using that variable by Right. The merge will probably have an unresolved symbol. Alternatively, this might introduce a semantic conflict by variable hiding.) Finally, a semantic conflict is not flagged by the VCS, the result compiles, but the code may have problems running. In mild cases, incorrect results are produced. In severe cases, a crash could be introduced. Even these should be detected before commit by a very careful programmer, through either code review or unit testing. My example of a semantic conflict uses SVN (Subversion) and C++, but those choices are not really relevant to the essence of the question. The base code is: int i = 0; int odds = 0; while (i < 10) { if ((i & 1) != 0) { odds *= 10; odds += i; } // next ++ i; } assert (odds == 13579) The Left (L) and Right (R) changes are as follows. Left's 'optimisation' (changing the values the loop variable takes): int i = 1; // L int odds = 0; while (i < 10) { if ((i & 1) != 0) { odds *= 10; odds += i; } // next i += 2; // L } assert (odds == 13579) Right's 'optimisation' (changing how the loop variable is used): int i = 0; int odds = 0; while (i < 5) // R { odds *= 10; odds += 2 * i + 1; // R // next ++ i; } assert (odds == 13579) This is the result of a merge or update, and is not detected by SVN (which is correct behaviour for the VCS). int i = 1; // L int odds = 0; while (i < 5) // R { odds *= 10; odds += 2 * i + 1; // R // next i += 2; // L } assert (odds == 13579) The assert fails because odds is 37. So my question is as follows. Is there a simpler example than this? Is there a simple example where the compiled executable has a new crash? As a secondary question, are there cases of this that you have encountered in real code? Again, simple examples are especially welcome.
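
    For the secondary question, here is a hedged attempt at a smaller example that produces a new crash (my own construction, not taken from the question): in the base, p is never null; Left makes p optional and guards the one existing use; Right, written against the base, adds a second, unguarded use. Each change is correct on its own, but the textually clean merge dereferences a null pointer.

        #include <iostream>

        int main()
        {
            int* p = 0;                           // Left: was 'int* p = new int(42);'
            if (p) std::cout << *p << std::endl;  // Left also guarded the original use
            std::cout << *p << std::endl;         // Right: new use, written when p could not be null
            return 0;
        }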

    Read the article

  • jQuery custom validation for a selected radio selection

    - by Kaushik Gopal
    Hey peeps, This is my requirement: I have a bunch of radio box selections (types of workflows). If one of the radios is selected (i.e. one particular type of workflow is selected), I want to run a custom validation on that. This is what I tried, but it's not behaving well. Any help? jQuery part: $(document).ready(function() { // this part is to expand the child radio selection if particular parent workflow selected $("#concernedDirectorChoice").hide(); $("input[name^=workflowChoice]").change( function (){ if($(this).attr("class")=='chooseDir'){ $("#concernedDirectorChoice").show(); }else{ $("#concernedDirectorChoice").hide(); } }); // FORM VALIDATION $.validator.addMethod("dirRequired", function(value, element) { return this.optional(element) || ($("input[name^=rdDir:checked]").length); }, "That particular workflow requires a Director to be chosen. Please select Director"); $("#contExpInitiateForm").validate({ debug:true ,rules:{ RenewalNo: {required: true, number: true}, chooseDir: {dirRequired: true}, workflowChoice: {required: true} } ,errorPlacement: function(error, element) { $('.errorMessageBox').text(error.html()); } }); }); HTML form part: <!-- Pick type of workflow --> <table class="hr-table" > <tr> <td class="hr-table-label " colspan=2 >Pick Workflow Type</td> </tr> <tr> <td> <input type="radio" name="workflowChoice" value="1"> </input> </td> <td> Workflow 1 </td> </tr> <tr> <td> <input type="radio" name="workflowChoice" value="2" class="chooseDir"> </input> </td> <td> Workflow 2 (Dir selection required) </td> </tr> <tr> <td> <input type="radio" name="workflowChoice" value="3"> </input> </td> <td> Workflow 3 </td> </tr> </table> <!-- Pick Director for Workflow type 2 --> <table id="concernedDirectorChoice" name="concernedDirectorChoice" > <tr><td class="hr-table-label" colspan=2 > Choose Concerned Director</td></tr> <tr> <td><input type="radio" value='Dir1' name="rdDir" /></td> <td>Director 1</td> </tr> <tr> <td><input type="radio" value='Dir2' name="rdDir" /></td> <td>Director 2</td> </tr> <tr> <td><input type="radio" value='Dir3' name="rdDir" /></td> <td>Director 3</td> </tr> </table>

    Read the article

  • N-tier Repository POCOs - Aggregates?

    - by Sam
    Assume the following simple POCOs, Country and State: public partial class Country { public Country() { States = new List<State>(); } public virtual int CountryId { get; set; } public virtual string Name { get; set; } public virtual string CountryCode { get; set; } public virtual ICollection<State> States { get; set; } } public partial class State { public virtual int StateId { get; set; } public virtual int CountryId { get; set; } public virtual Country Country { get; set; } public virtual string Name { get; set; } public virtual string Abbreviation { get; set; } } Now assume I have a simple repository that looks something like this: public partial class CountryRepository : IDisposable { protected internal IDatabase _db; public CountryRepository() { _db = new Database(System.Configuration.ConfigurationManager.AppSettings["DbConnName"]); } public IEnumerable<Country> GetAll() { return _db.Query<Country>("SELECT * FROM Countries ORDER BY Name", null); } public Country Get(object id) { return _db.SingleById(id); } public void Add(Country c) { _db.Insert(c); } /* ...And So On... */ } Typically in my UI I do not display all of the children (states), but I do display an aggregate count. So my country list view model might look like this: public partial class CountryListVM { [Key] public int CountryId { get; set; } public string Name { get; set; } public string CountryCode { get; set; } public int StateCount { get; set; } } When I'm using the underlying data provider (Entity Framework, NHibernate, PetaPoco, etc) directly in my UI layer, I can easily do something like this: IList<CountryListVM> list = db.Countries .OrderBy(c => c.Name) .Select(c => new CountryListVM() { CountryId = c.CountryId, Name = c.Name, CountryCode = c.CountryCode, StateCount = c.States.Count }) .ToList(); But when I'm using a repository or service pattern, I abstract away direct access to the data layer. It seems as though my options are to: Return the Country with a populated States collection, then map over in the UI layer. The downside to this approach is that I'm returning a lot more data than is actually needed. -or- Put all my view models into my Common dll library (as opposed to having them in the Models directory in my MVC app) and expand my repository to return specific view models instead of just the domain pocos. The downside to this approach is that I'm leaking UI specific stuff (MVC data validation annotations) into my previously clean POCOs. -or- Are there other options? How are you handling these types of things?

    Read the article
