Search Results

Search found 11993 results on 480 pages for 'define syntax'.


  • ActiveX control loading but not activating correctly (only when using reg-free COM)

    - by embnut
    I have an ActiveX control (created using C#) that I am adding to a form in Visual FoxPro using late binding. It works without problems when I register the control. I want to use reg-free COM and created the necessary manifest files. Now it loads and displays in an inactive state until I double-click or programmatically activate it. I don't think it has anything to do with the reg-free COM manifest files. However, is there something I need to do to set it up before/after making the late-binding call AddObject()?

        this.AddObject('OleControl1', 'oleControl', 'SomeCompany.SomeOleControl')

    When I check the OleTypeAllowed property of the OleControl created by AddObject(), it is 1 (embedded OLE object) instead of -2 (ActiveX object). So the OleControl got instantiated to the wrong type. I also tried the following:

    1. Defined a subclass of OleControl, set the property OleTypeAllowed = -2, and used late binding to load the control. It did not work as required; the OleTypeAllowed came back as 1.
    2. Registered the ActiveX control, added it to the project as a subclass using the visual editor, unregistered the control, and used late binding to load it. It did not work as required; the OleTypeAllowed came back as 1.

    Is it possible to load the OleControl as an ActiveX control? Any input from VB that I can convert to FoxPro would also be appreciated.


  • Matplotlib canvas drawing

    - by Morgoth
    Let's say I define a few functions to do certain matplotlib actions, such as:

        def dostuff(ax):
            ax.scatter([0.], [0.])

    Now if I launch ipython, I can load these functions and start a new figure:

        In [1]: import matplotlib.pyplot as mpl
        In [2]: fig = mpl.figure()
        In [3]: ax = fig.add_subplot(1,1,1)
        In [4]: run functions  # run the file with the above defined function

    If I now call dostuff, the figure does not refresh:

        In [6]: dostuff(ax)

    I then have to explicitly run the following to get the canvas to draw:

        In [7]: fig.canvas.draw()

    Now I can modify dostuff to be:

        def dostuff(ax):
            ax.scatter([0.], [0.])
            ax.get_figure().canvas.draw()

    This re-draws the canvas automatically. But now, say that I have the following code:

        def dostuff1(ax):
            ax.scatter([0.], [0.])
            ax.get_figure().canvas.draw()

        def dostuff2(ax):
            ax.scatter([1.], [1.])
            ax.get_figure().canvas.draw()

        def doboth(ax):
            dostuff1(ax)
            dostuff2(ax)
            ax.get_figure().canvas.draw()

    I can call each of these functions, and the canvas will be redrawn, but in the case of doboth() it will get redrawn multiple times. My question is: how could I code this so that canvas.draw() only gets called once? In the above example it won't change much, but in more complex cases with tens of functions that can be called individually or grouped, the repeated drawing is much more obvious, and it would be nice to be able to avoid it. I thought of using decorators, but it doesn't look as though it would be simple. Any ideas?
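
    One way the decorator idea could work (a sketch, not from the original post): keep a module-level nesting counter, so that only the outermost decorated call actually triggers canvas.draw(). The draws decorator and the counter are names invented here for illustration, and the sketch assumes single-threaded, interactive use:

        import functools

        _depth = 0  # nesting counter; assumes single-threaded, interactive use

        def draws(func):
            """Defer canvas.draw() until the outermost decorated call returns."""
            @functools.wraps(func)
            def wrapper(ax, *args, **kwargs):
                global _depth
                _depth += 1
                try:
                    return func(ax, *args, **kwargs)
                finally:
                    _depth -= 1
                    if _depth == 0:  # only the outermost call redraws
                        ax.get_figure().canvas.draw()
            return wrapper

        @draws
        def dostuff1(ax):
            ax.scatter([0.], [0.])

        @draws
        def dostuff2(ax):
            ax.scatter([1.], [1.])

        @draws
        def doboth(ax):
            dostuff1(ax)  # nested call: draw suppressed
            dostuff2(ax)  # nested call: draw suppressed

    With this, doboth(ax) draws exactly once, while dostuff1(ax) called on its own still redraws immediately.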


  • Mapping issue with multi-field primary keys using hibernate/JPA annotations

    - by Derek Clarkson
    Hi all, I'm stuck with a database that uses multi-field primary keys. I have a master and a details table, where the details table's primary key contains fields which are also foreign keys to the master table. Like this:

        Master primary key fields:
            master_pk_1
        Details primary key fields:
            master_pk_1
            details_pk_2
            details_pk_3

    In the Master class we define the hibernate/JPA annotations like this:

        @Id
        @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "idGenerator")
        @Column(name = "master_pk_1")
        private long masterPk1;

        @OneToMany(cascade = CascadeType.ALL)
        @JoinColumn(name = "master_pk_1", referencedColumnName = "master_pk_1")
        private List<Details> details = new ArrayList<Details>();

    And in the Details class I have defined the id and back reference like this:

        @EmbeddedId
        @AttributeOverrides({
            @AttributeOverride(name = "masterPk1",  column = @Column(name = "master_pk_1")),
            @AttributeOverride(name = "detailsPk2", column = @Column(name = "details_pk_2")),
            @AttributeOverride(name = "detailsPk3", column = @Column(name = "details_pk_3")) })
        private DetailsPrimaryKey detailsPrimaryKey = new DetailsPrimaryKey();

        @ManyToOne
        @JoinColumn(name = "master_pk_1", referencedColumnName = "master_pk_1", insertable = false)
        private Master master;

    The goal of all this was that I could create a new master, add some details to it, and when saved, JPA/Hibernate would generate the new id for master in the masterPk1 field and automatically pass it down to the details records, storing it in the matching masterPk1 field in the DetailsPrimaryKey class. At least that's what the documentation I've been looking at implies. What actually happens is that hibernate appears to correctly create and update the records in the database, but does not pass the key to the details classes in memory. Instead I have to set it myself manually. I also found that without insertable=false on the back reference to master, hibernate would generate sql with the master_pk_1 field listed twice in the insert statement, resulting in the database throwing an exception. My question is simply: is this arrangement of annotations correct, or is there a better way of doing it?


  • Atlas-style map index for a static Google map

    - by Ben Holland
    Hello, I'm using a static Google map, but really this problem could apply to any maps project. I want to divide a map into multiple quadrants (of say 50x50 pixels) and label the columns as A, B, C... and the rows as 1, 2, 3... Next I plan to do something like:

    1. Find the markers which are the farthest north, east, south, and west.
    2. Use that info to define the bounding boxes of each row and column.
    3. Classify each marker by its row and column (example: Marker 1 = [A,2]).

    A few requirements: I don't know the zoom level, because I let Google set the zoom level appropriately for me, and I would rather not use an algorithm that depends on a zoom level. I do, however, know the locations of all of the markers shown on the map. Here is an example of a map that I would like to classify the markers for: static map example link. I found these, which look like a good start: Resource 1, Resource 2. But I think I still need some help getting started. Can anyone help write out some pseudo code or post a few more resources? I'm kind of in a rut at the moment. Thanks, any help is much appreciated!
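
    A possible starting sketch (Python, under stated assumptions: it sidesteps pixel cells and the zoom level entirely by dividing the markers' own bounding box into a chosen number of rows and columns; classify_markers and the A-Z column labels are illustrative choices, and it assumes at least two distinct marker positions and at most 26 columns):

        from string import ascii_uppercase

        def classify_markers(markers, n_cols, n_rows):
            """markers: list of (lat, lng) pairs; returns {(lat, lng): 'B3'-style cell}."""
            lats = [lat for lat, _ in markers]
            lngs = [lng for _, lng in markers]
            south, north = min(lats), max(lats)
            west, east = min(lngs), max(lngs)
            cells = {}
            for lat, lng in markers:
                # Normalize into the bounding box; clamp the max edge into the last cell.
                col = min(int((lng - west) / (east - west) * n_cols), n_cols - 1)
                row = min(int((north - lat) / (north - south) * n_rows), n_rows - 1)
                cells[(lat, lng)] = ascii_uppercase[col] + str(row + 1)
            return cells

        print(classify_markers([(10.0, 20.0), (10.5, 21.0), (11.0, 22.0)], 4, 4))
        # {(10.0, 20.0): 'A4', (10.5, 21.0): 'C3', (11.0, 22.0): 'D1'}

    Because only the marker coordinates define the grid, the same cell labels come out regardless of whatever zoom level Google picks.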


  • F# Class with Generics: 'constructor deprecated' error

    - by akaphenom
    I am trying to create a class that will store a time series of data, organized by groups, but I had some compile errors, so I stripped down to the basics (just a simple instantiation) and still can't get past the compile error. I was hoping someone may have seen this issue before. The class is defined as:

        type TimeSeriesQueue<'V, 'K when 'K : comparison> =
            class
                val private m_daysInCache: int
                val private m_cache: Map<'K, 'V list ref> ref;
                val private m_getKey: ('V -> 'K);

                private new(getKey) = {
                    m_cache = ref Map.empty
                    m_daysInCache = 7;
                    m_getKey = getKey;
                }
            end

    So that looks OK to me (it may not be, but it doesn't produce any errors or warnings). The instantiation gets the error:

        type tempRec = {
            someKey: string;
            someVal1: int;
            someVal2: int;
        }

        let keyFunc r:tempRec = r.someKey

        // error occurs on the following line
        let q = new TimeSeriesQueue<tempRec, string> keyFunc

    The error is:

        This construct is deprecated: The use of the type syntax 'int C' and 'C<int>' is
        not permitted here. Consider adjusting this type to be written in the form 'C<int>'

    NOTE: This may be simple stupidity; I am just getting back from holiday and my brain is still on time-zone lag...


  • Ajax call to parent window after form submission

    - by David
    Hi all, pardon the complicated title. Here's my situation: I'm working on a Grails app and using jQuery for some of the more complex UI stuff. The way the system is set up, I have an item, which can have various files (user-supplied) associated with it. On my Item/show view, there is a link to add a file. This link pops up a jQuery modal dialog, which displays my file upload form (a remote .gsp). So, the user selects the file and enters a comment, and when the form is submitted, the dialog gets closed and the list of files on the Item/show view is refreshed. I was initially accomplishing this by adding the following to my submit button:

        onclick="javascript:window.parent.$('#myDialog').dialog('close');"

    This worked fine, but when submitting some larger files, I end up with a race condition where the dialog closes and the file list is refreshed before the new file is saved, and so the list of files is out of date (the file still gets saved properly). So my question is: what is the best way to ensure that the dialog is not closed until after the form submit operation completes? I've tried using the <g:formRemote> tag in Grails and closing the dialog in the "after" attribute (according to the Grails docs, the script is called after form submission), but I receive an error (taken from FireBug) stating that:

        window.parent.$('#myDialog').dialog is not a function

    Is this a simple JavaScript/Grails syntax issue that I'm missing here, or am I going about this entirely wrong? Thanks so much for your time and assistance!


  • How do I compose an existing Moose role into a class at runtime?

    - by Oesor
    Say I define an abstract My::Object and concrete role implementations My::Object::TypeA and My::Object::TypeB. For maintainability reasons, I'd like not to have a hardcoded table that looks at the object type and applies roles. As a DWIMmy example, I'm looking for something along these lines in My::Object:

        has 'id' => (isa => 'Str', required => 1);

        sub BUILD {
            my $self = shift;
            my $type = $self->lookup_type();   ## Returns 'TypeB'
            {"My::Object::$type"}->meta->apply($self);
        }

    letting me get a My::Object with the My::Object::TypeB role applied by doing the following:

        my $obj = My::Object->new(id => 'foo');

    Is this going to do what I want, or am I on entirely the wrong track?

    Edit: I simplified this too much. I don't want to have to know the type when I instantiate the object; I want the object to determine its type and apply the correct role's methods appropriately. I've edited my question to make this clearer.


  • Asp.Net MVC 2: How exactly does a view model bind back to the model upon post back?

    - by Dr. Zim
    Sorry for the length, but a picture is worth 1000 words. In ASP.NET MVC 2, the input form field "name" attribute must contain exactly the syntax below that you would use to reference the object in C# in order to bind it back to the object upon post back. That said, if you have an object like the following, where it contains multiple Orders having multiple OrderLines, the names look like this and work well (case sensitive):

    This works:

        Order[0].id
        Order[0].orderDate
        Order[0].Customer.name
        Order[0].Customer.Address
        Order[0].OrderLine[0].itemID       // first order line
        Order[0].OrderLine[0].description
        Order[0].OrderLine[0].qty
        Order[0].OrderLine[0].price
        Order[0].OrderLine[1].itemID       // second order line, same names
        Order[0].OrderLine[1].description
        Order[0].OrderLine[1].qty
        Order[0].OrderLine[1].price

    However, we want to add and remove order lines in the client browser. Apparently, the indexes must start at zero and contain every consecutive index number up to N. The black belt ninja Phil Haack's blog entry here explains how to remove the [0] index, have duplicate names, and let MVC auto-enumerate duplicate names with the [0] notation. However, I have failed to get this to bind back using a nested object:

    This fails:

        Order.id                    // duplicate names should enumerate at 0 .. N
        Order.orderDate
        Order.Customer.name
        Order.Customer.Address
        Order.OrderLine.itemID      // and likewise for nested properties?
        Order.OrderLine.description
        Order.OrderLine.qty
        Order.OrderLine.price
        Order.OrderLine.itemID
        Order.OrderLine.description
        Order.OrderLine.qty
        Order.OrderLine.price

    I haven't found any advice out there yet that describes how this works for binding back nested ViewModels on post. Any links to existing code examples, or strict examples of the exact names necessary to do nested binding with ILists? Steve Sanderson has code that does this sort of thing here, but we cannot seem to get it to bind back to nested objects. Anything not having the [0]..[n] indexes AND consecutive numbering simply drops off the return object. Any ideas?


  • How to change identifier quote character in SSIS for connection to ODBC DSN

    - by William Rose
    I'm trying to create an SSIS 2008 Data Source View that reads from an Ingres database via the ODBC driver for Ingres. I've downloaded the Ingres 10 Community Edition to get the ODBC driver, installed it, and set up the data access server and a DSN on the server running SSIS. If I connect to the SQL Server 2008 Database Engine on the server running SSIS, I can retrieve data from Ingres over the ODBC DSN by running the following command:

        SELECT *
        FROM OPENROWSET(
            'MSDASQL'
          , 'DSN=IngresODBC;UID=testuser;PWD=testpass'
          , 'SELECT * FROM iitables')

    So I am quite sure that the ODBC setup is correct. If I try the same query with SQL Server-style bracketed identifier quotes, I get an error, as Ingres doesn't support this syntax:

        SELECT *
        FROM OPENROWSET(
            'MSDASQL'
          , 'DSN=IngresODBC;UID=testuser;PWD=testpass'
          , 'SELECT * FROM [iitables]')

    The error is "[Ingres][Ingres 10.0 ODBC Driver][Ingres 10.0]line 1, Unexpected character '['.". What I am finding is that I get the same error when I try to add tables from Ingres to an SSIS Data Source View. The initial step of selecting the ODBC provider works fine, and I am shown a list of tables/views to add. I then select any table, try to add it to the view, and get "ERROR [5000A] [Ingres][Ingres 10.0 ODBC Driver][Ingres 10.0]line 3, Unexpected character '['.". Following Ed Harper's suggestion of creating a named query also seems to be stymied. If I put the following text into my named query:

        SELECT * FROM "iitables"

    I still get an error: "ERROR [5000A] [Ingres][Ingres 10.0 ODBC Driver][Ingres 10.0]line 2, Unexpected character '['". According to the error, the query text passed by SSIS to ODBC was:

        SELECT [iitables].* FROM (
            SELECT * FROM "iitables"
        ) AS [iitables]

    It seems that SSIS assumes that bracket quote characters are acceptable, when they aren't. How can I persuade it not to use them? Double quotes are acceptable.


  • Ruby on Rails / Yellow Maps For Ruby Plugin woes...

    - by Zach
    Okay, I've read through the plugin comments and the docs as well, and I have yet to come up with an answer for how to do this. Here's my problem: I want to use the :info_window_tabs and :icon options, but I don't know what format to pass my information in. According to the documentation, the following code should be correct. Here's my code:

        @mapper.overlay_init(GMarker.new([map.lat, map.lng],
          :title => map.name,
          :info_window_tabs => [
            {:tab => "HTML",        :content => @marker_html},
            {:tab => "Attachments", :content => "stuff"}],
          :icon => { :image => "../images/icon.png" }))

    The readme and documentation can be viewed here. And the relevant ruby file that I am trying to interact with, including the author's comments, can be viewed here. I have tried the #rubyonrails channel in IRC, as well as emailing the author directly and reporting an issue at GitHub. It really is just a question of syntax. Thanks!


  • Reference a Map by name within Velocity Template

    - by Wiretap
    Pretty sure there is an easy answer to this, but I just can't find the right VTL syntax. In my context I'm passing a Map which contains other Maps. I'd like to reference these inner maps by name and assign them within my template. The inner maps are constructed by different parts of the app and then added to the context. By way of example:

        public static void main( String[] args ) throws Exception {
            VelocityEngine ve = new VelocityEngine();
            ve.init();
            Template t = ve.getTemplate( "test.vm" );
            VelocityContext context = new VelocityContext();

            Map<String,Map<String,String>> messageData = new HashMap<String, Map<String,String>>();
            Map<String,String> data_map = new HashMap<String,String>();
            data_map.put("data_1","1234");
            data_map.put("a_date", "31-Dec-2009");
            messageData.put("inner_map", data_map);
            context.put("msgData", messageData);

            StringWriter writer = new StringWriter();
            t.merge( context, writer );
            System.out.println( writer.toString() );
        }

    Template - test.vm:

        #set ($in_map = $msgData.get($inner_map) )
        data: $in_map.data_1 $in_map.a_date


  • setAttribute, onClick and cross browser compatibility...

    - by Nicholas Kreidberg
    I have read a number of posts about this, but none with any solid answer. Here is my code:

        // button creation
        onew = document.createElement('input');
        onew.setAttribute("type", "button");
        onew.setAttribute("value", "hosts");
        onew.onclick = function(){ fnDisplay_Computers("'" + alines[i] + "'"); };   // ie
        onew.setAttribute("onclick", "fnDisplay_Computers('" + alines[i] + "')");   // mozilla
        odiv.appendChild(onew);

    Now, the setAttribute() method (with the mozilla comment) works fine in Mozilla, but only if it comes AFTER the line above it. In other words, it seems to just default to whichever gets set last. The .onclick method (with the ie comment) does not work in either case; am I using it incorrectly? Either way, I can't find a way to make this work at all in IE, let alone in both. I did change the function call when using the .onclick method, and it worked fine using just a simple call to an alert function, which is why I believe my syntax is incorrect. Long story short, I can't get the onclick parameter to work consistently between IE and Mozilla. -- Nicholas


  • XML Schema and element with type string

    - by svick
    I'm trying to create an XML Schema and I'm stuck. It seems I can't define an element with string content. What am I doing wrong? Schema:

        <?xml version="1.0"?>
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   xmlns="http://ponderka.wz.cz/MusicLibrary0"
                   targetNamespace="http://ponderka.wz.cz/MusicLibrary0">
            <xs:element name="music-library">
                <xs:complexType>
                    <xs:sequence>
                        <xs:element name="artist" type="xs:string"/>
                    </xs:sequence>
                </xs:complexType>
            </xs:element>
        </xs:schema>

    Document:

        <?xml version="1.0"?>
        <music-library xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                       xsi:schemaLocation="http://ponderka.wz.cz/MusicLibrary0 data0.xsd"
                       xmlns="http://ponderka.wz.cz/MusicLibrary0">
            <artist>a</artist>
        </music-library>

    The validator says:

        Element <artist> is not allowed under element <music-library>.
        Reason: The following elements are expected at this location (see below)
            <artist>
        Error location: music-library / artist
        Details
            cvc-model-group: Element <artist> unexpected by type '{anonymous}' of element <music-library>.
            cvc-elt.5.2.1: The element <music-library> is not valid with respect to the actual type definition '{anonymous}'.


  • Porting Oracle Procedure to PostgreSQL

    - by Grasper
    I am porting an Oracle procedure to PostgreSQL PL/pgSQL. I have been using this guide: http://www.postgresql.org/docs/8.1/static/plpgsql.html

        CREATE OR REPLACE PROCEDURE DATA_UPDATE (mission NUMBER, task NUMBER)
        AS
        BEGIN
            IF mission IS NOT NULL THEN
                UPDATE MISSION_OBJECTIVE MO
                SET (MO.MO_TKR_TOTAL_OFF_SCHEDULED, MO.MO_TKR_TOTAL_RECEIVERS) =
                    (SELECT NVL(SUM(RR.TRQ_FUEL_OFFLOAD),0),
                            NVL(SUM(RR.TRQ_NUMBER_RECEIVERS),0)
                     FROM REFUELING_REQUEST RR,
                          MISSION_REQUEST_PAIRING MRP
                     WHERE MO.MSN_INT_ID = MRP.MSN_INT_ID
                       AND MO.MO_INT_ID = MRP.MO_INT_ID
                       AND MRP.REQ_INT_ID = RR.REQ_INT_ID)
                WHERE MO.MSN_INT_ID = mission
                  AND MO.MO_INT_ID = task;
            END IF;
            COMMIT;
        END;

    I've got it this far:

        CREATE OR REPLACE FUNCTION DATA_UPDATE (NUMERIC, NUMERIC)
        RETURNS integer AS '
        DECLARE
            mission ALIAS FOR $1;
            task ALIAS FOR $2;
        BEGIN
            IF mission IS NOT NULL THEN
                UPDATE MISSION_OBJECTIVE MO
                SET (MO.MO_TKR_TOTAL_OFF_SCHEDULED, MO.MO_TKR_TOTAL_RECEIVERS) =
                    (SELECT COALESCE(SUM(RR.TRQ_FUEL_OFFLOAD),0),
                            COALESCE(SUM(RR.TRQ_NUMBER_RECEIVERS),0)
                     FROM REFUELING_REQUEST RR,
                          MISSION_REQUEST_PAIRING MRP
                     WHERE MO.MSN_INT_ID = MRP.MSN_INT_ID
                       AND MO.MO_INT_ID = MRP.MO_INT_ID
                       AND MRP.REQ_INT_ID = RR.REQ_INT_ID)
                WHERE MO.MSN_INT_ID = mission
                  AND MO.MO_INT_ID = task;
            END IF;
            COMMIT;
        END;
        ' LANGUAGE plpgsql;

    This is the error I get:

        ERROR: syntax error at or near "SELECT"
        LINE 1: ...OTAL_OFF_SCHEDULED, MO.MO_TKR_TOTAL_RECEIVERS) = (SELECT COA...

    I do not know why this isn't working... any ideas?


  • Multiplication algorithm for arbitrary precision (bignum) integers.

    - by nn
    Hi, I'm writing a small bignum library for a homework project. I am to implement Karatsuba multiplication, but before that I would like to write a naive multiplication routine. I'm following a guide written by Paul Zimmermann titled "Modern Computer Arithmetic", which is freely available online. On page 4, there is a description of an algorithm titled BasecaseMultiply which performs grade-school multiplication. I understand steps 2 and 3, where B^j is a digit shift of 1, j times. But I don't understand steps 1 and 3, where we have A*b_j. How is this multiplication meant to be carried out if the bignum multiplication hasn't been defined yet? Would the operation "*" in this algorithm just be the repeated addition method? Here are the parts I have written thus far. I have unit tested them, so they appear to be correct for the most part. The structure I use for my bignum is as follows:

        #define BIGNUM_DIGITS 2048

        typedef uint32_t u_hw;  // halfword
        typedef uint64_t u_w;   // word

        typedef struct {
            unsigned int sign;      // 0 or 1
            unsigned int n_digits;
            u_hw digits[BIGNUM_DIGITS];
        } bn;

    Currently available routines:

        bn *bn_add(bn *a, bn *b);     // returns a+b as a newly allocated bn
        void bn_lshift(bn *b, int d); // shifts d digits to the left, retains sign
        int bn_cmp(bn *a, bn *b);     // returns 1 if a>b, 0 if a=b, -1 if a<b
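
    On the A*b_j question specifically: in most implementations this is not repeated addition but a "bignum times one digit" primitive, done with one widening multiply per digit plus a carry. The following sketch shows that primitive and the grade-school loop built on it, in Python for brevity (names like mul_1 are invented here, and base 2^32 mirrors the u_hw halfword; this is illustrative, not the book's pseudocode):

        BASE = 2**32  # mirrors the u_hw halfword digits of the C struct

        def mul_1(a, d):
            """A * b_j primitive: bignum a (little-endian digit list) times one digit d."""
            out, carry = [], 0
            for digit in a:
                t = digit * d + carry   # the widening multiply; fits in a u_w word
                out.append(t % BASE)    # low halfword is the result digit
                carry = t // BASE       # high halfword carries into the next slot
            if carry:
                out.append(carry)
            return out

        def basecase_multiply(a, b):
            """Grade-school product: sum over j of (a * b[j]) shifted left j digits."""
            result = [0] * (len(a) + len(b))
            for j, bj in enumerate(b):
                partial = mul_1(a, bj)
                carry = 0
                for i, p in enumerate(partial):    # add the partial, shifted by j
                    t = result[i + j] + p + carry
                    result[i + j] = t % BASE
                    carry = t // BASE
                k = j + len(partial)
                while carry:                       # propagate any remaining carry
                    t = result[k] + carry
                    result[k] = t % BASE
                    carry = t // BASE
                    k += 1
            while len(result) > 1 and result[-1] == 0:   # trim leading zero digits
                result.pop()
            return result

        # digits are little-endian: [low, high]; (1*2^32 + 5) * 3 == 3*2^32 + 15
        print(basecase_multiply([5, 1], [3]))   # -> [15, 3]

    In the C version, digit * d + carry is exactly why the u_w word type exists: the product of two u_hw values plus a carry always fits in 64 bits.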


  • Embedded SQL in OO languages like Java

    - by Steve De Caux
    One of the things that annoys me when working with SQL in OO languages is having to define SQL statements in strings. When I used to work on IBM mainframes, the languages used an SQL preprocessor to parse SQL statements out of the native code, so the statements could be written in cleartext SQL without the obfuscation of strings. For instance, Cobol has an EXEC SQL .... END-EXEC syntax construct that allows pure SQL statements to be embedded in the Cobol code:

        <pure cobol code, including assignment of value to local variable HOSTVARIABLE>
        EXEC SQL
            SELECT COL_A, COL_B, COL_C
            INTO :COLA, :COLB, :COLC
            FROM TAB_A
            WHERE COL_D = :HOSTVARIABLE
        END-EXEC
        <more cobol code, variables COLA, COLB, COLC have been set>

    ...this makes the SQL statement really easy to read and check for errors. Between the EXEC SQL .... END-EXEC tokens there are no constraints on indentation, line breaking etc., so you can format the SQL statement according to taste. Note that this example is for a single-row select; when a multiple-row resultset is expected, the coding is different (but still very easy to read). So, taking Java as an example: what made the "old COBOL" approach undesirable? Not only SQL but also system calls could be made much more readable with that approach. Let's call it the embedded foreign language preprocessor approach. Would an embedded foreign language preprocessor for SQL be useful to implement? Would you see a benefit in being able to write native SQL statements inside Java code?

    Edit: I'm really asking whether you think SQL in OO languages is a throwback, and if not, what could be done to make it better.
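
    As a point of comparison (a sketch, not part of the question; the table and values are invented for illustration): dynamic languages recover part of the EXEC SQL readability without any preprocessor, because multi-line strings and named placeholders keep the statement in clear text. Here is the single-row select above re-done with Python's built-in sqlite3 module:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE tab_a (col_a, col_b, col_c, col_d)")
        conn.execute("INSERT INTO tab_a VALUES (1, 2, 3, 'x')")

        hostvariable = "x"
        # The triple-quoted string gives the free formatting of EXEC SQL ... END-EXEC,
        # and the :name placeholders play the role of the Cobol host variables.
        cola, colb, colc = conn.execute(
            """
            SELECT col_a, col_b, col_c
            FROM tab_a
            WHERE col_d = :hostvariable
            """,
            {"hostvariable": hostvariable},
        ).fetchone()
        print(cola, colb, colc)

    The statement is still just a string, though: there is no compile-time syntax or type checking, which is exactly what the mainframe preprocessor provided.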


  • Implementing the ‘defer’ statement from Go in Objective-C?

    - by zoul
    Hello! Today I read about the defer statement in the Go language:

        A defer statement pushes a function call onto a list. The list of saved calls is
        executed after the surrounding function returns. Defer is commonly used to
        simplify functions that perform various clean-up actions.

    I thought it would be fun to implement something like this in Objective-C. Do you have some idea how to do it? I thought about dispatch finalizers, autoreleased objects and C++ destructors. Autoreleased objects:

        @interface Defer : NSObject {}
        + (id) withCode: (dispatch_block_t) block;
        @end

        @implementation Defer
        - (void) dealloc {
            block();
            [super dealloc];
        }
        @end

        #define defer(__x) [Defer withCode:^{__x}]

        - (void) function {
            defer(NSLog(@"Done"));
            …
        }

    Autoreleased objects seem like the only solution that would last at least to the end of the function, as the other solutions would trigger when the current scope ends. On the other hand, they could stay in memory much longer, which would be asking for trouble. Dispatch finalizers were my first thought, because blocks live on the stack, and therefore I could easily make something execute when the stack unrolls. But after a peek in the documentation, it doesn't look like I can attach a simple "destructor" function to a block, can I? C++ destructors are about the same thing: I would create a stack-based object with a block to be executed when the destructor runs. This would have the ugly disadvantage of turning the plain .m files into Objective-C++. I don't really think about using this stuff in production; I'm just interested in various solutions. Can you come up with something working, without obvious disadvantages? Both scope-based and function-based solutions would be interesting.
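
    For comparison only (this sketch is Python, not Objective-C, and is not from the post): the "list of saved calls" semantics can be pinned down in a few lines with contextlib.ExitStack, which may help when judging the Objective-C candidates above. The names with_defer and demo are invented here:

        from contextlib import ExitStack
        import functools

        def with_defer(func):
            """Pass func a defer() callable; deferred calls run LIFO at function exit."""
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                with ExitStack() as stack:
                    return func(stack.callback, *args, **kwargs)
            return wrapper

        @with_defer
        def demo(defer):
            defer(print, "Done")   # queued now, runs after the body, like Go's defer
            print("working...")

        demo()   # prints "working..." then "Done"; deferred calls run even if the body raises

    This is function-scoped rather than block-scoped, which matches Go's defer and the autorelease variant above rather than the C++ destructor variant.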


  • WPF - How to properly reference a class from XAML

    - by Andy T
    OK, this is a super super noob question, one that I'm almost embarrassed to ask... I want to reference a class in my XAML file. It's a DataTemplateSelector for selecting the right edit template for a DataGrid column. Anyway, I've written the class into my code-behind and added the local namespace to the top of the XAML, but when I try to reference the class from the XAML, it tells me the class does not exist in the local namespace. I must be missing something really really simple, but I just can't understand it... Here's my code. XAML:

        <Window
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:tk="http://schemas.microsoft.com/wpf/2008/toolkit"
            xmlns:local="clr-namespace:CustomFields"
            xmlns:col="clr-namespace:System.Collections;assembly=mscorlib"
            xmlns:sys="clr-namespace:System;assembly=mscorlib"
            x:Class="CustomFields.MainWindow"
            x:Name="Window"
            Title="Define Custom Fields"
            Width="425" Height="400" MinWidth="425" MinHeight="400">
            <Window.Resources>
                <ResourceDictionary>
                    <local:RangeValuesEditTemplateSelector>
                        blah blah blah...
                    </local:RangeValuesEditTemplateSelector>
                </ResourceDictionary>
            </Window.Resources>

    C#:

        namespace CustomFields
        {
            /// <summary>
            /// Interaction logic for MainWindow.xaml
            /// </summary>
            public partial class MainWindow : Window
            {
                public MainWindow()
                {
                    this.InitializeComponent();
                    // Insert code required on object creation below this point.
                }
            }

            public class RangeValuesEditTemplateSelector : DataTemplateSelector
            {
                public RangeValuesEditTemplateSelector()
                {
                    MessageBox.Show("hello");
                }
            }
        }

    Any ideas what I'm doing wrong? I thought this should be simple as 1-2-3... Thanks! AT


  • Finding distance to the closest point in a point cloud on a uniform grid

    - by erik
    I have a 3D grid of size AxBxC with equal distance, d, between the points in the grid. Given a number of points, what is the best way of finding the distance to the closest point for each grid point (every grid point should contain the distance to the closest point in the point cloud), given the assumptions below?

    Assume that A, B and C are quite big in relation to d, giving a grid of maybe 500x500x500, and that there will be around 1 million points. Also assume that if the distance to the nearest point exceeds a distance D, we do not care about the nearest point distance, and it can safely be set to some large number (D is maybe 2 to 10 times d).

    Since there will be a great number of grid points and points to search from, a simple exhaustive search:

        for each grid point:
            for each point:
                if distance between points < minDistance:
                    minDistance = distance between points

    is not a good alternative. I was thinking of doing something along the lines of:

        create a container of size A*B*C where each element holds a container of points
        for each point:
            define indexX = round((point position x - grid min position x)/d)
            // same for y and z
            add the point to the correct index of the container

        for each grid point:
            search the container of that grid point and find the closest point
            if no points in container and D > 0.5d:
                search the 26 container indices nearest to the grid point for the closest point
            .. continue with the next layer until a point is found or the distance to that layer is greater than D

    Basically: put the points in buckets and do a radial search outwards until a point is found for each grid point. Is this a good way of solving the problem, or are there better/faster ways? A solution which is good for parallelisation is preferred.
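
    The bucket-and-expanding-shell idea above, written out as a sketch (Python for brevity; nearest_distances and its arguments are names invented here, and the layer cutoff is deliberately conservative, so it may scan one shell more than strictly needed):

        import itertools
        import math

        def nearest_distances(points, shape, d, origin, D):
            """For each grid point, distance to the nearest cloud point, capped at D.

            points: iterable of (x, y, z); shape: (A, B, C) grid dimensions;
            d: grid spacing; origin: position of grid index (0, 0, 0); D: cutoff.
            """
            A, B, C = shape
            ox, oy, oz = origin
            # Bucket each point under the nearest grid index, as in the question.
            buckets = {}
            for x, y, z in points:
                idx = (round((x - ox) / d), round((y - oy) / d), round((z - oz) / d))
                buckets.setdefault(idx, []).append((x, y, z))

            max_layer = int(math.ceil(D / d)) + 1
            result = {}
            for gi in itertools.product(range(A), range(B), range(C)):
                gp = (ox + gi[0] * d, oy + gi[1] * d, oz + gi[2] * d)
                best = D
                for layer in range(max_layer + 1):
                    # Conservative cutoff: layer L holds no point nearer than (L-1)*d.
                    if (layer - 1) * d > best:
                        break
                    for off in itertools.product(range(-layer, layer + 1), repeat=3):
                        if max(abs(o) for o in off) != layer:  # visit the shell only
                            continue
                        cell = (gi[0] + off[0], gi[1] + off[1], gi[2] + off[2])
                        for p in buckets.get(cell, ()):
                            best = min(best, math.dist(gp, p))
                result[gi] = best
            return result

    Each grid point is handled independently, so the outer loop parallelizes cleanly (e.g. one slab of the grid per worker); a production version would want numpy or C rather than pure Python at 500x500x500.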


  • When does a WPF adorner layer first become available?

    - by aoven
    I'm trying to add an overlay effect to my UserControl, and I know that's what adorners are used for in WPF. But I'm a bit confused about how they supposedly work. I figured that the adorner layer is implicitly handled by the WPF runtime and, as such, should always be available. But when I create an instance of my UserControl in code, there is no adorner layer there. The following code fails with an exception:

        var view = new MyUserControl();
        var target = view.GetAdornerTarget(); // This returns a specific UI control.
        var layer = AdornerLayer.GetAdornerLayer(target);
        if (layer == null)
        {
            throw new Exception("No adorner layer at the moment.");
        }

    Can someone please explain to me how this is supposed to work? Do I need to place the UserControl instance into a top-level Window first? Or do I need to define the layer myself somehow? Digging through documentation got me nowhere. Thank you!


  • Do overlays/tooltips work correctly in Emacs for Windows?

    - by Cheeso
    I'm using Flymake on C# code, emacs v22.2.1 on Windows. The Flymake stuff has been working well for me. For those who don't know, you can read an overview of flymake, but the quick story is that flymake repeatedly builds the source file you are currently working on in the background, for the purpose of doing syntax checking. It then highlights the compiler warnings and errors in the current buffer. Flymake didn't work for C# initially, but I "monkey-patched" it and it works nicely now. If you edit C# in emacs, I highly recommend using flymake. The only problem I have is with the UI. Flymake highlights the errors and warnings nicely, and then inserts "overlays" with the full error or warning text. If I hover the mouse pointer over a highlighted line in the code, the overlay pops up. But as you can see, the overlay (tooltip) is truncated and doesn't display correctly. Flymake and the overlay seem to be doing the right thing; it's the tooltip that is displayed incorrectly. Do overlay tooltips work correctly in emacs for Windows? Where do I look to fix this?


  • PJSIP Custom Registration Header (iOS)

    - by Daniel Redington
    I am attempting to set up SIP communication with an internal server (using the PJSIP library); however, this server requires a custom header field with a specified value in the registration request. For example's sake, we'll call this required header "MyHeader". From what I've found, the pjsua_acc_add() function will add an account and register it to the server using a config struct. The "reg_hdr_list" member of the config struct has the description "The optional custom SIP headers to be put in the registration request", which sounds like exactly what I need, but it doesn't seem to have any effect on the call itself. Here's what I have so far:

        pjsua_acc_config cfg;
        pjsua_acc_config_default(&cfg);

        // ...Some other config stuff related to the server...

        pjsip_hdr test;
        test.name = pj_str("MyHeader");
        test.sname = pj_str("MyHdr");
        test.type = PJSIP_H_OTHER;
        test.prev = cfg.reg_hdr_list.prev;
        test.next = cfg.reg_hdr_list.next;
        cfg.reg_hdr_list = test;

        pj_status_t status;
        status = pjsua_acc_add(&cfg, PJ_TRUE, &acc_id);

    On the server side, there are no extra header fields at all. And the struct used to define the header (pjsip_hdr) has no "value" or equivalent field, so even if it did create the header, how does it set the value? Here's the reference for the header list definition: Link. And the reference for the header struct: Link. Any help would be greatly appreciated!


  • Handling element collisions on importing/including XML schemas

    - by eggyal
    Given schema definitions that define the same element differently, can one import/include both definitions and reference them independently from within a third schema definition? For example, given:

        <schema xmlns="http://www.w3.org/2001/XMLSchema"
                targetNamespace="urn:example:namespace">
            <element name="message" type="boolean"/>
        </schema>

    and:

        <schema xmlns="http://www.w3.org/2001/XMLSchema"
                targetNamespace="urn:example:namespace">
            <element name="message" type="date"/>
        </schema>

    can one construct the following?

        <schema xmlns="http://www.w3.org/2001/XMLSchema"
                targetNamespace="urn:example:namespace">
            <complexType name="booleanMessageType">
                <xs:sequence>
                    <!-- reference to first definition here -->
                </xs:sequence>
            </complexType>
            <complexType name="dateMessageType">
                <xs:sequence>
                    <!-- reference to second definition here -->
                </xs:sequence>
            </complexType>
        </schema>


  • What is the best way to parse a python script file in C/C++ code

    - by alexpov
    I am embedding python in a C/C++ program. What I am trying to do is parse a python script file from the C/C++ program and break the file into "blocks" so that each "block" is a valid command in python code. Each block I need to put into a std::string. For example:

        #PythonScript.py
        import math
        print "Hello Python"
        i = 0;
        while (i < 10):
            print "i = " , i;
            i = i + 1;
        print "GoodBye Python"

    In this script there are 5 different "blocks": the first one is import math, the second is print "Hello Python", the third is i = 0;, the fourth is the whole while loop together with its indented body, and the fifth is print "GoodBye Python". My knowledge of python is very basic and I am not familiar with the python code syntax. What is the best way to do this, and is there any Python C/C++ API function that supports it?

    Why I need it: for GUI purposes. My program, which is written in C, uses python to make some calculations. I run a python script from the C code, using the python C API, and what I need is a way to capture python's output in my program. I catch it and everything is OK; the problem is when the script involves user input. What happens is that I capture python's output only after the script is finished. Therefore, when there is an input command in the script, I get a black screen... I need to get all the printed output before the input command. The first solution I tried is to parse the script into valid commands and run each command, one after the other, separately... for this I need to parse the script and decide what is a command and what is not. The question is: what is the best way to do this, and is there something that already does it?
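
    One workable approach (a sketch, not an official embedding API): let the embedded interpreter itself do the splitting, since the standard ast module exposes exactly the "one node per valid top-level command" structure, including each node's line span. The helper below assumes Python 3.8+ (for end_lineno) and uses Python 3 print syntax, unlike the Python 2 script in the question; the C/C++ side could run it through the embedding API (e.g. PyRun_String) and read back the list of block strings:

        import ast
        import textwrap

        def split_blocks(source):
            """Split a script into top-level statement blocks (each a valid command)."""
            lines = source.splitlines()
            return ["\n".join(lines[node.lineno - 1 : node.end_lineno])
                    for node in ast.parse(source).body]  # one node per top-level statement

        script = textwrap.dedent('''\
            import math
            print("Hello Python")
            i = 0
            while i < 10:
                print("i =", i)
                i = i + 1
            print("GoodBye Python")
            ''')

        for block in split_blocks(script):
            print(repr(block))   # 5 blocks; the while loop comes back as one block

    Since ast.parse only accepts syntax of the interpreter's own version, the script being split has to match the embedded Python's dialect.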


  • Instantiating class with custom allocator in shared memory

    - by recipriversexclusion
    I'm pulling my hair out over the following problem: I am following the example given in the boost.interprocess documentation to instantiate, in shared memory, a fixed-size ring buffer class that I wrote. The skeleton constructor for my class is:

        template<typename ItemType, class Allocator>
        SharedMemoryBuffer<ItemType, Allocator>::SharedMemoryBuffer( unsigned long capacity ) {
            m_capacity = capacity;
            // Create the buffer nodes.
            m_start_ptr = this->allocator->allocate(); // allocate first buffer node
            BufferNode* ptr = m_start_ptr;
            for( int i = 0; i < this->capacity()-1; i++ ) {
                BufferNode* p = this->allocator->allocate(); // allocate a buffer node
            }
        }

    My first question: does this sort of allocation guarantee that the buffer nodes are allocated in contiguous memory locations, i.e. when I try to access the n'th node from address m_start_ptr + n*sizeof(BufferNode) in my Read() method, would it work? If not, what's a better way to keep the nodes: creating a linked list?

    My test harness is the following:

        // Define an STL compatible allocator of ints that allocates from the managed_shared_memory.
        // This allocator will allow placing containers in the segment
        typedef allocator<int, managed_shared_memory::segment_manager> ShmemAllocator;

        // Alias a vector that uses the previous STL-like allocator so that it allocates
        // its values from the segment
        typedef SharedMemoryBuffer<int, ShmemAllocator> MyBuf;

        int main(int argc, char *argv[]) {
            shared_memory_object::remove("MySharedMemory");

            // Create a new segment with given name and size
            managed_shared_memory segment(create_only, "MySharedMemory", 65536);

            // Initialize shared memory STL-compatible allocator
            const ShmemAllocator alloc_inst (segment.get_segment_manager());

            // Construct a buffer named "MyBuffer" in shared memory with argument alloc_inst
            MyBuf *pBuf = segment.construct<MyBuf>("MyBuffer")(100, alloc_inst);
        }

    This gives me all kinds of compilation errors related to templates for the last statement. What am I doing wrong?

