Search Results

Search found 32789 results on 1312 pages for 'object relational mapping'.

Page 485/1312 | < Previous Page | 481 482 483 484 485 486 487 488 489 490 491 492  | Next Page >

  • MDX performance vs. T-SQL

    - by SubPortal
    I have a database containing tables with more than 600 million records and a set of stored procedures that perform complex search operations on the database. The performance of the stored procedures is very slow, even with suitable indexes on the tables. The design of the database is a normal relational design. I want to change the database design to be multidimensional and use MDX queries instead of the traditional T-SQL queries, but the question is: are MDX queries better than traditional T-SQL queries with regard to performance? And if so, to what extent will that improve the performance of the queries? Thanks for any help.

    Read the article

  • tile_static, tile_barrier, and tiled matrix multiplication with C++ AMP

    - by Daniel Moth
    We ended the previous post with a mechanical transformation of the C++ AMP matrix multiplication example to the tiled model, and in the process introduced tiled_index and tiled_grid. This is part 2.

    tile_static memory

    You all know that in regular CPU code, static variables have the same value regardless of which thread accesses the static variable. This is in contrast with non-static local variables, where each thread has its own copy. Back to C++ AMP, the same rules apply: each thread has its own value for local variables in your lambda, whereas all threads see the same global memory, which is the data they have access to via the array and array_view. In addition, on an accelerator like the GPU, there is a programmable cache, a third kind of memory type if you'd like to think of it that way (some call it shared memory, others call it scratchpad memory). Variables stored in that memory share the same value for every thread in the same tile. So, when you use the tiled model, you can have variables where each thread in the same tile sees the same value for that variable, while threads from other tiles do not. The new storage class for local variables introduced for this purpose is called tile_static. You can only use tile_static in restrict(direct3d) functions, and only when explicitly using the tiled model. What this looks like in code should be no surprise, but here is a snippet to confirm your mental image, using a good old regular C array:

        // each tile of threads has its own copy of locA,
        // shared among the threads of the tile
        tile_static float locA[16][16];

    Note that tile_static variables are scoped and have the lifetime of the tile, and they cannot have constructors or destructors.

    tile_barrier

    In amp.h one of the types introduced is tile_barrier. You cannot construct this object yourself (although if you had one, you could use a copy constructor to create another one). So how do you get one of these? You get it from a tiled_index object. Beyond the 4 properties returning index objects, tiled_index has another property, barrier, that returns a tile_barrier object. The tile_barrier class exposes a single member, the method wait.

        15: // Given a tiled_index object named t_idx
        16: t_idx.barrier.wait();
        17: // more code

    In the code above, all threads in the tile will reach line 16 before a single one progresses to line 17. Note that all threads must be able to reach the barrier, i.e. if your code branched in a way that meant some threads might not reach line 16, the code above would be illegal.

    Tiled Matrix Multiplication Example – part 2

    So now that we have added to our understanding the concepts of tile_static and tile_barrier, let me rewrite the matrix multiplication code so that it takes advantage of tiling. Before you start reading this, I suggest you get a cup of your favorite non-alcoholic beverage to enjoy while you try to fully understand the code.
        01: void MatrixMultiplyTiled(vector<float>& vC, const vector<float>& vA, const vector<float>& vB, int M, int N, int W)
        02: {
        03:   static const int TS = 16;
        04:   array_view<const float,2> a(M, W, vA);
        05:   array_view<const float,2> b(W, N, vB);
        06:   array_view<writeonly<float>,2> c(M,N,vC);
        07:   parallel_for_each(c.grid.tile< TS, TS >(),
        08:   [=] (tiled_index< TS, TS> t_idx) restrict(direct3d)
        09:   {
        10:     int row = t_idx.local[0]; int col = t_idx.local[1];
        11:     float sum = 0.0f;
        12:     for (int i = 0; i < W; i += TS) {
        13:       tile_static float locA[TS][TS], locB[TS][TS];
        14:       locA[row][col] = a(t_idx.global[0], col + i);
        15:       locB[row][col] = b(row + i, t_idx.global[1]);
        16:       t_idx.barrier.wait();
        17:       for (int k = 0; k < TS; k++)
        18:         sum += locA[row][k] * locB[k][col];
        19:       t_idx.barrier.wait();
        20:     }
        21:     c[t_idx.global] = sum;
        22:   });
        23: }

    Notice that all the code up to line 9 is the same as per the changes we made in part 1 of the tiling introduction. If you squint, the body of the lambda itself preserves the original algorithm on lines 10, 11, 17, 18, and 21. The difference is that those lines use the new indexing and the tile_static arrays; the tile_static arrays are declared and initialized on the brand new lines 13-15. On those lines we copy from the global memory represented by the array_view objects (a and b) to the tile_static vanilla arrays (locA and locB) – we are copying enough to fit a tile. Because in the code that follows on line 18 we expect the data for this tile to be in the tile_static storage, we need to synchronize the threads within each tile with a barrier, which we do on line 16 (to avoid accessing uninitialized memory on line 18). We also need to synchronize the threads within a tile on line 19, again to avoid the race between lines 14, 15 (retrieving the next set of data for each tile and overwriting the previous set) and line 18 (not being done processing the previous set of data). Luckily, as part of the awesome C++ AMP debugger in Visual Studio there is an option that helps you find such races, but that is a story for another blog post another time. May I suggest reading the next section, and then coming back to re-read and walk through this code with pen and paper to really grok what is going on, if you haven't already? Cool.

    Why would I introduce this tiling complexity into my code?

    Funny you should ask that, I was just about to tell you. There is only one reason we tiled our extent, had to find a good tile size, had to ensure that the number of threads we schedule is correctly divisible by the tile size, had to use a tiled_index instead of a normal index, had to understand tile_barrier and figure out where to use it, and doubled the size of our lambda in terms of lines of code: the reason is to be able to use tile_static memory. Why do we want to use tile_static memory? Because accessing tile_static memory is around 10 times faster than accessing the global memory on an accelerator like the GPU, e.g. in the code above, if you can get 150GB/second accessing data from the array_view a, you can get 1500GB/second accessing the tile_static array locA. And since by definition you are dealing with really large data sets, the savings really pay off. We have seen tiled implementations being twice as fast as their non-tiled counterparts. Now, some algorithms will not have performance benefits from tiling (and in fact may deteriorate), e.g.
    algorithms that require you to go only once to global memory will not benefit from tiling, since with tiling you already have to fetch the data once from global memory! Other algorithms may benefit, but you may decide that you are happy with your code being 150 times faster than the serial version you had, and you do not need to invest to make it 250 times faster. Also, algorithms with more than 3 dimensions, which C++ AMP supports in the non-tiled model, cannot be tiled. Also note that in future releases we may invest in making the non-tiled model, which already uses tiling under the covers, go the extra step and use tile_static memory on your behalf, but it is obviously way too early to commit to anything like that, and we certainly don't do any of that today. Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • AJAX, PHP, XML, and cascading drop-down lists

    - by Dave Jarvis
    What PHP libraries would you recommend to implement the following?

    - Three dependent drop-down lists
    - Three XML data sources
    - AJAX-based

    Essentially, I'd like to create an XML database and wire up a form that allows the user to select three different dependent parameters:

    - User clicks Region
    - User clicks District (filtered by Region)
    - User clicks Station (filtered by District)

    Even though I would like to use PHP and XML, the general problem is:

    - One XHTML form
    - Three dependent, cascading drop-down lists
    - Three flat files (no relational database) for the list data

    The solution must be efficient, simple, reliable, and cross-browser. What technologies would you recommend to solve the problem? Thank you!

    Read the article

  • Strategy Design Pattern -- *dynamic* !!!

    - by alexeypro
    My application will have different strategies for my objects. What's the best way of implementing that? Ideally, the strategy class implementations would be dynamically loaded from, say, a relational database. I'm not sure how best to do that, though. What's the best approach? The idea is that if we want to apply strategy Strategy123 to object MyObj, we just load the object from the database by ID 123, deserialize it, get the Strategy class, and use it with MyObj. While maintenance sounds easier at first glance, it can be a pain in the long run if the Strategy interface changes, etc. What else can I do? I want to find a solution where I don't have to keep the Strategy classes in the codebase, so that I don't need a code change and re-deployment of the application when a Strategy changes or I add a new strategy. Please advise!
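    One way to picture the dynamic-loading part (a minimal sketch, not the poster's design; C# is used for illustration, and MyObj, IStrategy, DiscountStrategy and the in-memory registry are hypothetical stand-ins for a database table): store only the implementing type name against each strategy ID and resolve it by reflection, so adding a strategy means adding a row and an assembly rather than changing callers.

        using System;
        using System.Collections.Generic;

        public interface IStrategy
        {
            void Apply(MyObj target);
        }

        public class MyObj { public string Name { get; set; } }

        // Example implementation; new strategies can live in separately deployed assemblies.
        public class DiscountStrategy : IStrategy
        {
            public void Apply(MyObj target) => Console.WriteLine("Discount applied to " + target.Name);
        }

        public static class StrategyResolver
        {
            // Stand-in for a database table mapping strategy ID -> fully qualified type name.
            private static readonly Dictionary<int, string> Registry = new Dictionary<int, string>
            {
                { 123, typeof(DiscountStrategy).AssemblyQualifiedName }
            };

            public static IStrategy Resolve(int strategyId)
            {
                // Look up the type name by ID and instantiate it via reflection.
                string typeName = Registry[strategyId];
                Type type = Type.GetType(typeName, throwOnError: true);
                return (IStrategy)Activator.CreateInstance(type);
            }
        }

        public static class Demo
        {
            public static void Main()
            {
                IStrategy strategy = StrategyResolver.Resolve(123);
                strategy.Apply(new MyObj { Name = "MyObj" });
            }
        }

    Keeping type names rather than serialized strategy objects in the database also softens the maintenance worry: if the Strategy interface changes, the compiler flags every implementation, whereas problems with serialized instances would only surface at runtime.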

    Read the article

  • What more a Business Service can do?

    - by Rajesh Sharma
    Business services can be accessed from outside the application via an XAI inbound service, or from within the application via scripting, Java, or info zones. Below is an example of what you can do with a business service wrapping an info zone.

    Generally, a business service is specific to a page service program which references a maintenance object; that means one business service = one service program = one maintenance object. There have been quite a few threads in the forum around this topic where the business service is misconstrued to perform services only on a single object, e.g. only for CILCSVAP - SA Page Maintenance, CILCPRMP - Premise Page Maintenance, CILCACCP - Account Page Maintenance, etc.

    So what do you do when you want to retrieve some "non-persistent" field or information associated with some object/entity? Consider a few business requirements:

    - Retrieve all the field activities associated with an account.
    - Retrieve the last bill date for an account.
    - Retrieve the next bill date for an account.

    It can be as simple as described below. For this post, we'll use the first scenario - retrieve all the field activities associated with an account. To achieve this we'll have to do the following.

    Step 1: Define an info zone

    (A basic zone of type F1-DE-SINGLE - Info Data Explorer - Single SQL has been used; you can use F1-DE - Info Data Explorer - Multiple SQLs for more complex scenarios.)

    Parameter                 | Value To Enter
    User Filter 1             | F1
    Initial Display Columns   | C1 C2 C3
    SQL Condition             | F1
    SQL Statement             | SELECT FA_ID, FA_STATUS_FLG, CRE_DTTM
                              | FROM CI_FA
                              | WHERE SP_ID IN
                              |   (SELECT SP_ID FROM CI_SA_SP
                              |    WHERE SA_ID IN
                              |      (SELECT SA_ID FROM CI_SA
                              |       WHERE ACCT_ID = :F1))
    Column 1                  | source=SQLCOL sqlcol=FA_ID
    Column 2                  | source=SQLCOL sqlcol=FA_STATUS_FLG
    Column 3                  | type=TIME source=SQLCOL sqlcol=CRE_DTTM order=DESC

    Note: The zone code specified was 'CM_ACCTFA'.

    Step 2: Define a business service

    Create a business service linked to 'Service Name' FWLZDEXP - Data Explorer. The schema will look like this:

        <schema>
            <zoneCd mapField="ZONE_CD" default="CM_ACCTFA"/>
            <accountId mapField="F1_VALUE"/>
            <rowCount mapField="ROW_CNT"/>
            <result type="group">
                <selectList type="list" mapList="DE">
                    <faId mapField="COL_VALUE">
                        <row mapList="DE_VAL">
                            <SEQNO is="1"/>
                        </row>
                    </faId>
                    <status mapField="COL_VALUE">
                        <row mapList="DE_VAL">
                            <SEQNO is="2"/>
                        </row>
                    </status>
                    <createdDateTime mapField="COL_VALUE">
                        <row mapList="DE_VAL">
                            <SEQNO is="3"/>
                        </row>
                    </createdDateTime>
                </selectList>
            </result>
        </schema>

    What's next? As mentioned above, you can invoke this business service from an outside application via an XAI inbound service, or call it from within a script.

    Step 3: Create an XAI inbound service for the business service created above.

    Step 4: Test the inbound service

    Go to XAI Submission and test the newly created service:

        <RXS_AccountFA>
            <accountId>5922116763</accountId>
        </RXS_AccountFA>

    Read the article

  • How-to delete a tree node using the context menu

    - by frank.nimphius
    Hierarchical trees in Oracle ADF make use of View Accessors, which means that only the top level node needs to be exposed as a View Object instance on the ADF Business Components Data Model. This also means that only the top level node has a representation in the PageDef file as a tree binding and iterator binding reference. Detail nodes are accessed through tree rule definitions that use the accessor mentioned above (or nested collections in the case of POJO or EJB business services). The tree component is configured for single node selection, which however can be declaratively changed so that users can press the ctrl key and select multiple nodes. In the following, I explain how to create a context menu on the tree for users to delete the selected tree nodes. For this, the context menu item will access a managed bean, which then determines the selected node(s), the internal ADF node bindings and the rows they represent.

    As mentioned, the ADF Business Components Data Model only needs to expose the top level node data sources, which in this example is an instance of the Locations View Object. For the tree to work, you need to have associations defined between entities, which usually is done for you by Oracle JDeveloper if the database tables have foreign keys defined.

    Note: As a general hint of best practice, and to simplify your life: make sure your database schema is well defined and designed before starting your development project. Don't treat the database as something organic that grows and changes with the requirements as you proceed in your project. Business service refactoring in response to database changes is possible, but should be treated as an exception, not the rule. Good database design is a necessity – even for application developers – and nothing evil.

    To create the tree component, expand the Data Controls panel and drag the View Object collection to the view. From the context menu, select the tree component entry and continue with defining the tree rules that make up the hierarchical structure. As you see, when pressing the green plus icon in the Edit Tree Binding dialog, the data structure, Locations - Departments - Employees in my sample, shows without you having created a View Object instance for each of the nodes in the ADF Business Components Data Model.

    After you have configured the tree structure in the Edit Tree Binding dialog, you press OK and the tree is created. Select the tree in the page editor and open the Structure Window (ctrl+shift+S). In the Structure window, expand the tree node to access the contextMenu facet. Use the right mouse button to insert a Popup into the facet. Repeat the same steps to insert a Menu and a Menu Item into the Popup you created. The Menu Item text should be changed to something meaningful like "Delete". Note that the custom menu item is later added to the context menu together with the default context menu options like expand and expand all.

    To define the action that is executed when the menu item is clicked, you select the Action Listener property in the Property Inspector and click the arrow icon followed by the Edit menu option. Create or select a managed bean and define a method name for the action handler. Next, select the tree component and browse to its binding property in the Property Inspector. Again, use the arrow icon | Edit option to create a component binding in the same managed bean that has the action listener defined.
    The tree handle is used in the action listener code, which is shown below:

        public void onTreeNodeDelete(ActionEvent actionEvent) {
            //access the tree from the JSF component reference created
            //using the af:tree "binding" property. The "binding" property
            //creates a pair of set/get methods to access the RichTree instance
            RichTree tree = this.getTreeHandler();

            //get the list of selected row keys
            RowKeySet rks = tree.getSelectedRowKeys();

            //access the iterator to loop over selected nodes
            Iterator rksIterator = rks.iterator();

            //The CollectionModel represents the tree model and is
            //accessed from the tree "value" property
            CollectionModel model = (CollectionModel) tree.getValue();

            //The CollectionModel is a wrapper for the ADF tree binding
            //class, which is JUCtrlHierBinding
            JUCtrlHierBinding treeBinding =
                    (JUCtrlHierBinding) model.getWrappedData();

            //loop over the selected nodes and delete the rows they
            //represent
            while (rksIterator.hasNext()) {
                List nodeKey = (List) rksIterator.next();
                //find the ADF node binding using the node key
                JUCtrlHierNodeBinding node =
                        treeBinding.findNodeByKeyPath(nodeKey);
                //delete the row
                Row rw = node.getRow();
                rw.remove();
            }

            //only refresh the tree if tree nodes have been selected
            if (rks.size() > 0) {
                AdfFacesContext adfFacesContext =
                        AdfFacesContext.getCurrentInstance();
                adfFacesContext.addPartialTarget(tree);
            }
        }

    Note: To enable multi node selection for a tree, select the tree and change the row selection setting from "single" to "multiple".

    Note: A fully pictured version of this post will become available at the end of the month in a PDF summary on ADF Code Corner: http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html

    Read the article

  • When to use CouchDB vs RDBMS

    - by Andrew Whitehouse
    I am looking at CouchDB, which has a number of appealing features over relational databases, including:

    - intuitive REST/HTTP interface
    - easy replication
    - data stored as documents, rather than normalised tables

    I appreciate that this is not a mature product so should be adopted with caution, but am wondering whether it is actually a viable replacement for an RDBMS (in spite of the intro page saying otherwise - http://couchdb.apache.org/docs/intro.html). Under what circumstances would CouchDB be a better choice of database than an RDBMS (e.g. MySQL), e.g. in terms of scalability, design + development time, reliability and maintenance? Are there cases where an RDBMS is still clearly the right choice? Is this an either-or choice, or is a hybrid solution more likely to emerge as best practice?

    Read the article

  • large amount of data in many text files - how to process?

    - by Stephen
    Hi, I have large amounts of data (a few terabytes) and accumulating... They are contained in many tab-delimited flat text files (each about 30MB). Most of the task involves reading the data and aggregating (summing/averaging + additional transformations) over observations/rows based on a series of predicate statements, and then saving the output as text, HDF5, or SQLite files, etc. I normally use R for such tasks but I fear this may be a bit large. Some candidate solutions are to 1) write the whole thing in C (or Fortran) 2) import the files (tables) into a relational database directly and then pull off chunks in R or Python (some of the transformations are not amenable for pure SQL solutions) 3) write the whole thing in Python Would (3) be a bad idea? I know you can wrap C routines in Python but in this case since there isn't anything computationally prohibitive (e.g., optimization routines that require many iterative calculations), I think I/O may be as much of a bottleneck as the computation itself. Do you have any recommendations on further considerations or suggestions? Thanks

    Read the article

  • Video learning for database design

    - by donpal
    I'm trying to learn good relational database design (using mysql and php if that makes any difference). I've already done some database work, so I'm not totally clueless, but I suspect that my solutions may not have adhered to best practices for efficient searching, optimization, etc. Can someone suggest a good set of videos on the topic? If you know something is superb or has really made a difference in your own learning, please post your suggestion. Prefer videos, but books (as long as they're not too huge) are ok too. But prefer videos. Thank you

    Read the article

  • Is adding indexes to a SQL Server ever a bad idea?

    - by Aerik
    We have a mid-size SQL Server based application that has no indexes defined. Not even on the identity columns. I suggested to our moderately expensive application consultant that perhaps we might get better performance (particularly as our database grows) by creating some indexes on appropriate fields, and he said: "Indexes will significantly impact other areas of the application and customers should not create them under any circumstances." Has anybody ever heard of anything like this? Are there ever circumstances where one should not create any indexes? I can see nothing special about this app - it's got int identity columns, then lots of string columns, and a bunch of relational tables, but nothing special or weird that I can see. Thanks!

    Read the article

  • create new inbox folder and save emails

    - by kasunmit
    I am following the code at http://www.c-sharpcorner.com/uploadfile/rambab/outlookintegration10282006032802am/outlookintegration.aspx to create a personal inbox folder and save some mails shown in a DataGridView (Outlook 2007 and VSTO 2008). I am able to create the inbox folder according to the above example, but I couldn't work out the code to save e-mails. In that example, to save a contact they are using the following code:

        if (chkVerify.Checked)
        {
            OutLook._Application outlookObj = new OutLook.Application();
            MyContact cntact = new MyContact();
            cntact.CustomProperty = txtProp1.Text.Trim().ToString();

            //CREATING CONTACT ITEM OBJECT AND FINDING THE CONTACT ITEM
            OutLook.ContactItem newContact = (OutLook.ContactItem)FindContactItem(cntact, CustomFolder);

            //THE VALUES WE CAN GET FROM WEB SERVICES OR DATA BASE OR CLASS. WE HAVE TO ASSIGN THE VALUES
            //TO OUTLOOK CONTACT ITEM OBJECT.
            if (newContact != null)
            {
                newContact.FirstName = txtFirstName.Text.Trim().ToString();
                newContact.LastName = txtLastName.Text.Trim().ToString();
                newContact.Email1Address = txtEmail.Text.Trim().ToString();
                newContact.Business2TelephoneNumber = txtPhone.Text.Trim().ToString();
                newContact.BusinessAddress = txtAddress.Text.Trim().ToString();
                if (chkAdd.Checked)
                {
                    //HERE WE CAN CREATE OUR OWN CUSTOM PROPERTY TO IDENTIFY OUR APPLICATION.
                    if (string.IsNullOrEmpty(txtProp1.Text.Trim().ToString()))
                    {
                        MessageBox.Show("please add value to Your Custom Property");
                        return;
                    }
                    newContact.UserProperties.Add("myPetName", OutLook.OlUserPropertyType.olText, true, OutLook.OlUserPropertyType.olText);
                    newContact.UserProperties["myPetName"].Value = txtProp1.Text.Trim().ToString();
                }
                newContact.Save();
                this.Close();
            }
            else
            {
                //IF THE CONTACT DOES NOT EXIST WITH SAME CUSTOM PROPERTY CREATES THE CONTACT.
                newContact = (OutLook.ContactItem)CustomFolder.Items.Add(OutLook.OlItemType.olContactItem);
                newContact.FirstName = txtFirstName.Text.Trim().ToString();
                newContact.LastName = txtLastName.Text.Trim().ToString();
                newContact.Email1Address = txtEmail.Text.Trim().ToString();
                newContact.Business2TelephoneNumber = txtPhone.Text.Trim().ToString();
                newContact.BusinessAddress = txtAddress.Text.Trim().ToString();
                if (chkAdd.Checked)
                {
                    //HERE WE CAN CREATE OUR OWN CUSTOM PROPERTY TO IDENTIFY OUR APPLICATION.
                    if (string.IsNullOrEmpty(txtProp1.Text.Trim().ToString()))
                    {
                        MessageBox.Show("please add value to Your Custom Property");
                        return;
                    }
                    newContact.UserProperties.Add("myPetName", OutLook.OlUserPropertyType.olText, true, OutLook.OlUserPropertyType.olText);
                    newContact.UserProperties["myPetName"].Value = txtProp1.Text.Trim().ToString();
                }
                newContact.Save();
                this.Close();
            }
        }
        else
        {
            OutLook._Application outlookObj = new OutLook.Application();
            OutLook.ContactItem newContact = (OutLook.ContactItem)CustomFolder.Items.Add(OutLook.OlItemType.olContactItem);
            newContact.FirstName = txtFirstName.Text.Trim().ToString();
            newContact.LastName = txtLastName.Text.Trim().ToString();
            newContact.Email1Address = txtEmail.Text.Trim().ToString();
            newContact.Business2TelephoneNumber = txtPhone.Text.Trim().ToString();
            newContact.BusinessAddress = txtAddress.Text.Trim().ToString();
            if (chkAdd.Checked)
            {
                //HERE WE CAN CREATE OUR OWN CUSTOM PROPERTY TO IDENTIFY OUR APPLICATION.
                if (string.IsNullOrEmpty(txtProp1.Text.Trim().ToString()))
                {
                    MessageBox.Show("please add value to Your Custom Property");
                    return;
                }
                newContact.UserProperties.Add("myPetName", OutLook.OlUserPropertyType.olText, true, OutLook.OlUserPropertyType.olText);
                newContact.UserProperties["myPetName"].Value = txtProp1.Text.Trim().ToString();
            }
            newContact.Save();
            this.Close();
        }
        }
        else
        {
            //CREATES THE OUTLOOK CONTACT IN DEFAULT CONTACTS FOLDER.
            OutLook._Application outlookObj = new OutLook.Application();
            OutLook.MAPIFolder fldContacts = (OutLook.MAPIFolder)outlookObj.Session.GetDefaultFolder(OutLook.OlDefaultFolders.olFolderContacts);
            OutLook.ContactItem newContact = (OutLook.ContactItem)fldContacts.Items.Add(OutLook.OlItemType.olContactItem);

            //THE VALUES WE CAN GET FROM WEB SERVICES OR DATA BASE OR CLASS. WE HAVE TO ASSIGN THE VALUES
            //TO OUTLOOK CONTACT ITEM OBJECT.
            newContact.FirstName = txtFirstName.Text.Trim().ToString();
            newContact.LastName = txtLastName.Text.Trim().ToString();
            newContact.Email1Address = txtEmail.Text.Trim().ToString();
            newContact.Business2TelephoneNumber = txtPhone.Text.Trim().ToString();
            newContact.BusinessAddress = txtAddress.Text.Trim().ToString();
            newContact.Save();
            this.Close();
        }
        }

        ///
        /// ENABLING AND DISABLING THE CUSTOM FOLDER AND PROPERTY OPTIONS.
        ///
        private void rdoCustom_CheckedChanged(object sender, EventArgs e)
        {
            if (rdoCustom.Checked)
            {
                txFolder.Enabled = true;
                chkAdd.Enabled = true;
                chkVerify.Enabled = true;
                txtProp1.Enabled = true;
            }
            else
            {
                txFolder.Enabled = false;
                chkAdd.Enabled = false;
                chkVerify.Enabled = false;
                txtProp1.Enabled = false;
            }
        }

    I don't have an idea how to convert this to save e-mails instead. The DataGridView I am mentioning here contains details (sender address, subject, etc.) of unread mails, and what I did was perform some filtering on those mails as follows:

        string senderMailAddress = txtMailAddress.Text.ToLower();
        List list = (List)dgvUnreadMails.DataSource;
        List myUnreadMailList;
        List filteredList = (List)(from ci in list
                                   where ci.SenderAddress.StartsWith(senderMailAddress)
                                   select ci).ToList();
        dgvUnreadMails.DataSource = filteredList;

    That was done successfully. Now I need to save those filtered e-mails to the personal inbox folder I already created. Please give me some help with that. My issue is: how can I assign the Outlook object, just like they assign it for contacts (name, address, e-mail, etc.)? For the e-mails I couldn't find how to do it.
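    A rough way to approach the final question (a minimal sketch, untested against the poster's project; the folder name and the idea of keeping each mail's EntryID alongside the grid rows are assumptions for illustration): unlike contacts, a MailItem is not constructed and saved by hand - you locate the existing item and move it, for example by storing each mail's EntryID when the grid is filled and later resolving it back to a MailItem.

        using System.Collections.Generic;
        using OutLook = Microsoft.Office.Interop.Outlook;

        public static class MailMover
        {
            // Moves the mails identified by their EntryIDs into a subfolder of the Inbox.
            public static void MoveToPersonalFolder(IEnumerable<string> filteredEntryIds, string folderName)
            {
                OutLook.Application outlookObj = new OutLook.Application();
                OutLook.NameSpace session = outlookObj.Session;
                OutLook.MAPIFolder inbox = session.GetDefaultFolder(OutLook.OlDefaultFolders.olFolderInbox);

                // The custom folder created earlier, e.g. "MyPersonalFolder" (hypothetical name).
                OutLook.MAPIFolder personalFolder = inbox.Folders[folderName];

                foreach (string entryId in filteredEntryIds)
                {
                    // Resolve the stored EntryID back to the original MailItem and move it.
                    OutLook.MailItem mail = (OutLook.MailItem)session.GetItemFromID(entryId, inbox.StoreID);
                    mail.Move(personalFolder);
                }
            }
        }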

    Read the article

  • Scaffolding Web Services in Grails

    - by Dan
    I need to implement a web app, but instead of using a relational database I need to use different SOAP web services as a back-end. An important part of the application only calls web services and displays the results. Since web services are clearly defined in the form of operations with input parameters and a return type, it seems to me that a basic GUI could easily be constructed, just as with scaffolding based on domain entities. For example, in the case of a SearchProducts web service operation, I need to enter search parameters as input, so a search page can be constructed. The operation will return a list of products, so I need a page that will display this list in some kind of table. Is there already a library in Grails that lets you achieve this? If not, how would you go about creating one?

    Read the article

  • Cloned Cached item has memory issues

    - by ioWint
    Hi there, we have values stored in the cache using the Enterprise Library Caching Block. The accessors of the cached item, which is a List, modify its values, and we didn't want the cached item to be affected. So at first we returned a new List built from the cached item's enumerator. This made sure that accessors adding and removing items had little effect on the original cached item. But we found that all the instances of the List we returned to accessors were still alive: the object relationship graph showed a relationship between these lists and the EnterpriseLibrary.CacheItem. So we changed the return value to be a newly cloned List, using LINQ, say (from item in Data select new DataClass(item)).ToList(). Even when you do that, the object graph still shows a relationship between this list and the CacheItem. Can't we do anything to create a CLONE of the List that is present in the Enterprise Library cache, one which DOESN'T have ANY relationship with the cache?
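    For reference, a minimal sketch of the element-by-element copy being described (DataClass and its copy constructor are illustrative names, and this does not reproduce the poster's caching setup):

        using System.Collections.Generic;
        using System.Linq;

        public class DataClass
        {
            public string Value { get; set; }

            public DataClass(string value) { Value = value; }

            // Copy constructor: copies fields instead of keeping a reference to the source item.
            public DataClass(DataClass other) { Value = other.Value; }
        }

        public static class CacheReader
        {
            // Returns a list whose elements are copies, so callers can add, remove,
            // or mutate items without touching the objects held by the cache.
            public static List<DataClass> CloneForCaller(IEnumerable<DataClass> cachedData)
            {
                return cachedData.Select(item => new DataClass(item)).ToList();
            }
        }

    If the elements themselves hold references back to cached objects, a shallow element copy like this will still show a path to the CacheItem in a memory profiler, so the copy constructor has to copy, not reference, any such fields.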

    Read the article

  • Core Data vs SQLite 3

    - by Jason Medeiros
    I am already quite familiar with relational databases and have used SQLite (and other databases) in the past. However, Core Data has a certain allure, so I am considering spending some time to learn it for use in my next application. Is there much benefit to using Core Data over SQLite, or vice versa? What are the pros/cons of each? I find it hard to justify the cost of learning Core Data when Apple doesn't use it for many of its flagship applications like Mail.app or iPhoto.app - instead opting for SQLite databases. SQLite is also used extensively on the iPhone. Can those familiar with using both comment on their experience? Perhaps, as with most things, the question is deeper than just using one over the other?

    Read the article

  • Essential skills of a Data Scientist

    - by harshsinghal
    I would like to know more about the relevant skills in the arsenal of a data scientist and, with new technologies coming in every day, how one picks and chooses the essentials. A few ideas germane to this discussion:

    - Knowing SQL and the use of a DB such as MySQL or PostgreSQL was great until the advent of NoSQL and non-relational databases. MongoDB, CouchDB, etc. are becoming popular for working with web-scale data.
    - Knowing a stats tool like R is enough for analysis, but to create applications one may need to add Java, Python, and others to the list.
    - Data now comes in the form of text, URLs, and multimedia, to name a few, and there are different paradigms associated with their manipulation.
    - What about cluster computing, parallel computing, the cloud, Amazon EC2, Hadoop?
    - OLS regression now has artificial neural networks, random forests, and other relatively exotic machine learning/data mining algorithms for company.

    Thoughts?

    Read the article

  • Is there a declarative language for data definitions?

    - by Jekke
    Reading about WPF and thinking about my application's data store at the same time led me to wonder if there are any languages or tools that allow you to define relational data in a declarative way. A shallow Google search suggests no such thing exists. Yet it seems so obviously useful. The kind of tool I have in mind would declaratively describe (at least) entities, relationships and views in a platform-agnostic way, and would act as an abstraction layer between data-driven applications and their datastores. Does any such tool exist?

    Read the article

  • Conditional Join - join 1 tables 2 ways

    - by Jon H
    I have a set of (not very well normalised or relational) tables named PLAN, GROUP, PRODUCT and CLIENT. Most have linkage, e.g. PLAN to CLIENT on CLNO, and GROUP to PRODUCT on PRODCD. However, the linkage between PLAN and GROUP is tricky. A plan has 2 fields of interest, GRPNO and PRODCD. What I want to do is: if GRPNO != 0 then join GROUP on GRPNO; however, if GRPNO = 0 then I want to join GROUP on PRODCD. The frustrating thing is that the fields I want to return in my queries are the same across the board; I just need to be able to vary the join, or join the same table twice. The best I can come up with is 2 queries merged using datasets, or possibly a union. Is there a nifty way to do this in one select? I should point out I am accessing FoxPro over ODBC to do this. Thank you!
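    One pattern that sometimes answers this (a sketch only, shown in C# over ODBC to match the poster's setup; the connection string is a placeholder, and whether the FoxPro driver accepts this join syntax, or an unquoted table named GROUP, needs checking): fold the GRPNO test into the join condition with OR, so a single SELECT covers both cases.

        using System;
        using System.Data.Odbc;

        public static class PlanGroupQuery
        {
            // Single SELECT with the GRPNO test folded into the join condition:
            // when GRPNO != 0 match GROUP on GRPNO, otherwise match it on PRODCD.
            private const string Sql = @"
                SELECT p.*, g.*
                FROM plan p
                JOIN group g
                  ON (p.grpno <> 0 AND g.grpno = p.grpno)
                  OR (p.grpno = 0 AND g.prodcd = p.prodcd)";

            public static void Run(string connectionString)   // e.g. an ODBC DSN pointing at the FoxPro data
            {
                using (var connection = new OdbcConnection(connectionString))
                using (var command = new OdbcCommand(Sql, connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine(reader[0]);   // consume the joined rows here
                        }
                    }
                }
            }
        }

    If the OR-based join performs badly or the driver rejects it, the UNION the poster mentions expresses the same thing: one SELECT with WHERE GRPNO <> 0 joining on GRPNO, unioned with one with WHERE GRPNO = 0 joining on PRODCD.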

    Read the article

  • Is there a production ready web application framework in Python?

    - by peperg
    I have heard lots of good opinions about the Python language. They say it's mature, expressive, etc. Are there any production-ready web application frameworks in Python? By "production ready" I mean:

    - supports object-relational mapping with caching and declarative description (like JPA, Hibernate, etc.)
    - control-oriented user interface support - no HTML templates but something like JSF (RichFaces, IceFaces) or GWT, Vaadin, ZK
    - component decomposition and dependency injection (like EJB or Spring)
    - unit and integration testing
    - good IDE support
    - clustering, modularity, etc. (like Terracotta, OSGi, etc.)
    - there are successful applications written in it by companies like IBM, Oracle, etc. (I mean real business applications, not Twitter)
    - could have commercial support

    Is it possible at all in the Python world? Or are the only choices:

    - use Python and write everything from the ground up (too expensive)
    - stick to JEE
    - buy the .NET stack

    Read the article

  • Adding data (not only text) to a multi column ListView (WPF)

    - by user811804
    I am working on a WPF application in C# (.NET 4.0) where I have a ListView with a GridView that has two columns. I want to add rows dynamically (in code). My dilemma is that only the first column will have regular text added to it. The second column will have an object that includes a multi-column Grid with TextBlocks (see http://imageshack.us/photo/my-images/803/listview.png/). If I do what you would normally do when you want to enter text in all columns (i.e. DisplayMemberBinding), all I get in the second column is the text "System.Windows.Grid", which obviously isn't what I want. For reference, if I just try to add the Grid object (with the TextBlocks) with the code listView1.Items.Add(grid1) (not using DisplayMemberBinding), the object gets added to the second column only (with the first column being blank), and not how it normally works with text, where the same text ends up in all columns. I hope my question is detailed enough, and any help with this would be much appreciated.

    EDIT: I have tried the following code; however, every time I click the button to add a new row, every single row gets updated with the same DataTemplate (i.e. the second column always shows the same data on every row).

    XAML:

        <Window x:Class="TEST.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Name="AAA" Title="MainWindow" Height="350" Width="525" Loaded="Window_Loaded">
            <Grid Name="grid1">
                <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="374*" />
                    <ColumnDefinition Width="129*" />
                </Grid.ColumnDefinitions>
                <Button Content="Button" Height="23" HorizontalAlignment="Left" Margin="21,12,0,0"
                        Name="button1" VerticalAlignment="Top" Width="75" Grid.Column="1" Click="button1_Click" />
            </Grid>
        </Window>

    Code:

        public partial class MainWindow : Window
        {
            ListView listView1 = new ListView();
            GridViewColumn viewCol2 = new GridViewColumn();

            public MainWindow()
            {
                InitializeComponent();

                Style style = new Style(typeof(ListViewItem));
                style.Setters.Add(new Setter(ListViewItem.HorizontalContentAlignmentProperty, HorizontalAlignment.Stretch));
                listView1.ItemContainerStyle = style;

                GridView gridView1 = new GridView();
                listView1.View = gridView1;

                GridViewColumn viewCol1 = new GridViewColumn();
                viewCol1.Header = "Option";
                gridView1.Columns.Add(viewCol1);

                viewCol2.Header = "Value";
                gridView1.Columns.Add(viewCol2);

                grid1.Children.Add(listView1);
                viewCol1.DisplayMemberBinding = new Binding("Option");
            }

            private void Window_Loaded(object sender, RoutedEventArgs e)
            {
            }

            private void button1_Click(object sender, RoutedEventArgs e)
            {
                DataTemplate dataTemplate = new DataTemplate();
                FrameworkElementFactory spFactory = new FrameworkElementFactory(typeof(Grid));

                Random random = new Random();
                int cols = random.Next(1, 6);
                int full = 100;

                for (int i = 0; i < cols; i++)
                {
                    FrameworkElementFactory col1 = new FrameworkElementFactory(typeof(ColumnDefinition));
                    int partWidth = random.Next(0, full);
                    full -= partWidth;
                    col1.SetValue(ColumnDefinition.WidthProperty, new GridLength(partWidth, GridUnitType.Star));
                    spFactory.AppendChild(col1);
                }

                if (full > 0)
                {
                    FrameworkElementFactory col1 = new FrameworkElementFactory(typeof(ColumnDefinition));
                    col1.SetValue(ColumnDefinition.WidthProperty, new GridLength(full, GridUnitType.Star));
                    spFactory.AppendChild(col1);
                }

                for (int i = 0; i < cols; i++)
                {
                    FrameworkElementFactory text1 = new FrameworkElementFactory(typeof(TextBlock));
                    SolidColorBrush sb1 = new SolidColorBrush();
                    switch (i)
                    {
                        case 0: sb1.Color = Colors.Blue; break;
                        case 1: sb1.Color = Colors.Red; break;
                        case 2: sb1.Color = Colors.Yellow; break;
                        case 3: sb1.Color = Colors.Green; break;
                        case 4: sb1.Color = Colors.Purple; break;
                        case 5: sb1.Color = Colors.Pink; break;
                        case 6: sb1.Color = Colors.Brown; break;
                    }
                    text1.SetValue(TextBlock.BackgroundProperty, sb1);
                    text1.SetValue(Grid.ColumnProperty, i);
                    spFactory.AppendChild(text1);
                }

                if (full > 0)
                {
                    FrameworkElementFactory text1 = new FrameworkElementFactory(typeof(TextBlock));
                    SolidColorBrush sb1 = new SolidColorBrush(Colors.Black);
                    text1.SetValue(TextBlock.BackgroundProperty, sb1);
                    text1.SetValue(Grid.ColumnProperty, cols);
                    spFactory.AppendChild(text1);
                }

                dataTemplate.VisualTree = spFactory;
                viewCol2.CellTemplate = dataTemplate;

                int rows = listView1.Items.Count + 1;
                listView1.Items.Add(new { Option = "Row " + rows });
            }
        }

    Read the article

  • Refresh page isnt working in asp.net using treeview

    - by Greg
    Hi, I am trying to refresh an asp.net page using this command: <meta http-equiv="Refresh" content="10"/> On that page I have 2 treeviews. The refresh works fine when I just open the page, but when I click on one of the treeviews and expand it, the refresh stops working and the page isn't being refreshed. Any ideas why this can happen? Is there any connection to the treeview being expanded? Here is the full code of the page:

        public partial class Results : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
            }

            // Function that moves a reviewed yellow card to the reviewed tree
            protected void ycActiveTree_SelectedNodeChanged(object sender, EventArgs e)
            {
                ycActiveTree.SelectedNode.Text = "Move To Active";
                ycReviewedTree.PopulateNodesFromClient = false;
                ycReviewedTree.Nodes[ycReviewedTree.Nodes.Count - 1].ChildNodes.Add(ycActiveTree.SelectedNode.Parent);
                Application["reviewedTree"] = new ArrayList();
                int count = ((ArrayList)Application["activeTree"]).Count;

                // Move all the nodes from the activeTree application to the reviewedTree application
                for (int i = 0; Application["activeTree"] != null && i < count; i++)
                {
                    ((ArrayList)Application["reviewedTree"]).Add(((ArrayList)Application["activeTree"])[i]);
                    ((ArrayList)Application["activeTree"]).RemoveAt(0);
                }
            }

            protected void ycActiveTree_TreeNodePopulate(object sender, TreeNodeEventArgs e)
            {
                if (Application["idList"] != null && e.Node.Depth == 0)
                {
                    string[] words = ((String)Application["idList"]).Split(' ');

                    // Yellow card details
                    TreeNode child = new TreeNode("");

                    // Go over all the yellow card details and populate the treeview
                    for (int i = 1; i < words.Length; i++)
                    {
                        child.SelectAction = TreeNodeSelectAction.None;

                        // Same yellow card
                        if (words[i] != "*")
                        {
                            // End of details and start of point ip's
                            if (words[i] == "$")
                            {
                                // Add the yellow card node
                                TreeNode yellowCardNode = new TreeNode(child.Text);
                                yellowCardNode.SelectAction = TreeNodeSelectAction.Expand;
                                e.Node.ChildNodes.Add(yellowCardNode);
                                child.Text = "";
                            }
                            // yellow card details
                            else
                            {
                                child.Text = child.Text + words[i] + " ";
                            }
                        }
                        // End of yellow card
                        else
                        {
                            child.PopulateOnDemand = false;
                            child.SelectAction = TreeNodeSelectAction.None;

                            // Populate the yellow card node
                            e.Node.ChildNodes[e.Node.ChildNodes.Count - 1].ChildNodes.Add(child);

                            TreeNode moveChild = new TreeNode("Move To Reviewed");
                            moveChild.PopulateOnDemand = false;
                            moveChild.SelectAction = TreeNodeSelectAction.Select;
                            e.Node.ChildNodes[e.Node.ChildNodes.Count - 1].ChildNodes.Add(moveChild);

                            child = new TreeNode("");
                            Application["activeTree"] = new ArrayList();
                            ((ArrayList)Application["activeTree"]).Add(e.Node.ChildNodes[e.Node.ChildNodes.Count - 1]);
                        }
                    }
                }
                // If there aren't new yellow cards
                else if (Application["activeTree"] != null)
                {
                    // Populate the active tree
                    for (int i = 0; i < ((ArrayList)Application["activeTree"]).Count; i++)
                    {
                        e.Node.ChildNodes.Add((TreeNode)((ArrayList)Application["activeTree"])[i]);
                    }
                }

                // If there were new yellow cards and nodes that moved from the reviewed tree to the active tree
                if (Application["idList"] != null && Application["activeTree"] != null &&
                    e.Node.ChildNodes.Count != ((ArrayList)Application["activeTree"]).Count)
                {
                    for (int i = e.Node.ChildNodes.Count; i < ((ArrayList)Application["activeTree"]).Count; i++)
                    {
                        e.Node.ChildNodes.Add((TreeNode)((ArrayList)Application["activeTree"])[i]);
                    }
                }

                // Nullify the yellow card id's
                Application["idList"] = null;
            }

            protected void ycReviewedTree_SelectedNodeChanged(object sender, EventArgs e)
            {
                ycActiveTree.PopulateNodesFromClient = false;
                ycReviewedTree.SelectedNode.Text = "Move To Reviewed";
                ycActiveTree.Nodes[ycActiveTree.Nodes.Count - 1].ChildNodes.Add(ycReviewedTree.SelectedNode.Parent);
                int count = ((ArrayList)Application["reviewedTree"]).Count;

                // Move all the nodes from the reviewedTree application to the activeTree application
                for (int i = 0; Application["reviewedTree"] != null && i < count; i++)
                {
                    ((ArrayList)Application["activeTree"]).Add(((ArrayList)Application["reviewedTree"])[i]);
                    ((ArrayList)Application["reviewedTree"]).RemoveAt(0);
                }
            }

            protected void ycReviewedTree_TreeNodePopulate(object sender, TreeNodeEventArgs e)
            {
                if (Application["reviewedTree"] != null)
                {
                    // Populate the reviewed tree
                    for (int i = 0; i < ((ArrayList)Application["reviewedTree"]).Count; i++)
                    {
                        e.Node.ChildNodes.Add((TreeNode)((ArrayList)Application["reviewedTree"])[i]);
                    }
                }
            }
        }

    Thanks, Greg

    Read the article

  • NoSql Crash Course/Tutorial

    - by Chris Thompson
    Hi all, I've seen NoSQL pop up quite a bit on SO and I have a solid understanding of why you would use it (from here, Wikipedia, etc). This could be due to the lack of concrete and uniform definition of what it is (more of a paradigm than concrete implementation), but I'm struggling to wrap my head around how I would go about designing a system that would use it or how I would implement it in my system. I'm really stuck in a relational-db mindset thinking of things in terms of tables and joins... At any rate, does anybody know of a crash course/tutorial on a system that would use it (kind of a "hello world" for a NoSQL-based system) or a tutorial that takes an existing "Hello World" app based on SQL and converts it to NoSQL (not necessarily in code, but just a high-level explanation). I see this having one solid answer, but if you guys feel like it should be community wiki, I'll be happy to change it. Thanks! Chris

    Read the article

  • Extracting Demographic and Contact Information from unstructured text files

    - by jn29098
    I am looking to extract specific items out of a large pool of unstructured documents. These documents could be 1-5 pages of text formatted in various ways by the user, but in most cases would contain at least:

    - Name
    - Address (physical)
    - Email address
    - Phone number
    - Website URL

    I'm looking for a semantic parser that can attempt to extract these elements from the documents so that I can load that information into a relational database and work with these records as contacts. Other services I've looked at, while valuable for other purposes, do not address this specific need:

    - Alchemy API
    - Open Calais
    - Saplo

    Any thoughts, suggestions or leads?
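    For the more regular of these fields, a rule-based pass gets surprisingly far before any semantic parser is needed (a minimal sketch; the patterns are illustrative and deliberately loose, not production-grade):

        using System;
        using System.Text.RegularExpressions;

        public static class ContactFieldExtractor
        {
            // Deliberately loose patterns; real-world phone and address formats need much more care.
            private static readonly Regex Email = new Regex(@"[\w.+-]+@[\w-]+\.[\w.-]+");
            private static readonly Regex Url   = new Regex(@"https?://\S+|www\.\S+", RegexOptions.IgnoreCase);
            private static readonly Regex Phone = new Regex(@"\+?\d[\d\s().-]{7,}\d");

            public static void Extract(string documentText)
            {
                foreach (Match m in Email.Matches(documentText)) Console.WriteLine("Email: " + m.Value);
                foreach (Match m in Url.Matches(documentText))   Console.WriteLine("URL:   " + m.Value);
                foreach (Match m in Phone.Matches(documentText)) Console.WriteLine("Phone: " + m.Value);
            }
        }

    Names and physical addresses are where rules break down and a named-entity recognition service or library earns its keep, which is presumably why the poster is surveying semantic parsers.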

    Read the article

  • Who could ask for more with LESS CSS? (Part 3 of 3&ndash;Clrizr)

    - by ToString(theory);
    Welcome back! In the first two posts in this series, I covered some of the awesome features in CSS precompilers such as SASS and LESS, as well as how to get an initial project setup up and running in ASP.Net MVC 4. In this post, I will cover an actual advanced example of using LESS in a project, and show some of the great productivity features we gain from its usage.

    Introduction

    In the first post, I mentioned two subjects that I will be using in this example – constants, and color functions. I've always enjoyed using online color scheme utilities such as Adobe Kuler or Color Scheme Designer to come up with a scheme based off of one primary color. Using these tools and requesting a complementary scheme, you can get a couple of shades of your primary color and a couple of shades of a complementary/accent color to display. Because there is no way in regular CSS to do color operations or store variables, there was no way to accomplish something like defining a primary color and having a site theme cascade off of that. However, with tools such as LESS, that impossibility becomes a reality! So, if you haven't guessed it by now, this post is on the creation of a plugin/module/less file to drop into your project, plug in one color, and have your primary theme cascade from it. I only went through the trouble of creating a module for getting complementary colors. However, it wouldn't be too much trouble to go through other options such as Triad or Monochromatic to get a module that you could use off of that.

    Step 1 – Analysis

    I decided to mimic Adobe Kuler's complementary theme algorithm as I liked its simplicity and aesthetics. Color Scheme Designer is great, but I do believe it can give you too many color options, which can lead to chaos and overload. The first thing I had to check was whether the complementary values for the color schemes were actually hues rotated by 180 degrees at all times – they aren't. Apparently Adobe applies some variance to the complementary colors to get colors that are actually more aesthetically appealing to users. So, I opened up Excel and began to plot complementary hues based on rotation in increments of 10.

    Long story short, I completed the same calculations for Hue, Saturation, and Lightness. For Hue, I only had to record the complementary hue values; however, for Saturation and Lightness, I had to record the values for ALL of the shades. Since the functions were too complicated to put into LESS – they aren't constant/linear, but rather interval functions – I instead opted to extrapolate the HSL values using the trendline function for each major interval, onto intervals of spacing 1. For example, the hue extraction used a separate trendline function for each of the intervals 0-60, 60-140, 140-270, and 270-360. Saturation and Lightness were much worse, but in the end I finally had functions for all of the intervals, and then went the route of just grabbing each shade's value in intervals of 1.

    Step 2 – Mapping

    I declared variable names for each of these sections as something that shouldn't ever conflict with a variable someone would define in their own file.
    After I had each of the values, I extracted them and put them into files of their own for hue variables, saturation variables, and lightness variables. Example:

        /* HUE CONVERSIONS */
        @clrizr-hue-source-0deg: 133.43;
        @clrizr-hue-source-1deg: 135.601;
        @clrizr-hue-source-2deg: 137.772;
        @clrizr-hue-source-3deg: 139.943;
        @clrizr-hue-source-4deg: 142.114;
        ...

        /* SATURATION CONVERSIONS */
        @clrizr-saturation-s2SV0px: 0;
        @clrizr-saturation-s2SV1px: 0;
        @clrizr-saturation-s2SV2px: 0;
        @clrizr-saturation-s2SV3px: 0;
        @clrizr-saturation-s2SV4px: 0;
        ...

        /* LIGHTNESS CONVERSIONS */
        @clrizr-lightness-s2LV0px: 30;
        @clrizr-lightness-s2LV1px: 31;
        @clrizr-lightness-s2LV2px: 32;
        @clrizr-lightness-s2LV3px: 33;
        @clrizr-lightness-s2LV4px: 34;
        ...

    In the end, I have 973 lines of mapping/conversion from source HSL to shade HSL for two extra primary shades and two complementary shades. The last bit of the work was the file to compose each of the shades from these mappings.

    Step 3 – Clrizr Mapper

    The final step was the hardest to overcome, as I was still trying to understand LESS to its fullest extent.

    Imports

    As mentioned previously, I had separated the HSL mappings into different files, so the first necessary step is to import those for use in the Clrizr plugin:

        @import url("hue.less");
        @import url("saturation.less");
        @import url("lightness.less");

    Extract Component Values For Each Shade

    Next, I extracted the necessary information for each shade's HSL before shade composition:

        @clrizr-input-saturation: 1px+floor(saturation(@clrizr-input))-1;
        @clrizr-input-lightness:  1px+floor(lightness(@clrizr-input))-1;

        @clrizr-complementary-hue: formatstring("clrizr-hue-source-{0}", ceil(hue(@clrizr-input)));

        @clrizr-primary-2-saturation:       formatstring("clrizr-saturation-s2SV{0}", @clrizr-input-saturation);
        @clrizr-primary-1-saturation:       formatstring("clrizr-saturation-s1SV{0}", @clrizr-input-saturation);
        @clrizr-complementary-1-saturation: formatstring("clrizr-saturation-c1SV{0}", @clrizr-input-saturation);

        @clrizr-primary-2-lightness:       formatstring("clrizr-lightness-s2LV{0}", @clrizr-input-lightness);
        @clrizr-primary-1-lightness:       formatstring("clrizr-lightness-s1LV{0}", @clrizr-input-lightness);
        @clrizr-complementary-1-lightness: formatstring("clrizr-lightness-c1LV{0}", @clrizr-input-lightness);

    Here you can see a couple of odd things. On the first line, I am using operations to add units to the saturation and lightness. This is due to some limitations in the operations that would give me saturation or lightness in %, which can't be in a variable name. So I first add 1px to it, which casts the result of the following functions as px instead of %, and then at the end I remove that pixel. You can also see here the formatstring method, which is exactly what it sounds like – something like String.Format(string str, params object[] obj).

    Get Primary & Complementary Shades

    Now that I have components for each of the different shades, I can compose them into each of their pieces.
    For this, I use the @@ operator, which will look for a variable with the name specified in a string, and then call that variable:

        @clrizr-primary-2:       hsl(hue(@clrizr-input), @@clrizr-primary-2-saturation, @@clrizr-primary-2-lightness);
        @clrizr-primary-1:       hsl(hue(@clrizr-input), @@clrizr-primary-1-saturation, @@clrizr-primary-1-lightness);
        @clrizr-primary:         @clrizr-input;
        @clrizr-complementary-1: hsl(@@clrizr-complementary-hue, @@clrizr-complementary-1-saturation, @@clrizr-complementary-1-lightness);
        @clrizr-complementary-2: hsl(@@clrizr-complementary-hue, saturation(@clrizr-input), lightness(@clrizr-input));

    That's it, for the most part. These variables now hold the theme for the one input color – @clrizr-input. However, I have one last addition…

    Perceptive Luminance

    After I got the colors, I decided I also wanted the best font color to go on top of each of them: black or white, depending on whether the color is light or dark. Now, I couldn't just check the lightness, as that is only half the story. You see, the human eye doesn't see ALL colors equally well, but rather has more cells for interpreting green light compared to blue or red. So, using that ratio, we can calculate the perceptive luminance of each of the shades and get the font color that best matches it!

        @clrizr-perceptive-luminance-ps2: round(1 - ((0.299 * red(@clrizr-primary-2)) + (0.587 * green(@clrizr-primary-2)) + (0.114 * blue(@clrizr-primary-2)))/255)*255;
        @clrizr-perceptive-luminance-ps1: round(1 - ((0.299 * red(@clrizr-primary-1)) + (0.587 * green(@clrizr-primary-1)) + (0.114 * blue(@clrizr-primary-1)))/255)*255;
        @clrizr-perceptive-luminance-ps:  round(1 - ((0.299 * red(@clrizr-primary)) + (0.587 * green(@clrizr-primary)) + (0.114 * blue(@clrizr-primary)))/255)*255;
        @clrizr-perceptive-luminance-pc1: round(1 - ((0.299 * red(@clrizr-complementary-1)) + (0.587 * green(@clrizr-complementary-1)) + (0.114 * blue(@clrizr-complementary-1)))/255)*255;
        @clrizr-perceptive-luminance-pc2: round(1 - ((0.299 * red(@clrizr-complementary-2)) + (0.587 * green(@clrizr-complementary-2)) + (0.114 * blue(@clrizr-complementary-2)))/255)*255;

        @clrizr-col-font-on-primary-2:       rgb(@clrizr-perceptive-luminance-ps2, @clrizr-perceptive-luminance-ps2, @clrizr-perceptive-luminance-ps2);
        @clrizr-col-font-on-primary-1:       rgb(@clrizr-perceptive-luminance-ps1, @clrizr-perceptive-luminance-ps1, @clrizr-perceptive-luminance-ps1);
        @clrizr-col-font-on-primary:         rgb(@clrizr-perceptive-luminance-ps, @clrizr-perceptive-luminance-ps, @clrizr-perceptive-luminance-ps);
        @clrizr-col-font-on-complementary-1: rgb(@clrizr-perceptive-luminance-pc1, @clrizr-perceptive-luminance-pc1, @clrizr-perceptive-luminance-pc1);
        @clrizr-col-font-on-complementary-2: rgb(@clrizr-perceptive-luminance-pc2, @clrizr-perceptive-luminance-pc2, @clrizr-perceptive-luminance-pc2);

    Conclusion

    That's it! I have posted a project on clrizr.codePlex.com for this, and included a testing page for you to test out how it works. Feel free to use it in your own project, and if you have any questions, comments or suggestions, please feel free to leave them here as a comment, or on the contact page!

    Read the article

  • Silverlight IConvertible TypeConverter

    - by codingbloke
    I recently answered the following question on stackoverflow: Silverlight 3 custom control: only 'int' as numeric type for a property? [e.g. long or int64 seems to break]. I quickly knocked up the class ConvertibleTypeConverter<T> that I posted in the question (listed later here as well). Afterward, I fully expected to find that one of the usual clever "bods who blog" had covered this, probably with a better solution than mine. So far, though, I've not found one, so I thought I'd blog it myself.

    The Problem

    Here is a classic gotcha I've seen asked more than once on stackoverflow:

        public class MyClass
        {
            public float SomeValue { get; set; }
        }

        <local:MyClass SomeValue="45.15" />

    This fails with the error "Failed to create a 'System.Single' from the text '45.15'" and results in much premature hair loss. Fortunately this is SL4; in SL3 the error message is almost meaningless. So what gives? How can it be that this fails when we can see other very similar values parsing happily all over the place? It comes down to the fact that the Xaml parser only handles a few of the primitive data types, namely: bool, int, string and double. Since the parser has no idea how to convert a string to a float, we get the above error.

    The Solution

    The sensible solution is "use double not float", but let's not dwell on that; there have to be occasions where such an answer isn't acceptable. In order to achieve parsing of other types we need an implementation of TypeConverter for the type of the property, and then we need to use the TypeConverterAttribute to decorate the property. As an example, the Silverlight SDK provides one for DateTime, the DateTimeTypeConverter (yes, I know DateTime isn't really a primitive). The following class will parse in Xaml:

        public class MyClass
        {
            [TypeConverter(typeof(DateTimeTypeConverter))]
            public DateTime SomeValue { get; set; }
        }

    So far, though, we would need to create a TypeConverter for each primitive type we are using. What if I had the following mad class to support in Xaml?

        public class StrangePrimitives
        {
            public Boolean BooleanProp { get; set; }
            public Byte ByteProp { get; set; }
            public Char CharProp { get; set; }
            public DateTime DateTimeProp { get; set; }
            public Decimal DecimalProp { get; set; }
            public Double DoubleProp { get; set; }
            public Int16 Int16Prop { get; set; }
            public Int32 Int32Prop { get; set; }
            public Int64 Int64Prop { get; set; }
            public SByte SByteProp { get; set; }
            public Single SingleProp { get; set; }
            public String StringProp { get; set; }
            public UInt16 UInt16Prop { get; set; }
            public UInt32 UInt32Prop { get; set; }
            public UInt64 UInt64Prop { get; set; }
        }

    Then I want to fill an instance of StrangePrimitives with the following Xaml, which of course fails:

        <local:StrangePrimitives x:Key="MyStrangePrimitives"
                                 BooleanProp="True"
                                 ByteProp="156"
                                 CharProp="A"
                                 DateTimeProp="06 Jun 2010"
                                 DecimalProp="123.56"
                                 DoubleProp="8372.937803"
                                 Int16Prop="16532"
                                 Int32Prop="73738248"
                                 Int64Prop="12345678909298"
                                 SByteProp="-123"
                                 SingleProp="39.0"
                                 StringProp="Hello, World!"
                                 UInt16Prop="40000"
                                 UInt32Prop="4294967295"
                                 UInt64Prop="18446744073709551615" />

    I got to thinking, though: one thing all these primitive types have in common is that they all implement IConvertible, so it should be possible to write just one converter to handle them all. Here it is.

    The ConvertibleTypeConverter

        public class ConvertibleTypeConverter<T> : TypeConverter where T : IConvertible
        {
            public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
            {
                return sourceType.GetInterface("IConvertible", false) != null;
            }

            public override bool CanConvertTo(ITypeDescriptorContext context, Type destinationType)
            {
                return destinationType.GetInterface("IConvertible", false) != null;
            }

            public override object ConvertFrom(ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value)
            {
                return ((IConvertible)value).ToType(typeof(T), culture);
            }

            public override object ConvertTo(ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value, Type destinationType)
            {
                return ((IConvertible)value).ToType(destinationType, culture);
            }
        }

    I won't bore you with an explanation of how it works; it simply adapts one existing interface (the IConvertible) and exposes it as another (the TypeConverter). With that in place, the previous strange primitives class can be modified as:

        public class StrangePrimitives
        {
            public Boolean BooleanProp { get; set; }

            [TypeConverter(typeof(ConvertibleTypeConverter<Byte>))]
            public Byte ByteProp { get; set; }

            [TypeConverter(typeof(ConvertibleTypeConverter<Char>))]
            public Char CharProp { get; set; }

            [TypeConverter(typeof(ConvertibleTypeConverter<DateTime>))]
            public DateTime DateTimeProp { get; set; }

            [TypeConverter(typeof(ConvertibleTypeConverter<Decimal>))]
            public Decimal DecimalProp { get; set; }

            public Double DoubleProp { get; set; }

            [TypeConverter(typeof(ConvertibleTypeConverter<Int16>))]
            public Int16 Int16Prop { get; set; }

            public Int32 Int32Prop { get; set; }

            [TypeConverter(typeof(ConvertibleTypeConverter<Int64>))]
            public Int64 Int64Prop { get; set; }

            [TypeConverter(typeof(ConvertibleTypeConverter<SByte>))]
            public SByte SByteProp { get; set; }

            [TypeConverter(typeof(ConvertibleTypeConverter<Single>))]
            public Single SingleProp { get; set; }

            public String StringProp { get; set; }

            [TypeConverter(typeof(ConvertibleTypeConverter<UInt16>))]
            public UInt16 UInt16Prop { get; set; }

            [TypeConverter(typeof(ConvertibleTypeConverter<UInt32>))]
            public UInt32 UInt32Prop { get; set; }

            [TypeConverter(typeof(ConvertibleTypeConverter<UInt64>))]
            public UInt64 UInt64Prop { get; set; }
        }

    Read the article

  • What database works well with 200+GB of data?

    - by taw
    I've been using MySQL (with InnoDB, on Amazon RDS) because it's sort of the universal default, but it's been ridiculously under-performing, and tweaking it only delays the inevitable. The data is mostly relatively short (<1 kB each) blobs of information about 100Ms of URLs. There is (or should be - MySQL cannot seem to handle it) a very high volume of insert / update / retrieve but few complex queries - not that complex queries wouldn't be useful, but MySQL is so slow that it's far faster to get the data out, process it locally, and cache the results somewhere. I can keep tweaking MySQL and throwing more hardware at it, but it seems increasingly futile. So what are the options? SQL/relational model/etc. optional - anything will do as long as it's fast, networked, and language-independent.

    Read the article
