Search Results

Search found 17068 results on 683 pages for 'merge array'.

  • Best way to version control a WCF application with Git?

    - by Sam
    Suppose I have the following projects, in the format [ProjectName] : [ProjectDependency1, ProjectDependency2, ...]:

        // Service
        CoolLibrary
        WcfApp.Core
        WcfApp.Contracts
        WcfApp.Services : CoolLibrary, WcfApp.Core, WcfApp.Contracts

        // Clients
        CustomerX.App : WcfApp.Contracts
        CustomerY.App : WcfApp.Contracts
        CustomerZ.App : WcfApp.Contracts

    (On a side note, WcfApp.Contracts should not depend on WcfApp.Core, right? Otherwise CustomerX.App would also depend on, and thus be exposed to, the service domain model.) (CoolLibrary is shared with other applications, so I can't just put it inside of WcfApp.Services.) All of this code is in-house.

    I was thinking of having six repositories for this, in the format [repository folder name] : [projects included in the repository]:

        1. CoolLibrary.git : CoolLibrary
        2. WcfApp.Contracts.git : WcfApp.Contracts
        3. WcfApp.git : WcfApp.Core, WcfApp.Services
        4. CustomerX.App.git : CustomerX.App
        5. CustomerY.App.git : CustomerY.App
        6. CustomerZ.App.git : CustomerZ.App

    How should I manage my project dependencies? I see three options:

    1. I could use binaries, which I have to manually copy to each dependent repository. This would be easiest at the start, but my repositories would be a little bloated, and it would become more tedious as I add more client apps for customers.
    2. I could import dependent code as submodules. This is what I will probably end up doing, although I keep reading on the web that submodules are a hassle.
    3. I also read that I can use something called the subtree merge strategy, but I am not sure how it differs from just cloning the repo into a subdirectory and adding the subdirectory to .gitignore. Is the difference that the subtree is recorded in the master repository, so (for example) cloning it from a different location will also pull the subtree?

    I know I asked a lot of questions in this post, but the most important two are: 1. Am I using the right number and layout of repositories? Should I use fewer or more? 2. Which of the three dependency management strategies would you recommend? Is there another strategy I haven't considered?
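
    A minimal sketch of the submodule route (option 2), assuming a hypothetical internal server hosting the repositories listed above:

        # inside a clone of CustomerX.App.git: pin the contracts as a submodule
        git submodule add ssh://gitserver/WcfApp.Contracts.git lib/WcfApp.Contracts
        git commit -m "Add WcfApp.Contracts as a submodule"

        # later, in a fresh clone, submodules must be initialized explicitly
        git clone ssh://gitserver/CustomerX.App.git
        cd CustomerX.App
        git submodule update --init

    Unlike the clone-and-gitignore approach, the submodule's URL and pinned commit are recorded in the parent repository (.gitmodules plus a gitlink entry), which is why a clone made from a different location can reproduce it.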

  • SQL Server 2005: Update rows in a specified order (like ORDER BY)?

    - by JMTyler
    I want to update rows of a table in a specific order, as one would expect if an ORDER BY clause were allowed, but SQL Server does not support the ORDER BY clause in UPDATE queries. I have checked out this question, which supplied a nice solution, but my query is a bit more complicated than the one specified there:

        UPDATE TableA AS Parent
        SET Parent.ColA = Parent.ColA +
            (SELECT TOP 1 Child.ColA
             FROM TableA AS Child
             WHERE Child.ParentColB = Parent.ColB
             ORDER BY Child.Priority)
        ORDER BY Parent.Depth DESC;

    What I'm hoping you'll notice is that a single table (TableA) contains a hierarchy of rows, wherein one row can be the parent or child of any other row. The rows need to be updated in order, from the deepest child up to the root parent. This is because TableA.ColA must contain an up-to-date concatenation of its own current value with the values of its children (I realize this query only concatenates with one child, but that is for the sake of simplicity - the example doesn't need to be any more verbose), therefore the query must update from the bottom up.

    The solution suggested in the question I noted above is as follows:

        UPDATE messages
        SET status = 10
        WHERE ID IN (SELECT TOP (10) Id
                     FROM Table
                     WHERE status = 0
                     ORDER BY priority DESC);

    The reason I don't think I can use this solution is that I am referencing column values from the parent table inside my subquery (see WHERE Child.ParentColB = Parent.ColB), and I don't think two sibling subqueries would have access to each other's data. So far I have determined only one way to merge that suggested solution with my current problem, and I don't think it works:

        UPDATE TableA AS Parent
        SET Parent.ColA = Parent.ColA +
            (SELECT TOP 1 Child.ColA
             FROM TableA AS Child
             WHERE Child.ParentColB = Parent.ColB
             ORDER BY Child.Priority)
        WHERE Parent.Id IN (SELECT Id FROM TableA ORDER BY Parent.Depth DESC);

    The WHERE..IN subquery will not actually return a subset of the rows; it will just return the full list of IDs in the order that I want. However (I don't know for sure - please tell me if I'm wrong), I think the WHERE..IN clause will not care about the order of IDs within the parentheses - it will just check the ID of each row it currently wants to update to see if it's in that list (which they all are), in whatever order it was already trying to update... which would be a total waste of cycles, because it wouldn't change anything.

    So, in conclusion, I have looked around and can't seem to figure out a way to update in a specified order (and I've included the reason I need to update in that order, because I am sure I would otherwise get the ever-so-useful "why?" answers), and I am now hitting up Stack Overflow to see if any of you gurus out there who know more about SQL than I do (which isn't saying much) know of an efficient way to do this. It's particularly important that I only use a single query to complete this action. A long question, but I wanted to cover my bases and give you guys as much info to feed off of as possible. :) Any thoughts?
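
    A hedged sketch of one workaround (not the single statement the question requires, but it makes the required ordering explicit): process one Depth level at a time, deepest first, so every child row is final before any parent reads it. Column names are those used in the question:

        DECLARE @depth int;
        SELECT @depth = MAX(Depth) FROM TableA;

        WHILE @depth >= 0
        BEGIN
            -- SQL Server permits an aliased UPDATE target when a FROM clause is present
            UPDATE Parent
            SET ColA = Parent.ColA + ISNULL(
                (SELECT TOP 1 Child.ColA
                 FROM TableA AS Child
                 WHERE Child.ParentColB = Parent.ColB
                 ORDER BY Child.Priority), '')
            FROM TableA AS Parent
            WHERE Parent.Depth = @depth;

            SET @depth = @depth - 1;
        END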

  • Search row with highest number of cells in a row with colspan

    - by user593029
    What is an efficient way to find the highest number of cells in a row of a big table containing numerous colspans? (Merged cells' colspan should be ignored, so in the example below the highest number of cells is 4, in the first row.) Is it js/jquery with a regular expression, or just a loop with bubble sorting? I found the link below explaining a use of regex - is that the ideal way? Can someone suggest pseudo code for this?

    High cpu consumption due to a jquery regex patch

        <table width="156" height="84" border="0">
          <tbody>
            <tr style="height:10px">
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px"/>
            </tr>
            <tr style="height:10px">
              <td width="10" style="width:10px; height:10px" colspan="2"/>
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px"/>
            </tr>
            <tr style="height:10px">
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px" colspan="2"/>
            </tr>
            <tr style="height:10px">
              <td width="10" style="width:10px; height:10px" colspan="2"/>
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px"/>
            </tr>
          </tbody>
        </table>
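
    One regex-free sketch, assuming jQuery is already on the page: counting each row's child cells ignores colspan automatically, because a merged cell is still a single <td>:

        var max = 0;
        $('table tr').each(function () {
            var count = $(this).children('td, th').length; // colspan plays no part
            if (count > max) { max = count; }
        });
        // max === 4 for the example table above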

  • file doesn't open, running outside of debugger results in seg fault (c++)

    - by misterich
    Hello (and thanks in advance). I'm in a bit of a quandary: I can't seem to figure out why I'm seg faulting. A couple of notes:

    - It's for a course - and sadly I am required to use C-strings instead of std::string. Please don't fix my code (I won't learn that way and I will keep bugging you); please just point out the flaws in my logic and suggest a different function/way.
    - Platform: gcc version 4.4.1 on SUSE Linux 11.2 (2.6.31 kernel).

    Here's the code.

    main.cpp:

        // INCLUDES (C/C++ Std Library)
        #include <cstdlib>   /// EXIT_SUCCESS, EXIT_FAILURE
        #include <iostream>  /// cin, cout, ifstream
        #include <cassert>   /// assert

        // DEPENDENCIES (custom header files)
        #include "dict.h"    /// Header for the dictionary class

        // PRE-PROCESSOR CONSTANTS
        #define ENTER '\n'   /// Used to accept new lines, quit program.
        #define SPACE ' '    /// One way to end the program

        // CUSTOM DATA TYPES
        /// File Namespace -- keep it local
        namespace
        {
            /// Possible program prompts to display for the user.
            enum FNS_Prompts
            {
                fileName_,  /// prints out the name of the file
                noFile_,    /// no file was passed to the program
                tooMany_,   /// more than one file was passed to the program
                noMemory_,  /// Not enough memory to use the program
                usage_,     /// how to use the program
                word_,      /// ask the user to define a word.
                notFound_,  /// the word is not in the dictionary
                done_,      /// the program is closing normally
            };
        }

        using namespace std;

        /** prompt() prompts the user to do something, uses enum Prompts for parameter. */
        void prompt(FNS_Prompts msg /** determines the prompt to use */)
        {
            switch(msg)
            {
                case fileName_ : { cout << ENTER << ENTER << "The file name is: "; break; }
                case noFile_   : { cout << ENTER << ENTER << "...Sorry, a dictionary file is needed. Try again." << endl; break; }
                case tooMany_  : { cout << ENTER << ENTER << "...Sorry, you can only specify one dictionary file. Try again." << endl; break; }
                case noMemory_ : { cout << ENTER << ENTER << "...Sorry, there isn't enough memory available to run this program." << endl; break; }
                case usage_    : { cout << "USAGE:" << endl << " lookup.exe [dictionary file name]" << endl << endl; break; }
                case done_     : { cout << ENTER << ENTER << "like Master P says, \"Word.\"" << ENTER << endl; break; }
                case word_     : { cout << ENTER << ENTER << "Enter a word in the dictionary to get it's definition." << ENTER
                                        << "Enter \"?\" to get a sorted list of all words in the dictionary." << ENTER
                                        << "... Press the Enter key to quit the program: "; break; }
                case notFound_ : { cout << ENTER << ENTER << "...Sorry, that word is not in the dictionary." << endl; break; }
                default        : { cout << ENTER << ENTER << "something passed an invalid enum to prompt()." << endl;
                                   assert(false); }  /// something passed in an invalid enum
            }
        }

        /** useDictionary() uses the dictionary created by createDictionary
         *  - prompts user to lookup a word
         *  - ends when the user enters an empty word */
        void useDictionary(Dictionary &d)
        {
            char *userEntry = new char;  /// user's input on the command line
            if( !userEntry )             // check the pointer to the heap
            {
                cout << ENTER << MEM_ERR_MSG << endl;
                exit(EXIT_FAILURE);
            }
            do
            {
                prompt(word_);
                // test code
                cout << endl << "----------------------------------------" << endl << "Enter something: ";
                cin.getline(userEntry, INPUT_LINE_MAX_LEN, ENTER);
                cout << ENTER << userEntry << endl;
            } while ( userEntry[0] != NIL && userEntry[0] != SPACE );
            // GARBAGE COLLECTION
            delete[] userEntry;
        }

        /** Program Entry
         *  Reads in the required, single file from the command prompt.
         *  - If there is no file, state such and error out.
         *  - If there is more than one file, state such and error out.
         *  - If there is a single file: create the database object, populate it,
         *    and prompt the user for entry.
         *  main() will return EXIT_SUCCESS upon termination. */
        int main(int argc,    /// the number of files being passed into the program
                 char *argv[] /// pointer to the filename being passed into the program
                )
        {
            // EXECUTE
            /* Testing code * /
            char tempFile[INPUT_LINE_MAX_LEN] = {NIL};
            cout << "enter filename: ";
            cin.getline(tempFile, INPUT_LINE_MAX_LEN, '\n');
            */
            // uncomment after successful debugging

            if(argc <= 1)
            {
                prompt(noFile_);
                prompt(usage_);
                return EXIT_FAILURE;  /// no file was passed to the program
            }
            else if(argc > 2)
            {
                prompt(tooMany_);
                prompt(usage_);
                return EXIT_FAILURE;  /// more than one file was passed to the program
            }
            else
            {
                prompt(fileName_);
                cout << argv[1];  // print out name of dictionary file
                if( !argv[1] )
                {
                    prompt(noFile_);
                    prompt(usage_);
                    return EXIT_FAILURE;  /// file does not exist
                }
                /* file.open( argv[1] );            // open file
                numEntries >> in.getline(file);     // determine number of dictionary objects to create
                file.close();                       // close file
                Dictionary[ numEntries ](argv[1]);  // create the dictionary object */

                // TEMPORARY FILE FOR TESTING!!!!
                //Dictionary scrabble(tempFile);
                Dictionary scrabble(argv[1]);  // create the dictionary object
                useDictionary(scrabble);       // prompt the user, use the dictionary
            }
            // exit
            return EXIT_SUCCESS;  /// terminate program.
        }

    Dict.h/.cpp:

        #ifndef DICT_H
        #define DICT_H

        // DEPENDENCIES (Custom header files)
        #include "entry.h"  /// class for dictionary entries

        // PRE-PROCESSOR MACROS
        #define INPUT_LINE_MAX_LEN 256  /// Maximum length of each line in the dictionary file

        class Dictionary
        {
        public :
            // Do NOT modify the public section of this class
            typedef void (*WordDefFunc)(const char *word, const char *definition);
            Dictionary( const char *filename );
            ~Dictionary();
            const char *lookupDefinition( const char *word );
            void forEach( WordDefFunc func );
        private :
            // You get to provide the private members
            // VARIABLES
            int m_numEntries;        /// stores the number of entries in the dictionary
            Entry *m_DictEntry_ptr;  /// points to an array of class Entry
            // Private Functions
        };
        #endif

        -----------------------------------

        // INCLUDES (C/C++ Std Library)
        #include <iostream>  /// cout, getline
        #include <fstream>   /// ifstream
        #include <cstring>   /// strchr

        // DEPENDENCIES (custom header files)
        #include "dict.h"    /// Header file required by assignment
        //#include "entry.h" /// Dictionary Entry Class

        // PRE-PROCESSOR MACROS
        #define COMMA ','    /// Delimiter for file
        #define ENTER '\n'   /// Carriage return character
        #define FILE_ERR_MSG "The data file could not be opened. Program will now terminate."
        #pragma warning(disable : 4996)  /// turn off MS compiler warning about strcpy()

        using namespace std;

        // PRIVATE MEMBER FUNCTIONS
        /** Sorts the dictionary entries. */
        /* static void sortDictionary(?) { // sort through the words using qsort } */

        /** NO LONGER NEEDED??
         * parses out the length of the first cell in a delimited cell * /
        int getWordLength(char *str /// string of data to parse
        ) { return strcspn(str, COMMA); } */

        // PUBLIC MEMBER FUNCTIONS
        /** constructor for the class
         *  - opens/reads in file
         *  - creates/initializes the array of member vars
         *  - creates pointers to entry objects
         *  - stores pointers to entry objects in member var
         *  - ? sort now or later? */
        Dictionary::Dictionary( const char *filename )
        {
            // Create a filestream, open the file to be read in
            ifstream dataFile(filename, ios::in );
            /* if( dataFile.fail() ) { cout << FILE_ERR_MSG << endl; exit(EXIT_FAILURE); } */
            if( dataFile.is_open() )
            {
                // read first line of data
                // TEST CODE in.getline(dataFile, INPUT_LINE_MAX_LEN) >> m_numEntries;
                // TEST CODE char temp[INPUT_LINE_MAX_LEN] = {NIL};
                // TEST CODE dataFile.getline(temp,INPUT_LINE_MAX_LEN,'\n');
                dataFile >> m_numEntries;  /** Number of terms in the dictionary file
                                            * \todo find out how many lines in the file, subtract one, ignore first line */
                // create the array of entries
                m_DictEntry_ptr = new Entry[m_numEntries];
                // check for valid memory allocation
                if( !m_DictEntry_ptr )
                {
                    cout << MEM_ERR_MSG << endl;
                    exit(EXIT_FAILURE);
                }
                // loop thru each line of the file, parsing words/defs and populating entry objects
                for(int EntryIdx = 0; EntryIdx < m_numEntries; ++EntryIdx)
                {
                    // VARIABLES
                    char *tempW_ptr;  /// points to a temporary word
                    char *tempD_ptr;  /// points to a temporary def
                    char *w_ptr;      /// points to the word in the Entry object
                    char *d_ptr;      /// points to the definition in the Entry
                    int tempWLen;     /// length of the temp word string
                    int tempDLen;     /// length of the temp def string
                    char tempLine[INPUT_LINE_MAX_LEN] = {NIL};  /// stores a single line from the file

                    // EXECUTE
                    dataFile.getline(tempLine, INPUT_LINE_MAX_LEN);  // get a "word,def" line from the file
                    // Parse the string
                    tempW_ptr = tempLine;                 // point the temp word pointer at the first char in the line
                    tempD_ptr = strchr(tempLine, COMMA);  // point the def pointer at the comma
                    *tempD_ptr = NIL;                     // replace the comma with a NIL
                    ++tempD_ptr;                          // increment the temp def pointer
                    // find the string lengths... +1 to account for terminator
                    tempWLen = strlen(tempW_ptr) + 1;
                    tempDLen = strlen(tempD_ptr) + 1;
                    // Allocate heap memory for the term and definition
                    w_ptr = new char[ tempWLen ];
                    d_ptr = new char[ tempDLen ];
                    // check memory allocation
                    if( !w_ptr && !d_ptr )
                    {
                        cout << MEM_ERR_MSG << endl;
                        exit(EXIT_FAILURE);
                    }
                    // copy the temp word/def into the newly allocated memory and terminate the strings
                    strcpy(w_ptr,tempW_ptr);  w_ptr[tempWLen] = NIL;
                    strcpy(d_ptr,tempD_ptr);  d_ptr[tempDLen] = NIL;
                    // set the pointers for the entry objects
                    m_DictEntry_ptr[ EntryIdx ].setWordPtr(w_ptr);
                    m_DictEntry_ptr[ EntryIdx ].setDefPtr(d_ptr);
                }
                // close the file
                dataFile.close();
            }
            else
            {
                cout << ENTER << FILE_ERR_MSG << endl;
                exit(EXIT_FAILURE);
            }
        }

        /** cleans up dynamic memory */
        Dictionary::~Dictionary()
        {
            delete[] m_DictEntry_ptr;  /// thou shalt not have memory leaks.
        }

        /** Looks up definition */
        /* const char *lookupDefinition( const char *word ) { // print out the word ---- definition } */

        /** prints out the entire dictionary in sorted order */
        /* void forEach( WordDefFunc func ) { // to sort before or now.... that is the question } */

    Entry.h/cpp:

        #ifndef ENTRY_H
        #define ENTRY_H

        // INCLUDES (C++ Std lib)
        #include <cstdlib>  /// EXIT_SUCCESS, NULL

        // PRE-PROCESSOR MACROS
        #define NIL '\0'    /// C-String terminator
        #define MEM_ERR_MSG "Memory allocation has failed. Program will now terminate."

        // CLASS DEFINITION
        class Entry
        {
        public:
            Entry(void) : m_word_ptr(NULL), m_def_ptr(NULL) { /* default constructor */ };
            void setWordPtr(char *w_ptr);  /// sets the pointer to the word - only if the pointer is empty
            void setDefPtr(char *d_ptr);   /// sets the pointer to the definition - only if the pointer is empty
            /// returns what is pointed to by the word pointer
            char getWord(void) const { return *m_word_ptr; }
            /// returns what is pointed to by the definition pointer
            char getDef(void) const { return *m_def_ptr; }
        private:
            char *m_word_ptr;  /** points to a dictionary word */
            char *m_def_ptr;   /** points to a dictionary definition */
        };
        #endif

        --------------------------------------------------

        // DEPENDENCIES (custom header files)
        #include "entry.h"  /// class header file

        // PUBLIC FUNCTIONS
        /* only change the word member var if it is in its initial state */
        void Entry::setWordPtr(char *w_ptr)
        {
            if(m_word_ptr == NULL) { m_word_ptr = w_ptr; }
        }

        /* only change the def member var if it is in its initial state */
        void Entry::setDefPtr(char *d_ptr)
        {
            if(m_def_ptr == NULL) { m_word_ptr = d_ptr; }
        }

  • C# Windows Service Multiple Config Files

    - by Goober
    Quick question: is it possible to have more than one config file in a Windows service? Or is there some way I can merge them at run time?

    Scenario: currently I have two config files containing the contents below. After I build and install my Windows service, I can't get my custom XML parser class to read the content, because it keeps pointing to the wrong config file, even though I am using a few lines of code to ensure it gets the right name of the config file I need to access.

    ALSO: when I navigate to the folder in which the service is installed, there is only sign of the normal App.config file, and no sign of the custom config file. (I have even set the normal one's properties to "Do Not Copy" and the custom one's properties to "Copy Always".)

    Config file determination code:

        string settingsFile = String.Format("{0}.exe.config",
            System.AppDomain.CurrentDomain.BaseDirectory + Assembly.GetExecutingAssembly().GetName().Name);

    CUSTOM CONFIG file:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <servers>
            <SV066930>
              <add name="Name" value="SV066930" />
              <processes>
                <SimonTest1>
                  <add name="ProcessName" value="notepad.exe" />
                  <add name="CommandLine" value="C:\\WINDOWS\\system32\\notepad.exe C:\\WINDOWS\\Profiles\\TA2TOF1\\Desktop\\SimonTest1.txt" />
                </SimonTest1>
              </processes>
            </SV066930>
          </servers>
        </configuration>

    NORMAL APP.CONFIG file:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <configSections>
            <section name="dataConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Data.Configuration.DatabaseSettings, Microsoft.Practices.EnterpriseLibrary.Data, Version=4.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxxxx" />
          </configSections>
          <connectionStrings>
            <add name="DB" connectionString="Data Source=etc......" />
          </connectionStrings>
        </configuration>
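
    One possible direction (a sketch, not the confirmed cause): since the custom file isn't in the App.config schema anyway, it can be read as plain XML, with the path resolved against the assembly folder rather than the current directory - services start with %WinDir%\System32 as their working directory, a common reason for "pointing to the wrong config file". The file name here is hypothetical:

        using System.IO;
        using System.Reflection;
        using System.Xml.Linq;

        static XDocument LoadCustomConfig(string fileName) // e.g. "Servers.config"
        {
            // resolve against the service's install folder, not the CWD
            string dir = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
            return XDocument.Load(Path.Combine(dir, fileName));
        }

    As for the missing file after install: "Copy Always" only affects the build output folder; if a setup project does the installing, the custom file has to be added to it explicitly.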

  • My winform application doesn't work on others' pc without vs 2010 installed

    - by wings
    Just like I said, my winform application works properly on computers with VS installed, but on other computers it will crash due to a FileNotFound exception. I used

        using Application = Microsoft.Office.Interop.Excel.Application;

    in my source code to generate an Excel file, and the problem occurs as soon as the Excel-related function is called. But I don't know what it refers to exactly. Do I have to get some .dll included along with the .exe file? And what DLL is that? Below is part of my code:

        private void FileExport(object objTable)
        {
            StartWaiting();
            string[,] table = null;
            try
            {
                table = (string[,])objTable;
            }
            catch (Exception ex)
            {
                ShowStatus(ex.Message, StatusType.Warning);
            }
            if (table == null)
            {
                return;
            }

            Application excelApp = new Application { DisplayAlerts = false };
            Workbooks workbooks = excelApp.Workbooks;
            Workbook workbook = workbooks.Add(XlWBATemplate.xlWBATWorksheet);
            Worksheet worksheet = (Worksheet)workbook.Worksheets[1];
            worksheet.Name = "TABLE";

            for (int i = 0; i < table.GetLength(0); i++)
            {
                for (int j = 0; j < table.GetLength(1); j++)
                {
                    worksheet.Cells[i + 1, j + 1] = table[i, j];
                }
            }

            Range range = excelApp.Range["A1", "H1"];
            range.Merge();
            range.Font.Bold = true;
            range.Font.Size = 15;
            range.RowHeight = 50;
            range.EntireRow.AutoFit();

            range = excelApp.Range["A2", "H8"];
            range.Font.Size = 11;

            range = excelApp.Range["A1", "H8"];
            range.NumberFormatLocal = "@";
            range.RowHeight = 300;
            range.ColumnWidth = 50;
            range.HorizontalAlignment = XlHAlign.xlHAlignCenter;
            range.VerticalAlignment = XlVAlign.xlVAlignCenter;
            range.EntireRow.AutoFit();
            range.EntireColumn.AutoFit();

            worksheet.UsedRange.Borders.LineStyle = 1;

            Invoke(new MainThreadInvokerDelegate(SaveAs), new object[] { worksheet, workbook, excelApp });
            EndWaiting();
        }
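
    A hedged guess at the missing piece: the interop call needs the Office primary interop assembly (PIA) at run time, and machines with VS installed happen to have it. With VS 2010 / .NET 4 the interop types can be embedded into the .exe itself so the PIA no longer needs to be installed (Excel itself must still be present on the target machine). In the project file, or via the reference's "Embed Interop Types" property:

        <Reference Include="Microsoft.Office.Interop.Excel">
          <EmbedInteropTypes>true</EmbedInteropTypes>
        </Reference>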

  • How come Java doesn't accept my LinkedList in a Generic, but accepts its own?

    - by master chief
    For a class assignment, we can't use any of the language's built-in types, so I'm stuck with my own list. Anyway, here's the situation:

        public class CrazyStructure <T extends Comparable<? super T>> {
            MyLinkedList<MyTree<T>> trees; // error: type parameter MyTree is not within its bound
        }

    However:

        public class CrazyStructure <T extends Comparable<? super T>> {
            LinkedList<MyTree<T>> trees;
        }

    works. MyTree implements the Comparable interface, but MyLinkedList doesn't. However, Java's LinkedList doesn't implement it either, according to this. So what's the problem, and how do I fix it?

    MyLinkedList:

        public class MyLinkedList<T extends Comparable<? super T>> {
            private class Node<T> {
                private Node<T> next;
                private T data;
                protected Node();
                protected Node(final T value);
            }

            Node<T> firstNode;

            public MyLinkedList();
            public MyLinkedList(T value);

            // calls node1.value.compareTo(node2.value)
            private int compareElements(final Node<T> node1, final Node<T> node2);

            public void insert(T value);
            public void remove(T value);
        }

    MyTree:

        public class LeftistTree<T extends Comparable<? super T>> implements Comparable {
            private class Node<T> {
                private Node<T> left, right;
                private T data;
                private int dist;
                protected Node();
                protected Node(final T value);
            }

            private Node<T> root;

            public LeftistTree();
            public LeftistTree(final T value);
            public Node getRoot();

            // calls node1.value.compareTo(node2.value)
            private int compareElements(final Node node1, final Node node2);
            private Node<T> merge(Node node1, Node node2);

            public void insert(final T value);
            public T extractMin();
            public int compareTo(final Object param);
        }
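
    A sketch of the likely fix: MyLinkedList's bound requires its element type to satisfy Comparable<? super itself>, and the raw "implements Comparable" on LeftistTree does not satisfy Comparable<? super LeftistTree<T>>; java.util.LinkedList compiles simply because it puts no bound at all on its element type. Parameterizing the Comparable (assuming trees are meant to be ordered against other trees of the same element type) would look like:

        public class LeftistTree<T extends Comparable<? super T>>
                implements Comparable<LeftistTree<T>> {

            @Override
            public int compareTo(LeftistTree<T> other) {
                // hypothetical ordering, e.g. by minimum element
                return 0;
            }
        }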

  • CLR: Multi Param Aggregate, Argument not in Final Output?

    - by OMG Ponies
    Why is my delimiter not appearing in the final output? It's initialized to be a comma, but I only get ~5 white spaces between each attribute using:

        SELECT [article_id]
             , dbo.GROUP_CONCAT(0, t.tag_name, ',') AS col
        FROM [AdventureWorks].[dbo].[ARTICLE_TAG_XREF] atx
        JOIN [AdventureWorks].[dbo].[TAGS] t ON t.tag_id = atx.tag_id
        GROUP BY article_id

    The bit for DISTINCT works fine, but it operates within the Accumulate scope... Output:

        article_id | col
        -------------------------------------------------
        1          | a     a     b     c

    I only have rudimentary C# API knowledge... C# code:

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.Data.SqlTypes;
        using Microsoft.SqlServer.Server;
        using System.Xml.Serialization;
        using System.Xml;
        using System.IO;
        using System.Collections;
        using System.Text;

        [Serializable]
        [SqlUserDefinedAggregate(Format.UserDefined, MaxByteSize = 8000)]
        public struct GROUP_CONCAT : IBinarySerialize
        {
            ArrayList list;
            string delimiter;

            public void Init()
            {
                list = new ArrayList();
                delimiter = ",";
            }

            public void Accumulate(SqlBoolean isDistinct, SqlString Value, SqlString separator)
            {
                delimiter = (separator.IsNull) ? "," : separator.Value;
                if (!Value.IsNull)
                {
                    if (isDistinct)
                    {
                        if (!list.Contains(Value.Value))
                        {
                            list.Add(Value.Value);
                        }
                    }
                    else
                    {
                        list.Add(Value.Value);
                    }
                }
            }

            public void Merge(GROUP_CONCAT Group)
            {
                list.AddRange(Group.list);
            }

            public SqlString Terminate()
            {
                string[] strings = new string[list.Count];
                for (int i = 0; i < list.Count; i++)
                {
                    strings[i] = list[i].ToString();
                }
                return new SqlString(string.Join(delimiter, strings));
            }

            #region IBinarySerialize Members
            public void Read(BinaryReader r)
            {
                int itemCount = r.ReadInt32();
                list = new ArrayList(itemCount);
                for (int i = 0; i < itemCount; i++)
                {
                    this.list.Add(r.ReadString());
                }
            }

            public void Write(BinaryWriter w)
            {
                w.Write(list.Count);
                foreach (string s in list)
                {
                    w.Write(s);
                }
            }
            #endregion
        }
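
    A sketch of the likely fix: between Accumulate and Terminate, SQL Server may serialize the aggregate through Read/Write, and the code above round-trips only the list - so delimiter comes back as null and the separator set in Accumulate never reaches Terminate. Persisting it alongside the list:

        public void Read(BinaryReader r)
        {
            delimiter = r.ReadString();       // restore the separator first
            int itemCount = r.ReadInt32();
            list = new ArrayList(itemCount);
            for (int i = 0; i < itemCount; i++)
            {
                list.Add(r.ReadString());
            }
        }

        public void Write(BinaryWriter w)
        {
            w.Write(delimiter ?? ",");        // persist the separator first
            w.Write(list.Count);
            foreach (string s in list)
            {
                w.Write(s);
            }
        }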

  • 'Timeout Expired' error against local SQL Express on only 2 LINQ Methods...

    - by Refracted Paladin
    I am going to sum up my problem first and then offer massive details and what I have already tried.

    Summary: I have an internal winform app that uses LINQ to SQL to connect to a local SQL Express database. Each user has their own DB, and the DBs stay in sync with a central DB through merge replication. All DBs are SQL 2005 (SP2 or SP3). We have been using this app for over 5 months now, but recently our users are getting a "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding."

    Detailed: The strange part is they get that in two different locations (two different LINQ methods), and only the first time one fires in a given time period (~5 mins). One LINQ method pulls all records that match a FK ID and then manipulates them to form a hierarchy view for a TreeView. The second pulls all records that match a FK ID and dumps them into a DataGridView. The only thing I can find in common between the two is that the first IS an IEnumerable and the second converts itself from IQueryable to IEnumerable to DataTable... I looked at the queries in Profiler and they 'seemed' normal. They are not very complicated queries; they only pull back 10-90 records from one table.

    Any thoughts, suggestions, hints, whatever would be greatly appreciated. I am at my wit's end on this....

        public IList<CaseNoteTreeItem> GetTreeViewDataAsList(int personID)
        {
            var myContext = MatrixDataContext.Create();
            var caseNotesTree = from cn in myContext.tblCaseNotes
                                where cn.PersonID == personID
                                orderby cn.ContactDate descending, cn.InsertDate descending
                                select new CaseNoteTreeItem
                                {
                                    CaseNoteID = cn.CaseNoteID,
                                    NoteContactDate = Convert.ToDateTime(cn.ContactDate).ToShortDateString(),
                                    ParentNoteID = cn.ParentNote,
                                    InsertUser = cn.InsertUser,
                                    ContactDetailsPreview = cn.ContactDetails.Substring(0, 75)
                                };
            return caseNotesTree.ToList<CaseNoteTreeItem>();
        }

    AND THIS ONE:

        public static DataTable GetAllCNotes(int personID)
        {
            using (var context = MatrixDataContext.Create())
            {
                var caseNotes = from cn in context.tblCaseNotes
                                where cn.PersonID == personID
                                orderby cn.ContactDate
                                select new
                                {
                                    cn.ContactDate,
                                    cn.ContactDetails,
                                    cn.TimeSpentUnits,
                                    cn.IsCaseLog,
                                    cn.IsPreEnrollment,
                                    cn.PresentAtContact,
                                    cn.InsertDate,
                                    cn.InsertUser,
                                    cn.CaseNoteID,
                                    cn.ParentNote
                                };
                return caseNotes.ToList().CopyLinqToDataTable();
            }
        }
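
    Not a root cause, but a cheap diagnostic sketch: LINQ to SQL's DataContext exposes the underlying ADO.NET command timeout (30 seconds by default). Raising it on the first-fired method would show whether the initial call is merely slow (cold plan/cache, replication catch-up) rather than actually hung:

        using (var context = MatrixDataContext.Create())
        {
            context.CommandTimeout = 120; // seconds; System.Data.Linq.DataContext property
            // ... run the same query as before
        }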

  • Spanning columns in HTML table.

    - by Tony
    I'm trying to do a very simple operation: merging two columns in a table. This seems easy with colspan, but if I merge different columns without leaving at least one row without any merged columns, the sizing gets completely messed up. Please see the example at http://www.allthingsdope.com/table.html, or take a look at and try the following code.

    Good:

        <table width="700px">
          <tr>
            <th width="100px">1: 100px</th>
            <td width="300px">2: 300px</td>
            <td width="200px">3: 200px</td>
            <td width="100px">4: 100px</td>
          </tr>
          <tr>
            <th width="100px">1: 100px</th>
            <td colspan=2 width="500px">2 & 3: 500px</td>
            <td width="100px">4: 100px</td>
          </tr>
          <tr>
            <th width="100px">1: 100px</th>
            <td width="300px">2: 300px</td>
            <td colspan=2 width="300px">3 & 4: 300px</td>
          </tr>
        </table>

    Bad:

        <table width="700px">
          <tr>
            <th width="100px">1: 100px</th>
            <td colspan=2 width="500px">2 & 3: 500px</td>
            <td width="100px">4: 100px</td>
          </tr>
          <tr>
            <th width="100px">1: 100px</th>
            <td width="300px">2: 300px</td>
            <td colspan=2 width="300px">3 & 4: 300px</td>
          </tr>
        </table>

    This seems so simple, but I cannot figure it out!
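
    One common fix (a sketch, assuming the fixed widths above are what's wanted): declare the column grid once with <colgroup> and use fixed table layout, so no colspan can renegotiate the column widths:

        <table style="table-layout: fixed; width: 700px">
          <colgroup>
            <col style="width: 100px">
            <col style="width: 300px">
            <col style="width: 200px">
            <col style="width: 100px">
          </colgroup>
          <!-- rows with colspans as before -->
        </table>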

  • Delete duplicate rows, do not preserve one row

    - by Radley
    I need a query that goes through each entry in a database, checks whether a single value is duplicated elsewhere in the database, and if it is, deletes both entries (or all, if more than two). The problem is that the entries are URLs, up to 255 characters, with no way of identifying the row. Some existing answers on Stack Overflow do not work for me due to performance limitations, or they use uniqueid, which obviously won't work when dealing with a string.

    Long version: I have two databases containing URLs (and only URLs). One database has around 3,000 URLs and the other around 1,000. However, a large majority of the 1,000 URLs were taken from the 3,000-URL database. I need to merge the 1,000 into the 3,000 as new entries only. For this, I made a third database with combined URLs from both tables, about 4,000 entries. I need to find all duplicate entries in this database and delete them (both of them, without leaving either). I have followed the queries of a few examples on this site, but whenever I try to delete both entries it ends up deleting all the entries, or giving SQL errors.

    Alternatively: I have two databases, each containing one of the separate URL lists. I need to check each row from one database against the other to find any that aren't duplicates, and then add those to a third database.

    Edit: I've got my own PHP solution which is pretty hacky, but works. I cannot answer my own question for 8 hours because I'm new, so here it is for now. I went with a PHP script to accomplish this, as I'm more familiar with PHP than MySQL. This generates a simple list of URLs that exist only in the target database, not both. If you have more than 7,000 entries to parse this may take a while, and you will need to copy/paste the results into a text file or expand the script to store them back into a database. I'm just doing it manually to save time. Note: uses MeekroDB.

        <?php
        require('meekrodb.2.1.class.php');

        DB::$user = 'root';
        DB::$password = '';
        DB::$dbName = 'testdb';

        $all = DB::query('SELECT * FROM old_urls LIMIT 7000');

        foreach($all as $row) {
            $test = DB::query('SELECT url FROM new_urls WHERE url=%s', $row['url']);
            if (!is_array($test)) {
                echo $row['url'] . "\n";
            } else {
                if (count($test) == 0) {
                    echo $row['url'] . "\n";
                }
            }
        }
        ?>
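
    A sketch of the delete-both-copies step in SQL (MySQL assumed, matching the MeekroDB script above; the table name combined_urls is hypothetical). The extra derived table works around MySQL's rule against targeting the same table a subquery selects from:

        DELETE FROM combined_urls
        WHERE url IN (
            SELECT url FROM (
                SELECT url
                FROM combined_urls
                GROUP BY url
                HAVING COUNT(*) > 1
            ) AS dup
        );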

  • Ubuntu 9.10 and Squid 2.7 Transparent Proxy TCP_DENIED

    - by user38400
    Hi,

    We've spent the last two days trying to get squid 2.7 to work with ubuntu 9.10. The computer running ubuntu has two network interfaces, eth0 and eth1, with dhcp running on eth1. Both interfaces have static IPs; eth0 is connected to the Internet and eth1 is connected to our LAN. We have followed literally dozens of different tutorials with no success. The tutorial here was the last one we did that actually got us some sort of results: http://www.basicconfig.com/linuxnetwork/setup_ubuntu_squid_proxy_server_beginner_guide. When we try to access a site like seriouswheels.com from the LAN, we get the following message on the client machine:

        ERROR: The requested URL could not be retrieved

        Invalid Request error was encountered while trying to process the request:

        GET / HTTP/1.1
        Host: www.seriouswheels.com
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/532.9 (KHTML, like Gecko) Chrome/5.0.307.11 Safari/532.9
        Cache-Control: max-age=0
        Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,/;q=0.5
        Accept-Encoding: gzip,deflate,sdch
        Cookie: __utmz=88947353.1269218405.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); __qca=P0-1052556952-1269218405250; __utma=88947353.1027590811.1269218405.1269218405.1269218405.1; __qseg=Q_D
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

        Some possible problems are:
        - Missing or unknown request method.
        - Missing URL.
        - Missing HTTP Identifier (HTTP/1.0).
        - Request is too large.
        - Content-Length missing for POST or PUT requests.
        - Illegal character in hostname; underscores are not allowed.

        Your cache administrator is webmaster.

    Below are all the configuration files: /etc/squid/squid.conf, /etc/network/if-up.d/00-firewall, /etc/network/interfaces, /var/log/squid/access.log. Something somewhere is wrong, but we cannot figure out where. Our end goal for all of this is to superimpose content onto every page that a client requests on the LAN. We've been told that squid is the way to do this, but at this point in the game we are just trying to get squid set up correctly as our proxy. Thanks in advance.

    squid.conf:

        acl all src all
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32
        acl to_localhost dst 127.0.0.0/8
        acl localnet src 192.168.0.0/24
        acl SSL_ports port 443         # https
        acl SSL_ports port 563         # snews
        acl SSL_ports port 873         # rsync
        acl Safe_ports port 80         # http
        acl Safe_ports port 21         # ftp
        acl Safe_ports port 443        # https
        acl Safe_ports port 70         # gopher
        acl Safe_ports port 210        # wais
        acl Safe_ports port 1025-65535 # unregistered ports
        acl Safe_ports port 280        # http-mgmt
        acl Safe_ports port 488        # gss-http
        acl Safe_ports port 591        # filemaker
        acl Safe_ports port 777        # multiling http
        acl Safe_ports port 631        # cups
        acl Safe_ports port 873        # rsync
        acl Safe_ports port 901        # SWAT
        acl purge method PURGE
        acl CONNECT method CONNECT
        http_access allow manager localhost
        http_access deny manager
        http_access allow purge localhost
        http_access deny purge
        http_access deny !Safe_ports
        http_access deny CONNECT !SSL_ports
        http_access allow localhost
        http_access allow localnet
        http_access deny all
        icp_access allow localnet
        icp_access deny all
        http_port 3128
        hierarchy_stoplist cgi-bin ?
        cache_dir ufs /var/spool/squid/cache1 1000 16 256
        access_log /var/log/squid/access.log squid
        refresh_pattern ^ftp: 1440 20% 10080
        refresh_pattern ^gopher: 1440 0% 1440
        refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
        refresh_pattern (Release|Package(.gz)*)$ 0 20% 2880
        refresh_pattern . 0 20% 4320
        acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9]
        upgrade_http0.9 deny shoutcast
        acl apache rep_header Server ^Apache
        broken_vary_encoding allow apache
        extension_methods REPORT MERGE MKACTIVITY CHECKOUT
        cache_mgr webmaster
        cache_effective_user proxy
        cache_effective_group proxy
        hosts_file /etc/hosts
        coredump_dir /var/spool/squid

    access.log:

        1269243042.740      0 192.168.1.11 TCP_DENIED/400 2576 GET NONE:// - NONE/- text/html

    00-firewall:

        iptables -F
        iptables -t nat -F
        iptables -t mangle -F
        iptables -X
        echo 1 | tee /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -j MASQUERADE
        iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3128

    networking:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 142.104.109.179
            netmask 255.255.224.0
            gateway 142.104.127.254

        auto eth1
        iface eth1 inet static
            address 192.168.1.100
            netmask 255.255.255.0
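
    A hedged hypothesis given the access.log line (TCP_DENIED/400 with GET NONE://): traffic redirected by the iptables REDIRECT rule arrives without a full URL, and squid 2.7 rejects it unless the port is explicitly marked as an interception port. In squid.conf that would be:

        # squid 2.x syntax; squid 3.1+ calls this "intercept"
        http_port 3128 transparent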

  • AdPrep logs show an LDAP error

    - by Omar
    What I am trying to do is transition our domain from Server 2003 Enterprise x32 to Server 2008 R2 Enterprise x64. Here is what I have done thus far (the 2003 server is a physical machine; the 2008 server is a virtual machine):

    1. Built a virtual machine running Server 2008 R2 Enterprise x64 and joined it to the domain as a domain member.
    2. On the 2003 DC, raised the Domain Functional Level and Forest Functional Level to Windows Server 2003.
    3. On the 2003 DC, went into the registry, navigated to HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters, and verified that the Schema Version is 30.
    4. On the 2003 DC, inserted the Windows Server 2008 Enterprise x32 Edition media to copy over the adprep folder. This version is the only one that seemed to work.
    5. On the 2003 DC, opened a command prompt in the adprep directory and ran adprep /forestprep, adprep /domainprep, and adprep /domainprep /gpprep.
    6. On the 2008 server, installed the Active Directory Domain Services role from Server Manager.
    7. On the 2003 DC, went into the registry and verified that the Schema Version is now 44.

    When I go to run dcpromo on the 2008 server, I get a message that says: "To install a domain controller into this Active Directory forest, you must first prepare using adprep /forestprep."

    I went back to the 2003 DC and went through the adprep logs, and I came across this:

        Adprep was unable to modify the security descriptor on object CN=DomainControllerAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com.
        [Status/Consequence] ADPREP was unable to merge the existing security descriptor with the new access control entry (ACE).
        [User Action] Check the log file ADPrep.log in the C:\WINDOWS\debug\adprep\logs\20100327143517 directory for more information.
        Adprep encountered an LDAP error. Error code: 0x20. Server extended error code: 0x208d, Server error message: 0000208D: NameErr: DSID-031001CD, problem 2001 (NO_OBJECT), data 0, best match of: 'CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com'

    In fact, I got three of these errors. The LDAP error is consistent across all three, but the object named in the "Adprep was unable to modify the security descriptor on object" line differs. The three objects are:

        CN=DomainControllerAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com
        CN=DirectoryEmailReplication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com
        CN=KerberosAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com

    The credentials I am using on the 2008 server when running dcpromo are for my domain account, which is a member of the Domain Admins and Enterprise Admins groups. I've tried various quick fixes that I came across through Google searches, including:

    - Disabling antivirus on the current DCs.
    - Pointing DNS on the PDC to itself.
    - Changing the Schema Update Allowed registry key to 1 and rerunning adprep - when rerunning, adprep told me that forest-wide information has already been updated.
    - Disabling Windows Firewall on the Server 2008 box.
    - On the 2003 DC, going to Domain Controller Security Policy > Local Policies > User Rights Assignment and adding Domain Admins to the "Enable computer and user accounts to be trusted for delegation" policy setting.

    Both our PDC and BDC are Global Catalog servers - not sure if this matters or not. I ran the command netdom query fsmo and verified that the FSMO role holder is the current 2003 PDC. I ran dcdiag /v on the 2003 PDC and the only thing that failed was Services: the Dnscache service is stopped on the PDC. I even went as far as deleting the virtual machine and recreating it from scratch - to no avail...

    Help :(

  • Distributed and/or Parallel SSIS processing

    - by Jeff
    Background: Our company hosts SaaS DSS applications, where clients provide us data daily and/or weekly, which we process and merge into their existing database. During business hours, load on the servers is pretty minimal, as it's mostly users running simple pre-defined queries via the website, or running drill-through reports that mostly hit the SSAS OLAP cube.

    I manage the IT Operations team, and so far this has presented an interesting "scaling" issue for us. For our daily-refreshed clients, the server is only "busy" for about 4-6 hrs at night. For our weekly-refresh clients, the server is only "busy" for maybe 8-10 hrs per week! We've done our best to use some simple methods of distributing the load by spreading the daily clients evenly among the servers, such that we're not trying to process daily clients back-to-back overnight. But long-term this scaling strategy creates two notable issues. First, it's going to consume a pretty immense amount of hardware that sits idle for large periods of time. Second, it takes significant production-support overhead to basically "schedule" the ETL such that jobs don't overlap, and to move clients/schedules around if they outgrow the resources on a particular server or allocated time slot.

    As the title would imply, one option we've tried is running multiple SSIS packages in parallel, but in most cases this has yielded VERY inconsistent results. The most common failures are DTExec, SQL, and SSAS fighting for physical memory and throwing out-of-memory errors, and ETLs running 3, 4, 5x longer than expected. So from my practical experience thus far, it seems like running multiple ETL packages on the same hardware isn't a good idea, but I can't be the first person that doesn't want to scale multiple ETLs around manual scheduling and sequential processing.

    One option we've considered is virtualizing the servers, which obviously doesn't give you any additional resources, but moves the resource contention onto the hypervisor, which (from my experience) seems to manage simultaneous CPU/RAM/disk I/O a little more gracefully than letting DTExec, SQL, and SSAS battle it out within Windows.

    Question to the forum: So my question to the forum is, are we missing something obvious here? Are there tools out there that can help manage running multiple SSIS packages on the same hardware? Would it be more "efficient" in terms of parallel execution if, instead of running DTExec, SQL, and SSAS on the same machine (with every machine running that configuration), we ran in sets of three machines, with SSIS running on one machine, SQL on another, and SSAS on a third? Obviously that would only make sense if we could process more than the three ETLs we were able to process on the machines independently.

    Another option we've considered is completely re-architecting our SSIS package to have one "master" package for all clients that attempts to intelligently choose a server based on how "busy" it already is in terms of CPU/memory/disk utilization, but that would be a herculean effort, and it seems like we're trying to reinvent something that you would think someone would sell (although I haven't had any luck finding it).

    So in summary: are we missing an obvious solution for this, and does anyone know of any tools (free or for purchase, doesn't matter) that facilitate running multiple SSIS ETL packages in parallel and on multiple servers? (What I would call a "queue and node based" system, but that's not an official term.) Ultimately, VMware's Distributed Resource Scheduler addresses this: you simply run a consistent number of clients per VM that you know will never conflict schedule-wise, then leave it up to VMware to move the VMs around to balance out hardware usage. I'm definitely not against using VMware to do this, but since we're a 100% Microsoft app stack, it seems like -someone- out there would have solved this problem at the application layer instead of the hypervisor layer, by checking resource utilization at the OS, SQL, and SSAS levels.

    I'm open to ANY discussion on this, and remember no suggestion is too crazy or radical! :-) Right now, VMware is the only option we've found to get away from "manually" balancing our resources, so any suggestions that leave us on a pure Microsoft stack would be great.

    Thanks guys,
    Jeff

  • Can't send FB Notifications with new API

    - by Zak Khalique
    Really need some help with this one! I'm trying to send notifications from a canvas app using the new notifications API, but I keep getting the following exception:

        OAuthException: (#200) Only web canvas apps can send app notifications

    However, the app IS loaded in the Facebook canvas -- I'm making an ajax call to my server when the user takes a particular action, which triggers the notification POST request. The user has also authorized the app. This is the code I'm using:

        $graphUrl = $user_id . "/notifications";
        $params = array(
            "access_token" => $admintoken,
            "href"         => $link,
            "template"     => "string of text < 180 chars"
        );
        try {
            $result = $facebook->_graph($graphUrl, 'POST', $params);
        } catch (Exception $e) {
            echo $e;
        }
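
    A hedged sketch of two things worth checking: the error text complains the app isn't a web canvas app, so the app's settings should actually include a Canvas URL; and app notifications require an app access token (app_id|app_secret) rather than a user token. Using the standard PHP SDK's public api() call instead of an internal _graph helper:

        $app_token = $app_id . '|' . $app_secret;  // app access token
        $params = array(
            'access_token' => $app_token,
            'href'         => $link,
            'template'     => 'string of text < 180 chars',
        );
        $result = $facebook->api('/' . $user_id . '/notifications', 'POST', $params);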

  • Using JSON.NET for dynamic JSON parsing

    - by Rick Strahl
    With the release of ASP.NET Web API as part of .NET 4.5 and MVC 4.0, JSON.NET has effectively pushed out the .NET native serializers to become the default serializer for Web API. JSON.NET is vastly more flexible than the built-in DataContractJsonSerializer or the older JavaScript serializer. The DataContractSerializer in particular has been very problematic in the past because it can't deal with untyped objects for serialization - like values of type object, or anonymous types, which are quite common these days. The JavaScript serializer that came before it actually does support non-typed objects for serialization, but it can't do anything with untyped data coming in from JavaScript, and its overall model of extensibility was pretty limited (the JavaScript serializer is what MVC uses for JSON responses).

    JSON.NET provides a robust JSON serializer that has both high-level and low-level components, supports binary JSON, JSON contracts, XML to JSON conversion, LINQ to JSON, and many, many more features than either of the built-in serializers. ASP.NET Web API now uses JSON.NET as its default serializer, and it is now pulled in as a NuGet dependency into Web API projects, which is great.

    Dynamic JSON Parsing

    One of the features that I think is getting ever more important is the ability to serialize and deserialize arbitrary JSON content dynamically - that is, without mapping the JSON captured directly into a .NET type, as DataContractSerializer or the JavaScript serializers do. Sometimes it isn't possible to map types due to the differences in languages (think collections, dictionaries, etc.), and other times you simply don't have the structures in place - or don't want to create them - to actually import the data.

    If this topic sounds familiar - you're right! I wrote about dynamic JSON parsing a few months back, before JSON.NET was added to Web API and when Web API and the System.Net HttpClient libraries included the System.Json classes like JsonObject and JsonArray. With the inclusion of JSON.NET in Web API these classes are now obsolete and didn't ship with Web API or the client libraries. I re-linked my original post to this one. In this post I'll discuss JToken, JObject and JArray, which are the dynamic JSON objects that make it very easy to create and retrieve JSON content on the fly without underlying types.

    Why Dynamic JSON?

    So, why dynamic JSON parsing rather than strongly typed parsing? Since applications are interacting more and more with third-party services, it becomes ever more important to have easy access to those services with easy JSON parsing. Sometimes it just makes a lot of sense to pull just a small amount of data out of a large JSON document received from a service, because the third-party service isn't directly related to your application's logic most of the time - and it makes little sense to map the entire service structure in your application.

    For example, recently I worked with the Google Maps Places API to return information about businesses close to me (or rather the app's) location. The Google API returns a ton of information that my application had no interest in - all I needed was a few values out of the data. Dynamic JSON parsing makes it possible to map this data without having to map the entire API to a C# data structure. Instead I could pull out the three or four values I needed from the API and directly store them on my business entities that needed to receive the data - no need to map the entire Maps API structure.
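
    A small sketch of that "three or four values" pattern using JSON.NET's SelectToken (the response shape shown is hypothetical, not the actual Places schema):

        JObject places = JObject.Parse(json);
        string name   = (string)places.SelectToken("results[0].name");
        double rating = (double?)places.SelectToken("results[0].rating") ?? 0.0;
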
    Getting JSON.NET

    The easiest way to use JSON.NET is to grab it via NuGet and add it as a reference to your project. You can add it to your project from the Package Manager Console with:

        PM> Install-Package Newtonsoft.Json

    or by using Manage NuGet Packages in your project references. As mentioned, if you're using ASP.NET Web API or MVC 4, JSON.NET will be automatically added to your project. Alternately you can also go to the CodePlex site and download the latest version including source code: http://json.codeplex.com/

    Creating JSON on the fly with JObject and JArray

    Let's start with creating some JSON on the fly. It's super easy to create a dynamic object structure with any of the JToken-derived JSON.NET objects. The most common JToken-derived classes you are likely to use are JObject and JArray. JToken implements IDynamicMetaObjectProvider and so uses the dynamic keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs, using JObject for the base object and songs, and JArray for the actual collection of songs:

        [TestMethod]
        public void JObjectOutputTest()
        {
            // strongly typed instance
            var jsonObject = new JObject();

            // you can explicitly add values here using the class interface
            jsonObject.Add("Entered", DateTime.Now);

            // or cast to dynamic to dynamically add/read properties
            dynamic album = jsonObject;
            album.AlbumName = "Dirty Deeds Done Dirt Cheap";
            album.Artist = "AC/DC";
            album.YearReleased = 1976;

            album.Songs = new JArray() as dynamic;

            dynamic song = new JObject();
            song.SongName = "Dirty Deeds Done Dirt Cheap";
            song.SongLength = "4:11";
            album.Songs.Add(song);

            song = new JObject();
            song.SongName = "Love at First Feel";
            song.SongLength = "3:10";
            album.Songs.Add(song);

            Console.WriteLine(album.ToString());
        }

    This produces a complete JSON structure:

        {
          "Entered": "2012-08-18T13:26:37.7137482-10:00",
          "AlbumName": "Dirty Deeds Done Dirt Cheap",
          "Artist": "AC/DC",
          "YearReleased": 1976,
          "Songs": [
            {
              "SongName": "Dirty Deeds Done Dirt Cheap",
              "SongLength": "4:11"
            },
            {
              "SongName": "Love at First Feel",
              "SongLength": "3:10"
            }
          ]
        }

    Notice that JSON.NET does a nice job formatting the JSON, so it's easy to read and paste into blog posts :-). JSON.NET includes a bunch of configuration options that control how JSON is generated. Typically the defaults are just fine, but you can override them with the JsonSettings object for most operations.

    The important thing about this code is that there's no explicit type used for holding the values to serialize to JSON. Rather, the JSON.NET objects are the containers that receive the data as I build up my JSON structure dynamically, simply by adding properties. This means this code can be entirely driven at runtime, without compile-time constraints on the structure of the JSON output. Here I use JObject to create an album 'object' and immediately cast it to dynamic. JObject() is kind of similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JObject values are stored in pseudo-collections of key/value pairs that are exposed as properties through the IDynamicMetaObjectProvider interface exposed in JSON.NET's JToken base class.

    For objects, the syntax is very clean - you add simple typed values as properties. For objects and arrays, you have to explicitly create a new JObject or JArray, cast it to dynamic, and then add properties and items to it.
    Always remember, though, these values are dynamic - which means no IntelliSense and no compiler type checking. It's up to you to ensure that the names and values you create are accessed consistently and without typos in your code. Note that you can also access the JObject instance directly (not as dynamic) and get access to the underlying JObject type. This means you can assign properties by string, which can be useful for fully data-driven JSON generation from other structures. Below you can see both styles of access next to each other:

        // strongly typed instance
        var jsonObject = new JObject();

        // you can explicitly add values here
        jsonObject.Add("Entered", DateTime.Now);

        // expando-style instance: you can just 'use' properties
        dynamic album = jsonObject;
        album.AlbumName = "Dirty Deeds Done Dirt Cheap";

    JContainer (the base class for JObject and JArray) is a collection, so you can also iterate over the properties at runtime easily:

        foreach (var item in jsonObject)
        {
            Console.WriteLine(item.Key + " " + item.Value.ToString());
        }

    The functionality of the JSON objects is very similar to .NET's ExpandoObject, and if you have used that before, you're already familiar with how the dynamic interfaces to the JSON objects work.

    Importing JSON with JObject.Parse() and JArray.Parse()

    The JValue structure supports importing JSON via the Parse() and Load() methods, which can read JSON data from a string or various streams respectively. Essentially, JValue includes the core JSON parsing to turn a JSON string into a collection of JsonValue objects that can then be referenced using familiar dynamic object syntax. Here's a simple example:

        public void JValueParsingTest()
        {
            var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"",
                                ""Entered"":""2012-03-16T00:03:33.245-10:00""}";

            dynamic json = JValue.Parse(jsonString);

            // values require casting
            string name = json.Name;
            string company = json.Company;
            DateTime entered = json.Entered;

            Assert.AreEqual(name, "Rick");
            Assert.AreEqual(company, "West Wind");
        }

    The JSON string represents an object with three properties, which is parsed into a JObject class and cast to dynamic. Once cast to dynamic, I can then go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JToken, and I have to cast them to their appropriate types first before I can do type comparisons, as in the Asserts at the end of the test method. This is required because of the way dynamic types work, which can't determine the type based on the method signature of the Assert.AreEqual(object, object) method. I have to either assign the dynamic value to a variable, as I did above, or explicitly cast ( (string) json.Name ) in the actual method call. The JSON structure can be much more complex than this simple example.
Here's another example of an array of albums serialized to JSON and then parsed and walked with JArray.Parse():

[TestMethod]
public void JsonArrayParsingTest()
{
    var jsonString = @"[
    {
        ""Id"": ""b3ec4e5c"",
        ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"",
        ""Artist"": ""AC/DC"",
        ""YearReleased"": 1976,
        ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"",
        ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"",
        ""AmazonUrl"": ""http://www.amazon.com/gp/product/…ASIN=B00008BXJ4"",
        ""Songs"": [
            { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" },
            { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" },
            { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" }
        ]
    },
    {
        ""Id"": ""7b919432"",
        ""AlbumName"": ""End of the Silence"",
        ""Artist"": ""Henry Rollins Band"",
        ""YearReleased"": 1992,
        ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"",
        ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"",
        ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"",
        ""Songs"": [
            { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" },
            { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" }
        ]
    }
]";

    JArray jsonVal = JArray.Parse(jsonString);
    dynamic albums = jsonVal;

    foreach (dynamic album in albums)
    {
        Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")");
        foreach (dynamic song in album.Songs)
        {
            Console.WriteLine("\t" + song.SongName);
        }
    }

    Console.WriteLine(albums[0].AlbumName);
    Console.WriteLine(albums[0].Songs[1].SongName);
}

JObject and JArray in ASP.NET Web API

Of course these types also work in ASP.NET Web API controller methods. If you want, you can accept parameters using these objects, or return them from your methods. The following contrived example receives dynamic JSON input, and then creates a new dynamic JSON object and returns it based on data from the first:

[HttpPost]
public JObject PostAlbumJObject(JObject jAlbum)
{
    // dynamic input from inbound JSON
    dynamic album = jAlbum;

    // create a new JSON object to write out
    dynamic newAlbum = new JObject();

    // Create properties on the new instance
    // with values from the first
    newAlbum.AlbumName = album.AlbumName + " New";
    newAlbum.NewProperty = "something new";
    newAlbum.Songs = new JArray();

    foreach (dynamic song in album.Songs)
    {
        song.SongName = song.SongName + " New";
        newAlbum.Songs.Add(song);
    }

    return newAlbum;
}

The raw POST request to the server looks something like this:

POST http://localhost/aspnetwebapi/samples/PostAlbumJObject HTTP/1.1
User-Agent: Fiddler
Content-type: application/json
Host: localhost
Content-Length: 88

{AlbumName: "Dirty Deeds",Songs:[ { SongName: "Problem Child"},{ SongName: "Squealer"}]}

and the output that comes back looks like this:

{
  "AlbumName": "Dirty Deeds New",
  "NewProperty": "something new",
  "Songs": [
    {
      "SongName": "Problem Child New"
    },
    {
      "SongName": "Squealer New"
    }
  ]
}

The original values are echoed back with something extra appended to demonstrate that we're working with a new object.
When you receive or return a JObject, JValue, JToken or JArray instance in a Web API method, Web API skips normal content negotiation and assumes your content is going to be received and returned as JSON, so effectively the parameter and result type explicitly determine the input and output format, which is nice.

Dynamic to Strong Type Mapping

You can also map JObject and JArray instances to a strongly typed object, so you can mix dynamic and static typing in the same piece of code. Using the two-album jsonString shown earlier, the code below takes the array of albums, picks out a single album, and casts that album to a static Album instance:

[TestMethod]
public void JsonParseToStrongTypeTest()
{
    JArray albums = JArray.Parse(jsonString);

    // pick out one album
    JObject jalbum = albums[0] as JObject;

    // Copy to a static Album instance
    Album album = jalbum.ToObject<Album>();

    Assert.IsNotNull(album);
    Assert.AreEqual(album.AlbumName, jalbum.Value<string>("AlbumName"));
    Assert.IsTrue(album.Songs.Count > 0);
}

This is pretty damn useful for the scenario I mentioned earlier - you can read a large chunk of JSON, dynamically walk the property hierarchy down to the item you want to access, and then either access the specific item dynamically (as shown earlier) or map a part of the JSON to a strongly typed object. That's very powerful if you think about it - it leaves you in total control to decide what's dynamic and what's static.

Strongly typed JSON Parsing

With all this talk of dynamic, let's not forget that JSON.NET of course also does strongly typed serialization, which is drop-dead easy. Here's a simple example of how to serialize and deserialize an object with JSON.NET:

[TestMethod]
public void StronglyTypedSerializationTest()
{
    // Demonstrate deserialization from a raw string
    var album = new Album()
    {
        AlbumName = "Dirty Deeds Done Dirt Cheap",
        Artist = "AC/DC",
        Entered = DateTime.Now,
        YearReleased = 1976,
        Songs = new List<Song>()
        {
            new Song() { SongName = "Dirty Deeds Done Dirt Cheap", SongLength = "4:11" },
            new Song() { SongName = "Love at First Feel", SongLength = "3:10" }
        }
    };

    // serialize to string
    string json2 = JsonConvert.SerializeObject(album, Formatting.Indented);
    Console.WriteLine(json2);

    // make sure we can deserialize it back
    var album2 = JsonConvert.DeserializeObject<Album>(json2);

    Assert.IsNotNull(album2);
    Assert.IsTrue(album2.AlbumName == "Dirty Deeds Done Dirt Cheap");
    Assert.IsTrue(album2.Songs.Count == 2);
}

JsonConvert is a high-level static class that wraps lower-level functionality, but you can also use the JsonSerializer class, which allows you to serialize and parse to and from streams (a short stream-based sketch follows after the summary). It's a little more work, but gives you a bit more control. The functionality available is easy to discover with Intellisense, and that's good because there's not a lot in the way of documentation that's actually useful.

Summary

JSON.NET is a pretty complete JSON implementation with lots of different choices for JSON parsing, from dynamic parsing to static serialization, to complex querying of JSON objects using LINQ. It's good to see this open source library getting integrated into .NET, and pushing out the old and tired stock .NET parsers so that we finally have a bit more flexibility - and extensibility - in our JSON parsing. Good to go!
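As referenced above, here's a minimal sketch of the stream-based JsonSerializer route. This isn't from the original article; the target file path is an illustrative assumption:

// Sketch: serializing to a stream with JsonSerializer.
// The output path here is illustrative only.
var serializer = new JsonSerializer { Formatting = Formatting.Indented };
using (var sw = new StreamWriter(@"c:\temp\album.json"))
using (var writer = new JsonTextWriter(sw))
{
    serializer.Serialize(writer, album);
}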
Resources: Sample Test Project, http://json.codeplex.com/

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, Web Api, AJAX

    Read the article

  • What's the correct way to POST a compressed JSON string with RestSharp?

    - by Steve Dunn
I want to use RestSharp to POST something somewhere. I'm posting straight JSON (and not POCOs). Because I'm posting plain JSON, I believe I need to use this workaround instead of setting Body: request.AddParameter( "application/json", myJsonString, ParameterType.RequestBody); This works fine when I'm not compressing the JSON. When I do compress it, using this: request.Headers.Add("Content-Encoding", "gzip"); request.AddParameter( "application/json", GZipStream.CompressString(myJsonString), ParameterType.RequestBody); it doesn't work. I stepped through the code, and in RestClient::ConfigureHttp I see: http.RequestBody = body.Value.ToString(); Since I'm giving it a byte array, body.Value is set to "System.Byte[]". Is there a way for RestSharp to handle a gzipped JSON string in a POST request?
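As a sketch of the compression side - using the BCL's System.IO.Compression here as an assumed stand-in for the DotNetZip-style CompressString helper in the question - note that the result is a byte[], which is exactly why ToString() on the body yields "System.Byte[]" rather than the payload:

using System.IO;
using System.IO.Compression;
using System.Text;

// Sketch: gzip a JSON string into a byte array. Handing this byte[] to an
// API that expects a string body is what produces "System.Byte[]".
static byte[] CompressJson(string json)
{
    byte[] raw = Encoding.UTF8.GetBytes(json);
    using (var output = new MemoryStream())
    {
        using (var gzip = new GZipStream(output, CompressionMode.Compress))
        {
            gzip.Write(raw, 0, raw.Length);
        }
        return output.ToArray();
    }
}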

    Read the article

  • How to parse the XML file in RapidXML.

    - by user337891
I have to parse an XML file in C++. I was researching and found the RapidXML library for this. I have a doubt about doc.parse<0>(xml): can xml be a .xml file, or does it need to be a string or char*? If it can take only a string or char*, then I guess I need to read the whole file, store it in a char array, and pass a pointer to it to the function. Is there a way to use the file directly? I would also need to change the XML file inside the code. If that is not possible in RapidXML, then please suggest some other XML libraries in C++. Thanks!!! Ashd

    Read the article

  • Parallelism in .NET – Part 10, Cancellation in PLINQ and the Parallel class

    - by Reed
Many routines are parallelized because they are long-running processes.  When writing an algorithm that will run for a long period of time, it's typically good practice to allow that routine to be cancelled.  I previously discussed terminating a parallel loop from within, but have not demonstrated how a routine can be cancelled from the caller's perspective.  Cancellation in PLINQ and the Task Parallel Library is handled through a new, unified cooperative cancellation model introduced with .NET 4.0. Cancellation in .NET 4 is based around a new, lightweight struct called CancellationToken.  A CancellationToken is a small, thread-safe value type which is generated via a CancellationTokenSource.  There are many goals which led to this design.  For our purposes, we will focus on a couple of specific design decisions: Cancellation is cooperative.  A calling method can request a cancellation, but it's up to the processing routine to terminate - it is not forced. Cancellation is consistent.  A single method call requests a cancellation on every copied CancellationToken in the routine. Let's begin by looking at how we can cancel a PLINQ query.  Suppose we wanted to provide the option to cancel our query from Part 6:

double min = collection
    .AsParallel()
    .Min(item => item.PerformComputation());

We would rewrite this to allow for cancellation by adding a call to ParallelEnumerable.WithCancellation as follows:

var cts = new CancellationTokenSource();

// Pass cts here to a routine that could,
// in parallel, request a cancellation
try
{
    double min = collection
        .AsParallel()
        .WithCancellation(cts.Token)
        .Min(item => item.PerformComputation());
}
catch (OperationCanceledException e)
{
    // Query was cancelled before it finished
}

Here, if the user calls cts.Cancel() before the PLINQ query completes, the query will stop processing, and an OperationCanceledException will be raised.  Be aware, however, that cancellation will not be instantaneous.  When cts.Cancel() is called, the query will only stop after the current item.PerformComputation() elements all finish processing.  cts.Cancel() will prevent PLINQ from scheduling a new task for a new element, but will not stop items which are currently being processed.
This goes back to the first goal I mentioned – Cancellation is cooperative.  Here, we’re requesting the cancellation, but it’s up to PLINQ to terminate. If we wanted to allow cancellation to occur within our routine, we would need to change our routine to accept a CancellationToken, and modify it to handle this specific case: public void PerformComputation(CancellationToken token) { for (int i=0; i<this.iterations; ++i) { // Add a check to see if we've been canceled // If a cancel was requested, we'll throw here token.ThrowIfCancellationRequested(); // Do our processing now this.RunIteration(i); } } With this overload of PerformComputation, each internal iteration checks to see if a cancellation request was made, and will throw an OperationCanceledException at that point, instead of waiting until the method returns.  This is good, since it allows us, as developers, to plan for cancellation, and terminate our routine in a clean, safe state. This is handled by changing our PLINQ query to: try { double min = collection .AsParallel() .WithCancellation(cts.Token) .Min(item => item.PerformComputation(cts.Token)); } catch (OperationCanceledException e) { // Query was cancelled before it finished } PLINQ is very good about handling this exception, as well.  There is a very good chance that multiple items will raise this exception, since the entire purpose of PLINQ is to have multiple items be processed concurrently.  PLINQ will take all of the OperationCanceledException instances raised within these methods, and merge them into a single OperationCanceledException in the call stack.  This is done internally because we added the call to ParallelEnumerable.WithCancellation. If, however, a different exception is raised by any of the elements, the OperationCanceledException as well as the other Exception will be merged into a single AggregateException. The Task Parallel Library uses the same cancellation model, as well.  Here, we supply our CancellationToken as part of the configuration.  The ParallelOptions class contains a property for the CancellationToken.  This allows us to cancel a Parallel.For or Parallel.ForEach routine in a very similar manner to our PLINQ query.  As an example, we could rewrite our Parallel.ForEach loop from Part 2 to support cancellation by changing it to: try { var cts = new CancellationTokenSource(); var options = new ParallelOptions() { CancellationToken = cts.Token }; Parallel.ForEach(customers, options, customer => { // Run some process that takes some time... DateTime lastContact = theStore.GetLastContact(customer); TimeSpan timeSinceContact = DateTime.Now - lastContact; // Check for cancellation here options.CancellationToken.ThrowIfCancellationRequested(); // If it's been more than two weeks, send an email, and update... if (timeSinceContact.Days > 14) { theStore.EmailCustomer(customer); customer.LastEmailContact = DateTime.Now; } }); } catch (OperationCanceledException e) { // The loop was cancelled } Notice that here we use the same approach taken in PLINQ.  The Task Parallel Library will automatically handle our cancellation in the same manner as PLINQ, providing a clean, unified model for cancellation of any parallel routine.  The TPL performs the same aggregation of the cancellation exceptions as PLINQ, as well, which is why a single exception handler for OperationCanceledException will cleanly handle this scenario.  This works because we’re using the same CancellationToken provided in the ParallelOptions.  
If a different exception was thrown by one thread, or a CancellationToken from a different CancellationTokenSource was used to raise our exception, we would instead receive all of our individual exceptions merged into one AggregateException.
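To round out the caller's side, here's a small sketch - not from the original article, and the delay is an arbitrary illustration - of requesting cancellation from another thread while the parallel work runs (System.Threading and System.Threading.Tasks namespaces assumed):

// Sketch: request cancellation from a second thread after an arbitrary delay.
var cts = new CancellationTokenSource();
Task.Factory.StartNew(() =>
{
    Thread.Sleep(1000); // arbitrary wait, purely for illustration
    cts.Cancel();       // cooperatively requests cancellation of the work
});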

    Read the article

  • AJAX Return Problem from data sent via jQuery.ajax

    - by Anthony Garand
I am trying to receive a json object back from php after sending data to the php file from the js file. All I get is undefined. Here are the contents of the php and js file.

data.php

<?php
$action = $_GET['user'];

$data = array(
    "first_name" => "Anthony",
    "last_name" => "Garand",
    "email" => "[email protected]",
    "password" => "changeme");

switch ($action) {
    case '[email protected]':
        echo $_GET['callback'] . '(' . json_encode($data) . ');';
        break;
}
?>

core.js

$(document).ready(function(){
    $.ajax({
        url: "data.php",
        data: {"user":"[email protected]"},
        context: document.body,
        data: "jsonp",
        success: function(data){renderData(data);}
    });
});

function renderData(data) {
    document.write(data.first_name);
}

    Read the article

  • a php script to get lat/long from google maps using CellID

    - by user296516
    Hi guys, I have found this script that supposedly connects to google maps and gets lat/long coordinates, based on CID/LAC/MNC/MCC info. But I can't really make it work... where do I input CID/LAC/MNC/MCC ? <?php $data = "\x00\x0e". // Function Code? "\x00\x00\x00\x00\x00\x00\x00\x00". //Session ID? "\x00\x00". // Contry Code "\x00\x00". // Client descriptor "\x00\x00". // Version "\x1b". // Op Code? "\x00\x00\x00\x00". // MNC "\x00\x00\x00\x00". // MCC "\x00\x00\x00\x03". "\x00\x00". "\x00\x00\x00\x00". //CID "\x00\x00\x00\x00". //LAC "\x00\x00\x00\x00". //MNC "\x00\x00\x00\x00". //MCC "\xff\xff\xff\xff". // ?? "\x00\x00\x00\x00" // Rx Level? ; if ($_REQUEST["myl"] != "") { $temp = split(":", $_REQUEST["myl"]); $mcc = substr("00000000".dechex($temp[0]),-8); $mnc = substr("00000000".dechex($temp[1]),-8); $lac = substr("00000000".dechex($temp[2]),-8); $cid = substr("00000000".dechex($temp[3]),-8); } else { $mcc = substr("00000000".$_REQUEST["mcc"],-8); $mnc = substr("00000000".$_REQUEST["mnc"],-8); $lac = substr("00000000".$_REQUEST["lac"],-8); $cid = substr("00000000".$_REQUEST["cid"],-8); } $init_pos = strlen($data); $data[$init_pos - 38]= pack("H*",substr($mnc,0,2)); $data[$init_pos - 37]= pack("H*",substr($mnc,2,2)); $data[$init_pos - 36]= pack("H*",substr($mnc,4,2)); $data[$init_pos - 35]= pack("H*",substr($mnc,6,2)); $data[$init_pos - 34]= pack("H*",substr($mcc,0,2)); $data[$init_pos - 33]= pack("H*",substr($mcc,2,2)); $data[$init_pos - 32]= pack("H*",substr($mcc,4,2)); $data[$init_pos - 31]= pack("H*",substr($mcc,6,2)); $data[$init_pos - 24]= pack("H*",substr($cid,0,2)); $data[$init_pos - 23]= pack("H*",substr($cid,2,2)); $data[$init_pos - 22]= pack("H*",substr($cid,4,2)); $data[$init_pos - 21]= pack("H*",substr($cid,6,2)); $data[$init_pos - 20]= pack("H*",substr($lac,0,2)); $data[$init_pos - 19]= pack("H*",substr($lac,2,2)); $data[$init_pos - 18]= pack("H*",substr($lac,4,2)); $data[$init_pos - 17]= pack("H*",substr($lac,6,2)); $data[$init_pos - 16]= pack("H*",substr($mnc,0,2)); $data[$init_pos - 15]= pack("H*",substr($mnc,2,2)); $data[$init_pos - 14]= pack("H*",substr($mnc,4,2)); $data[$init_pos - 13]= pack("H*",substr($mnc,6,2)); $data[$init_pos - 12]= pack("H*",substr($mcc,0,2)); $data[$init_pos - 11]= pack("H*",substr($mcc,2,2)); $data[$init_pos - 10]= pack("H*",substr($mcc,4,2)); $data[$init_pos - 9]= pack("H*",substr($mcc,6,2)); if ((hexdec($cid) > 0xffff) && ($mcc != "00000000") && ($mnc != "00000000")) { $data[$init_pos - 27] = chr(5); } else { $data[$init_pos - 24]= chr(0); $data[$init_pos - 23]= chr(0); } $context = array ( 'http' => array ( 'method' => 'POST', 'header'=> "Content-type: application/binary\r\n" . "Content-Length: " . strlen($data) . "\r\n", 'content' => $data ) ); $xcontext = stream_context_create($context); $str=file_get_contents("http://www.google.com/glm/mmap",FALSE,$xcontext); if (strlen($str) > 10) { $lat_tmp = unpack("l",$str[10].$str[9].$str[8].$str[7]); $lat = $lat_tmp[1]/1000000; $lon_tmp = unpack("l",$str[14].$str[13].$str[12].$str[11]); $lon = $lon_tmp[1]/1000000; echo "Lat=$lat <br> Lon=$lon"; } else { echo "Not found!"; } ?> also I found this one http://code.google.com/intl/ru/apis/gears/geolocation_network_protocol.html , but it isn't php.

    Read the article

  • Workspace.PendEdit not checking out files

    - by MasterMax1313
I'm using the TFS 2010 SDK to programmatically check in edits to files in TFS 2010. The documentation on the TFS 2010 SDK is sparse at best. When I call Workspace.PendEdit(), passing in an array of files I want to mark as having a pending edit, nothing is actually checked out. So when I call Workspace.CheckIn(), passing in Workspace.GetPendingChanges() and some comments, I get an exception saying there must be at least one thing with a pending change (which should be what I passed into PendEdit). Any thoughts on why the app isn't marking the files as having a pending edit in the workspace?
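Not from the question, but as a hedged sketch of the usual sequence with the version-control client API - the paths are hypothetical, and the key point is that PendEdit returns the number of items it actually pended, which is worth checking before the check-in:

// Sketch (hypothetical paths): verify that PendEdit actually pended something.
string[] files = { @"C:\ws\project\File1.cs", @"C:\ws\project\File2.cs" };

int pended = workspace.PendEdit(files); // number of items actually pended
if (pended == 0)
{
    // Common causes: the paths aren't mapped in this workspace, or the
    // local items were never downloaded (no Get), so TFS doesn't know them.
    throw new InvalidOperationException("No files were pended for edit.");
}

PendingChange[] changes = workspace.GetPendingChanges();
workspace.CheckIn(changes, "Automated edit");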

    Read the article

  • Creating a constant Dictionary in C#

    - by David Schmitt
What is the most efficient way to create a constant (never changes at runtime) mapping of strings to ints? I've tried using a const Dictionary, but that didn't work out. I could implement an immutable wrapper with appropriate semantics, but that still doesn't seem totally right. For those who have asked, I'm implementing IDataErrorInfo in a generated class and am looking for a way to make the columnName lookup into my array of descriptors. I wasn't aware (typo when testing! d'oh!) that switch accepts strings, so that's what I'm gonna use. Thanks!
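For reference, a hedged sketch of both patterns the question converges on - the names below are illustrative. A static readonly dictionary isn't enforced as immutable, but behaves as a constant if nothing mutates it after the type initializer runs, while switch cases are genuine compile-time constants:

// Sketch: "constant" string-to-int mapping (illustrative names).
private static readonly Dictionary<string, int> ColumnIndexes =
    new Dictionary<string, int>
    {
        { "FirstName", 0 },
        { "LastName",  1 },
        { "Email",     2 }
    };

// Alternative the asker settled on: switch over the string.
private static int GetColumnIndex(string columnName)
{
    switch (columnName)
    {
        case "FirstName": return 0;
        case "LastName":  return 1;
        case "Email":     return 2;
        default: throw new ArgumentException("Unknown column: " + columnName);
    }
}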

    Read the article

  • Parallelism in .NET – Part 6, Declarative Data Parallelism

    - by Reed
When working with a problem that can be decomposed by data, we have a collection, and some operation being performed upon the collection.  I've demonstrated how this can be parallelized using the Task Parallel Library and imperative programming using imperative data parallelism via the Parallel class.  While this provides a huge step forward in terms of power and capabilities, in many cases special care must still be given for relatively common scenarios. C# 3.0 and Visual Basic 9.0 introduced a new, declarative programming model to .NET via the LINQ Project.  When working with collections, we can now write software that describes what we want to occur without having to explicitly state how the program should accomplish the task.  By taking advantage of LINQ, many operations become much shorter, more elegant, and easier to understand and maintain.  Version 4.0 of the .NET framework extends this concept into the parallel computation space by introducing Parallel LINQ. Before we delve into PLINQ, let's begin with a short discussion of LINQ.  LINQ, the extensions to the .NET Framework which implement language-integrated query, set, and transform operations, is implemented in many flavors.  For our purposes, we are interested in LINQ to Objects.  When dealing with parallelizing a routine, we typically are dealing with in-memory data storage.  More data-access-oriented LINQ variants, such as LINQ to SQL and LINQ to Entities in the Entity Framework, fall outside of our concern, since the parallelism there is the concern of the database engine processing the query itself. LINQ (LINQ to Objects in particular) works by implementing a series of extension methods, most of which work on IEnumerable<T>.  The language enhancements use these extension methods to create a very concise, readable alternative to the traditional foreach statement.  For example, let's revisit our minimum aggregation routine we wrote in Part 4:

double min = double.MaxValue;

foreach(var item in collection)
{
    double value = item.PerformComputation();
    min = System.Math.Min(min, value);
}

Here, we're doing a very simple computation, but writing this in an imperative style.  This can be loosely translated to English as:

Create a very large number, and save it in min.
Loop through each item in the collection. For every item:
    Perform some computation, and save the result.
    If the computation is less than min, set min to the computation.

Although this is fairly easy to follow, it's quite a few lines of code, and it requires us to read through the code, step by step, line by line, in order to understand the intention of the developer. We can rework this same statement, using LINQ:

double min = collection.Min(item => item.PerformComputation());

Here, we're after the same information.  However, this is written using a declarative programming style.
When we see this code, we'd naturally translate this to English as:

Save the Min value of collection, determined via calling item.PerformComputation().

That's it - instead of multiple logical steps, we have one single, declarative request.  This makes the developer's intentions very clear, and very easy to follow.  The system is free to implement this using whatever method is required. Parallel LINQ (PLINQ) extends LINQ to Objects to support parallel operations.  This is a perfect fit in many cases when you have a problem that can be decomposed by data.  To show this, let's again refer to our minimum aggregation routine from Part 4, but this time, let's review our final, parallelized version:

// Safe, and fast!
double min = double.MaxValue;

// Make a "lock" object
object syncObject = new object();

Parallel.ForEach(
    collection,
    // First, we provide a local state initialization delegate.
    () => double.MaxValue,
    // Next, we supply the body, which takes the original item, loop state,
    // and local state, and returns a new local state
    (item, loopState, localState) =>
    {
        double value = item.PerformComputation();
        return System.Math.Min(localState, value);
    },
    // Finally, we provide an Action<TLocal>, to "merge" results together
    localState =>
    {
        // This requires locking, but it's only once per used thread
        lock(syncObject)
            min = System.Math.Min(min, localState);
    }
);

Here, we're doing the same computation as above, but fully parallelized.  Describing this in English becomes quite a feat:

Create a very large number, and save it in min.
Create a temporary object we can use for locking.
Call Parallel.ForEach, specifying three delegates.
    For the first delegate: initialize a local variable to hold the local state to a very large number.
    For the second delegate: for each item in the collection, perform some computation and save the result; if the result is less than our local state, save the result in local state.
    For the final delegate: take a lock on our temporary object to protect our min variable, and save the min of our min and local state variables.

Although this solves our problem, and does it in a very efficient way, we've created a set of code that is quite a bit more difficult to understand and maintain. PLINQ provides us with a very nice alternative.  In order to use PLINQ, we need to learn one new extension method that works on IEnumerable<T> - ParallelEnumerable.AsParallel(). That's all we need to learn in order to use PLINQ: one single method.  We can write our minimum aggregation in PLINQ very simply:

double min = collection.AsParallel().Min(item => item.PerformComputation());

By simply adding ".AsParallel()" to our LINQ to Objects query, we converted this to using PLINQ and running this computation in parallel!  This can be loosely translated into English easily, as well:

Process the collection in parallel.
Get the Minimum value, determined by calling PerformComputation on each item.

Here, our intention is very clear and easy to understand.  We just want to perform the same operation we did in serial, but run it "as parallel".  PLINQ completely extends LINQ to Objects: the entire functionality of LINQ to Objects is available.  By simply adding a call to AsParallel(), we can specify that a collection should be processed in parallel.  This is simple, safe, and incredibly useful.
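To underline that last point, here's a tiny sketch - IsValid is a hypothetical member added for illustration, while PerformComputation comes from the examples above - showing an ordinary operator chain parallelized with nothing more than AsParallel():

// Sketch: the full LINQ-to-Objects surface composes once AsParallel() is added.
// (IsValid is illustrative; PerformComputation is from the examples above.)
var results = collection
    .AsParallel()
    .Where(item => item.IsValid)
    .Select(item => item.PerformComputation())
    .ToList();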

    Read the article

  • Fancy dynamic list in Android: TableLayout vs ListView

    - by Ralkie
There is a requirement to have a not-so-trivial dynamic list, each record of which consists of several columns (texts, buttons). It should look something like: Text11 Text12 Button1 Button2 Text21 Text22 Button1 Button2 ... At first the obvious way to accomplish that seemed to be TableLayout. I was expecting to have layout/styling data specified in res/layout/*.xml and to populate it with some dataset from Java code (as with ListView, for which it's possible to specify the TextView of an item in *.xml and bind it to some array using ArrayAdapter). But after playing for a while, all I found to be possible is fully populating TableLayout programmatically. Still, creating TableRow by TableRow and setting layout attributes directly in Java code doesn't seem elegant enough. So the questions are: Am I on the right path? Is TableLayout really the best View to accomplish that? Maybe it's more appropriate to extend ListView or something else to meet such requirements? Is it possible to have a TableLayout/TableRow template specified in *.xml and data bound to this template on the Java side?

    Read the article

< Previous Page | 496 497 498 499 500 501 502 503 504 505 506 507  | Next Page >