Search Results


  • Code golf - hex to (raw) binary conversion

    - by Alnitak
    In response to this question asking about hex to (raw) binary conversion, a comment suggested that it could be solved in "5-10 lines of C, or any other language." I'm sure that for (some) scripting languages that could be achieved, and I would like to see how. Can we prove that comment true for C, too?

    NB: this doesn't mean hex to ASCII binary; specifically, the output should be a raw octet stream corresponding to the input ASCII hex. Also, the input parser should skip/ignore whitespace.

    Edit (by Brian Campbell): May I propose the following rules, for consistency? Feel free to edit or delete these if you don't think they are helpful, but I think that since there has been some discussion of how certain cases should work, some clarification would be helpful.

    1. The program must read from stdin and write to stdout. (We could also allow reading from and writing to files passed on the command line, but I can't imagine that would be shorter in any language than stdin and stdout.)
    2. The program must use only packages included with your base, standard language distribution. In the case of C/C++, this means their respective standard libraries, and not POSIX.
    3. The program must compile or run without any special options passed to the compiler or interpreter (so 'gcc myprog.c' or 'python myprog.py' or 'ruby myprog.rb' are OK, while 'ruby -rscanf myprog.rb' is not allowed; requiring/importing modules counts against your character count).
    4. The program should read integer bytes represented by pairs of adjacent hexadecimal digits (upper, lower, or mixed case), optionally separated by whitespace, and write the corresponding bytes to output. Each pair of hexadecimal digits is written most significant nibble first.
    5. The behavior of the program on invalid input (characters besides [0-9a-fA-F \t\r\n], spaces separating the two characters of an individual byte, an odd number of hex digits in the input) is undefined; any behavior (other than actively damaging the user's computer or something) on bad input is acceptable. Throwing an error, stopping output, ignoring bad characters, and treating a single character as the value of one byte are all OK.
    6. The program may write no additional bytes to output.
    7. Code is scored by fewest total bytes in the source file. (Or, if we wanted to be more true to the original challenge, the score would be based on the lowest number of lines of code; I would impose an 80-character limit per line in that case, since otherwise you'd get a bunch of ties for one line.)
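
    To make the "scripting languages" half of the claim concrete, here is a minimal Python 3 sketch (not an entry for the C challenge, and rule 7's byte count is ignored): read all of stdin, strip every whitespace character, and decode adjacent hex-digit pairs to raw octets.

        import sys

        # Join the non-whitespace runs, then decode hex pairs to raw bytes on stdout.
        text = "".join(sys.stdin.read().split())
        sys.stdout.buffer.write(bytes.fromhex(text))

    On invalid input, bytes.fromhex raises ValueError, which rule 5 permits.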

    Read the article

  • Adding a link to an image breaks jQuery slider, help please!

    - by lewisqic
    Hi All, I am working on a site for a friend and trying to add a link to each image in a slide show. The slide show uses jQuery and a script called easySlider1.7.js. Everything works just fine if the images are left alone, but when I try to wrap the images with a link, the slides no longer progress as they should. Here is the live website that is displaying the problem: http://www.splits59.com/

    Here is the HTML code for each of the slide images:

        <div class="covercontent2box">
          <div class="covercontent2">
            <div id="slideshow">
              <a href="/ashton-jacket-p-107.html"><img src="img/homepagesummer2010_2.2-cut.jpg" class="active" /></a>
              <a href="/carly-coverup-p-106.html"><img src="img/homepagesummer2010_3.2-cut.jpg" /></a>
              <a href="/shop-online-bottoms-c-15_14.html"><img src="img/homepagesummer2010_1.2-cut.jpg" /></a>
            </div>
          </div>
        </div>

    And here is the jQuery code on the page that controls the slider:

        <script type="text/javascript">
        function slideSwitch() {
            var $active = $('#slideshow IMG.active');
            if ( $active.length == 0 ) $active = $('#slideshow IMG:last');
            // use this to pull the images in the order they appear in the markup
            var $next = $active.next().length ? $active.next() : $('#slideshow IMG:first');
            $active.addClass('last-active');
            $next.css({opacity: 0.0})
                 .addClass('active')
                 .animate({opacity: 1.0}, 1000, function() {
                     $active.removeClass('active last-active');
                 });
        }
        $(function() {
            setInterval( "slideSwitch()", 4500 );
        });
        </script>

    Does anyone have any idea why adding a link tag to the images breaks the slider? Any help or direction provided would be appreciated. Thanks, Devin

    Read the article

  • dojo layout tutorial for version 1.7 doesn't work for 1.7.2

    - by Sheena
    This is sort of a continuation of dojo1.7 layout acting screwy. I made some working widgets and tested them out, then tried altering my work using the tutorial at http://dojotoolkit.org/documentation/tutorials/1.7/dijit_layout/ to make the layout nice. After failing at that in many interesting ways (thus my last question), I started on a new path. My plan is now to implement the layout tutorial example and then stick in my widgets. For some reason even following the tutorial won't work: everything loads, then disappears, and I'm left with a blank browser window. Any ideas? It just struck me that it could be a browser compatibility issue; I'm working on Firefox 13.0.1, and as far as I know Dojo is supposed to be compatible with it. Anyway, have some code:

    HTML:

        <body class="claro">
            <div id="appLayout" class="demoLayout" data-dojo-type="dijit.layout.BorderContainer" data-dojo-props="design: 'headline'">
                <div class="centerPanel" data-dojo-type="dijit.layout.ContentPane" data-dojo-props="region: 'center'">
                    <div>
                        <h4>Group 1 Content</h4>
                        <p>stuff</p>
                    </div>
                    <div>
                        <h4>Group 2 Content</h4>
                    </div>
                    <div>
                        <h4>Group 3 Content</h4>
                    </div>
                </div>
                <div class="edgePanel" data-dojo-type="dijit.layout.ContentPane" data-dojo-props="region: 'top'">
                    Header content (top)
                </div>
                <div id="leftCol" class="edgePanel" data-dojo-type="dijit.layout.ContentPane" data-dojo-props="region: 'left', splitter: true">
                    Sidebar content (left)
                </div>
            </div>
        </body>

    Dojo configuration:

        var dojoConfig = {
            baseUrl: "${request.static_url('mega:static/js')}", // this is in a mako template
            tlmSiblingOfDojo: false,
            packages: [
                { name: "dojo", location: "libs/dojo" },
                { name: "dijit", location: "libs/dijit" },
                { name: "dojox", location: "libs/dojox" },
            ],
            parseOnLoad: true,
            has: {
                "dojo-firebug": true,
                "dojo-debug-messages": true
            },
            async: true
        };

    Other JS:

        require(["dijit/layout/BorderContainer",
                 "dijit/layout/TabContainer",
                 "dijit/layout/ContentPane",
                 "dojo/parser"]);

    CSS:

        html, body {
            height: 100%;
            margin: 0;
            overflow: hidden;
            padding: 0;
        }

        #appLayout {
            height: 100%;
        }

        #leftCol {
            width: 14em;
        }

    Read the article

  • Memory allocation and release for UIImage on iPhone?

    - by rkbang
    Hello all, I am using the following code on the iPhone to get a smaller, cropped image:

        - (UIImage*)getSmallImage:(UIImage*)img {
            CGSize size = img.size;
            CGFloat ratio = 0;
            if (size.width < size.height) {
                ratio = 36 / size.width;
            } else {
                ratio = 36 / size.height;
            }
            CGRect rect = CGRectMake(0.0, 0.0, ratio * size.width, ratio * size.height);
            UIGraphicsBeginImageContext(rect.size);
            [img drawInRect:rect];
            UIImage *tempImg = [UIGraphicsGetImageFromCurrentImageContext() retain];
            UIGraphicsEndImageContext();
            return [tempImg autorelease];
        }

        - (UIImage*)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect {
            // create a context to do our clipping in
            UIGraphicsBeginImageContext(rect.size);
            CGContextRef currentContext = UIGraphicsGetCurrentContext();

            // create a rect with the size we want to crop the image to;
            // the X and Y here are zero so we start at the beginning of our
            // newly created context
            CGFloat X = (imageToCrop.size.width - rect.size.width) / 2;
            CGFloat Y = (imageToCrop.size.height - rect.size.height) / 2;
            CGRect clippedRect = CGRectMake(X, Y, rect.size.width, rect.size.height);
            //CGContextClipToRect(currentContext, clippedRect);

            // create a rect equivalent to the full size of the image;
            // offset the rect by the X and Y we want to start the crop
            // from in order to cut off anything before them
            CGRect drawRect = CGRectMake(0, 0, imageToCrop.size.width, imageToCrop.size.height);
            CGContextTranslateCTM(currentContext, 0.0, drawRect.size.height);
            CGContextScaleCTM(currentContext, 1.0, -1.0);

            // draw the image to our clipped context using our offset rect
            //CGContextDrawImage(currentContext, drawRect, imageToCrop.CGImage);
            CGImageRef tmp = CGImageCreateWithImageInRect(imageToCrop.CGImage, clippedRect);

            // pull the image from our cropped context
            UIImage *cropped = [UIImage imageWithCGImage:tmp]; //UIGraphicsGetImageFromCurrentImageContext();
            CGImageRelease(tmp);

            // pop the context to get back to the default
            UIGraphicsEndImageContext();

            // Note: this is autoreleased
            return cropped;
        }

    I am using the following line of code in cellForRowAtIndexPath to update the image of the cell:

        cell.img.image = [self imageByCropping:[self getSmallImage:[UIImage imageNamed:@"goal_image.png"]] toRect:CGRectMake(0, 0, 36, 36)];

    Now, when I add this table view and pop it from the navigation controller, I see a memory hike. I see no leaks, but memory keeps climbing. Please note that the image changes for each row, and I am creating the controller using lazy initialization; that is, I create/alloc it whenever I need it. I have seen many people on the internet facing the same issue, but very few good solutions. I have multiple views using the same approach, and I see memory rise to almost 4MB within 20-25 view transitions. What is a good solution to resolve this issue? tnx.

    Read the article

  • Need help accessing PHP DOM elements

    - by Michael Pasqualone
    Hey guys, I have the following HTML structure that I am trying to pull information from:

        // Product 1
        <div class="productName">
            <span id="product-name-1">Product Name 1</span>
        </div>
        <div class="productDetail">
            <span class="warehouse">Warehouse 1, ACT</span>
            <span class="quantityInStock">25</span>
        </div>

        // Product 2
        <div class="productName">
            <span id="product-name-2">Product Name 2</span>
        </div>
        <div class="productDetail">
            <span class="warehouse">Warehouse 2, ACT</span>
            <span class="quantityInStock">25</span>
        </div>

        …

        // Product X
        <div class="productName">
            <span id="product-name-X">Product Name X</span>
        </div>
        <div class="productDetail">
            <span class="warehouse">Warehouse X, ACT</span>
            <span class="quantityInStock">25</span>
        </div>

    I don't have control of the source HTML, and as you'll see, productName and its accompanying productDetail are not contained within a common element. Now, I am using the following PHP code to try to parse the page:

        $html = new DOMDocument();
        $html->loadHtmlFile('product_test.html');
        $xPath = new DOMXPath($html);
        $domQuery = '//div[@class="productName"]|//div[@class="productDetail"]';
        $entries = $xPath->query($domQuery);
        foreach ($entries as $entry) {
            echo "Detail: " . $entry->nodeValue . "<br />\n";
        }

    Which prints the following:

        Detail: Product Name 1
        Detail: Warehouse 1, ACT
        Detail: 25
        Detail: Product Name 2
        Detail: Warehouse 2, ACT
        Detail: 25
        Detail: Product Name X
        Detail: Warehouse X, ACT
        Detail: 25

    Now, this is close to what I want, but I need to do some processing on each product, warehouse, and quantity in stock, and can't figure out how to parse it out into separate product groups. The final output I am after is something like:

        Product 1:
            Name: Product Name 1
            Warehouse: Warehouse 1, ACT
            Stock: 25

        Product 2:
            Name: Product Name 2
            Warehouse: Warehouse 2, ACT
            Stock: 25

    I just can't figure it out, and I can't wrap my head around this DOM stuff, as the elements don't quite work the same as a standard array. If anyone can assist, or point me in the right direction, I will be ever appreciative.
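
    One way out is to stop flattening the two node sets into a single query: select the productName and productDetail divs separately and pair them by position, since the markup guarantees each name div is immediately followed by its detail div. In the asker's PHP this would be two DOMXPath->query() calls walked in parallel; it is sketched here in Python with lxml purely to illustrate the pairing (file name as in the question):

        from lxml import html

        tree = html.parse('product_test.html')
        names = tree.xpath('//div[@class="productName"]')
        details = tree.xpath('//div[@class="productDetail"]')

        # The two node lists line up index-for-index, so zip them into groups.
        for name_div, detail_div in zip(names, details):
            print({
                'name': name_div.text_content().strip(),
                'warehouse': detail_div.xpath('.//span[@class="warehouse"]')[0].text_content(),
                'stock': detail_div.xpath('.//span[@class="quantityInStock"]')[0].text_content(),
            })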

    Read the article

  • Cepstral Analysis for pitch detection

    - by Ohmu
    Hi! I'm looking to extract pitches from a sound signal. Someone on IRC just explained to me how taking a double FFT achieves this. Specifically:

    1. take the FFT
    2. take the log of the square of the absolute value (can be done with a lookup table)
    3. take another FFT
    4. take the absolute value

    I am attempting this using vDSP. I can't understand how I didn't come across this technique earlier; I did a lot of hunting and asking questions, several weeks' worth. More to the point, I can't understand why I didn't think of it.

    It looks as though vDSP has functions to handle all of these tasks. However, I'm wondering about the accuracy of the final result. I have previously used a technique which scours the frequency bins of a single FFT for local maxima. When it encounters one, it uses a cunning technique (the change in phase since the last FFT) to more accurately place the actual peak within the bin. I am worried that this precision will be lost with the technique I'm presenting here. I guess the technique could be used after the second FFT to get the fundamental accurately, but it kind of looks like the information is lost in step 2.

    As this is a potentially tricky process, could someone with some experience just look over what I'm doing and check it for sanity? Also, I've heard there is an alternative technique involving fitting a quadratic over neighbouring bins. Is it of comparable accuracy? If so, I would favour it, as it doesn't involve remembering bin phases.

    So, questions:

    - Does this approach make sense? Can it be improved?
    - I'm a bit worried about the log-square component; there seems to be a vDSP function to do exactly that, vDSP_vdbcon. However, there is no indication that it precalculates a log table; I assume it doesn't, as the FFT function requires an explicit pre-calculation function to be called and passed into it, and this function doesn't.
    - Is there some danger of harmonics being picked up?
    - Is there any cunning way of making vDSP pull out the maxima, biggest first?
    - Can anyone point me towards some research or literature on this technique?
    - The main question: is it accurate enough? Can the accuracy be improved? I have just been told by an expert that the accuracy IS INDEED not sufficient. Is this the end of the line?

    Pi

    PS I get SO annoyed (npi) when I want to create tags, but cannot. :| I have suggested to the maintainers that SO keep track of attempted tags, but I'm sure I was ignored. We need tags for vDSP, accelerate framework, cepstral analysis.
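
    For a sanity check outside vDSP, the four steps above (the classic real-cepstrum pitch tracker) are easy to prototype. A minimal numpy sketch follows; the Hann window, 2048-sample frame, and 50-400 Hz search range are my assumptions, not vDSP specifics. Note that the returned pitch is quantized to fs/q for integer quefrency q, which is exactly the precision concern raised above; fitting a quadratic over ceps[q-1], ceps[q], ceps[q+1] is the usual refinement.

        import numpy as np

        def cepstral_pitch(frame, fs, fmin=50.0, fmax=400.0):
            # 1. FFT of the windowed frame
            spec = np.fft.fft(frame * np.hanning(len(frame)))
            # 2. log of the squared magnitude (the log power spectrum)
            log_power = np.log(np.abs(spec) ** 2 + 1e-12)
            # 3. + 4. second FFT, then absolute value: the cepstrum
            ceps = np.abs(np.fft.fft(log_power))
            # A pitch shows up as a peak at quefrency q samples, i.e. f0 = fs / q,
            # so search only the plausible pitch range.
            qmin, qmax = int(fs / fmax), int(fs / fmin)
            q = qmin + np.argmax(ceps[qmin:qmax])
            return fs / q

        # e.g. a 220 Hz test tone sampled at 44.1 kHz -> prints roughly 220
        fs = 44100
        t = np.arange(2048) / fs
        print(cepstral_pitch(np.sin(2 * np.pi * 220 * t), fs))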

    Read the article

  • Poor man's "lexer" for C#

    - by Paul Hollingsworth
    I'm trying to write a very simple parser in C#. I need a lexer: something that lets me associate regular expressions with tokens, so that it reads in regexes and gives me back symbols. It seems like I ought to be able to use Regex to do the actual heavy lifting, but I can't see an easy way to do it. For one thing, Regex only seems to work on strings, not streams (why is that!?!?).

    Basically, I want an implementation of the following interface:

        interface ILexer : IDisposable
        {
            /// <summary>
            /// Return true if there are more tokens to read
            /// </summary>
            bool HasMoreTokens { get; }

            /// <summary>
            /// The actual contents that matched the token
            /// </summary>
            string TokenContents { get; }

            /// <summary>
            /// The particular token in "tokenDefinitions" that was matched
            /// (e.g. "STRING", "NUMBER", "OPEN PARENS", "CLOSE PARENS")
            /// </summary>
            object Token { get; }

            /// <summary>
            /// Move to the next token
            /// </summary>
            void Next();
        }

        interface ILexerFactory
        {
            /// <summary>
            /// Create a Lexer for converting a stream of characters into tokens
            /// </summary>
            /// <param name="reader">TextReader that supplies the underlying stream</param>
            /// <param name="tokenDefinitions">A dictionary from regular expressions to their "token identifiers"</param>
            /// <returns>The lexer</returns>
            ILexer CreateLexer(TextReader reader, IDictionary<string, object> tokenDefinitions);
        }

    So, pluz send the codz... No, seriously, I am about to start writing an implementation of the above interface, yet I find it hard to believe that there isn't some simple way of doing this in .NET (2.0) already. So, any suggestions for a simple way to do the above? (Also, I don't want any "code generators". Performance is not important for this thing, and I don't want to introduce any complexity into the build process.)
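
    Regex can indeed do the heavy lifting here. The usual trick is to combine all token definitions into a single alternation of named groups and match it repeatedly; the name of the group that fired identifies the token. A Python sketch of the idea (the token table is invented); since .NET's Regex supports the same named groups via (?<NAME>...) and Match.Groups, the shape carries over directly to an ILexer built on string input. As for streams, a regex engine needs to backtrack over a buffer, which is presumably why Regex works on strings; reading the TextReader into a string (or lexing line by line) before matching is the pragmatic answer.

        import re

        # Invented token table; order matters, since the first alternative wins.
        TOKEN_DEFINITIONS = [
            ("NUMBER",       r"\d+"),
            ("STRING",       r'"[^"]*"'),
            ("OPEN_PARENS",  r"\("),
            ("CLOSE_PARENS", r"\)"),
            ("SKIP",         r"\s+"),    # whitespace: matched, then discarded
        ]
        MASTER = re.compile("|".join("(?P<%s>%s)" % (name, pattern)
                                     for name, pattern in TOKEN_DEFINITIONS))

        def tokenize(text):
            for match in MASTER.finditer(text):
                if match.lastgroup != "SKIP":
                    yield match.lastgroup, match.group()

        print(list(tokenize('12 "abc" (34)')))
        # [('NUMBER', '12'), ('STRING', '"abc"'), ('OPEN_PARENS', '('),
        #  ('NUMBER', '34'), ('CLOSE_PARENS', ')')]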

    Read the article

  • Improving performance when pasting 2000 rows with validations

    - by Lohit
    I have N rows (which could be no fewer than 1000) on an Excel spreadsheet, and in this sheet our project has around 150 columns. Our application needs data to be copied (using normal Ctrl+C) and pasted (using Ctrl+V) from the Excel sheet onto our GUI sheet. Copy-pasting 1000 records takes around 5-6 seconds, which is okay for our requirement, but the problem is that we need to make sure the data entered is valid. So we have to validate the data in each row, generate appropriate error messages, and format the data as per the requirement; in other words, we need to parse and evaluate the data in each row at runtime.

    All the formatting and validations come from the back-end database, and we have them in a data table (dtValidateAndFormatConditions). There are around 50 conditions. So you can see how slow this whole process becomes, since N x 150 x 50 operations are required to complete it. Initially it took approximately 2-3 minutes, but I have now reduced it to 20-30 seconds. I increased the speed by making an expression parser of my own, not by any algorithmic change. Is there any other way I can improve performance, by using divide and conquer or some other mechanism? Currently I am not really sure how to go about this. Here is what part of my code looks like:

        public virtual void ValidateAndFormatOnCopyPaste(DataTable DtCopied, int CurRow)
        {
            foreach (DataRow dRow in dtValidateAndFormatConditions.Rows)
            {
                string Condition = (string)dRow["Condition"];
                string FormatValue = (string)dRow["Value"];
                GetValidatedFormattedData(DtCopied, ref Condition, ref FormatValue, CurRow);
                Condition = Parse(Condition);
                dRow["Condition"] = Condition;
                FormatValue = Parse(FormatValue);
                dRow["Value"] = FormatValue;
            }
        }

    The above code gets called row-wise like this:

        public override void ValidateAndFormat(DataTable dtChangedRecords, CellRange cr)
        {
            int iRowStart = cr.Row, iRowEnd = cr.Row + cr.RowCount;
            for (int iRow = iRowStart; iRow < iRowEnd; iRow++)
            {
                ValidateAndFormatOnCopyPaste(dtChangedRecords, iRow);
            }
        }

    Please know my question needs a more algorithmic solution than code optimization; however, any answers containing code-related optimizations will be appreciated as well. (Tagged LINQ because, although not shown here, I have been using LINQ in some parts of my code.)
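
    Since the expression parser is hand-rolled, one algorithmic lever worth checking is that each of the ~50 conditions is parsed exactly once, up front, and only evaluated per row; that turns N x 50 parse-and-evaluate passes into 50 parses plus N x 50 cheap evaluations. The shape of the idea in a Python sketch, using compile/eval purely as stand-ins for the custom parser (rule strings and row counts are invented):

        import timeit

        rules = ['cell > 0 and cell < 100'] * 50          # stand-ins for the 50 conditions
        rows = [{'cell': i % 150} for i in range(2000)]   # stand-ins for the pasted rows

        def reparse_every_row():
            for row in rows:
                for src in rules:
                    eval(src, {}, row)                    # parses AND evaluates, every time

        def compile_once():
            compiled = [compile(src, '<rule>', 'eval') for src in rules]   # 50 parses total
            for row in rows:
                for code in compiled:
                    eval(code, {}, row)                   # evaluation only

        print(timeit.timeit(reparse_every_row, number=1))
        print(timeit.timeit(compile_once, number=1))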

    Read the article

  • IEnumerable<T> ToArray usage, is it a copy or a pointer?

    - by Daniel
    I am parsing an arbitrary-length byte array that is going to be passed around to a few different layers of parsing. Each parser creates a header and a packet payload, just like any ordinary encapsulation.

    My problem lies in how the encapsulation holds its packet byte-array payload. Say I have a 100-byte array with 3 levels of encapsulation. 3 packet objects will be created, and I want to set the payload of each packet to the corresponding position in the byte array. For example, let's say the payload size is 20 for all levels, and imagine each object has a public byte[] Payload. The problem is that this byte[] Payload is a copy of the original 100 bytes, so I'm going to end up with 160 bytes in memory instead of 100. In C++ I could easily just use a pointer; however, I'm writing this in C#. So I created the following class:

        public class PayloadSegment<T> : IEnumerable<T>
        {
            public readonly T[] Array;
            public readonly int Offset;
            public readonly int Count;

            public PayloadSegment(T[] array, int offset, int count)
            {
                this.Array = array;
                this.Offset = offset;
                this.Count = count;
            }

            public T this[int index]
            {
                get
                {
                    if (index < 0 || index >= this.Count)
                        throw new IndexOutOfRangeException();
                    else
                        return Array[Offset + index];
                }
                set
                {
                    if (index < 0 || index >= this.Count)
                        throw new IndexOutOfRangeException();
                    else
                        Array[Offset + index] = value;
                }
            }

            public IEnumerator<T> GetEnumerator()
            {
                for (int i = Offset; i < Offset + Count; i++)
                    yield return Array[i];
            }

            System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
            {
                IEnumerator<T> enumerator = this.GetEnumerator();
                while (enumerator.MoveNext())
                {
                    yield return enumerator.Current;
                }
            }
        }

    This way I can reference a position inside the original byte array but use positional indexing. However, if I do something like:

        PayloadSegment<byte> something = new PayloadSegment<byte>(someArray, 5, 10);
        byte[] somethingArray = something.ToArray();

    will somethingArray be a copy of the bytes, or a reference to the original PayloadSegment, which in turn is a reference to the original byte array? Sorry, it was hard to word this, lol >_<
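
    To answer the closing question directly: LINQ's Enumerable.ToArray always allocates a fresh array and copies the elements into it, so somethingArray is an independent 10-byte copy; only the PayloadSegment itself keeps referencing the original buffer. (Incidentally, System.ArraySegment<byte>, available since .NET 2.0, is the framework's built-in version of this class.) The view-versus-copy distinction in miniature, using Python's memoryview as a stand-in:

        data = bytearray(100)
        view = memoryview(data)[5:15]   # zero-copy window, like PayloadSegment(data, 5, 10)
        view[0] = 99                    # writes through: data[5] is now 99
        snapshot = bytes(view)          # materializes an independent copy, like ToArray()
        data[5] = 42
        print(view[0], snapshot[0])     # 42 99 -- the view tracks data, the copy does not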

    Read the article

  • Implementing coroutines in Java

    - by JUST MY correct OPINION
    This question is related to my question on existing coroutine implementations in Java. If, as I suspect, it turns out that there is no full implementation of coroutines currently available in Java, what would be required to implement them?

    As I said in that question, I know about the following:

    - You can implement "coroutines" as threads/thread pools behind the scenes.
    - You can do tricksy things with JVM bytecode behind the scenes to make coroutines possible.
    - The so-called "Da Vinci Machine" JVM implementation has primitives that make coroutines doable without bytecode manipulation.
    - There are various JNI-based approaches to coroutines also possible.

    I'll address each one's deficiencies in turn.

    Thread-based coroutines

    This "solution" is pathological. The whole point of coroutines is to avoid the overhead of threading, locking, kernel scheduling, etc. Coroutines are supposed to be light and fast and to execute only in user space. Implementing them in terms of full-tilt threads with tight restrictions gets rid of all the advantages.

    JVM bytecode manipulation

    This solution is more practical, albeit a bit difficult to pull off. This is roughly the same as jumping down into assembly language for coroutine libraries in C (which is how many of them work) with the advantage that you have only one architecture to worry about and get right. It also ties you down to only running your code on fully-compliant JVM stacks (which means, for example, no Android) unless you can find a way to do the same thing on the non-compliant stack. If you do find a way to do this, however, you have now doubled your system complexity and testing needs.

    The Da Vinci Machine

    The Da Vinci Machine is cool for experimentation, but since it is not a standard JVM its features aren't going to be available everywhere. Indeed I suspect most production environments would specifically forbid the use of the Da Vinci Machine. Thus I could use this to make cool experiments but not for any code I expect to release to the real world. This also has an added problem similar to the JVM bytecode manipulation solution above: it won't work on alternative stacks (like Android's).

    JNI implementation

    This solution renders the point of doing this in Java at all moot. Each combination of CPU and operating system requires independent testing, and each is a point of potentially frustrating subtle failure. Alternatively, of course, I could tie myself down to one platform entirely, but this, too, makes the point of doing things in Java entirely moot.

    So...

    Is there any way to implement coroutines in Java without using one of these four techniques? Or will I be forced to use the one of those four that smells the least (JVM manipulation) instead?

    Read the article

  • Change the Session Variable Output

    - by user567230
    Hello. I am using Dreamweaver CS5 with ColdFusion 9 to build a dynamic website. I have an MS Access database that stores login information, which includes ID, FullName, FirstName, LastName, Username, Password, AccessLevels.

    My question is this: I currently have a session variable that tracks the Username entered on the login page. However, I would like to use that Username to pull the user's FullName, to display throughout the web pages and to use for querying data. How do I change the session variable so it reads the FullName when users enter only their Username and password on the login page?

    I have listed my login code below; if any additional information is needed, please let me know. This is the path where the FullName values reside: DataSource "Access", Table "Logininfo", Field "FullName". I want the FullName to be unique based on the Username submitted from the login page. I apologize in advance for any rookie mistake I may have made; I am new to this but learning fast! Ha.

        <cfif IsDefined("FORM.username")>
            <cfset MM_redirectLoginSuccess="members_page.cfm">
            <cfset MM_redirectLoginFailed="sorry.cfm">
            <cfquery name="MM_rsUser" datasource="Access">
                SELECT FullName, Username, Password, AccessLevels FROM Logininfo
                WHERE Username=<cfqueryparam value="#FORM.username#" cfsqltype="cf_sql_clob" maxlength="50">
                AND Password=<cfqueryparam value="#FORM.password#" cfsqltype="cf_sql_clob" maxlength="50">
            </cfquery>
            <cfif MM_rsUser.RecordCount NEQ 0>
                <cftry>
                    <cflock scope="Session" timeout="30" type="Exclusive">
                        <cfset Session.MM_Username=FORM.username>
                        <cfset Session.MM_UserAuthorization=MM_rsUser.AccessLevels[1]>
                    </cflock>
                    <cfif IsDefined("URL.accessdenied") AND false>
                        <cfset MM_redirectLoginSuccess=URL.accessdenied>
                    </cfif>
                    <cflocation url="#MM_redirectLoginSuccess#" addtoken="no">
                    <cfcatch type="Lock">
                        <!--- code for handling timeout of cflock --->
                    </cfcatch>
                </cftry>
            </cfif>
            <cflocation url="#MM_redirectLoginFailed#" addtoken="no">
        <cfelse>
            <cfset MM_LoginAction=CGI.SCRIPT_NAME>
            <cfif CGI.QUERY_STRING NEQ "">
                <cfset MM_LoginAction=MM_LoginAction & "?" & XMLFormat(CGI.QUERY_STRING)>
            </cfif>
        </cfif>

    Read the article

  • SQL Invalid Object Name 'AddressType'

    - by salvationishere
    I am getting the above error in my VS 2008 C# method when I try to invoke the SQL getColumnNames stored procedure from Visual Studio. This SP accepts one input parameter, the table name, and works successfully from SSMS. Currently I am selecting the AdventureWorks AddressType table for it to pull the column names from. I can see the AdventureWorks database available in Visual Studio from my Server Explorer / Data Connection, and I see both the AddressType table and the getColumnNames SP showing in Server Explorer. But I am still getting the error listed above. Here is the C# code snippet I use to execute this:

        public static DataTable DisplayTableColumns(string tt)
        {
            SqlDataReader dr = null;
            string TableName = tt;
            string connString = @"Data Source=.;AttachDbFilename=""C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\AdventureWorks_Data.mdf"";Initial Catalog=AdventureWorks;Integrated Security=True;Connect Timeout=30;User Instance=False";
            string errorMsg;
            SqlConnection conn2 = new SqlConnection(connString);
            SqlCommand cmd = conn2.CreateCommand();
            try
            {
                cmd.CommandText = "dbo.getColumnNames";
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Connection = conn2;
                SqlParameter parm = new SqlParameter("@TableName", SqlDbType.VarChar);
                parm.Value = TableName;
                parm.Direction = ParameterDirection.Input;
                cmd.Parameters.Add(parm);
                conn2.Open();
                dr = cmd.ExecuteReader();
            }
            catch (Exception ex)
            {
                errorMsg = ex.Message;
            }

    And when I examine errorMsg, it says the following:

        at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
        at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
        at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
        at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
        at System.Data.SqlClient.SqlDataReader.ConsumeMetaData()
        at System.Data.SqlClient.SqlDataReader.get_MetaData()
        at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
        at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
        at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
        at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)
        at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)
        at System.Data.SqlClient.SqlCommand.ExecuteReader()
        at ADONET_namespace.ADONET_methods.DisplayTableColumns(String tt) in C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\ADONET methods.cs:line 35

    Where line 35 is dr = cmd.ExecuteReader();

    Read the article

  • Creating multiple heads in remote repository

    - by Jab
    We are looking to move our team (~10 developers) from SVN to Mercurial. We are trying to figure out how to manage our workflow; in particular, we are trying to see if creating remote heads is the right solution.

    We currently have a very large repository with multiple related projects. They share a lot of code, but pieces of the project are deployed by different teams (3 teams) independent of other portions of the code base. So each team is working on concurrent large features.

    The way we currently handle this in SVN is branches. Team1 has a branch for Feature1, same deal for the other teams. When Team1 finishes their change, it gets merged into the trunk and deployed out. The other teams follow suit when their project is complete, merging of course.

    So my initial thought was using named branches for these situations: Team1 makes a Feature1 branch off of the default branch in Hg. Now, here is the question: should the team PUSH that branch, in its current half-done state, to the repository? This will create a second head in the core repo. My initial reaction was "NO!" as it seems like a bad idea. Handling multiple heads on our repository just sounds awful, but there are some advantages.

    First, the teams want to set up continuous integration to build this branch during their development cycle (months long). This will only work if the CI can pull this branch from the repo. This is something we do now with SVN: copy a CI build and change the branch. Easy.

    Second, it makes it easier for any team member to jump onto the branch and start working. Without pushing to the core repo, they would have to receive a push from a developer on that team with the changeset information. It is also possible to lose local commits to hardware failure, and the chances increase a lot if it's a branch by a single developer who has followed the "don't push until finished" approach.

    And lastly is just ease of use. The developers can easily just commit and push on their branch at any time without consequence (as they do today, in their SVN branches).

    Is there a better way to handle this scenario that I may be missing? I just want a veteran's opinion before moving forward with the strategy. For bug fixes we like the general workflow of Mercurial: anonymous branches that only consist of 1-2 commits. The simplicity is great for those cases.

    By the way, I've read this great article, which seems to favor named branches.

    Read the article

  • Dynamically embedding YouTube videos with jQuery

    - by danwoods
    Hello all, I'm trying to retrieve a listing of a user's YouTube videos and embed them in a page using jQuery. My code looks something like this:

        $(document).ready(function() {
            // some variables
            var fl_obj_template = $('<object width="260" height="140">' +
                '<param name="movie" value=""></param>' +
                '<param name="allowFullScreen" value="true"></param>' +
                '<param name="allowscriptaccess" value="always"></param>' +
                '<embed src="" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="260" height="140"></embed>' +
                '</object>');
            var video_elm_arr = $('.video');

            // hide videos until ready
            $('.video').addClass('hidden');

            // pull video data from youtube
            $.ajax({
                url: 'http://gdata.youtube.com/feeds/api/users/username/uploads?alt=json',
                dataType: 'jsonp',
                success: function(data) {
                    $.each(data.feed.entry, function(i, item){
                        // only take the first 7 videos
                        if (i > 6) return;

                        // give the video element a flash object
                        var cur_flash_obj = fl_obj_template;

                        // assign title
                        $(video_elm_arr[i]).find('.video_title').html(item.title.$t);

                        // clean url
                        var video_url = item.media$group.media$content[0].url;
                        var index = video_url.indexOf("?");
                        if (index > 0)
                            video_url = video_url.substring(0, index);

                        // and assign it to the player's parameters
                        $(cur_flash_obj).find('object param[name="movie"]').attr('value', video_url);
                        $(cur_flash_obj).find('object embed').attr('src', video_url);
                        //alert(cur_flash_obj);

                        // insert flash object in video element
                        $(video_elm_arr[i]).append(cur_flash_obj);

                        // and show
                        $(video_elm_arr[i]).removeClass('hidden');
                    });
                }
            });
        });

    (Of course, 'username' is the actual username.) The video titles appear correctly, but no videos show up. What gives? The target HTML looks like:

        <div id="top_row_center" class="video_center video">
            <p class="video_title"></p>
        </div>

    Read the article

  • What is the best practice to segment C#.NET projects based on a single base project?

    - by Anthony
    Honestly, I can't word my question any better without describing it. I have a base project (with all its glory: DLLs, resources, etc.) which is a CMS. I need to use this project as a base for other custom-bake projects. This base project is to be maintained and updated among all custom-bake projects. I use Subversion (CollabNet and TortoiseSVN). I have two questions:

    1 - Can I use Subversion to share the base project among other projects?

    What I mean here is: can I "checkout" the base project into another "checked out" project and have both update and commit separately? So, to paint a picture, let's say I am working on a custom project and I modify the core/base project in some way (which I know will suit the others). Can I then commit those changes, and upon doing so, will updating the base project in the other "checked out" copies pull the changes? In short, I would like not to have to manually deploy updated core files into each separate project whenever I make changes.

    2 - If I create a custom file (let's say a web control or aspx page, etc.) can I have it compile separately from the base project?

    Another tricky one to explain. When I publish my web application, it creates DLLs based on the namespaces of the projects attached to it. So I may have a number of DLLs, including the "website's" namespace DLL, which could simply be Website. I want to be able to make a separate, custom control which does not compile into those DLLs, as the custom files should not rely on those DLLs to run. Is it as simple as setting a separate namespace for those files, like CustomFiles.ProjectName for example?

    Think of the whole idea as adding modules to the .NET project: I don't want the module's code in any of the core DLLs, but I do need the module to be able to access the core DLLs. (There is no need for the core project to access the module code, as it should be one way only in theory, though I reckon it would not be possible anyway without using JSON/SOAP or something like that; maybe I am wrong.)

    I want to create a pluggable environment much like that of Joomla/WordPress. Since PHP generally doesn't have to be compiled first, I see this as the reason why all this is possible/easy there. The idea is to allow pluggable themes, modules, etc. (I haven't tried simply adding .NET themes after compile/publish, but I am assuming this is possible anyway? Or does the compiler need to reference items in the files?)

    Read the article

  • How does Sentry aggregate errors?

    - by Hugo Rodger-Brown
    I am using Sentry (in a Django project), and I'd like to know how I can get errors to aggregate properly. I am logging certain user actions as errors, so there is no underlying system exception, and I am using the culprit attribute to set a friendly error name. The message is templated and contains a common message ("User 'x' was unable to perform action because 'y'"), but is never exactly the same (different users, different conditions).

    Sentry clearly uses some set of attributes under the hood to determine whether to aggregate errors as the same exception, but despite having looked through the code, I can't work out how. Can anyone short-cut my having to dig further into the code and tell me what properties I need to set in order to manage aggregation as I would like?

    [UPDATE 1: event grouping]

    This line appears in sentry.models.Group:

        class Group(MessageBase):
            """
            Aggregated message which summarizes a set of Events.
            """
            ...
            class Meta:
                unique_together = (('project', 'logger', 'culprit', 'checksum'),)
                ...

    Which makes sense; project, logger, and culprit I am setting at the moment. The problem is checksum. I will investigate further; however, 'checksum' suggests binary equivalence, which is never going to work. It must be possible to group instances of the same exception with different attributes?

    [UPDATE 2: event checksums]

    The event checksum comes from the sentry.manager.get_checksum_from_event method:

        def get_checksum_from_event(event):
            for interface in event.interfaces.itervalues():
                result = interface.get_hash()
                if result:
                    hash = hashlib.md5()
                    for r in result:
                        hash.update(to_string(r))
                    return hash.hexdigest()
            return hashlib.md5(to_string(event.message)).hexdigest()

    Next stop: where do the event interfaces come from?

    [UPDATE 3: event interfaces]

    I have worked out that interfaces refer to the standard mechanism for describing data passed into Sentry events, and that I am using the standard sentry.interfaces.Message and sentry.interfaces.User interfaces. Both of these will contain different data depending on the exception instance, and so a checksum will never match. Is there any way I can exclude these from the checksum calculation? (Or at least the User interface value, as that has to be different; the Message interface value I could standardise.)

    [UPDATE 4: solution]

    Here are the two get_hash functions for the Message and User interfaces respectively:

        # sentry.interfaces.Message
        def get_hash(self):
            return [self.message]

        # sentry.interfaces.User
        def get_hash(self):
            return []

    Looking at these two, only Message.get_hash will return a value that is picked up by the get_checksum_from_event method, and so this is the one that will be returned (hashed, etc.). The net effect is that the checksum is evaluated on the message alone, which in theory means that I can standardise the message and keep the user definition unique.

    I've answered my own question here, but hopefully my investigation is of use to others having the same problem. (As an aside, I've also submitted a pull request against the Sentry documentation as part of this ;-))

    (Note to anyone using / extending Sentry with custom interfaces: if you want to avoid your interface being used to group exceptions, return an empty list.)
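
    Following the investigation to its practical end: a custom interface can carry per-event user data while opting out of grouping entirely by returning an empty list from get_hash, leaving the (standardised) message as the sole input to the checksum. A sketch against the internals quoted above; the class name and fields are illustrative, and the import path matches that Sentry vintage rather than current releases:

        # Illustrative only: UserAction and its fields are made up.
        from sentry.interfaces import Interface

        class UserAction(Interface):
            def __init__(self, username, reason):
                self.username = username
                self.reason = reason

            def get_hash(self):
                # An empty list means get_checksum_from_event skips this
                # interface, exactly like sentry.interfaces.User above.
                return []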

    Read the article

  • GHC.Generics and Type Families

    - by jberryman
    This is a question related to my module here, and is simplified a bit. It's also related to this previous question, in which I oversimplified my problem and didn't get the answer I was looking for. I hope this isn't too specific, and please change the title if you can think of a better one.

    Background

    My module uses a concurrent chan, split into a read side and a write side. I use a special class with an associated type synonym to support polymorphic channel "joins":

        {-# LANGUAGE TypeFamilies #-}
        class Sources s where
            type Joined s
            newJoinedChan :: IO (s, Messages (Joined s)) -- NOT EXPORTED

        -- output and input sides of the channel:
        data Messages a -- NOT EXPORTED
        data Mailbox a

        instance Sources (Mailbox a) where
            type Joined (Mailbox a) = a
            newJoinedChan = undefined

        instance (Sources a, Sources b) => Sources (a, b) where
            type Joined (a, b) = (Joined a, Joined b)
            newJoinedChan = undefined
        -- and so on for tuples of 3,4,5...

    The code above allows us to do this kind of thing:

        example = do
            (mb , msgsA) <- newJoinedChan
            ((mb1, mb2), msgsB) <- newJoinedChan
            -- say that: msgsA, msgsB :: Messages (Int,Int)
            -- and:      mb           :: Mailbox (Int,Int)
            --           mb1, mb2     :: Mailbox Int

    We have a recursive action called a Behavior that we can run on the messages we pull out of the "read" end of the channel:

        newtype Behavior a = Behavior (a -> IO (Behavior a))
        runBehaviorOn :: Behavior a -> Messages a -> IO () -- NOT EXPORTED

    This would allow us to run a Behavior (Int,Int) on either of msgsA or msgsB, where in the second case both Ints in the tuple it receives actually came through separate Mailboxes. This is all tied together for the user in the exposed spawn function

        spawn :: (Sources s) => Behavior (Joined s) -> IO s

    ...which calls newJoinedChan and runBehaviorOn, and returns the input Sources.

    What I'd like to do

    I'd like users to be able to create a Behavior of arbitrary product type (not just tuples), so for instance we could run a Behavior (Pair Int Int) on the example Messages above. I'd like to do this with GHC.Generics while still having a polymorphic Sources, but can't manage to make it work.

        spawn :: (Sources s, Generic (Joined s), Rep (Joined s) ~ ??) => Behavior (Joined s) -> IO s

    The parts of the above example that are actually exposed in the API are the fst of the newJoinedChan action, and Behaviors, so an acceptable solution can modify one or all of runBehaviorOn or the snd of newJoinedChan. I'll also be extending the API above to support sums (not implemented yet) like Behavior (Either a b), so I hoped GHC.Generics would work for me.

    Questions

    1. Is there a way I can extend the API above to support arbitrary Generic a => Behavior a?
    2. If not using GHC's Generics, are there other ways I can get the API I want with minimal end-user pain (i.e. they just have to add a deriving clause to their type)?

    Read the article

  • Fixed strptime exception with a thread lock, but it slows down the program

    - by eWizardII
    I have the following code, which runs inside a thread (the full code is here: https://github.com/eWizardII/homobabel/blob/master/lovebird.py):

        for null in range(0, 1):
            while True:
                try:
                    with open('C:/Twitter/tweets/user_0_' + str(self.id) + '.json', mode='w') as f:
                        f.write('[')
                        threadLock.acquire()
                        for i, seed in enumerate(Cursor(api.user_timeline, screen_name=self.ip).items(200)):
                            if i > 0:
                                f.write(", ")
                            f.write("%s" % (json.dumps(dict(sc=seed.author.statuses_count))))
                            j = j + 1
                        threadLock.release()
                        f.write("]")
                except tweepy.TweepError, e:
                    with open('C:/Twitter/tweets/user_0_' + str(self.id) + '.json', mode='a') as f:
                        f.write("]")
                    print "ERROR on " + str(self.ip) + " Reason: ", e
                    with open('C:/Twitter/errors_0.txt', mode='a') as a_file:
                        new_ii = "ERROR on " + str(self.ip) + " Reason: " + str(e) + "\n"
                        a_file.write(new_ii)
                break

    Now, without the thread lock I get the following error:

        Exception in thread Thread-117:
        Traceback (most recent call last):
          File "C:\Python27\lib\threading.py", line 530, in __bootstrap_inner
            self.run()
          File "C:/Twitter/homobabel/lovebird.py", line 62, in run
            for i, seed in enumerate(Cursor(api.user_timeline,screen_name=self.ip).items(200)):
          File "build\bdist.win-amd64\egg\tweepy\cursor.py", line 110, in next
            self.current_page = self.page_iterator.next()
          File "build\bdist.win-amd64\egg\tweepy\cursor.py", line 85, in next
            items = self.method(page=self.current_page, *self.args, **self.kargs)
          File "build\bdist.win-amd64\egg\tweepy\binder.py", line 196, in _call
            return method.execute()
          File "build\bdist.win-amd64\egg\tweepy\binder.py", line 182, in execute
            result = self.api.parser.parse(self, resp.read())
          File "build\bdist.win-amd64\egg\tweepy\parsers.py", line 75, in parse
            result = model.parse_list(method.api, json)
          File "build\bdist.win-amd64\egg\tweepy\models.py", line 38, in parse_list
            results.append(cls.parse(api, obj))
          File "build\bdist.win-amd64\egg\tweepy\models.py", line 49, in parse
            user = User.parse(api, v)
          File "build\bdist.win-amd64\egg\tweepy\models.py", line 86, in parse
            setattr(user, k, parse_datetime(v))
          File "build\bdist.win-amd64\egg\tweepy\utils.py", line 17, in parse_datetime
            date = datetime(*(time.strptime(string, '%a %b %d %H:%M:%S +0000 %Y')[0:6]))
          File "C:\Python27\lib\_strptime.py", line 454, in _strptime_time
            return _strptime(data_string, format)[0]
          File "C:\Python27\lib\_strptime.py", line 300, in _strptime
            _TimeRE_cache = TimeRE()
          File "C:\Python27\lib\_strptime.py", line 188, in __init__
            self.locale_time = LocaleTime()
          File "C:\Python27\lib\_strptime.py", line 77, in __init__
            raise ValueError("locale changed during initialization")
        ValueError: locale changed during initialization

    The problem is that with the thread lock held, each thread basically runs serially, and each loop takes so long that there is no longer any advantage to having threads at all. So if there isn't a way to get rid of the thread lock, is there a way to make the for loop inside the try statement run faster?
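
    For what it's worth, the underlying failure is that time.strptime builds its locale-dependent tables lazily, and that first-use initialization is not thread-safe in CPython 2.x; that is what the "locale changed during initialization" ValueError above is reporting. A commonly reported workaround (worth verifying on your exact interpreter, but it removes the need for the lock entirely) is to force the initialization once, in the main thread, before any worker threads start:

        import time
        import _strptime   # forces the one-time, non-thread-safe initialization
                           # to happen here, in the main thread

        # An explicit throwaway call achieves the same warm-up:
        time.strptime("2000", "%Y")

        # ...only then start the tweepy worker threads; the Cursor loop no
        # longer needs threadLock around it for this particular failure.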

    Read the article

  • Generic class for performing mass-parallel queries. Feedback?

    - by Aaron
    I don't understand why, but there appears to be no mechanism in the client library for performing many queries in parallel against Windows Azure Table Storage. I've created a template class that can be used to save considerable time, and you're welcome to use it however you wish. I would appreciate it, though, if you could pick it apart and provide feedback on how to improve this class.

        public class AsyncDataQuery<T> where T : new()
        {
            public AsyncDataQuery(bool preserve_order)
            {
                m_preserve_order = preserve_order;
                this.Queries = new List<CloudTableQuery<T>>(1000);
            }

            public void AddQuery(IQueryable<T> query)
            {
                var data_query = (DataServiceQuery<T>)query;
                var uri = data_query.RequestUri; // required
                this.Queries.Add(new CloudTableQuery<T>(data_query));
            }

            /// <summary>
            /// Blocking but still optimized.
            /// </summary>
            public List<T> Execute()
            {
                this.BeginAsync();
                return this.EndAsync();
            }

            public void BeginAsync()
            {
                if (m_preserve_order == true)
                {
                    this.Items = new List<T>(Queries.Count);
                    for (var i = 0; i < Queries.Count; i++)
                    {
                        this.Items.Add(new T());
                    }
                }
                else
                {
                    this.Items = new List<T>(Queries.Count * 2);
                }
                m_wait = new ManualResetEvent(false);
                for (var i = 0; i < Queries.Count; i++)
                {
                    var query = Queries[i];
                    query.BeginExecuteSegmented(callback, i);
                }
            }

            public List<T> EndAsync()
            {
                m_wait.WaitOne();
                return this.Items;
            }

            private List<T> Items { get; set; }
            private List<CloudTableQuery<T>> Queries { get; set; }
            private bool m_preserve_order;
            private ManualResetEvent m_wait;
            private int m_completed = 0;

            private void callback(IAsyncResult ar)
            {
                int i = (int)ar.AsyncState;
                CloudTableQuery<T> query = Queries[i];
                var response = query.EndExecuteSegmented(ar);
                if (m_preserve_order == true)
                {
                    // preserve ordering only supports one result per query
                    this.Items[i] = response.Results.First();
                }
                else
                {
                    // add any number of items
                    this.Items.AddRange(response.Results);
                }
                if (response.HasMoreResults == true)
                {
                    // more data to pull
                    query.BeginExecuteSegmented(response.ContinuationToken, callback, i);
                    return;
                }
                m_completed = Interlocked.Increment(ref m_completed);
                if (m_completed == Queries.Count)
                {
                    m_wait.Set();
                }
            }
        }

    Read the article

  • Problem with jQuery swapImage()

    - by VUELA
    Hello!~ On my articles page (http://www.adrtimes.squarespace.com/articles) each entry starts with an image that changes on rollover. This works fine. However, on my homepage (http://www.adrtimes.squarespace.com), I am loading in the 2 most recent articles that are not categorized as video, and the 2 most recent articles that are tagged as video. I am using jQuery load() to pull in the content from the articles page. Everything works OK with that, except the swapImage rollovers on the homepage. I tried including the swapImage function as callbacks, but only one group or the other will roll over properly, not both groups of content. I'm not sure what the problem is!! Here is the code:

        <script type="text/javascript">
        <!--
        $(function(){
            $.swapImage(".swapImage");

            /* Homepage */
            // load recent articles
            $("#LoadRecentArticles").load("/articles/ .list-journal-entry-wrapper .journal-entry-wrapper:not(.category-video)", function(){
                // callback...
                $("#LoadRecentArticles .journal-entry-wrapper").wrapAll('<div class="list-journal-entry-wrapper" />');
                $("#LoadRecentArticles .journal-entry-wrapper:gt(1)").remove();

                // modify Read More tag
                $('.journal-read-more-tag a:contains(Click to read more ...)').each(function(){
                    var str = $(this).html();
                    $(this).html(str.replace('Click to read more ...','Continue reading—'));
                });

                $.swapImage(".swapImage");
            });

            // load recent videos
            $("#LoadRecentVideos").load("/articles/category/video .list-journal-entry-wrapper", function(){
                // callback...
                $("#LoadRecentVideos .journal-entry-wrapper:gt(1)").remove();
                $('<div class="VideoTag">—video</div>').insertBefore("#LoadRecentVideos .category-video .body img");

                // modify Read More tag
                $('.journal-read-more-tag a:contains(Click to read more ...)').each(function(){
                    var str = $(this).html();
                    $(this).html(str.replace('Click to read more ...','Continue reading—'));
                });

                $.swapImage(".swapImage");
            });
        });
        -->
        </script>

    And here is a sample of the HTML for the images in an article entry:

        <a href="/articles/2010/5/6/article-title-goes-here.html"><img class="swapImage {src: '/storage/post-images/Sample2_color.jpg'}" src="/storage/post-images/Sample2_bw.jpg" /></a>

    Read the article

  • BULK INSERT from one table to another all on the server

    - by steve_d
    I have to copy a bunch of data from one database table into another. I can't use SELECT ... INTO because one of the columns is an identity column. Also, I have some changes to make to the schema. I was able to use the export data wizard to create an SSIS package, which I then edited in Visual Studio 2005 to make the changes desired and whatnot. It's certainly faster than an INSERT INTO, but it seems silly to me to download the data to a different computer just to upload it back again. (Assuming that I am correct that that's what the SSIS package is doing). Is there an equivalent to BULK INSERT that runs directly on the server, allows keeping identity values, and pulls data from a table? (as far as I can tell, BULK INSERT can only pull data from a file) Edit: I do know about IDENTITY_INSERT, but because there is a fair amount of data involved, INSERT INTO ... SELECT is kinda of slow. SSIS/BULK INSERT dumps the data into the table without regards to indexes and logging and whatnot, so it's faster. (Of course creating the clustered index on the table once it's populated is not fast, but it's still faster than the INSERT INTO...SELECT that I tried in my first attempt) Edit 2: The schema changes include (but are not limited to) the following: 1. Splitting one table into two new tables. In the future each will have its own IDENTITY column, but for the migration I think it will be simplest to use the identity from the original table as the identity for the both new tables. Once the migration is over one of the tables will have a one-to-many relationship to the other. 2. Moving columns from one table to another. 3. Deleting some cross reference tables that only cross referenced 1-to-1. Instead the reference will be a foreign key in one of the two tables. 4. Some new columns will be created with default values. 5. Some tables aren’t changing at all, but I have to copy them over due to the "put it all in a new DB" request.

    Read the article

  • How can I create a search string for the Google AJAX Search API?

    - by elmaso
    Hello, I have this code to get the search results from the API:

    querygoogle.php:

        <?php
        session_start();

        // Here's the Google AJAX Search API url for curl. It uses Google Search's
        // site:www.yourdomain.com syntax to search in a specific site. I used
        // $_SERVER['HTTP_HOST'] to find my domain automatically. Change
        // $_POST['searchquery'] to your posted search query
        $url = 'http://ajax.googleapis.com/ajax/services/search/web?rsz=large&v=1.0&start=20&q=' . urlencode('' . $_POST['searchquery']);

        // use fopen and fread to pull Google's search results
        $handle = fopen($url, 'rb');
        $body = '';
        while (!feof($handle)) {
            $body .= fread($handle, 8192);
        }
        fclose($handle);

        // now $body is the JSON encoded results. We need to decode them.
        $json = json_decode($body);

        // now $json is an object of Google's search results and we need to iterate through it.
        foreach ($json->responseData->results as $searchresult) {
            if ($searchresult->GsearchResultClass == 'GwebSearch') {
                $formattedresults .= '
                    <div class="searchresult">
                        <h3><a href="' . $searchresult->unescapedUrl . '">' . $searchresult->titleNoFormatting . '</a></h3>
                        <p class="resultdesc">' . $searchresult->content . '</p>
                        <p class="resulturl">' . $searchresult->visibleUrl . '</p>
                    </div>';
            }
        }
        $_SESSION['googleresults'] = $formattedresults;
        header('Location: ' . $_SERVER['HTTP_REFERER']);
        exit;
        ?>

    search.php:

        <?php session_start(); ?>
        <form method="post" action="querygoogle.php">
            <label for="searchquery"><span class="caption">Search this site</span>
            <input type="text" size="20" maxlength="255" title="Enter your keywords and click the search button" name="searchquery" /></label>
            <input type="submit" value="Search" />
        </form>
        <?php
        if (!empty($_SESSION['googleresults'])) {
            echo $_SESSION['googleresults'];
            unset($_SESSION['googleresults']);
        }
        ?>

    But with this code I can't add a search string. How can I add a search string like search.php?search=keyword? Thanks!

    Read the article

  • C++ string sort like a human being?

    - by Walter Nissen
    I would like to sort alphanumeric strings the way a human being would sort them. I.e., "A2" comes before "A10", and "a" certainly comes before "Z"! Is there any way to do this without writing a mini-parser? Ideally it would also put "A1B1" before "A1B10". I see the question "Natural (human alpha-numeric) sort in Microsoft SQL 2005" with a possible answer, but it uses various library functions, as does "Sorting Strings for Humans with IComparer".

    Below is a test case that currently fails:

        #include <set>
        #include <iterator>
        #include <iostream>
        #include <sstream>
        #include <string>
        #include <algorithm>
        #include <vector>
        #include <cctype>
        #include <cassert>

        template <typename T>
        struct LexicographicSort
        {
            inline bool operator()(const T& lhs, const T& rhs) const
            {
                std::ostringstream s1, s2;
                s1 << toLower(lhs);
                s2 << toLower(rhs);
                bool less = s1.str() < s2.str();
                std::cout << s1.str() << " " << s2.str() << " " << less << "\n";
                return less;
            }

            inline std::string toLower(const std::string& str) const
            {
                std::string newString("");
                for (std::string::const_iterator charIt = str.begin();
                     charIt != str.end(); ++charIt)
                {
                    newString.push_back(std::tolower(*charIt));
                }
                return newString;
            }
        };

        int main(void)
        {
            const std::string reference[5] = {"ab", "B", "c1", "c2", "c10"};
            std::vector<std::string> referenceStrings(&(reference[0]), &(reference[5]));

            // Insert in reverse order so we know they get sorted
            std::set<std::string, LexicographicSort<std::string> > strings(
                referenceStrings.rbegin(), referenceStrings.rend());

            std::cout << "Items:\n";
            std::copy(strings.begin(), strings.end(),
                      std::ostream_iterator<std::string>(std::cout, "\n"));

            std::vector<std::string> sortedStrings(strings.begin(), strings.end());
            assert(sortedStrings == referenceStrings);
        }
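
    On the algorithmic question: no real mini-parser is needed. Split each string into alternating text and digit runs, lowercase the text runs, parse the digit runs as integers, and compare the resulting sequences lexicographically; that yields "A2" < "A10", "a" < "Z", and "A1B1" < "A1B10" in one stroke. Sketched in Python only for brevity; a C++ comparator would walk the two strings with isdigit/strtol and compare run-by-run the same way.

        import re

        def natural_key(s):
            # "A1B10" -> ['a', 1, 'b', 10, '']; keys always alternate
            # text/number, so comparisons never mix the two kinds.
            return [int(run) if run.isdigit() else run.lower()
                    for run in re.split(r'(\d+)', s)]

        print(sorted(["A10", "A2", "a", "Z", "A1B10", "A1B1"], key=natural_key))
        # ['a', 'A1B1', 'A1B10', 'A2', 'A10', 'Z']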

    Read the article

  • How do I create/use a Fluent NHibernate convention to automap UInt32 properties to an SQL Server 2008 database?

    - by dommer
    I'm trying to use a convention to map UInt32 properties to a SQL Server 2008 database. I don't seem to be able to create a solution based on existing web sources, due to updates in the way Fluent NHibernate works; i.e. the examples are out of date. I'm trying to have NHibernate generate the schema (via ExposeConfiguration). I'm happy to have NHibernate map it to anything sensible (e.g. bigint).

    Here is my code as it currently stands (which, when I try to expose the schema, fails due to SQL Server not supporting UInt32). Apologies for the code being a little long, but I'm not 100% sure what is relevant to the problem, so I'm erring on the side of caution. Most of it is based on this post. The error reported is:

        System.ArgumentException : Dialect does not support DbType.UInt32

    I think I'll need a relatively comprehensive example, as I don't seem to be able to pull the pieces together into a working solution at present.

        FluentConfiguration configuration = Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2008
                .ConnectionString(connectionString))
            .Mappings(mapping => mapping.AutoMappings.Add(
                AutoMap.AssemblyOf<Product>()
                       .Conventions.Add<UInt32UserTypeConvention>()));

        configuration.ExposeConfiguration(x => new SchemaExport(x).Create(false, true));

        namespace NHibernateTest
        {
            public class UInt32UserTypeConvention : UserTypeConvention<UInt32UserType>
            {
                // Empty.
            }
        }

        namespace NHibernateTest
        {
            public class UInt32UserType : IUserType
            {
                // Public properties.
                public bool IsMutable
                {
                    get { return false; }
                }

                public Type ReturnedType
                {
                    get { return typeof(UInt32); }
                }

                public SqlType[] SqlTypes
                {
                    get { return new SqlType[] { SqlTypeFactory.Int32 }; }
                }

                // Public methods.
                public object Assemble(object cached, object owner)
                {
                    return cached;
                }

                public object DeepCopy(object value)
                {
                    return value;
                }

                public object Disassemble(object value)
                {
                    return value;
                }

                public new bool Equals(object x, object y)
                {
                    return (x != null && x.Equals(y));
                }

                public int GetHashCode(object x)
                {
                    return x.GetHashCode();
                }

                public object NullSafeGet(IDataReader rs, string[] names, object owner)
                {
                    int? i = (int?)NHibernateUtil.Int32.NullSafeGet(rs, names[0]);
                    return (UInt32?)i;
                }

                public void NullSafeSet(IDbCommand cmd, object value, int index)
                {
                    UInt32? u = (UInt32?)value;
                    int? i = (Int32?)u;
                    NHibernateUtil.Int32.NullSafeSet(cmd, i, index);
                }

                public object Replace(object original, object target, object owner)
                {
                    return original;
                }
            }
        }

    Read the article

  • Best way to version control a WCF application with Git?

    - by Sam
    Suppose I have the following projects. The format is [ProjectName] : [ProjectDependency1, ProjectDependency2, etc.]

        // Service
        CoolLibrary
        WcfApp.Core
        WcfApp.Contracts
        WcfApp.Services : CoolLibrary, WcfApp.Core, WcfApp.Contracts

        // Clients
        CustomerX.App : WcfApp.Contracts
        CustomerY.App : WcfApp.Contracts
        CustomerZ.App : WcfApp.Contracts

    (On a side note, WcfApp.Contracts should not depend on WcfApp.Core, right? Else CustomerX.App would also depend on, and thus be exposed to, the service domain model?)

    (CoolLibrary is shared with other applications, so I can't just put it inside of WcfApp.Services.)

    All of this code is in-house. I was thinking of having 6 repositories for this. The format is [repository folder name] : [Projects included in repository.]

        1. CoolLibrary.git : CoolLibrary
        2. WcfApp.Contracts.git : WcfApp.Contracts
        3. WcfApp.git : WcfApp.Core, WcfApp.Services
        4. CustomerX.App.git : CustomerX.App
        5. CustomerY.App.git : CustomerY.App
        6. CustomerZ.App.git : CustomerZ.App

    How should I manage my project dependencies? I see three options:

    1. I could use binaries which I have to manually copy to each dependent repository. This would be easiest at the start, but my repositories would be a little bloated, and it'd become more tedious as I add more client apps for customers.
    2. I could import dependent code as submodules. This is what I will probably end up doing, although I keep reading on the web that submodules are a hassle.
    3. I also read that I can use something called the subtree merge strategy, but I am not sure how it is different from just cloning the repo into a subdirectory and adding the subdirectory to .gitignore. Is the difference that the subtree is recorded in the master repository, so (for example) cloning it from a different location will also pull the subtree?

    I know I asked a lot of questions in this post, but the most important two are:

    1. Am I using the right number and layout of repositories? Should I use less or more?
    2. Which of the three dependency management strategies would you recommend? Is there another strategy I haven't considered?

    Read the article
