Search Results

Search found 11409 results on 457 pages for 'large teams'.

  • Help me avoid a resonance cascade.

    - by SLC
    Hi, my name is Dr. Kleiner, and I'm a senior scientist working at the Black Mesa Research Facility. I've just finished compiling my code to analyse a large unknown sample we've come across. Unfortunately, there were 19 build errors and 42 warnings, but I've been told the experiment must go ahead. Time is critical: we've already got one of our newest employees suiting up as I type this to complete the experiment. I really need some help. Can you think of anything last-minute to stop a potential resonance cascade? Someone has hidden my glasses again... Anyway, I hope I never see a resonance cascade, and I definitely don't want to create one. It's my lunch break in 5 minutes, and my casserole is already in the microwave, ready. Please, give me some advice. If it helps, I wrote all of the code to analyse the sample and activate the sampler machine in BASIC. Edit: Oh god! They're everywhere! Send assista

    Read the article

  • Binding ListBox ItemCount to IValueConverter

    - by Ben
    Hi all, I am fairly new to WPF, so forgive me if I am missing something obvious. I'm having a problem where I have a collection of AggregatedLabels, and I am trying to bind the ItemCount of each AggregatedLabel to the FontSize in my DataTemplate, so that if the ItemCount of an AggregatedLabel is large, a larger font size will be displayed in my ListBox. The part that I am struggling with is the binding to the value converter. Can anyone assist? Many thanks!

    XAML snippet:

        <DataTemplate x:Key="TagsTemplate">
            <WrapPanel>
                <TextBlock Text="{Binding Name, Mode=Default}" TextWrapping="Wrap"
                           FontSize="{Binding ItemCount, Converter={StaticResource CountToFontSizeConverter}, Mode=Default}"
                           Foreground="#FF0D0AF7"/>
            </WrapPanel>
        </DataTemplate>
        <ListBox x:Name="tagsList"
                 ItemsSource="{Binding AggregatedLabels, Mode=Default}"
                 ItemTemplate="{StaticResource TagsTemplate}"
                 Style="{StaticResource tagsStyle}"
                 Margin="200,10,16.171,11.88" />

    AggregatedLabel collection:

        using (DB2DataReader dr = command.ExecuteReader())
        {
            while (dr.Read())
            {
                AggregatedLabelModel aggLabel = new AggregatedLabelModel();
                aggLabel.ID = Convert.ToInt32(dr["LABEL_ID"]);
                aggLabel.Name = dr["LABEL_NAME"].ToString();
                LabelData.Add(aggLabel);
            }
        }

    Converter:

        public class CountToFontSizeConverter : IValueConverter
        {
            #region IValueConverter Members

            public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                const int minFontSize = 6;
                const int maxFontSize = 38;
                const int increment = 3;
                int count = (int)value;

                if ((minFontSize + count + increment) < maxFontSize)
                {
                    return minFontSize + count + increment;
                }
                return maxFontSize;
            }

            public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                throw new NotImplementedException();
            }

            #endregion
        }

    AggregatedLabel class:

        public class AggregatedLabelModel
        {
            public int ID { get; set; }
            public string Name { get; set; }
        }

    CollectionView:

        ListCollectionView labelsView = new ListCollectionView(LabelData);
        labelsView.GroupDescriptions.Add(new PropertyGroupDescription("Name"));
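
    Two hedged guesses rather than a definitive diagnosis: AggregatedLabelModel as posted has no ItemCount property (ItemCount exists on CollectionViewGroup, which is what each item is when the ListBox is bound to the grouped view), so the binding path may simply fail to resolve; and FontSize is a double, so a converter that returns a boxed int can fail the binding's type check. A minimal sketch of the converter returning a double and guarding an unresolved path:

        using System;
        using System.Globalization;
        using System.Windows.Data;

        // Hedged variant of the posted converter. FontSize is a double and the
        // binding engine will not coerce a boxed int returned by a converter,
        // so this returns double. It also guards against the UnsetValue that
        // arrives when the ItemCount binding path fails to resolve.
        public class CountToFontSizeConverter : IValueConverter
        {
            private const double MinFontSize = 6;
            private const double MaxFontSize = 38;
            private const double Increment = 3;

            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                if (value is int count)
                    return Math.Min(MinFontSize + count + Increment, MaxFontSize);
                return MinFontSize; // binding path did not resolve to an int
            }

            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                throw new NotSupportedException();
            }
        }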

    Read the article

  • XSLT fails to load huge XML doc after matching certain elements

    - by krisvandenbergh
    I'm trying to match certain elements using XSLT. My input document is very large, and the source XML fails to load after processing the following code (consider especially the first line):

        <xsl:template match="XMI/XMI.content/Model_Management.Model/Foundation.Core.Namespace.ownedElement/Model_Management.Package/Foundation.Core.Namespace.ownedElement">
            <rdf:RDF>
                <rdf:Description rdf:about="">
                    <xsl:for-each select="Foundation.Core.Class">
                        <xsl:for-each select="Foundation.Core.ModelElement.name">
                            <owl:Class rdf:ID="@Foundation.Core.ModelElement.name" />
                        </xsl:for-each>
                    </xsl:for-each>
                </rdf:Description>
            </rdf:RDF>
        </xsl:template>

    Apparently the load fails after "Model_Management.Model". The PHP code is as follows:

        if ($xml->loadXML($source_xml) == false) {
            die('Failed to load source XML: ' . $http_file);
        }

    It fails on loadXML and immediately dies. I think there are two options now: 1) set a maximum execution time (frankly, I don't know how to do this for the built-in PHP 5 XSLT processor); 2) think about another way to match. What would be the best way to deal with this? The input document can be found at http://krisvandenbergh.be/uml_pricing.xml. Any help would be appreciated! Thanks.

    Read the article

  • How to make a big switch control structure with variable check values?

    - by mystify
    For example, I have a huge switch control structure with a few hundred checks. They form an animation sequence, numbered from 0 to n. Someone said I can't use variables with switch. What I need is something like:

        NSInteger step = 0;
        NSInteger i = 0;

        switch (step) {
            case i++:
                // do stuff
                break;
            case i++:
                // do stuff
                break;
            case i++:
                // do stuff
                break;
            case i++:
                // do stuff
                break;
        }

    The point of this is that the animation system calls a method containing this big switch structure, passing it a step number. I want to be able to simply cut, copy, and paste large blocks and put them in a different position inside the switch; for example, move the first 50 blocks to the end. I could do that easily with a huge if-else structure, but it would look ugly, and something tells me switch is much faster. How can I do this?
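
    A hedged sketch (in C# rather than Objective-C, purely for illustration) of the usual alternative: case labels must be compile-time constants in C-family languages, but an ordered table of delegates keeps the O(1) dispatch of a switch while making blocks trivially reorderable, since moving a step is just moving an array entry. The step bodies here are placeholders, not from the original post:

        using System;

        // Hedged sketch: the switch replaced by an ordered table of steps.
        // Reordering animation blocks is just reordering array entries, and
        // dispatch stays O(1), like a switch jump table.
        static class AnimationSequence
        {
            static readonly Action[] Steps =
            {
                () => Console.WriteLine("step 0: fade in"),
                () => Console.WriteLine("step 1: slide left"),
                () => Console.WriteLine("step 2: slide right"),
                // ...hundreds more entries in the real sequence
            };

            public static void Run(int step)
            {
                if (step >= 0 && step < Steps.Length)
                    Steps[step]();
            }

            static void Main() => Run(1); // prints "step 1: slide left"
        }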

    Read the article

  • Quickly determine if a number is prime in Python for numbers < 1 billion

    - by Frór
    Hi, my current algorithm to check the primality of numbers in Python is way too slow for numbers between 10 million and 1 billion. I want it improved, knowing that I will never get numbers bigger than 1 billion. The context is that I can't get an implementation quick enough to solve problem 60 of Project Euler: I'm getting the answer to the problem in 75 seconds where I need it in 60 seconds. http://projecteuler.net/index.php?section=problems&id=60 I have very little memory at my disposal, so I can't store all the prime numbers below 1 billion. I'm currently using the standard trial division tuned with 6k±1. Is there anything better than this? Do I already need to use the Miller-Rabin method for numbers that are this large?

        primes_under_100 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43,
                            47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97]

        def isprime(n):
            if n <= 100:
                return n in primes_under_100
            if n % 2 == 0 or n % 3 == 0:
                return False
            for f in range(5, int(n ** .5) + 1, 6):  # +1 so perfect squares are caught
                if n % f == 0 or n % (f + 2) == 0:
                    return False
            return True

    How can I improve this algorithm?
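
    For reference: trial division costs O(sqrt n) divisions per call, while Miller-Rabin costs O(log n) multiplications, and below 3,215,031,751 the witness set {2, 3, 5, 7} is known to be deterministic (no composite passes all four), which more than covers everything under 1 billion. A hedged sketch in C# (chosen for consistency with the other examples on this page; the structure ports directly to Python):

        using System;

        // Deterministic Miller-Rabin for n < 3,215,031,751. Since n fits in
        // 32 bits, every x * x and result * b product fits in a ulong, so no
        // big-integer arithmetic is needed.
        static class Primality
        {
            public static bool IsPrime(ulong n)
            {
                if (n < 2) return false;
                foreach (ulong p in new ulong[] { 2, 3, 5, 7 })
                {
                    if (n == p) return true;
                    if (n % p == 0) return false;
                }
                // write n - 1 = d * 2^s with d odd
                ulong d = n - 1;
                int s = 0;
                while ((d & 1) == 0) { d >>= 1; s++; }
                foreach (ulong a in new ulong[] { 2, 3, 5, 7 })
                {
                    ulong x = PowMod(a, d, n);
                    if (x == 1 || x == n - 1) continue;
                    bool composite = true;
                    for (int r = 1; r < s; r++)
                    {
                        x = x * x % n; // safe: n < 2^32, so x * x < 2^64
                        if (x == n - 1) { composite = false; break; }
                    }
                    if (composite) return false;
                }
                return true;
            }

            static ulong PowMod(ulong b, ulong e, ulong m)
            {
                ulong result = 1;
                b %= m;
                while (e > 0)
                {
                    if ((e & 1) == 1) result = result * b % m;
                    b = b * b % m;
                    e >>= 1;
                }
                return result;
            }

            static void Main()
            {
                Console.WriteLine(IsPrime(999999937)); // True: a prime just under 1e9
            }
        }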

    Read the article

  • How do I pass a lot of parameters to views in Django?

    - by Mark
    I'm very new to Django and I'm trying to build an application to present my data in tables and charts. Until now my learning process has gone very smoothly, but now I'm a bit stuck. My page view retrieves large amounts of data from a database and puts it in the context. The template then generates different HTML tables. So far so good. Now I want to add different charts to the template. I manage to do this by defining <img src="..." /> tags. The Matplotlib chart is generated in my chart view and returned via:

        response = HttpResponse(content_type='image/png')
        canvas.print_png(response)
        return response

    Now I have different questions:

    1) The data is retrieved twice from the database: once in the page view to render the tables, and again in the chart view to make the charts. What is the best way to pass the data, already in the context of the page, to the chart view?

    2) I need a lot of charts, each with different datasets. I could make a chart view for each chart, but there is probably a better way. How do I pass the different dataset names to the chart view? Some charts have 20 datasets, so I don't think passing these dataset parameters via the URL (like <img src="chart/dataset1/dataset2/.../dataset20/chart.png" />) is the right way.

    Any advice?

    Read the article

  • archiving strategies and limitations of data in a table

    - by Samuel
    Environment: JBoss, MySQL, JPA, Hibernate. Our web application will be catering to a large number of users (~1,000,000), and there are lots of child tables where user-specific data is stored (e.g. personal details, health records, forum contributions, ...). What would be the best practice for archiving users and user-specific information?

    [a] Would it be wise to move the archived user and user-specific information to respective archive tables within the same database (e.g. user_archive, user_forum_comments_archive, ...), OR

    [b] Would you just mark the entries with a flag in the original table(s) and query only non-archived entries?

    We have a unique constraint on User.loginid. How do you handle this requirement if users are archived via option [a]? That is, if a user with loginid 'samuel' gets moved into the archive table and a new user gets added with the same name in the original table, how would you prevent this? What would be the best strategy to address the unique key constraints? We also have a requirement to selectively archive records and bring them back if necessary. Would you rely on database tools, or would you handle this via the persistence APIs exposed by the JPA entity model?

    Read the article

  • Use a vector to index a matrix without linear index

    - by David_G
    G'day, I'm trying to find a way to use a vector of [x,y] points to index into a large matrix in MATLAB. Usually, I would convert the subscript points to the linear index of the matrix (see, e.g., "Use a vector as an index to a matrix in MATLAB"). However, the matrix is 4-dimensional, and I want to take all of the elements of the 3rd and 4th dimensions that have the same 1st and 2nd dimension. Let me hopefully demonstrate with an example:

        Matrix = nan(4,4,2,2); % where the dimensions are (x,y,depth,time)
        Matrix(1,2,:,:) = 999; % note that this value could change in depth (3rd dim) and time (4th dim)
        Matrix(3,4,:,:) = 888; % note that this value could change in depth (3rd dim) and time (4th dim)
        Matrix(4,4,:,:) = 124;

    Now, I want to be able to index with the subscripts (1,2) and (3,4), etc., and return not only the 999 and 888 which exist in Matrix(:,:,1,1) but also the contents at Matrix(:,:,1,2), Matrix(:,:,2,1) and Matrix(:,:,2,2), and so on. (In real life, the dimensions of Matrix might be more like size(Matrix) = (300 250 30 200).) I don't want to use linear indices because I would like the results in a similar vector fashion. For example, I would like a result which is something like:

        ans(time=1)
        999 888 124
        999 888 124

        ans(time=2)
        etc etc etc
        etc etc etc

    I'd also like to add that due to the size of the matrix I'm dealing with, speed is an issue here; thus why I'd like to use subscript indices to access the data. I should also mention that (unlike in the question "Accessing values using subscripts without using sub2ind"), since I want all the information stored in the extra dimensions 3 and 4 of the i-th and j-th indices, I don't think a slightly faster version of sub2ind would cut it.

    Read the article

  • Comet, responseText and memory usage

    - by ithcy
    Is there a way to clear out the responseText of an XHR object without destroying the XHR object?

    I need to keep a persistent connection open to a web server to feed live data to a browser. The problem is, there is a relatively large amount of data coming through (several hundred K per second constantly), so memory usage is a big problem, because this connection must remain open for at least several minutes. responseText gets very big very quickly, even though the JSON I send back has been crunched as small as it can get.

    Due to the way the server-side app works, if I use AJAX-style short polling and just destroy the XHR object when I'm done with it, I miss significant amounts of important data even in the few milliseconds it takes to parse the response, create a new XHR and send it out. I do not have the option to use overlapping requests, as the web server only accepts one connection at a time. (Don't ask.) So Comet is exactly the model I need.

    What I would like to do is parse each JSON chunk as it comes back from the server, and then clear out responseText so that I can keep using the same connection. However, responseText is read-only. It cannot be directly emptied by any method I have found.

    Is there a part of the picture I am missing here? Does anyone know any tricks I can use to free up responseText when I'm done reading it? Or is there another place the server responses can go? I am not including code because this is really almost a code-agnostic question. The Javascript routines that spawn the XHRs and handle the returned data are very, very simple.

    Read the article

  • iPhone Multithreaded Search

    - by Kulpreet
    I'm sort of new to any sort of multithreading and simply can't seem to get a simple search method working properly on a background thread. Everything seems to be in order, with an NSAutoreleasePool and the UI being updated on the main thread. The app doesn't crash and does perform the search in the background, but the search results contain several of the same items repeated, depending on how fast I type. The search works properly without the multithreading (which is commented out), but is very slow because of the large amount of data I am working with. Here's the code:

        - (void)filterContentForSearchText:(NSString*)searchText {
            isSearching = YES;
            NSAutoreleasePool *apool = [[NSAutoreleasePool alloc] init];

            /* Update the filtered array based on the search text and scope. */
            //[self.filteredListContent removeAllObjects]; // First clear the filtered array.

            for (Entry *entry in appDelegate.entries) {
                NSComparisonResult result = [entry.gurmukhiEntry compare:searchText
                                                                 options:(NSCaseInsensitiveSearch|NSDiacriticInsensitiveSearch)
                                                                   range:NSMakeRange(0, [searchText length])];
                if (result == NSOrderedSame) {
                    [self.filteredListContent addObject:entry];
                }
            }

            [self.searchDisplayController.searchResultsTableView performSelectorOnMainThread:(@selector(reloadData)) withObject:nil waitUntilDone:NO];
            //[self.searchDisplayController.searchResultsTableView reloadData];

            [apool drain];
            isSearching = NO;
        }

        - (BOOL)searchDisplayController:(UISearchDisplayController *)controller shouldReloadTableForSearchString:(NSString *)searchString {
            if (!isSearching) {
                [self.filteredListContent removeAllObjects]; // First clear the filtered array.
                [self performSelectorInBackground:(@selector(filterContentForSearchText:)) withObject:searchString];
            }
            //[self filterContentForSearchText:searchString];
            return NO; // Return YES to cause the search result table view to be reloaded.
        }

    Read the article

  • x86 .NET application with System.OutOfMemoryException

    - by Allen
    Hi guys, I got an OutOfMemoryException after the app had been running for 1 day. The app uses 1.5 GB of memory in total, all consumed by the managed heap: gen 2 used 200 MB and the LOH used 1.3 GB. However, the weird thing is that 900 MB of that space is Free. From the perf counters I saw that a number of gen 2 GC collections had happened, so why can't the GC collect those 900 MB of free space in gen 2 and the LOH? I really appreciate your help. The following info is from WinDbg:

        0:000> !eeheap -gc
        Number of GC Heaps: 1
        generation 0 starts at 0x183153f0
        generation 1 starts at 0x182aa834
        generation 2 starts at 0x02131000
        ephemeral segment allocation context: none
        segment    begin     allocated  size
        02130000   02131000  0312f284   0xffe284(16769668)
        07750000   07751000  0874fc5c   0xffec5c(16772188)
        09e30000   09e31000  0ae2fc2c   0xffec2c(16772140)
        0b230000   0b231000  0c22ffec   0xffefec(16773100)
        0c230000   0c231000  0d22f6f0   0xffe6f0(16770800)
        0d230000   0d231000  0e22ea10   0xffda10(16767504)
        0e230000   0e231000  0f22c1c4   0xffb1c4(16757188)
        10390000   10391000  1138ddf4   0xffcdf4(16764404)
        154e0000   154e1000  164da90c   0xff990c(16750860)
        34aa0000   34aa1000  35a9dbfc   0xffcbfc(16763900)
        7aca0000   7aca1000  7bc9edfc   0xffddfc(16768508)
        49760000   49761000  4a75ef64   0xffdf64(16768868)
        7bca0000   7bca1000  7cc99bac   0xff8bac(16747436)
        17a70000   17a71000  183313fc   0x8c03fc(9176060)
        Large object heap starts at 0x03131000
        segment    begin     allocated  size
        03130000   03131000  041250c8   0xff40c8(16728264)
        08920000   08921000  099102f8   0xfef2f8(16708344)
        ....
        ....
        4c760000   4c761000  4d71d578   0xfbc578(16500088)
        1bb10000   1bb11000  1ca110d0   0xf000d0(15728848)
        57760000   57761000  5862d7f8   0xecc7f8(15517688)
        Total Size:    Size: 0x5ab13450 (1521562704) bytes.
        ------------------------------
        GC Heap Size:  Size: 0x5ab13450 (1521562704) bytes.

        0:000> !dumpheap -stat
        total 0 objects
        Statistics:
              MT    Count  TotalSize Class Name
        73037c78        1         12 System.Configuration.GenericEnumConverter
        73036da0        1         12 System.Configuration.InfiniteIntConverter
        ....
        ....
        69161c3c    35025    6809420 System.Windows.EffectiveValueEntry[]
        69164748       54   12471072 MS.Internal.WeakEventTable+EventKey[]
        710e2228     9540  190389260 System.Byte[]
        710dd2b8  1317031  339257932 System.String
        0035a670     6427  902224056 Free
        Total 3615631 objects
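
    One hedged reading of these numbers: the 900 MB of Free space is very likely fragmentation, largely on the LOH, which the GC of that era never compacts. A collection can reclaim entirely empty segments, but it cannot merge free holes trapped between live large objects, so an allocation bigger than the largest single hole throws even though plenty of total free space exists. A toy C# sketch of the pattern (sizes illustrative; whether it actually throws depends on running as a 32-bit process and on runtime details):

        using System;
        using System.Collections.Generic;

        // Toy fragmentation sketch: interleave large allocations, free every
        // other one, then request a block bigger than any surviving hole.
        class LohFragmentation
        {
            static void Main()
            {
                var keep = new List<byte[]>();
                var drop = new List<byte[]>();
                for (int i = 0; i < 500; i++)
                {
                    drop.Add(new byte[1024 * 1024]); // 1 MB, lands on the LOH
                    keep.Add(new byte[90 * 1024]);   // >85,000 bytes: also LOH, pins the gaps
                }
                drop.Clear();
                GC.Collect(); // frees the 1 MB blocks; the holes remain, uncompacted
                try
                {
                    var big = new byte[600 * 1024 * 1024]; // needs one contiguous run
                    Console.WriteLine("allocated {0} MB", big.Length / (1024 * 1024));
                }
                catch (OutOfMemoryException)
                {
                    Console.WriteLine("OOM despite ~500 MB of free LOH space");
                }
            }
        }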

    Read the article

  • Have I taken a wrong path in programming by being excessively worried about code elegance and style?

    - by Ygam
    I am in a major slump right now. I am a BSIT graduate, but I only started actual programming less than a year ago. I have observed that I have the following attitude to programming:

    - I tend to be more of a purist, scorning inelegant approaches to solving problems with code
    - I tend to look at everything on a large scale, planning everything before I start coding, either in simple flowcharts or complex UML charts
    - I have a really strong impulse to refactor my code, even if I miss deadlines or prolong development times
    - I am obsessed with good directory structures; file naming conventions; and class, method, and variable naming conventions
    - I always want to study something new, even, as I said, at the cost of missing deadlines
    - I tend to see software development as something to engineer, to architect; that is, seeing how things relate to each other and how blocks of code can interact (I am a huge fan of loose coupling), i.e. the OOP way of thinking
    - I combine OOP and procedural coding whenever I see fit
    - I want my code to execute fast (thus the elegant approaches and refactoring)

    This bothers me because I see my colleagues doing much better the other way around (aside from the fact that they started programming in our first year of college). By the other way around I mean: they fire up their editors and get the job done much faster, because they don't have to consider how clean their code is or how elegant their algorithms are; they don't bother with OOP however big their projects are; they mostly use web APIs, piece them together and voila! Working code! Clients are happy, they get paid fast, at the expense of really unmaintainable or hard-to-read code that lacks structure and conventions, or slow execution of certain actions (the common reasoning against this being that internet connections are much faster these days and hardware is more powerful). The excuse I often receive is that clients don't care how you write the code, but they do care how soon you deliver it. If it works, then all is good.

    Now, might my "purist" approach to programming have been the wrong way to start? Should I just dump these purist concepts and code like hell, because, as I have seen, clients don't really care how beautifully coded it is?

    Read the article

  • How much should the AppDelegate do?

    - by Rudiger
    I'm designing quite a large app, and on startup it will create sessions with a few different servers. As these sessions are used across all parts of the app, I thought creating them would be best done in the AppDelegate. But the problem is that I need the session progress to be represented on the screen. I plan to have a UIToolbar at the bottom of the main menu which I don't want the progress bar to cover, though it may cover the UIView above it. So, the way I see it, I could do it a few different ways:

    1) Have the AppDelegate establish the sessions and report the progress to the main menu class so it can represent it in the progress bar (will I have any issues doing this if the sessions are created in a separate thread?);

    2) Have the AppDelegate display the main menu (a UIView with a bunch of buttons and a UIToolbar) and have it track and display the progress (I have never displayed anything in the AppDelegate, but I assume you can do this, though it's not recommended);

    3) Have the AppDelegate just push the main menu and have the mainMenu class create the sessions and display the progress bar;

    4) Create the sessions in a delegate class and have the delegate set to mainMenu rather than self (AppDelegate), although I've never used anything other than self, so I'm not sure if this will work, or if I will be able to close the thread (by calling super, maybe?) since it's running in the AppDelegate rather than the delegate of the class.

    As I've said before, the sessions are being created by a class in a separate thread so they won't lock the UI, and I think the first way is best. But am I going to have issues having it run in a separate thread, reporting back to the AppDelegate and then sending that message on to the mainMenu view? I hope that all makes sense; let me know if you need any further clarification. Any information is appreciated. Cheers,

    Read the article

  • Alternate widgets and logic for ManyToManyField with Django forms

    - by Jaearess
    In my Django project, I have a simple ticket system. When creating a ticket, certain users have the ability to assign the ticket to other users, and to email the ticket to other users as well (this is used as an FYI for those users, so they're aware of the ticket even though it's not assigned to them).

    At the moment, the form for adding a ticket is simply the default Django form, with the "assigned_to" and "email_to" fields being ManyToManyFields, and therefore displayed as MultipleSelect widgets, each with a list of all users. Due to the relatively large number of users, and the general awkwardness of the MultipleSelect widget, an alternate layout is now required. The desired layout is a pair of simple Select widgets side by side. The first has the option of "Assign to" or "Email to", and the second is a list of the users. Essentially, like this: [Assign to] [John Doe], [Email to] [Jane Roe], [Jack Smith], etc.

    Of course, since an arbitrary number of users can be assigned or emailed a ticket, there's a simple button that runs some Javascript to add another set of widgets, to allow the user to assign and email as many people as they need to. So far all of that is fairly simple and straightforward. However, the problem I have is using this widget setup and logic with Django forms. Instead of lists of users to assign to and email, we're getting back pairs of information: one a user, and the other which list that user should be placed in.

    What I'm looking for, but have yet to find, is a way to offload the translation between how the user uses the form and how Django understands the model to the form itself, so I don't have to manually process the data before passing it to the form in each place this form is used. Additionally, there's a review screen with the option to go back and change the form before submitting it, so a way to have the form translate both to and from this format would be extremely helpful.

    Read the article

  • Fast, very lightweight algorithm for camera motion detection?

    - by Ertebolle
    I'm working on an augmented reality app for iPhone that involves a very processor-intensive object recognition algorithm (pushing the CPU at 100% it can get through maybe 5 frames per second), and in an effort to both save battery power and make the whole thing less "jittery" I'm trying to come up with a way to only run that object recognizer when the user is actually moving the camera around.

    My first thought was to simply use the iPhone's accelerometers / gyroscope, but in testing I found that very often people would move the iPhone at a consistent enough attitude and velocity that there wouldn't be any way to tell that it was still in motion.

    So that left the option of analyzing the actual video feed and detecting movement in that. I got OpenCV working and tried running their pyramidal Lucas-Kanade optical flow algorithm, which works well but seems to be almost as processor-intensive as my object recognizer. I can get it to an acceptable framerate if I lower the depth levels / downsample the image / track fewer points, but then accuracy suffers and it starts to miss some large movements and trigger on small hand-shaking-y ones.

    So my question is, is there another optical flow algorithm that's faster than Lucas-Kanade if I just want to detect the overall magnitude of camera movement? I don't need to track individual objects, I don't even need to know which direction the camera is moving, all I really need is a way to feed something two frames of video and have it tell me how far apart they are.
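
    As a hedged aside (sketched in C# for consistency with the other examples on this page): if only the overall magnitude of camera motion is needed, plain frame differencing - the mean absolute difference between two downsampled grayscale frames - is drastically cheaper than any optical-flow method, and is often enough to gate a heavier recognizer. The threshold for "moving" is scene-dependent and left to the caller:

        using System;

        // Mean absolute difference between two grayscale frames of equal size.
        // Sampling every 4th pixel in each direction cuts cost 16x with little
        // accuracy loss for "is the camera moving at all" purposes.
        static class MotionGate
        {
            public static double Difference(byte[] prev, byte[] curr,
                                            int width, int height, int step = 4)
            {
                long sum = 0, samples = 0;
                for (int y = 0; y < height; y += step)
                    for (int x = 0; x < width; x += step)
                    {
                        int i = y * width + x;
                        sum += Math.Abs(prev[i] - curr[i]);
                        samples++;
                    }
                return (double)sum / samples; // 0 = identical, 255 = maximal change
            }

            static void Main()
            {
                var a = new byte[64 * 64];
                var b = new byte[64 * 64];
                b[0] = 200; // pretend one sampled pixel changed
                Console.WriteLine(Difference(a, b, 64, 64)); // small nonzero value
            }
        }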

    Read the article

  • Is this an OleDbDataAdapter bug?

    - by ????
    It doesn't look to me like OleDbDataAdapter should throw an exception on trying to fill a DataSet from a db table column of type decimal(28,3). The message is "The numerical value is too large to fit into a 96 bit decimal". Could you check this? I have no significant experience with ADO.NET and OLE DB components. The VB.NET code we have in the application is this:

        Dim dbDataSet As New DataSet
        Dim dbDataAdapter As OleDbDataAdapter
        Dim dbCommand As OleDbCommand
        Dim conn As OleDbConnection
        Dim connectionString As String

        'parts where connectionString is set
        conn = New OleDbConnection(connectionString)

        'part where sqlQuery is set, but it ends up being "SELECT Price As 'Price' From PricebookView" - Price is of type decimal(28,3)
        dbCommand = New OleDbCommand(sqlQuery, conn)
        dbCommand.CommandTimeout = cmdTimeout
        dbDataAdapter = New OleDbDataAdapter(dbCommand)
        dbDataAdapter.Fill(dbDataSet)

    The last line is where the exception is thrown, and the top of the stack trace is:

        at System.Data.ProviderBase.DbBuffer.ReadNumeric(Int32 offset)
        at System.Data.OleDb.ColumnBinding.Value_NUMERIC()
        at System.Data.OleDb.ColumnBinding.Value()
        at System.Data.OleDb.OleDbDataReader.GetValues(Object[] values)
        at System.Data.ProviderBase.DataReaderContainer.CommonLanguageSubsetDataReader.GetValues(Object[] values)
        at System.Data.ProviderBase.SchemaMapping.LoadDataRow()
        at System.Data.Common.DataAdapter.FillLoadDataRow(SchemaMapping mapping)
        at System.Data.Common.DataAdapter.FillFromReader(DataSet dataset, DataTable datatable, String srcTable, DataReaderContainer dataReader, Int32 startRecord, Int32 maxRecords, DataColumn parentChapterColumn, Object parentChapterValue)
        at System.Data.Common.DataAdapter.Fill(DataSet dataSet, String srcTable, IDataReader dataReader, Int32 startRecord, Int32 maxRecords)
        at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
        at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
        at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet)
        ...

    I am not sure why it tries to read the value via ReadNumeric(Int32 offset). Thank you for your time!
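
    Not a diagnosis, but a workaround sketch that sidesteps the OLE DB numeric marshalling entirely: cast the column to a string on the server and parse it client-side. Sketched in C# for consistency with the other examples here; the connection string is a placeholder, and the CAST syntax assumes the backing database supports it:

        using System;
        using System.Data.OleDb;
        using System.Globalization;

        // Workaround sketch (an assumption, not a fix for the underlying issue):
        // fetch the decimal(28,3) column as text so the OLE DB layer never
        // converts it into a 96-bit decimal itself, then parse client-side.
        class PricebookReader
        {
            static void Main()
            {
                string connectionString = "..."; // placeholder
                using (var conn = new OleDbConnection(connectionString))
                using (var cmd = new OleDbCommand(
                    "SELECT CAST(Price AS varchar(32)) AS Price FROM PricebookView", conn))
                {
                    conn.Open();
                    using (OleDbDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            decimal price = decimal.Parse(
                                reader.GetString(0), CultureInfo.InvariantCulture);
                            Console.WriteLine(price);
                        }
                    }
                }
            }
        }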

    Read the article

  • I need to implement C# deep copy constructors with inheritance. What patterns are there to choose from?

    - by Tony Lambert
    I wish to implement a deep copy of my class hierarchy in C#:

        public class ParentObj : ICloneable
        {
            protected int myA;

            public virtual object Clone()
            {
                ParentObj newObj = new ParentObj();
                newObj.myA = this.myA;
                return newObj;
            }
        }

        public class ChildObj : ParentObj
        {
            protected int myB;

            public override object Clone()
            {
                ParentObj newObj = (ParentObj)base.Clone();
                newObj.myB = this.myB; // broken: base.Clone() allocated a ParentObj
                return newObj;
            }
        }

    This will not work, as when cloning the Child only a Parent is new-ed. In my code some classes have large hierarchies. What is the recommended way of doing this? Cloning everything at each level without calling the base class seems wrong. There must be some neat solutions to this problem; what are they? Can I thank everyone for their answers. It was really interesting to see some of the approaches. I think it would be good if someone gave an example of a reflection answer for completeness. +1 awaiting!
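
    A hedged sketch of the pattern most answers to this question converge on - protected copy constructors: each class copies only its own fields and chains to its base, so only the most-derived Clone allocates and no level re-implements another's copying. (MemberwiseClone is the other common route, but it produces shallow copies of reference-typed fields.)

        using System;

        public class ParentObj : ICloneable
        {
            protected int myA;

            public ParentObj() { }

            // Each level copies only its own fields, then chains to its base.
            protected ParentObj(ParentObj other)
            {
                myA = other.myA;
            }

            public virtual object Clone() => new ParentObj(this);
        }

        public class ChildObj : ParentObj
        {
            protected int myB;

            public ChildObj() { }

            protected ChildObj(ChildObj other) : base(other)
            {
                myB = other.myB;
            }

            // Only the allocation is repeated per class; the field copying is not.
            public override object Clone() => new ChildObj(this);
        }

        class Demo
        {
            static void Main()
            {
                ChildObj original = new ChildObj();
                ChildObj copy = (ChildObj)original.Clone(); // a true ChildObj this time
                Console.WriteLine(copy.GetType());          // prints "ChildObj"
            }
        }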

    Read the article

  • How can I create objects based on dump file memory in a WinDbg extension?

    - by pj4533
    I work on a large application, and frequently use WinDbg to diagnose issues based on a DMP file from a customer. I have written a few small extensions for WinDbg that have proved very useful for pulling bits of information out of DMP files. In my extension code I find myself dereferencing C++ class objects in the same way, over and over, by hand. For example:

        Address = GetExpression("somemodule!somesymbol");
        ReadMemory(Address, &addressOfPtr, sizeof(addressOfPtr), &cb);

        // get the actual address
        ReadMemory(addressOfObj, &addressOfObj, sizeof(addressOfObj), &cb);

        ULONG offset;
        ULONG addressOfField;
        GetFieldOffset("somemodule!somesymbolclass", "somefield", &offset);
        ReadMemory(addressOfObj + offset, &addressOfField, sizeof(addressOfField), &cb);

    That works well, but as I have written more extensions with greater functionality (accessing more complicated objects in our application's DMP files), I have longed for a better solution. I have access to the source of our own application, of course, so I figure there should be a way to copy an object out of a DMP file and use that memory to create an actual object in the debugger extension that I can call functions on (by linking in DLLs from our application). This would save me the trouble of pulling things out of the DMP by hand. Is this even possible? I tried obvious things like creating a new object in the extension, then overwriting it with a big ReadMemory directly from the DMP file. This seemed to put the data in the right fields, but it freaked out when I tried to call a function. I figure I am missing something... maybe C++ pulls some vtable funkiness that I don't know about? My code looks similar to this:

        SomeClass* thisClass = SomeClass::New();
        ReadMemory(addressOfObj, &(*thisClass), sizeof(*thisClass), &cb);

    Read the article

  • Get node content from xml file and transform it into php array?

    - by Kirzilla
    Hello, I'm looking for a solution to extract a node from a large XML file (using xmlstarlet, http://xmlstar.sourceforge.net/) and then parse this node into a PHP array.

    elements.xml:

        <?xml version="1.0"?>
        <elements>
            <element id="1" par1="val1_1" par2="val1_2" par3="val1_3">
                <title>element 1 title</title>
                <description>element 1 description</description>
            </element>
            <element id="2" par1="val2_1" par2="val2_2" par3="val2_3">
                <title>element 2 title</title>
                <description>element 2 description</description>
            </element>
        </elements>

    To extract the element tag with id="1" using xmlstarlet, I'm executing this shell command:

        xmlstarlet sel -t -c "/elements/element[@id=1]" elements.xml

    This shell command outputs something like this:

        <element id="1" par1="val1_1" par2="val1_2" par3="val1_3">
            <title>element 1 title</title>
            <description>element 1 description</description>
        </element>

    How could I parse this shell output into a PHP array? Thank you.

    Read the article

  • Best way to split a string by word (SQL Batch separator)

    - by Paul Kohler
    I have a class I use to "split" a string of SQL commands by a batch separator - e.g. "GO" - into a list of SQL commands that are run in turn, etc.

        private static IEnumerable<string> SplitByBatchIndecator(string script, string batchIndicator)
        {
            string pattern = string.Concat("^\\s*", batchIndicator, "\\s*$");
            RegexOptions options = RegexOptions.Compiled | RegexOptions.IgnoreCase | RegexOptions.Multiline;
            foreach (string batch in Regex.Split(script, pattern, options))
            {
                yield return batch.Trim();
            }
        }

    My current implementation uses a Regex with yield, but I am not sure if it's the "best" way. It should:

    - be quick;
    - handle large strings (I have some scripts that are 10 MB in size, for example);
    - take quoted text into account (the hardest part, which the above code currently does not do).

    Currently the following SQL will incorrectly get split:

        var batch = QueryBatch.Parse(@"-- issue...
        insert into table (name, desc) values('foo', 'if the go is on a line by itself we have a problem...')");
        Assert.That(batch.Queries.Count, Is.EqualTo(1), "This fails for now...");

    I have thought about a token-based parser that tracks the state of open/closed quotes, but I'm not sure Regex will do it. Any ideas!?
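
    A hedged sketch of the token-based approach the question ends on: a single pass that tracks whether the cursor is inside a '...' literal (a doubled '' toggles twice and cancels out) and only honours a line consisting of nothing but the separator when outside a literal. It deliberately ignores -- and /* */ comments, which a production splitter would also need to track:

        using System;
        using System.Collections.Generic;
        using System.Text;

        // Minimal quote-aware batch splitter sketch. A GO that appears inside
        // a string literal (even one spanning lines) is kept as text.
        static class BatchSplitter
        {
            public static IEnumerable<string> Split(string script, string separator = "GO")
            {
                var batches = new List<string>();
                var current = new StringBuilder();
                bool inString = false;

                foreach (string rawLine in script.Split('\n'))
                {
                    string line = rawLine.TrimEnd('\r');

                    // A separator only counts when not inside a literal and the
                    // line contains nothing but the separator word.
                    if (!inString &&
                        line.Trim().Equals(separator, StringComparison.OrdinalIgnoreCase))
                    {
                        if (current.Length > 0)
                        {
                            batches.Add(current.ToString().Trim());
                            current.Clear();
                        }
                        continue;
                    }

                    // Update string state: each quote character flips it, so a
                    // doubled '' escape inside a literal flips twice and cancels.
                    foreach (char c in line)
                        if (c == '\'') inString = !inString;

                    current.AppendLine(line);
                }

                if (current.Length > 0)
                    batches.Add(current.ToString().Trim());
                return batches;
            }
        }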

    Read the article

  • Converting currencies via intermediate currencies.

    - by chillitom
        class FxRate
        {
            string Base { get; set; }
            string Target { get; set; }
            double Rate { get; set; }
        }

        private IList<FxRate> rates = new List<FxRate>
        {
            new FxRate {Base = "EUR", Target = "USD", Rate = 1.3668},
            new FxRate {Base = "GBP", Target = "USD", Rate = 1.5039},
            new FxRate {Base = "USD", Target = "CHF", Rate = 1.0694},
            new FxRate {Base = "CHF", Target = "SEK", Rate = 8.12}
            // ...
        };

    Given a large yet incomplete list of exchange rates, where all currencies appear at least once (either as a target or a base currency): what algorithm would I use to derive rates for exchanges that aren't directly listed? I'm looking for a general-purpose algorithm of the form:

        public double Rate(string baseCode, string targetCode, double currency)
        {
            return ...
        }

    In the example above, a derived rate would be GBP-CHF or EUR-SEK (which would require using the conversions for EUR-USD, USD-CHF, CHF-SEK). While I know how to do the conversions by hand, I'm looking for a tidy way (perhaps using LINQ) to perform these derived conversions, possibly involving multiple currency hops. What's the nicest way to go about this?
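
    A hedged sketch of the standard approach: treat currencies as nodes in a graph, each listed rate as an edge (with 1/rate on the reverse edge), and breadth-first search from base to target, multiplying rates along the path; BFS also happens to find the fewest-hop chain. The FxRate shape mirrors the posted class; everything else is an assumption:

        using System;
        using System.Collections.Generic;

        class FxRate
        {
            public string Base { get; set; }
            public string Target { get; set; }
            public double Rate { get; set; }
        }

        static class FxGraph
        {
            // BFS over the conversion graph; throws if no chain of listed
            // rates connects the two currencies.
            public static double Rate(IList<FxRate> rates, string from, string to)
            {
                // adjacency list with implicit inverse edges (1/rate)
                var edges = new Dictionary<string, List<(string To, double Rate)>>();
                void Add(string a, string b, double r)
                {
                    if (!edges.TryGetValue(a, out var list))
                        edges[a] = list = new List<(string To, double Rate)>();
                    list.Add((b, r));
                }
                foreach (var r in rates)
                {
                    Add(r.Base, r.Target, r.Rate);
                    Add(r.Target, r.Base, 1.0 / r.Rate);
                }

                var queue = new Queue<(string Code, double Acc)>();
                var seen = new HashSet<string> { from };
                queue.Enqueue((from, 1.0));
                while (queue.Count > 0)
                {
                    var (code, acc) = queue.Dequeue();
                    if (code == to) return acc;
                    if (!edges.TryGetValue(code, out var neighbors)) continue;
                    foreach (var (next, rate) in neighbors)
                        if (seen.Add(next))
                            queue.Enqueue((next, acc * rate));
                }
                throw new InvalidOperationException("No conversion path from " + from + " to " + to);
            }

            static void Main()
            {
                var rates = new List<FxRate>
                {
                    new FxRate { Base = "EUR", Target = "USD", Rate = 1.3668 },
                    new FxRate { Base = "GBP", Target = "USD", Rate = 1.5039 },
                    new FxRate { Base = "USD", Target = "CHF", Rate = 1.0694 },
                    new FxRate { Base = "CHF", Target = "SEK", Rate = 8.12 },
                };
                // GBP -> USD -> CHF: 1.5039 * 1.0694, roughly 1.6083
                Console.WriteLine(Rate(rates, "GBP", "CHF"));
            }
        }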

    Read the article

  • Applying for .net jobs as a "self learner"

    - by DeanMc
    Hi all, I have recently started applying for .NET jobs. I currently work in a sales role with a large telco. I found out quite late that I like programming, and as such I've bought my house and made commitments that mean college is not an option. What I would like to know is: is it harder to get a junior job as a self-learner? I have gotten a few enquiries regarding my C.V. but nothing concrete yet. I try to be involved in projects as I get the chance, and tend to put up any worthwhile projects as I develop them. Some examples of my work are:

    - A XAML lexer and parser: http://www.xlight.mendhak.com
    - A font obfuscation tool: http://www.silverlightforums.com/showthread.php?1516-Font-Obsfucation-Tool-ALPHA
    - A tagger for m4a: http://projectaudiophile.codeplex.com/SourceControl/list/changesets

    I, of course, think that these are great examples of my work, but that is my opinion based on self-learning. The other query is: how much should I actually know? I've never used linked lists, but I know that strings are immutable and I understand what that means. I am only touching on T-SQL, but I understand things like how properties function in IL (as two standard methods :) ). I suppose I understand a lot of concepts, but specific features need some looking up to implement, as I may not know the syntax off the top of my head.

    Read the article

  • Get tag name and value of a given node using XMLReader, DOM, Xpath

    - by rossjha
    I need to query an XML document and then display specific tag values, e.g. forename, surname, group (dept), job_title. I'm using XMLReader as I may need to work with large XML files. I'm using DOMXPath to filter the data, but I don't know how to retrieve the node name and value for each element. The code below only returns 'member' as the node name. Any help would be appreciated.

        <?php
        $reader = new XMLReader();
        $reader->open('include/staff.xml');
        while ($reader->read()) {
            switch ($reader->nodeType) {
                case (XMLREADER::ELEMENT):
                    if ($reader->localName === 'staff') {
                        $node = $reader->expand();
                        $dom = new DomDocument();
                        $dom->formatOutput = true;
                        $n = $dom->importNode($node, true);
                        $dom->appendChild($n);
                        $xp = new DomXpath($dom);
                        $res = $xp->query("/staff/member[groups='HR']");
                    }
            }
        }
        echo $res->item(0)->nodeName;
        echo $res->item(0)->nodeValue;
        ?>

    Read the article

  • Can parser combinators be made efficient?

    - by Jon Harrop
    Around 6 years ago, I benchmarked my own parser combinators in OCaml and found that they were ~5× slower than the parser generators on offer at the time. I recently revisited this subject and benchmarked Haskell's Parsec vs a simple hand-rolled precedence-climbing parser written in F#, and was surprised to find the F# to be 25× faster than the Haskell. Here's the Haskell code I used to read a large mathematical expression from file, parse and evaluate it:

        import Control.Applicative
        import Text.Parsec hiding ((<|>))

        expr = chainl1 term ((+) <$ char '+' <|> (-) <$ char '-')
        term = chainl1 fact ((*) <$ char '*' <|> div <$ char '/')
        fact = read <$> many1 digit <|> char '(' *> expr <* char ')'

        eval :: String -> Int
        eval = either (error . show) id . parse expr "" . filter (/= ' ')

        main :: IO ()
        main = do
          file <- readFile "expr"
          putStr $ show $ eval file
          putStr "\n"

    and here's my self-contained precedence-climbing parser in F#:

        let rec (|Expr|) (P(f, xs)) = Expr(loop (' ', f, xs))
        and loop = function
          | ' ' as oop, f, ('+' | '-' as op)::P(g, xs)
          | (' ' | '+' | '-' as oop), f, ('*' | '/' as op)::P(g, xs) ->
              let h, xs = loop (op, g, xs)
              let op = match op with
                       | '+' -> (+)
                       | '-' -> (-)
                       | '*' -> (*)
                       | '/' -> (/)
              loop (oop, op f h, xs)
          | _, f, xs -> f, xs
        and (|P|) = function
          | '('::Expr(f, ')'::xs) -> P(f, xs)
          | c::xs when '0' <= c && c <= '9' -> P(int(string c), xs)

    My impression is that even state-of-the-art parser combinators waste a lot of time backtracking. Is that correct? If so, is it possible to write parser combinators that generate state machines to obtain competitive performance, or is it necessary to use code generation?

    Read the article

  • How do I use the 7-zip LZMA SDK 9.x to self-extract?

    - by Christopher
    I am writing an SFX for an installer. I have a number of good reasons for doing this, primarily:

    - The installer is actually a large Python program which uses plugins. Using py2exe or pyinstaller makes doing plugins annoyingly complicated.
    - I want to be able to pass command-line options directly to the Python installer script, as if it were being run directly.
    - Using the existing 7-zip SFX modules is clunky because I cannot pass command-line options directly into the processes I want to start.
    - I need more flexibility than any of the existing SFX modules I have seen provide.

    I have already tried using the SDK to open the file, seek to the 7z archive signature, and run the decompression from there. That fails because the SzArEx_Open() call appears to assume that you are starting at a 0 offset in the file. I am using the File_Seek() call to perform the seeking. It seems like there must be a way to do this, since the 7z archive format itself supports multiple embedded streams. Any pointers to examples would be awesome, but a narrative explanation is also quite welcome!

    Read the article
