Search Results

Search found 4596 results on 184 pages for 'moderately interesting'.


  • Calling Save on a Configuration obj with a custom sectiongroup of type NameValueSectionHandler

    - by Allen
    I've got a Configuration object that I can easily read and write settings from and call Save(ConfigurationSaveMode.Minimal, true) on, and that typically works just fine. However, I now have a config file that has a custom sectionGroup defined of type System.Configuration.NameValueSectionHandler (I didn't write that part and I can't change it). Now when I call the Save method, it throws a TypeLoadException.

    Message: Type System.Configuration.NameValueSectionHandler does not inherit from System.Configuration.ConfigurationSectionGroup.

    Callstack:
      at System.Configuration.TypeUtil.VerifyAssignableType (Type baseType, Type type, Boolean throwOnError)
      at System.Configuration.TypeUtil.GetConstructorWithReflectionPermission (Type type, Type baseType, Boolean throwOnError)
      at System.Configuration.MgmtConfigurationRecord.CreateSectionGroupFactory (FactoryRecord factoryRecord)
      at System.Configuration.MgmtConfigurationRecord.EnsureSectionGroupFactory (FactoryRecord factoryRecord)
      at System.Configuration.MgmtConfigurationRecord.GetSectionGroup (String configKey)
      at System.Configuration.ConfigurationSectionGroupCollection.Get (String name)
      at System.Configuration.ConfigurationSectionGroupCollection.<GetEnumerator>d__0.MoveNext()
      at System.Configuration.Configuration.ForceGroupsRecursive (ConfigurationSectionGroup group)
      at System.Configuration.Configuration.SaveAsImpl (String filename, ConfigurationSaveMode saveMode, Boolean forceSaveAll)
      at System.Configuration.Configuration.Save (ConfigurationSaveMode saveMode, Boolean forceSaveAll)
      at MyCompany.MyProduct.blah.blah.Save () in C:\\projects\\blah\\blah\\blah.cs:line blah

    For reference, the configuration goes like so:

      <configuration>
        <configSections>
          ... normal sections here that work just fine, followed by ...
          <sectionGroup name="threadSettings" type="System.Configuration.NameValueSectionHandler">
            <section name="T1" type="System.Configuration.DictionarySectionHandler"/>
            <section name="T2" type="System.Configuration.DictionarySectionHandler"/>
            <section name="T3" type="System.Configuration.DictionarySectionHandler"/>
          </sectionGroup>
        </configSections>
        <applicationSettings />
        <threadSettings>
          <T1>
            <add key="key-here" value="value-here"/>
          </T1>
          ... T2 and T3 here as well, that's not interesting ...
        </threadSettings>
      </configuration>

    If I remove the actual sections and the definition for the custom section groups, I am able to read / write settings just fine and even call Save just fine. Obviously the custom section groups could be set up a little nicer, but I can't fix that, so I'm wondering: is it possible to get the Configuration object to save if it has custom sectionGroups of the NameValueSectionHandler type?

    Read the article

  • Performing dynamic sorts on EF4 data

    - by Jaxidian
    I'm attempting to perform dynamic sorting of data that I'm putting into grids into our MVC UI. Since MVC is abstracted from everything else via WCF, I've created a couple utility classes and extensions to help with this. The two most important things (slightly simplified) are as follows: public static IQueryable<T> ApplySortOptions<T, TModel, TProperty>(this IQueryable<T> collection, IEnumerable<ISortOption<TModel, TProperty>> sortOptions) where TModel : class { var sortedSortOptions = (from o in sortOptions orderby o.Priority ascending select o).ToList(); var results = collection; foreach (var option in sortedSortOptions) { var currentOption = option; var propertyName = currentOption.Property.MemberWithoutInstance(); var isAscending = currentOption.IsAscending; if (isAscending) { results = from r in results orderby propertyName ascending select r; } else { results = from r in results orderby propertyName descending select r; } } return results; } public interface ISortOption<TModel, TProperty> where TModel : class { Expression<Func<TModel, TProperty>> Property { get; set; } bool IsAscending { get; set; } int Priority { get; set; } } I've not given you the implementation for MemberWithoutInstance() but just trust me in that it returns the name of the property as a string. :-) Following is an example of how I would consume this (using a non-interesting, basic implementation of ISortOption<TModel, TProperty>): var query = from b in CurrentContext.Businesses select b; var sortOptions = new List<ISortOption<Business, object>> { new SortOption<Business, object> { Property = (x => x.Name), IsAscending = true, Priority = 0 } }; var results = query.ApplySortOptions(sortOptions); As I discovered with this question, the problem is specific to my orderby propertyName ascending and orderby propertyName descending lines (everything else works great as far as I can tell). How can I do this in a dynamic/generic way that works properly?

    Read the article

  • heroku time zone problem, logging local server time

    - by Ole Morten Amundsen
    UPDATE: OK, I didn't formulate a good Q to be answered. I still struggle with heroku being on -07:00 UTC and me at +02:00 UTC.

    Q: How do I get the log written in the correct Time.zone? The 9-hour difference, heroku (us west) - norway, is distracting to work with. I get this in my production.log (using heroku logs):

      Processing ProductionController#create to xml (for 81.26.51.35 at 2010-04-28 23:00:12) [POST]

    How do I get it to write 2010-04-29 08:00:12 +02:00 instead? Note that I'm running at heroku and cannot set the server time myself, as one could do on your own Amazon EC2 servers.

    Below is my previous question; I'll leave it be as it holds some interesting information about time and zones.

    Why does Time.now yield the server local time when I have set another time zone in my environment.rb?

      config.time_zone = 'Copenhagen'

    I've put this in a view:

      <p> Time.zone <%= Time.zone %> </p>
      <p> Time.now <%= Time.now %> </p>
      <p> Time.now.utc <%= Time.now.utc %> </p>
      <p> Time.zone.now <%= Time.zone.now %> </p>
      <p> Time.zone.today <%= Time.zone.today %> </p>

    rendering this result on my app at heroku:

      Time.zone       (GMT+01:00) Copenhagen
      Time.now        Mon Apr 26 08:28:21 -0700 2010
      Time.now.utc    Mon Apr 26 15:28:21 UTC 2010
      Time.zone.now   2010-04-26 17:28:21 +0200
      Time.zone.today 2010-04-26

    Time.zone.now yields the correct result. Do I have to switch from Time.now to Time.zone.now everywhere? Seems cumbersome. I truly don't care what the local time of the server is; it's giving me loads of trouble due to extensive use of Time.now. Am I misunderstanding anything fundamental here?

    Read the article

  • NoHostAvailableException With Cassandra & DataStax Java Driver If Large ResultSet

    - by hughj
    The setup:
      - 2-node Cassandra 1.2.6 cluster, replicas=2
      - very large CQL3 table with no secondary index
      - Rowkey is a UUID.randomUUID().toString()
      - read consistency set to ONE
      - Using DataStax java driver 1.0

    The request: Attempting to do a table scan by "SELECT some-col from schema.table LIMIT nnn;"

    The fail: Once I go beyond a certain nnn LIMIT, I start to get NoHostAvailableExceptions from the driver. It reads like this:

      com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.181.13.239 ([/10.181.13.239] Unexpected exception triggered))
        at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:64)
        at com.datastax.driver.core.ResultSetFuture.extractCauseFromExecutionException(ResultSetFuture.java:214)
        at com.datastax.driver.core.ResultSetFuture.getUninterruptibly(ResultSetFuture.java:169)
        at com.jpmc.es.rtm.storage.impl.EventExtract.main(EventExtract.java:36)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
      Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.181.13.239 ([/10.181.13.239] Unexpected exception triggered))
        at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:98)
        at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:165)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)

    Given: This is probably not the most enlightened thing to do to a large table with millions of rows, but this is how I learn what not to do, so I would really appreciate someone who could volunteer how this kind of error can be debugged. For example, when this happens, there are no indications that the nodes in the cluster ever had an issue with the request (there is nothing in the logs on either node that indicates any timeout or failure). Also, I enabled the trace on the driver, which gives you some nice autotrace (a la Oracle) info as long as the query succeeds. But in this case, the driver blows a NoHostAvailableException and no ExecutionInfo is available, so tracing has not provided any benefit in this case. I also find it interesting that this does not seem to be recorded as a timeout (my JMX consoles tell me no timeouts have occurred). So, I am left not understanding WHERE the failure is actually occurring. I am left with the idea that it is the driver that is having a problem, but I don't know how to debug it (and I would really like to). I have read several posts from folks that state that querying for result sets larger than 10000 rows is probably not a good idea, and I am willing to accept this, but I would like to understand what is causing the exception and where the exception is happening. FWIW, I also tried bumping the timeout properties in the cassandra.yaml, but this made no difference whatsoever. I welcome any suggestions, anecdotes, insults, or monetary contributions for my registration in the house of moron-developers. Regards!!
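
    A common way to avoid this with driver 1.0 (which has no automatic result paging; that only arrived in later driver versions) is to scan the table in token-range slices instead of one huge LIMIT, so no single coordinator has to materialise millions of rows at once. A minimal sketch, assuming the default Murmur3Partitioner (whose tokens are plain 64-bit integers) and the placeholder names from the post (keyspace schema, table table, a partition-key column assumed to be called key):

      import com.datastax.driver.core.Cluster;
      import com.datastax.driver.core.ResultSet;
      import com.datastax.driver.core.Row;
      import com.datastax.driver.core.Session;

      public class TokenRangeScan {
          public static void main(String[] args) {
              Cluster cluster = Cluster.builder().addContactPoint("10.181.13.239").build();
              Session session = cluster.connect();

              long lastToken = Long.MIN_VALUE;   // start below the smallest Murmur3 token
              int pageSize = 10000;
              boolean more = true;
              while (more) {
                  // Each query only asks the cluster for one modest slice of the ring.
                  ResultSet rs = session.execute(
                          "SELECT token(key), some_col FROM schema.table" +
                          " WHERE token(key) > " + lastToken +
                          " LIMIT " + pageSize);
                  more = false;
                  for (Row row : rs) {
                      lastToken = row.getLong(0);      // remember where this slice ended
                      String value = row.getString("some_col");
                      // ... process value ...
                      more = true;                     // saw at least one row, keep paging
                  }
              }
              cluster.shutdown();
          }
      }

    Whether or not this makes the NoHostAvailableException go away, keeping every individual request small makes it much easier to see which slice, if any, is the one the driver gives up on.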

    Read the article

  • Resultant of a polynomial with x^n–1

    - by devin.omalley
    Resultant of a polynomial with x^n–1 (mod p) I am implementing the NTRUSign algorithm as described in http://grouper.ieee.org/groups/1363/lattPK/submissions/EESS1v2.pdf , section 2.2.7.1 which involves computing the resultant of a polynomial. I keep getting a zero vector for the resultant which is obviously incorrect. private static CompResResult compResMod(IntegerPolynomial f, int p) { int N = f.coeffs.length; IntegerPolynomial a = new IntegerPolynomial(N); a.coeffs[0] = -1; a.coeffs[N-1] = 1; IntegerPolynomial b = new IntegerPolynomial(f.coeffs); IntegerPolynomial v1 = new IntegerPolynomial(N); IntegerPolynomial v2 = new IntegerPolynomial(N); v2.coeffs[0] = 1; int da = a.degree(); int db = b.degree(); int ta = da; int c = 0; int r = 1; while (db > 0) { c = invert(b.coeffs[db], p); c = (c * a.coeffs[da]) % p; IntegerPolynomial cb = b.clone(); cb.mult(c); cb.shift(da - db); a.sub(cb, p); IntegerPolynomial v2c = v2.clone(); v2c.mult(c); v2c.shift(da - db); v1.sub(v2c, p); if (a.degree() < db) { r *= (int)Math.pow(b.coeffs[db], ta-a.degree()); r %= p; if (ta%2==1 && db%2==1) r = (-r) % p; IntegerPolynomial temp = a; a = b; b = temp; temp = v1; v1 = v2; v2 = temp; ta = db; } da = a.degree(); db = b.degree(); } r *= (int)Math.pow(b.coeffs[0], da); r %= p; c = invert(b.coeffs[0], p); v2.mult(c); v2.mult(r); v2.mod(p); return new CompResResult(v2, r); } There is pseudocode in http://www.crypto.rub.de/imperia/md/content/texte/theses/da_driessen.pdf which looks very similar. Why is my code not working? Are there any intermediate results I can check? I am not posting the IntegerPolynomial code because it isn't too interesting and I have unit tests for it that pass. CompResResult is just a simple "Java struct".
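
    One thing worth checking first: the code above relies on an invert(b, p) helper that is not shown, and if it returns anything other than the multiplicative inverse modulo p (or mishandles negative coefficients left over from sub()), the whole recurrence quietly degenerates to zero. A minimal extended-Euclid version to compare against (an assumed implementation, since the original helper is not posted):

      // Modular inverse via the extended Euclidean algorithm.
      // Returns x in [0, p) with (a * x) % p == 1; assumes p is prime and a is not 0 mod p.
      private static int invert(int a, int p) {
          a = ((a % p) + p) % p;              // normalise, including negative inputs
          int t = 0, newT = 1;
          int r = p, newR = a;
          while (newR != 0) {
              int q = r / newR;
              int tmpT = t - q * newT; t = newT; newT = tmpT;
              int tmpR = r - q * newR; r = newR; newR = tmpR;
          }
          if (r != 1) {
              throw new ArithmeticException(a + " has no inverse mod " + p);
          }
          return ((t % p) + p) % p;
      }

    A cheap intermediate test is to assert (invert(x, p) * x) % p == 1 for every x in 1..p-1, and to check that a, b, v1 and v2 stay fully reduced into [0, p) after every sub() and mult() call; a stray negative coefficient is one common way this kind of computation collapses to zero.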

    Read the article

  • Minimalistic tools for developer documentation

    - by Pekka
    I am currently working on a large PHP CMS / Framework and documenting it extensively as I go along. In addition to phpdoc-style inline comments, I need to document XML structures, details on concepts and practices, write HOWTOs and so on. At the moment, I am using simple OpenOffice documents for that, but I'm unhappy with it and looking for a "real" documentation system. So, I am looking for recommendations for robust, minimalistic, easy-to-use documentation software. I have tried a number of Wikis, most prominently Dokuwiki. I like the open-minded approach, the freedom in editing, and the simplicity, but they provide little support in structuring a multi-chapter documentation, and make basic reorganisation tasks very difficult (e.g. moving pages to a different namespace). Working with the plugins is cumbersome, and they are not really easy to use. Open Source would be a plus but is not a requirement.

    Thanks for all the suggestions. I have not had time to look into each one in detail. I will be trying Sphinx, especially because it provides so much support for a good structure. I may update this post later when I'm done and report how it worked out.

    The suggestions:
      - Trac's built-in wiki, which is great but for my taste provides too little support for keeping a structure - it's perfect though for "normal", smaller-size project documentation.
      - Markdown, my current favourite because of its minimalism; however, I'm not sure yet whether maintaining a structure will be easy enough. A Markdown-based system would of course be very easy to extend, e.g. to look up cross references from the project's code base. Of course it would be great to find something that already has that out of the box.
      - The DocBook format and, to edit, the commercial Oxygen XML Editor - a great standard for building documentation, no doubt. Maybe too "technical" for my purposes as I need something to open quickly, write into and go on coding. Still always worth a mention.
      - Sphinx, an Open Source, Python-based documentation generator, promising structured documentation and extensive cross-referencing. Interesting and I will take a look.
      - Confluence, a commercial but very affordable Wiki.
      - XWiki, an Open Source wiki playing in Confluence's league, with numerous extensions and connectors to Eclipse and Microsoft Office.
      - TiddlyWiki, an open-source Wiki.

    Read the article

  • LSP packet modify

    - by kellogs
    Hello, anybody care to share some insights on how to use LSP for packet modifying? I am using the non-IFS subtype and I can see how (pseudo?) packets first enter WSPRecv. But how do I modify them? My inquiry is about one single HTTP response that causes WSPRecv to be called 3 times :((. I need to modify several parts of this response, but since it comes in 3 slices, it is pretty hard to modify it accordingly. And, maybe on other machines or under different conditions (such as high traffic) there would only be one sole WSPRecv call, or maybe 10 calls. What is the best way to work around this (please no NDIS :D), and how to properly change the buffer (lpBuffers->buf) by increasing it?

      int WSPAPI WSPRecv(
          SOCKET s,
          LPWSABUF lpBuffers,
          DWORD dwBufferCount,
          LPDWORD lpNumberOfBytesRecvd,
          LPDWORD lpFlags,
          LPWSAOVERLAPPED lpOverlapped,
          LPWSAOVERLAPPED_COMPLETION_ROUTINE lpCompletionRoutine,
          LPWSATHREADID lpThreadId,
          LPINT lpErrno )
      {
          LPWSAOVERLAPPEDPLUS ProviderOverlapped = NULL;
          SOCK_INFO *SocketContext = NULL;
          int ret = SOCKET_ERROR;
          *lpErrno = NO_ERROR;

          //
          // Find our provider socket corresponding to this one
          //
          SocketContext = FindAndRefSocketContext(s, lpErrno);
          if ( NULL == SocketContext )
          {
              dbgprint( "WSPRecv: FindAndRefSocketContext failed!" );
              goto cleanup;
          }

          //
          // Check for overlapped I/O
          //
          if ( NULL != lpOverlapped )
          {
              /*bla bla .. not interesting in my case*/
          }
          else
          {
              ASSERT( SocketContext->Provider->NextProcTable.lpWSPRecv );
              SetBlockingProvider(SocketContext->Provider);
              ret = SocketContext->Provider->NextProcTable.lpWSPRecv(
                  SocketContext->ProviderSocket,
                  lpBuffers,
                  dwBufferCount,
                  lpNumberOfBytesRecvd,
                  lpFlags,
                  lpOverlapped,
                  lpCompletionRoutine,
                  lpThreadId,
                  lpErrno);
              SetBlockingProvider(NULL);

              //is this the place to modify packet length and contents ?
              if (strstr(lpBuffers->buf, "var mapObj = null;"))
              {
                  int nLen = strlen(lpBuffers->buf) + 200;
                  /*CHAR *szNewBuf = new CHAR[];
                  CHAR *pIndex;
                  pIndex = strstr(lpBuffers->buf, "var mapObj = null;");
                  nLen = strlen(strncpy(szNewBuf, lpBuffers->buf, (pIndex - lpBuffers->buf) * sizeof (CHAR)));
                  nLen = strlen(strncpy(szNewBuf + nLen * sizeof(CHAR), "var com = null;\r\n", 17 * sizeof(CHAR)));
                  pIndex += 18 * sizeof(CHAR);
                  nLen = strlen(strncpy(szNewBuf + nLen * sizeof(CHAR), pIndex, 1330 * sizeof (CHAR)));
                  nLen = strlen(strncpy(szNewBuf + nLen * sizeof(CHAR), "if (com == null)\r\n" \
                      "com = new ActiveXObject(\"InterCommJS.Gateway\");\r\n" \
                      "com.lat = latitude;\r\n" \
                      "com.lon = longitude;\r\n}", 111 * sizeof (CHAR)));
                  pIndex = strstr(szNewBuf, "Content-Length:");
                  pIndex += 16 * sizeof(CHAR);
                  strncpy(pIndex, "1465", 4 * sizeof(CHAR));
                  lpBuffers->buf = szNewBuf;
                  lpBuffers->len += 128;*/
              }
              if ( SOCKET_ERROR != ret )
              {
                  SocketContext->BytesRecv += *lpNumberOfBytesRecvd;
              }
          }

      cleanup:
          if ( NULL != SocketContext )
              DerefSocketContext( SocketContext, lpErrno );
          return ret;
      }

    Thank you

    Read the article

  • Stand-alone Java code formatter/beautifier/pretty printer?

    - by Greg Mattes
    I'm interested in learning about the available choices of high-quality, stand-alone source code formatters for Java. The formatter must be stand-alone, that is, it must support a "batch" mode that is decoupled from any particular development environment. Ideally it should be independent of any particular operating system as well. So, a built-in formatter for the IDE du jour is of little interest here (unless that IDE supports batch mode formatter invocation, perhaps from the command line). A formatter written in closed-source C/C++ that only runs on, say, Windows is not ideal, but is somewhat interesting. To be clear, a "formatter" (or "beautifier") is not the same as a "style checker." A formatter accepts source code as input, applies styling rules, and produces styled source code that is semantically equivalent to the original source code. A style checker also applies styling rules, but it simply reports rule violations without producing modified source code as output. So the picture looks like this: Formatter (produces modified source code that conforms to styling rules) Read Source Code → Apply Styling Rules → Write Styled Source Code Style Checker (does not produce modified source code) Read Source Code → Apply Styling Rules → Write Rule Violations Further Clarifications Solutions must be highly configurable. I want to be able to specify my own style, not simply select from a canned list. Also, I'm not looking for a general purpose pretty-printer written in Java that can pretty-print many things. I want to style Java code. I'm also not necessarily interested in a grand-unified formatter for many languages. I suppose it might be nice for a solution to have support for languages other than Java, but that is not a requirement. Furthermore, tools that only perform code highlighting are right out. I'm also not interested in a web service. I want a tool that I can run locally. Finally, solutions need not be restricted to open source, public domain, shareware, free software, commercial, or anything else. All forms of licensing are acceptable.

    Read the article

  • org.hibernate.TransientObjectException during Criteria.list()

    - by rancidfishbreath
    I have seen posts all over the internet that talk about how to fix the TransientObjectExceptions during save/update/delete but I am having this problem when calling list on my Criteria. I have two objects A and B. A has a field named b which is of type B. In my mapping b is mapped as a many-to-one. This all runs in a larger persistence framework (the framework is kind of like Core Data) and so I don't use any cascades in my hibernate mappings since cascades are handled at a higher level. This is the interesting code surrounding my criteria:

      A a = new A();
      B b = new B();
      a.setB(b);
      session.save("B", b);   // Actually handled by the higher level
      session.save("A", a);   // framework, this is just for clarity

      // transaction committed and session closed
      ...
      // new session opened

      Criteria criteria = session.createCriteria(A.class);
      criteria.add(Restrictions.eq("b", b));
      List<?> objects = criteria.list();

    Basically I am looking for all objects of type A such that A.b equals a particular instance of b (I actually tried restructuring a query so that I was passing in the id of b just to make sure that b wasn't causing me problems). Here is the stack trace that occurs when I call criteria.list():

      org.hibernate.TransientObjectException: object references an unsaved transient instance - save the transient instance before flushing: B
        at org.hibernate.engine.ForeignKeys.getEntityIdentifierIfNotUnsaved(ForeignKeys.java:244)
        at org.hibernate.type.EntityType.getIdentifier(EntityType.java:449)
        at org.hibernate.type.ManyToOneType.nullSafeSet(ManyToOneType.java:141)
        at org.hibernate.loader.Loader.bindPositionalParameters(Loader.java:1769)
        at org.hibernate.loader.Loader.bindParameterValues(Loader.java:1740)
        at org.hibernate.loader.Loader.prepareQueryStatement(Loader.java:1612)
        at org.hibernate.loader.Loader.doQuery(Loader.java:717)
        at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:270)
        at org.hibernate.loader.Loader.doList(Loader.java:2294)
        at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2172)
        at org.hibernate.loader.Loader.list(Loader.java:2167)
        at org.hibernate.loader.criteria.CriteriaLoader.list(CriteriaLoader.java:119)
        at org.hibernate.impl.SessionImpl.list(SessionImpl.java:1706)
        at org.hibernate.impl.CriteriaImpl.list(CriteriaImpl.java:347)

    Here is my mapping:

      <class entity-name="A" lazy="false">
        <tuplizer entity-mode="dynamic-map" class="MyTuplizer" />
        <id type="long" column="id">
          <generator class="native" />
        </id>
        <many-to-one name="b" entity-name="B" column="b_id" lazy="false" />
      </class>

      <class entity-name="B" lazy="false">
        <tuplizer entity-mode="dynamic-map" class="MyTuplizer" />
        <id type="long" column="id">
          <generator class="native" />
        </id>
      </class>

    Can anyone help me figure out why I would be getting a TransientObjectException during a fetch? Preferably I would like to find a solution that does not rely on cascades since they tend to mask problems that occur in the higher level framework.
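
    For what it's worth, the trace shows Hibernate trying to take the identifier of the b instance while binding the many-to-one parameter, and the instance it was handed is transient/detached as far as the second session is concerned. Two common ways around that, sketched under the assumption that bId holds B's generated identifier (a Long captured before the first session closed; the getter is not shown in the post):

      // Option 1: restrict on the association's identifier rather than the instance.
      Criteria byId = session.createCriteria(A.class);
      byId.add(Restrictions.eq("b.id", bId));        // "id" = B's identifier property
      List<?> resultsById = byId.list();

      // Option 2: re-associate b with the current session, then query by instance.
      Object attachedB = session.get("B", bId);      // entity-name lookup, as with save("B", b)
      Criteria byInstance = session.createCriteria(A.class);
      byInstance.add(Restrictions.eq("b", attachedB));
      List<?> resultsByInstance = byInstance.list();

    Neither approach needs cascades; both just ensure the value bound into the query is something the current session can resolve to an id.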

    Read the article

  • Am I "wasting" my time learning C and other low level stuff ?

    - by Andreas Grech
    I have just recently started learning C and the reason I did that was because, frankly, I consider myself to be less of a developer than the people who know and work with C. Thus I planned to start learning ASM, C, C++ and bought the K&R book and started pushing myself to learn the C Programming Language, and up till now I'm doing great...learning about arrays the low level way (ie the pointer + offset thing), pointers and all that and obviously asking questions on stackoverflow for guidance.

    My problem is that sometimes I get thinking that instead of learning this low level stuff, maybe I should spend more time learning newer, more widely used technologies...basically, more web stuff. Now I am well versed with both C# and ASP.Net and currently that's what I do for a living, but still there exist Microsoft technologies that I haven't quite touched upon...such as ASP.Net MVC, The Entity Framework etc... And those are only Microsoft Technologies...obviously there is other stuff that I would like to touch upon...stuff like Ruby, which would lead me to Ruby on Rails, or Python for Django or even Java and J2EE, or maybe even PHP; ie, basically mainly Web Stuff. Mind you, I did touch upon some of the stuff I mentioned earlier on, such as PHP and Java, but I am still not as well versed in them as I am in C# and ASP.Net...but still, I think that learning other languages that are used in the web environment will broaden my horizons...both as a developer who loves learning, and also Career wise.

    My point is, am I really using up my time correctly by learning older, lower level stuff? Stuff that, for my current line of work, I will most probably never use, but still is interesting to know? To be frankly honest, I am also learning C so that I could, maybe someday, get into Electronics and Micro-controller programming, but that is a whole new world for me and, if I choose to go there, will take some time to get adjusted to. And even then, I don't know if I can get a career working in that line of work. ...but I still wonder about this question over and over...Am I doing the right thing by learning C instead of something (Web-stuff) that will most probably be more useful for me career-wise? I'm sorry for asking such a long and most probably boring question, but I feel as if this is the only place where I can ask such a question and get an honest answer from experts in the field. Thank you for your time.

    Read the article

  • Who frees the setvbuf buffer?

    - by Evan Teran
    So I've been digging into how the stdio portion of libc is implemented and I've come across another question. Looking at man setvbuf I see the following:

      When the first I/O operation occurs on a file, malloc(3) is called, and a buffer is obtained.

    This makes sense; your program shouldn't have a malloc in it for I/O unless you actually use it. My gut reaction to this is that libc will clean up its own mess here. Which I can only assume it does, because valgrind reports no memory leaks (they could of course do something dirty and not allocate it via malloc directly... but we'll assume that it literally uses malloc for now). But, you can specify your own buffer too...

      #include <stdio.h>
      #include <stdlib.h>

      int main() {
          char *p = malloc(100);
          setvbuf(stdout, p, _IOFBF, 100);
          puts("hello world");
      }

    Oh no, memory leak! valgrind confirms it. So it seems that whenever stdio allocates a buffer on its own, it will get deleted automatically (at the latest on program exit, but perhaps on stream close). But if you specify the buffer explicitly, then you must clean it up yourself. There is a catch though. The man page also says this:

      You must make sure that the space that buf points to still exists by the time stream is closed, which also happens at program termination. For example, the following is invalid:

    Now this is getting interesting for the standard streams. How would one properly clean up a manually allocated buffer for them, since they are closed at program termination? I could imagine a "clean this up when I close" flag inside the file struct, but it gets hairy, because if I read this right, doing something like this:

      setvbuf(stdout, 0, _IOFBF, 100);
      printf("hello ");
      setvbuf(stdout, 0, _IOLBF, 100);
      printf("world\n");

    would cause 2 allocations by the standard library because of this sentence:

      If the argument buf is NULL, only the mode is affected; a new buffer will be allocated on the next read or write operation.

    Read the article

  • How do I reward my developers for the little things they get right?

    - by Nat
    I am in a tech lead role and my developers get stuff right most of the time. How do I communicate to them their value to me? (I.e. they have value because I do not have to go through and point out mistakes, which means I do not have to watch them like a hawk, which frees me to do more useful things.)

    In summary: for doing the mundane well on a day-to-day basis, it is good to recognise the developers' effort verbally to them. An honest thank-you that mentions the specific behaviour and its positive repercussions to you personally will be well received; adjust the language to suit each individual. (Note that other developers within earshot may also respond to this by increasing their efforts in this specific activity.) Other things that should be done regularly are:
      - Team drinks. In many cultures this is an entirely worthy way of giving the team some time to socialise and relax. Be sure that you do not exclude people who do not drink or are not keen on pub culture. Shared meals are another option.
      - Formal written (email) acknowledgment and praise to senior managers of the team's efforts and successes. (Note that acknowledging individuals alone may damage team spirit.)
      - Work the hours you expect your team to do. If they absolutely must work late for a deadline, be there in support.
      - Go to bat for the team. Refuse to let them be forced to work long periods of overtime without compensation. Protect them from level politics and stress.
      - Give your team the best equipment you can afford. Good tools show respect and improve productivity.
      - Small or large team rewards where appropriate can consist of many interesting activities/items. If it allows the team to get together in a fun and even lightly competitive manner it will work (foosball table, go-karting, darts board, video game console etc). Don’t forget to listen to what the team wants; each team will have different ideas.
      - Ensure they are getting a fair deal financially from the company. While different people may have different expectations of their pay, someone being paid unfairly will rot morale for the entire team.

    Read the article

  • Crystal reports .net visual studio 2008 bundled edition

    - by DeveloperChris
    I have a serious issue with crystal reports. When run in my development environment or debugged on my local machine it works fine, but when the application is published to a windows server 2003 it has the dreaded "The report you requested requires further information" message. I have had no luck trying to get rid of this message. Anybody know what I can try? DC

    Here is a bunch more info. I use a placeholder in the aspx page and then set the user/password and database in the codebehind. I could not get it to work with a dataset and found that I had to assign the odbc connection in the cr designer, and then in the code behind change the above details as required. This is done because the same report can get the data from 3 different databases (live, development and training).

      protected override void Page_Load(object sender, EventArgs e)
      {
          base.Page_Load(sender, e);
          CrystalReportSource1.ReportDocument.Load(Server.MapPath(@"~/Reports/Report5asp.rpt"));
          CrystalReportViewer1.ReportSource = ConfigureCrystalReports(CrystalReportSource1.ReportDocument, CrystalReportViewer1);
          // parameters
          CrystalReportViewer1.ParameterFieldInfo.Clear();
          AddParameter("DIid", _app.Data["DIid"], CrystalReportViewer1.ParameterFieldInfo);
          AddParameter("EEid", _app.Data["EEid"], CrystalReportViewer1.ParameterFieldInfo);
          AddParameter("CTid", _app.Data["CTid"], CrystalReportViewer1.ParameterFieldInfo);
      }

      public ReportDocument ConfigureCrystalReports(ReportDocument report, CrystalReportViewer viewer)
      {
          String _connectionString = _app.ConnectionString();
          String dsn = _app.DSN();
          SqlConnectionStringBuilder SConn = new SqlConnectionStringBuilder(_connectionString);
          TableLogOnInfos crtableLogoninfos = new TableLogOnInfos();
          TableLogOnInfo crtableLogoninfo = new TableLogOnInfo();
          ConnectionInfo crConnectionInfo = new ConnectionInfo();
          crConnectionInfo.ServerName = dsn; // SConn.DataSource;
          crConnectionInfo.DatabaseName = SConn.InitialCatalog;
          crConnectionInfo.UserID = SConn.UserID;
          crConnectionInfo.Password = SConn.Password;
          crConnectionInfo.Type = ConnectionInfoType.SQL;
          crConnectionInfo.IntegratedSecurity = false;
          foreach (CrystalDecisions.CrystalReports.Engine.Table CrTable in report.Database.Tables)
          {
              crtableLogoninfo = CrTable.LogOnInfo;
              crtableLogoninfo.ConnectionInfo = crConnectionInfo;
              CrTable.ApplyLogOnInfo(crtableLogoninfo);
          }
          return report;
      }

    As stated, this works fine on my XP machine used for development; when deployed on winserver 2003 I get the error. DC

    Some interesting additional information: I moved the development to my home machine so I could work on the problem this weekend. So now I am developing, debugging and testing on the same machine! In VS2008 I can edit and preview the reports with no problems. If I fire up the debugger I can view the reports in the browser with no problems. But if I publish the website to another folder on the same machine and fire up IIS and try to browse to a report, I get the aforementioned error. All else works as expected. IIS runs under different permissions than VS2008 so perhaps it's something to do with that, but I have tried lots of different permissions and cannot get it to run. DC

    Read the article

  • Why is my javascript function sometimes "not defined"?

    - by harpo
    Problem: I call my javascript function, and sometimes I get the error 'myFunction is not defined'. But it is defined. For example, I'll occasionally get 'copyArray is not defined' even in this example:

      function copyArray( pa ) {
          var la = [];
          for (var i=0; i < pa.length; i++)
              la.push( pa[i] );
          return la;
      }

      Function.prototype.bind = function( po ) {
          var __method = this;
          var __args = [];
          // sometimes errors -- in practice I inline the function as a workaround
          __args = copyArray( arguments );
          return function() { /* bind logic omitted for brevity */ }
      }

    As you can see, copyArray is defined right there, so this can't be about the order in which script files load. I've been getting this in situations that are harder to work around, where the calling function is located in another file that should be loaded after the called function. But this was the simplest case I could present, and appears to be the same problem. It doesn't happen 100% of the time, so I do suspect some kind of load-timing-related problem. But I have no idea what.

    @Hojou: That's part of the problem. The function in which I'm now getting this error is itself my addLoadEvent, which is basically a standard version of the common library function.

    @James: I understand that, and there is no syntax error in the function. When that is the case, the syntax error is reported as well. In this case, I am getting only the 'not defined' error.

    @David: The script in this case resides in an external file that is referenced using the normal <script src="file.js"></script> method in the page's head section.

    @Douglas: Interesting idea, but if this were the case, how could we ever call a user-defined function with confidence? In any event, I tried this and it didn't work.

    @sk: This technique has been tested across browsers and is basically copied from the prototype library.

    Read the article

  • Potential issues using member's "from" address and the "sender" header

    - by Paul Burney
    Hi all, A major component of our application sends email to members on behalf of other members. Currently we set the "From" address to our system address and use a "Reply-to" header with the member's address. The issue is that replies from some email clients (and auto-replies/bounces) don't respect the "Reply-to" header so get sent to our system address, effectively sending them to a black hole. We're considering setting the "From" address to our member's address, and the "Sender" address to our system address. It appears this way would pass SPF and Sender-ID checks. Are there any reasons not to switch to this method? Are there any other potential issues? Thanks in advance, -Paul Here are way more details than you probably need: When the application was first developed, we just changed the "from" address to be that of the sending member as that was the common practice at the time (this was many years ago). We later changed that to have the "from" address be the member's name and our address, i.e., From: "Mary Smith" <[email protected]> With a "reply-to" header set to the member's address: Reply-To: "Mary Smith" <[email protected]> This helped with messages being mis-categorized as spam. As SPF became more popular, we added an additional header that would work in conjunction with our SPF records: Sender: <[email protected]> Things work OK, but it turns out that, in practice, some email clients and most MTA's don't respect the "Reply-To" header. Because of this, many members send messages to [email protected] instead of the desired member. So, I started envisioning various schemes to add data about the sender to the email headers or encode it in the "from" email address so that we could process the response and redirect appropriately. For example, From: "Mary Smith" <[email protected]> where the string after "messages" is a hash representing Mary Smith's member in our system. Of course, that path could lead to a lot of pain as we need to develop MTA functionality for our system address. I was looking again at the SPF documentation and found this page interesting: http://www.openspf.org/Best_Practices/Webgenerated They show two examples, that of evite.com and that of egreetings.com. Basically, evite.com is doing it the way we're doing it. The egreetings.com example uses the member's from address with an added "Sender" header. So the question is, are there any potential issues with using the egreetings method of the member's from address with a sender header? That would eliminate the replies that bad clients send to the system address. I don't believe that it solves the bounce/vacation/whitelist issue since those often send to the MAIL FROM even if Return Path is specified.
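
    For reference, the header layout being considered (member in From, system address in Sender, member again in Reply-To) looks like this when assembled with JavaMail. The addresses below are illustrative placeholders, not the real ones, and the transport configuration is omitted:

      import java.util.Properties;
      import javax.mail.Session;
      import javax.mail.internet.InternetAddress;
      import javax.mail.internet.MimeMessage;

      public class MemberMail {
          public static void main(String[] args) throws Exception {
              Session mailSession = Session.getInstance(new Properties());
              MimeMessage msg = new MimeMessage(mailSession);

              // Displayed author: the member the mail is sent on behalf of.
              msg.setFrom(new InternetAddress("[email protected]", "Mary Smith"));

              // Actual submitter: the system address, which is what Sender-ID's
              // PRA check and most "on behalf of" client displays pick up.
              msg.setHeader("Sender", "[email protected]");

              // Where well-behaved clients should direct replies.
              msg.setReplyTo(new InternetAddress[] {
                      new InternetAddress("[email protected]", "Mary Smith") });

              msg.setSubject("A message from Mary Smith");
              msg.setText("...");
              // Transport.send(msg) would go here once a real SMTP session is configured.
          }
      }

    Note that the envelope MAIL FROM (what ends up as Return-Path, and where bounces and many auto-replies go) is a transport-level setting rather than one of these headers, which is why the bounce/vacation case mentioned at the end still needs separate handling.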

    Read the article

  • Fuzzy Regex, Text Processing, Lexical Analysis?

    - by justinzane
    I'm not quite sure what terminology to search for, so my title is funky... Here is the workflow I've got:
      1. Semi-structured documents are scanned to file.
      2. The files are OCR'd to text.
      3. The text is parsed into Python objects.
      4. The objects are serialized (to SQL, JSON, whatever) for use.

    The documents are structured like this:

      HEADER blah blah, Page ### blah
      Garbage text...
      1. Question Text... continued until now.
      A. Choice text... adsadsf.
      B. Another Choice...
      2. Another Question...

    I need to extract the questions and choices. The problem is that, because the text is OCR output, there are occasional strange substitutions like '2' -> 'Z', which makes ordinary regular expressions useless. I've tried the Levenshtein module and it helps, but it requires prior knowledge of what edit distance is to be expected. I don't know whether I'm looking to create a parser? a lexer? something else? This has led me down all kinds of interesting but irrelevant paths. Guidance would be greatly appreciated. Oh, also, the text is generally from specific technical domains, so general spelling tools are not so helpful.

    Regarding the structure of the documents, there is no clear visual pattern -- like line breaks or indentation -- with the exception of the fact that "questions" usually begin a line. Crap on the document can cause characters to appear before the actual beginning of the line, which means that something along the lines of r'^[0-9]+' does not reliably work. Though the "questions" always begin with an int, a period and a space, the OCR can substitute other characters or skip characters. This is not so much a problem with Tesseract or CuneiForm, rather with the poor quality of the paper documents.

    Note: for the project in question, it was decided that having a human prep the OCR'd text was better than spending the time coding a solution. I'd still love good pointers, however.
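
    One middle ground between a strict regex and full fuzzy matching is to bake the known OCR confusions (2/Z, 1/l/I, 0/O, 5/S) into character classes and to tolerate a little leading noise before the marker. The pipeline above is Python, but the idea is the same in any regex dialect; here is a small illustrative sketch in Java, where the confusion sets and the amount of tolerated noise are assumptions you would tune per document batch:

      import java.util.regex.Matcher;
      import java.util.regex.Pattern;

      public class OcrQuestionFinder {
          // "1. ", "2. ", ... at (roughly) the start of a line, allowing common
          // OCR digit substitutions and up to three characters of scan noise.
          private static final Pattern QUESTION_START = Pattern.compile(
                  "^[^A-Za-z0-9]{0,3}([0-9ZzOoIlS]{1,3})\\s*[.,]\\s+(.*)$");

          // Same idea for choices: "A. ", "B. ", possibly misread.
          private static final Pattern CHOICE_START = Pattern.compile(
                  "^[^A-Za-z0-9]{0,3}([A-HZ])\\s*[.,]\\s+(.*)$");

          public static void main(String[] args) {
              String[] lines = { "l. Question Text...", "A. Choice text... adsadsf.",
                                 "Z. Another Question..." };
              for (String line : lines) {
                  Matcher q = QUESTION_START.matcher(line);
                  Matcher c = CHOICE_START.matcher(line);
                  if (q.matches()) {
                      System.out.println("question marker '" + q.group(1) + "': " + q.group(2));
                  } else if (c.matches()) {
                      System.out.println("choice '" + c.group(1) + "': " + c.group(2));
                  }
              }
          }
      }

    The ambiguity between a misread "Z." question and a genuine "Z." choice is inherent to the data; the sketch simply resolves it in favour of the question pattern, which is the kind of per-batch decision this approach forces you to make explicitly.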

    Read the article

  • Most efficient method to query a Young Tableau

    - by Matthieu M.
    A Young Tableau is a 2D matrix A of dimensions M*N such that, for all i,j in [0,M) x [0,N):
      - for each p in (i,M), A[i,j] <= A[p,j]
      - for each q in (j,N), A[i,j] <= A[i,q]

    That is, it's sorted row-wise and column-wise. Since it may contain less than M*N numbers, the bottom-right values might be represented either as missing or using (in algorithm theory) infinity to denote their absence.

    Now the (elementary) question: how to check if a given number is contained in the Young Tableau? Well, it's trivial to produce an algorithm in O(M*N) time of course, but what's interesting is that it is very easy to provide an algorithm in O(M+N) time.

    Bottom-Left search:
      1. Let x be the number we look for; initialize i,j as M-1, 0 (bottom left corner)
      2. If x == A[i,j], return true
      3. If x < A[i,j], then if i is 0, return false, else decrement i and go to 2.
      4. Else, if j is N-1, return false, else increment j and go to 2.

    This algorithm does not make more than M+N moves. The correctness is left as an exercise. It is possible though to obtain a better asymptotic runtime.

    Pivot Search:
      1. Let x be the number we look for; initialize i,j as floor(M/2), floor(N/2)
      2. If x == A[i,j], return true
      3. If x < A[i,j], search (recursively) in A[0:i-1, 0:j-1], A[i:M-1, 0:j-1] and A[0:i-1, j:N-1]
      4. Else search (recursively) in A[i+1:M-1, 0:j], A[i+1:M-1, j+1:N-1] and A[0:i, j+1:N-1]

    This algorithm proceeds by discarding one of the 4 quadrants at each iteration and running recursively on the 3 left (divide and conquer); the master theorem yields a complexity of O((N+M)**(log 3 / log 4)), which is better asymptotically. However, this is only a big-O estimation...

    So, here are the questions:
      1. Do you know (or can think of) an algorithm with a better asymptotic runtime?
      2. As introsort proves, sometimes it's worth switching algorithms depending on the input size or input topology... do you think it would be possible here?

    For 2., I am notably thinking that for small size inputs, the bottom-left search should be faster because of its O(1) space requirement / lower constant term.
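
    For reference, the bottom-left search written out as code (a direct transcription of the steps above, in Java):

      // Returns true if x occurs in a, where a is sorted ascending along every
      // row and every column. Makes at most M + N moves, as described above.
      static boolean contains(int[][] a, int x) {
          int m = a.length;
          if (m == 0) return false;
          int n = a[0].length;
          int i = m - 1;   // bottom-left corner
          int j = 0;
          while (i >= 0 && j < n) {
              if (a[i][j] == x) return true;
              if (x < a[i][j]) {
                  i--;     // row i only grows to the right, so x cannot be in row i
              } else {
                  j++;     // column j only shrinks going up, so x cannot be in column j
              }
          }
          return false;
      }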

    Read the article

  • Need help with Xpath methods in javascript (selectSingleNode, selectNodes)

    - by Andrija
    I want to optimize my javascript but I ran into a bit of trouble. I'm using XSLT transformation and the basic idea is to get a part of the XML and subquery it so the calls are faster and less expensive. This is a part of the XML: <suite> <table id="spis" runat="client"> <rows> <row id="spis_1"> <dispatch>'2008', '288627'</dispatch> <data col="urGod"> <title>2008</title> <description>Ur. god.</description> </data> <data col="rbr"> <title>288627</title> <description>Rbr.</description> </data> ... </rows> </table> </suite> In the page, this is the javascript that works with this: // this is my global variable for getting the elements so I just get the most // I can in one call elemCollection = iDom3.Table.all["spis"].XML.DOM.selectNodes("/suite/table/rows/row").context; //then I have the method that uses this by getting the subresults from elemCollection //rest of the method isn't interesting, only the selectNodes call _buildResults = function (){ var _RowList = elemCollection.selectNodes("/data[@col = 'urGod']/title"); var tmpResult = ['']; var substringResult=""; for (i=0; i<_RowList.length; i++) { tmpResult.push(_RowList[i].text,iDom3.Global.Delimiter); } ... //this variant works elemCollection = iDom3.Table.all["spis"].XML.DOM _buildResults = function (){ var _RowList = elemCollection.selectNodes("/suite/table/rows/row/data[@col = 'urGod']/title"); var tmpResult = ['']; var substringResult=""; for (i=0; i<_RowList.length; i++) { tmpResult.push(_RowList[i].text,iDom3.Global.Delimiter); } ... The problem is, I can't find a way to use the subresults to get what I need.
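
    The difference between the two variants above comes down to absolute versus relative XPath: an expression starting with "/" is always evaluated from the document root, regardless of which node selectNodes was called on, so sub-querying a node set needs a relative expression such as "data[@col = 'urGod']/title" (or ".//data[...]"). A small illustration of the same rule using the standard Java XPath API, with a cut-down version of the suite/table/rows document from the post:

      import java.io.StringReader;
      import javax.xml.parsers.DocumentBuilderFactory;
      import javax.xml.xpath.XPath;
      import javax.xml.xpath.XPathConstants;
      import javax.xml.xpath.XPathFactory;
      import org.w3c.dom.Document;
      import org.w3c.dom.Node;
      import org.w3c.dom.NodeList;
      import org.xml.sax.InputSource;

      public class RelativeXPathDemo {
          public static void main(String[] args) throws Exception {
              String xml = "<suite><table><rows><row><data col='urGod'><title>2008</title></data>"
                         + "</row></rows></table></suite>";
              Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                      .parse(new InputSource(new StringReader(xml)));
              XPath xp = XPathFactory.newInstance().newXPath();

              Node row = (Node) xp.evaluate("/suite/table/rows/row", doc, XPathConstants.NODE);

              // Absolute path: ignores the row context and starts again at the document root.
              NodeList abs = (NodeList) xp.evaluate("/data[@col='urGod']/title", row, XPathConstants.NODESET);

              // Relative path: evaluated against the row node itself.
              NodeList rel = (NodeList) xp.evaluate("data[@col='urGod']/title", row, XPathConstants.NODESET);

              System.out.println("absolute matches: " + abs.getLength()); // 0
              System.out.println("relative matches: " + rel.getLength()); // 1
          }
      }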

    Read the article

  • Windows Media Player issues two requests for the audio on a web page

    - by Ron Harlev
    I'm using Windows Media Player in a web page. I have version 11 installed so that is the version I'm testing with right now. The player is embedded on the page with this HTML:

      <OBJECT id='MS_mediaPlayer' width="400" height="45" classid='CLSID:6BF52A52-394A-11D3-B153-00C04F79FAA6'
              codebase='http://activex.microsoft.com/activex/controls/mplayer/en/nsmp2inf.cab#Version=5,1,52,701'
              standby='Loading Microsoft Windows Media Player components...' type='application/x-oleobject'>
        <param name='autoStart' value="false">
        <param name='uiMode' value="invisible">
        <param name='loop' value="false">
      </OBJECT>

    I'm calling in JavaScript:

      MS_mediaPlayer.URL = "SomeAudioFile.mp3"
      MS_mediaPlayer.controls.play();

    When I look at Fiddler I can see that the player actually downloads "SomeAudioFile.mp3" twice. Is there some setting I have wrong? I was trying to set the "autoPlay" to true and avoid calling "play()". Got the same result - two downloads.

    UPDATE: The first request's user-agent is "Windows-Media-Player/11.0.5721.5268". The second has "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; GTB6; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)". Looks like the browser is running the same request the second time. No idea why. Any ideas?

    UPDATE (4/1/10): Still no solution. I debugged the JS thoroughly and there is only one call to MediaPlayer.URL='.....' to set the audio file. Nothing else triggers the media player to load the file and there is no other place referencing the audio file on the page. One other interesting fact is that this doesn't happen (the double loading of the audio) when I run the browser locally on my development web server. But other remote requests to the same web server generate the double audio loading. I believe I eliminated any correlation with specific IE version or media player version. This happens with IE6-8 and WM9-12.

    Read the article

  • Should I use Python or Assembly for a super fast copy program

    - by PyNEwbie
    As a maintenance issue I need to routinely (3-5 times per year) copy a repository that now has over 20 million files and exceeds 1.5 terabytes in total disk space. I am currently using RICHCOPY, but have tried others. RICHCOPY seems the fastest but I do not believe I am getting close to the limits of the capabilities of my XP machine.

    I am toying around with using what I have read in The Art of Assembly Language to write a program to copy my files. My other thought is to start learning how to multi-thread in Python to do the copies. I am toying around with the idea of doing this in Assembly because it seems interesting, but while my time is not incredibly precious it is precious enough that I am trying to get a sense of whether or not I will see significant enough gains in copy speed. I am assuming that I would, but I only started really learning to program 18 months ago and it is still more or less a hobby. Thus I may be missing some fundamental concept of what happens with interpreted languages. Any observations or experiences would be appreciated. Note, I am not looking for any code. I have already written a basic copy program in Python 2.6 that is no slower than RICHCOPY. I am looking for some observations on what will give me more speed.

    Right now it takes me over 50 hours to make a copy from a disk to a Drobo and then back from the Drobo to a disk. I have a LogicCube for when I am simply duplicating a disk but sometimes I need to go from a disk to Drobo or the reverse. I am thinking that given that I can sector copy a 3/4 full 2 terabyte drive using the LogicCube in under seven hours I should be able to get close to that using Assembly, but I don't know enough to know if this is valid. (Yes, sometimes ignorance is bliss.) The reason I need to speed it up is I have had two or three cycles where something has happened during copy (fifty hours is a long time to expect the world to hold still) that has caused me to have to trash the copy and start over. For example, last week the water main broke under our building and shorted out the power.

    Read the article

  • How do you attract programmers in rural areas?

    - by Reed Copsey
    I run a software development group for a very small, but stable and established company in a small town, somewhat outside of the "big city". Unfortunately, the "programmer" labor pool is much smaller due to the size of the city. There are many positives to working in this area, especially in terms of quality of life (particularly for people interested in outdoor activities), lower cost of living, great schools and neighborhoods, etc. However, I've always had difficulty attracting high-quality, experienced developers.

    For those of you who hire developers outside of large cities:
      - Where do you advertise to find good developers? Many of the large sites are very focused in certain metropolitan areas, and seem inappropriate places to advertise if you're outside of that main region.
      - How do you attract quality developers to rural (or at least less metropolitan) locations?
      - Do you find that you make more sacrifices in your hiring due to a smaller labor pool? Or do you just wait, and take extra time to attract people?
      - What sacrifices do you expect to make if you are outside of the main developer-rich cities?

    For all of the developers out there...
      - What would entice you to work in a smaller town?
      - Are there things that would stand out and make you willing to relocate or at least apply to a position that was not nearby?
      - What specific qualities would help you want to move outside of the city?

    In the past, I've had difficulty with finding good people. Most of the people who've applied and been willing to move out to a more rural location seem like the types that can't keep a quality job elsewhere. I'd like to know what advice people have for attracting quality technical staff. I don't believe it's the work itself that's been the problem - the work is both interesting and challenging, and nearly 100% new development. The developers I have seem very happy with their situation - they love the work, the atmosphere, etc. It's more a matter of finding willing, able developers.

    Edit: More info after the first couple of answers: Right now, some of my best developers telecommute (some work from overseas); however, for this question, I'm trying to figure out how to get people who want to live and work full time locally. I need some people with whom I interact every day.

    Read the article

  • ImageMagick: convert png fail via PHP and works via bash shell

    - by wedix
    I've got a very weird bug for which I've yet to find a solution. UPDATE: see solution below.

    What I am trying to do is convert a full-size picture into a 160x120 thumbnail. It works great with jpg and jpeg files of any size, but not with png.

    ImageMagick command:

      /opt/local/bin/convert '/WEBSERVER/images/img_0003-192-10.png' -thumbnail x320 -resize '320x<' -resize 50% -gravity center -crop 160x120+0+0 +repage -quality 91 '/WEBSERVER/thumbs/small_img_0003-192-10.png'

    PHP function (shortened):

      ...
      $cmd = "/opt/local/bin/convert '/WEBSERVER/images/img_0003-192-10.png' -thumbnail x320 -resize '320x<' -resize 50% -gravity center -crop 160x120+0+0 +repage -quality 91 '/WEBSERVER/thumbs/small_img_0003-192-10.png'";
      exec($cmd, $output, $retval);
      $errors += $retval;
      if ($errors > 0) {
          die(print_r($output));
      }

    When this function runs, $retval equals 1, which means the convert command failed (the thumbnail isn't created). This is where it gets interesting: if I run the exact same command in my shell, it works.

      wedbook:~ wedix$ /opt/local/bin/convert '/WEBSERVER/images/img_0003-192-10.png' -thumbnail x320 -resize '320x<' -resize 50% -gravity center -crop 160x120+0+0 +repage -quality 91 '/WEBSERVER/thumbs/small_img_0003-192-10.png'
      wedbook:~ wedix$

    I've tried using different PHP functions such as system, passthru but it didn't work. I thought maybe someone here knew the solution. I'm using MAMP 1.7.2, Apache/2.0.59, PHP/5.2.6. Thanks!

    UPDATE: I updated the following dependencies: libpng from 1.2.35 to 1.2.37, libiconv from 1.12_2 to 1.13_0, ImageMagick 6.5.2-4_1 to 6.5.2-9_0. However, it did not fix my problem.

    2nd UPDATE: I finally found something that might help; when the function runs this is what gets printed in the Apache logs:

      dyld: Library not loaded: /opt/local/lib/libiconv.2.dylib
        Referenced from: /opt/local/bin/convert
        Reason: Incompatible library version: convert requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0

    3rd UPDATE: libiconv.2.dylib is version 8.0.0...

      bash-3.2$ otool -L /opt/local/lib/libiconv.2.dylib
      /opt/local/lib/libiconv.2.dylib:
        /opt/local/lib/libiconv.2.dylib (compatibility version 8.0.0, current version 8.0.0)
        /usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
        /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 111.1.4)

    4th UPDATE: Problem was related to MAMP, see solution below.

    Read the article

  • How effective are technical tests, and are they necessary?

    - by The Elite Gentleman
    Hi everyone, I recently took a Java technical test (for a company looking for senior java developers) and, funny enough, I only realised the technical terms of what I've been doing all along after I'd written the test. I'm not too up on IT jargon when it comes to development, but I can pretty much code and create solutions unaware that I'm using a design pattern (or the specifics of that design pattern) or technology. I learned things such as JMS, Frameworks, etc. while programming at home and having to google stuff online for problems I have. Others, e.g. IoC, Surrogates in Databases, etc., I have used extensively without knowing that there was a name for it.

    Do you think that these technical tests are effective, and why? What interesting questions did you find that boggled your brains out while the clock kept ticking? Seeing that IT is evolving at a rapid rate, do we have to constantly be updated with new terms that come out?

    Some questions I was asked:
      - What object oriented principle is violated by this architectural mechanism for dot notation?
      - Is indexing tables effective for range query or point query search?
      - What is ThreadLocal and what is it used for?
      - Method overloading vs method overriding. What is the difference between the 2?
      - What is dynamic binding?

    Now, imagine my poor head trying to understand this jargon (considering I use it almost every day).

    PS The question was not a programming question, where you have a problem and write code to solve it. Rather, a thinking type question where you write answers (against the clock).

    Update: I clearly didn't come across as clearly as I should have. There are those that are technically "book smart" but with very little hands-on experience, and vice versa. So, the question (in connection to what I've asked) is: are these technical tests seeking "book smart" people or people with lots of hands-on experience (some of whom are not that well clued up on book-smart jargon)? How effective is it then, for companies to look for developers if most of the questions are too terminology-centric? (if that's the correct term, :))

    Read the article

  • SQLiteException and SQLite error near "(": syntax error with Subsonic ActiveRecord

    - by nvuono
    I ran into an interesting error with the following LiNQ query using LiNQPad and when using Subsonic 3.0.x w/ActiveRecord within my project and wanted to share the error and resolution for anyone else who runs into it. The linq statement below is meant to group entries in the tblSystemsValues collection into their appropriate system and then extract the system with the highest ID. from ksf in KeySafetyFunction where ksf.Unit == 2 && ksf.Condition_ID == 1 join sys in tblSystems on ksf.ID equals sys.KeySafetyFunction join xval in (from t in tblSystemsValues group t by t.tblSystems_ID into groupedT select new { sysId = groupedT.Key, MaxID = groupedT.Max(g=>g.ID), MaxText = groupedT.First(gt2 => gt2.ID == groupedT.Max(g=>g.ID)).TextValue, MaxChecked = groupedT.First(gt2 => gt2.ID == groupedT.Max(g=>g.ID)).Checked }) on sys.ID equals xval.sysId select new {KSFDesc=ksf.Description, sys.Description, xval.MaxText, xval.MaxChecked} On its own, the subquery for grouping into groupedT works perfectly and the query to match up KeySafetyFunctions with their System in tblSystems also works perfectly on its own. However, when trying to run the completed query in linqpad or within my project I kept running into a SQLiteException SQLite Error Near "(" First I tried splitting the queries up within my project because I knew that I could just run a foreach loop over the results if necessary. However, I continued to receive the same exception! I eventually separated the query into three separate parts before I realized that it was the lazy execution of the queries that was killing me. It then became clear that adding the .ToList() specifier after the myProtectedSystem query below was the key to avoiding the lazy execution after combining and optimizing the query and being able to get my results despite the problems I encountered with the SQLite driver. // determine the max Text/Checked values for each system in tblSystemsValue var myProtectedValue = from t in tblSystemsValue.All() group t by t.tblSystems_ID into groupedT select new { sysId = groupedT.Key, MaxID = groupedT.Max(g => g.ID), MaxText = groupedT.First(gt2 => gt2.ID ==groupedT.Max(g => g.ID)).TextValue, MaxChecked = groupedT.First(gt2 => gt2.ID ==groupedT.Max(g => g.ID)).Checked}; // get the system description information and filter by Unit/Condition ID var myProtectedSystem = (from ksf in KeySafetyFunction.All() where ksf.Unit == 2 && ksf.Condition_ID == 1 join sys in tblSystem.All() on ksf.ID equals sys.KeySafetyFunction select new {KSFDesc = ksf.Description, sys.Description, sys.ID}).ToList(); // finally join everything together AFTER forcing execution with .ToList() var joined = from protectedSys in myProtectedSystem join protectedVal in myProtectedValue on protectedSys.ID equals protectedVal.sysId select new {protectedSys.KSFDesc, protectedSys.Description, protectedVal.MaxChecked, protectedVal.MaxText}; // print the gratifying debug results foreach(var protectedItem in joined) { System.Diagnostics.Debug.WriteLine(protectedItem.Description + ", " + protectedItem.KSFDesc + ", " + protectedItem.MaxText + ", " + protectedItem.MaxChecked); }

    Read the article

  • Explanation of exporting an XML document as a relational database using XSLT

    - by Yaaqov
    I would like to better understand the basic steps needed to take an XML document like this Breakfast Menu...

      <?xml version="1.0" encoding="ISO-8859-1"?>
      <breakfast_menu>
        <food>
          <name>Belgian Waffles</name>
          <price>$5.95</price>
          <description>two of our famous Belgian Waffles with plenty of real maple syrup</description>
          <calories>650</calories>
        </food>
        <food>
          <name>Strawberry Belgian Waffles</name>
          <price>$7.95</price>
          <description>light Belgian waffles covered with strawberries and whipped cream</description>
          <calories>900</calories>
        </food>
        <food>
          <name>Berry-Berry Belgian Waffles</name>
          <price>$8.95</price>
          <description>light Belgian waffles covered with an assortment of fresh berries and whipped cream</description>
          <calories>900</calories>
        </food>
        <food>
          <name>French Toast</name>
          <price>$4.50</price>
          <description>thick slices made from our homemade sourdough bread</description>
          <calories>600</calories>
        </food>
        <food>
          <name>Homestyle Breakfast</name>
          <price>$6.95</price>
          <description>two eggs, bacon or sausage, toast, and our ever-popular hash browns</description>
          <calories>950</calories>
        </food>
      </breakfast_menu>

    And "export" it to, say, an Access or MySQL database using XSLT, creating two joined tables:

      Table: breakfast_menu
        Field: menu_item_id
        Field: food_id

      Table: food
        Field: food_id
        Field: name
        Field: price
        Field: description
        Field: calories

    If there are online tutorials on this that you know of, I'd be interested in learning more, as well. Thanks.
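
    To make the target mapping concrete, here is a hedged sketch of the row-per-food output such a transform needs to produce. It is written as a plain DOM walk in Java rather than as XSLT, and the surrogate-key scheme (a 1-based food_id reused as menu_item_id) is an assumption; an XSLT stylesheet with <xsl:output method="text"/> looping over the food elements with xsl:for-each and xsl:value-of would emit equivalent INSERT statements.

      import java.io.File;
      import javax.xml.parsers.DocumentBuilderFactory;
      import org.w3c.dom.Document;
      import org.w3c.dom.Element;
      import org.w3c.dom.NodeList;

      public class MenuToSql {
          public static void main(String[] args) throws Exception {
              Document doc = DocumentBuilderFactory.newInstance()
                      .newDocumentBuilder().parse(new File("breakfast_menu.xml"));
              NodeList foods = doc.getElementsByTagName("food");
              for (int i = 0; i < foods.getLength(); i++) {
                  Element food = (Element) foods.item(i);
                  int foodId = i + 1;   // surrogate key, 1-based
                  String name = text(food, "name");
                  String price = text(food, "price");
                  String description = text(food, "description");
                  String calories = text(food, "calories");

                  // One row per <food> element ...
                  System.out.printf(
                      "INSERT INTO food (food_id, name, price, description, calories) "
                    + "VALUES (%d, '%s', '%s', '%s', %s);%n",
                      foodId, esc(name), esc(price), esc(description), calories);

                  // ... and one row linking it back to the menu.
                  System.out.printf(
                      "INSERT INTO breakfast_menu (menu_item_id, food_id) VALUES (%d, %d);%n",
                      foodId, foodId);
              }
          }

          private static String text(Element parent, String tag) {
              return parent.getElementsByTagName(tag).item(0).getTextContent();
          }

          private static String esc(String s) {
              return s.replace("'", "''");   // naive SQL escaping, fine for a demo
          }
      }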

    Read the article
