Search Results

Search found 11409 results on 457 pages for 'large teams'.


  • .Net System.Mail.Message adding multiple "To" addresses

    - by Matt Dawdy
    I just hit something I think is inconsistent, and wanted to see if I'm doing something wrong, if I'm an idiot, or...

        MailMessage msg = new MailMessage();
        msg.To.Add("[email protected]");
        msg.To.Add("[email protected]");
        msg.To.Add("[email protected]");
        msg.To.Add("[email protected]");

    This really only sends the email to one person, the last one. To add multiple recipients I have to do this:

        msg.To.Add("[email protected],[email protected],[email protected],[email protected]");

    I don't get it. I thought I was adding multiple people to the "To" address collection, but what I was doing was replacing it. I think I just realized my error -- to add one item to the collection, use .To.Add(new MailAddress("[email protected]")). If you use just a string, it replaces everything the collection already holds. Ugh. I'd consider this a rather large gotcha! I answered my own question, but I think this is of value to have in the Stack Overflow archive, so I'll still ask it. Maybe someone even has an idea of other traps that you can fall into.
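
    A minimal C# sketch of the pattern described above, adding each recipient as a MailAddress so they all stay in the collection (the addresses and SMTP host are placeholders, not from the question):

        using System.Net.Mail;

        // Each Add(new MailAddress(...)) appends one recipient to the collection.
        MailMessage msg = new MailMessage();
        msg.From = new MailAddress("sender@example.com");
        msg.To.Add(new MailAddress("first@example.com"));
        msg.To.Add(new MailAddress("second@example.com"));
        msg.Subject = "Test";
        msg.Body = "Hello";
        new SmtpClient("localhost").Send(msg);   // SMTP host is a placeholder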

    Read the article

  • Nested namespaces, correct static library design issues

    - by PeterK
    Hello all, I'm currently in the process of developing a fairly large static library which will be used by some tools when it's finished. Since this project is somewhat larger than anything I've been involved in so far, I realized it's time to think of a good structure for the project, and using namespaces is one of those logical steps.

    My current approach is to divide the library into parts (which are not standalone, but their purpose calls for such a separation). I have a 'core' part which now just holds some very common typedefs and constants (used by many different parts of the library). Other parts are, for example, some 'utils' (hash etc.), file I/O and so on. Each of these parts has its own namespace.

    I have nearly finished the 'utils' part and realized that my approach probably is not the best. The problem (if we want to call it that) is that in the 'utils' namespace I need something from the 'core' namespace, which results in including the core header files and many using directives. So I began to think that this probably is not a good thing and should be changed somehow. My first idea is to use nested namespaces so as to have something like core::utils. Since this will require some heavy refactoring I want to ask here first.

    What do you think? How would you handle this? Or more generally: how do you correctly design a static library in terms of namespaces and code organization? If there are guidelines or articles about it, please mention them too. Thanks.

    Note: I'm quite sure there is more than one good approach. Feel free to post your ideas, suggestions etc. Since I'm designing this library I want it to be really good. The goal is to make it as clean and FAST as possible. The only problem is that I will have to integrate a LOT of existing code and refactor it, which will really be a painful process (sigh) - that's why good structure is so important.
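
    A minimal C++ sketch of the nested layout being considered (the identifiers are illustrative, not from the actual library):

        #include <cstddef>

        // core holds the shared typedefs; utils nests inside it and therefore
        // sees core's names without any using-directives or extra qualification.
        namespace core {
            typedef unsigned long id_type;      // example of a common typedef

            namespace utils {
                id_type hash (const char* data, std::size_t length);   // uses id_type directly
            }
        }

        // Callers outside the library still get an explicit, readable path:
        //     core::utils::hash(buffer, size);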

    Read the article

  • How to contain the Deepwater Horizon oil spill? [closed]

    - by Yarin
    This is obviously not programming, but it's important and we're smart people, so let's give it a shot. (BP has actually begun soliciting suggestions for how to deal with the crisis http://www.deepwaterhorizonresponse.com/go/doc/2931/546759/, confirming that they don't have a clue.) I'll start with my own proposal...

    Anchored chute: a large-diameter, collapsible, flexible tube/hose with a wide mouth on one end is anchored over the leak. There's no need for a hermetic seal; the opening just needs to be big enough to form a canopy over the leak area. The rest of the tubing can just be dumped on the sea floor. Since oil is less dense than water, the oily water that flows into the mouth eventually inflates the tube and raises the opposite end to the surface, where it can be collected (like those inflatable dancing air socks at car dealerships). Further buoyancy could be added with floats attached to the tube at intervals.

    I think this method would not be as susceptible to the problems BP had with the containment dome, where a rigid metal casing froze up with crystallized hydrates, as we would not be trying to contain the full pressure of the well, but would be using the natural buoyancy of the oil to channel its flow, and with a much larger opening.

    Read the article

  • Reading numpy arrays outside of Python

    - by Abiel
    In a recent question I asked about the fastest way to convert a large numpy array to a delimited string. My reason for asking was that I wanted to take that plain text string and transmit it (over HTTP, for instance) to clients written in other programming languages. A delimited string of numbers is obviously something that any client program can work with easily. However, it was suggested that because string conversion is slow, it would be faster on the Python side to do base64 encoding on the array and send it as binary. This is indeed faster.

    My question now is: (1) how can I make sure my encoded numpy array will travel well to clients on different operating systems and different hardware, and (2) how do I decode the binary data on the client side?

    For (1), my inclination is to do something like the following:

        import numpy as np
        import base64
        x = np.arange(100, dtype=np.float64)
        base64.b64encode(x.tostring())

    Is there anything else I need to do? For (2), I would be happy to have an example in any programming language, where the goal is to take the numpy array of floats and turn them into a similar native data structure. Assume we have already done base64 decoding and have a byte array, and that we also know the numpy dtype, dimensions, and any other metadata which will be needed. Thanks.
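
    A sketch of one way to pin down the byte order for (1) and to decode in Python for (2); the explicit '<f8' dtype string (little-endian float64) is an assumption about how the sender and receiver agree on the format:

        import base64
        import numpy as np

        x = np.arange(100, dtype=np.float64)

        # Force a fixed byte order before encoding so the bytes mean the same thing
        # on any client hardware; shape and dtype travel as separate metadata.
        payload = base64.b64encode(x.astype('<f8').tostring())

        # Decoding on a Python client; a client in another language would
        # reinterpret the same bytes with its own 64-bit float array type.
        raw = base64.b64decode(payload)
        y = np.frombuffer(raw, dtype='<f8')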

    Read the article

  • JSP: How can I still get the code on my error page to run, even if I can't display it?

    - by Josh Hinman
    I've defined an error-page in my web.xml:

        <error-page>
            <exception-type>java.lang.Exception</exception-type>
            <location>/error.jsp</location>
        </error-page>

    In that error page, I have a custom tag that I created. The tag handler for this tag e-mails me the stacktrace of whatever error occurred. For the most part this works great. Where it doesn't work great is if the output has already begun being sent to the client at the time the error occurs. In that case, we get this:

        SEVERE: Exception Processing ErrorPage[exceptionType=java.lang.Exception, location=/error.jsp]
        java.lang.IllegalStateException

    I believe this error happens because we can't redirect a request to the error page after output has already started. The work-around I've used is to increase the buffer size on particularly large JSP pages. But I'm trying to write a generic error handler that I can apply to existing applications, and I'm not sure it's feasible to go through hundreds of JSP pages making sure their buffers are big enough. Is there a way to still allow my stack trace e-mail code to execute in this case, even if I can't actually display the error page to the client?
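
    One direction that sidesteps the committed-response problem is to do the e-mailing in a servlet filter rather than on the error page, since the filter's catch block runs whether or not the response buffer has already been flushed. A sketch (class and helper names are hypothetical, not from the application above):

        import java.io.IOException;
        import javax.servlet.*;

        public class ErrorMailFilter implements Filter {
            public void init(FilterConfig config) { }
            public void destroy() { }

            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                try {
                    chain.doFilter(req, res);
                } catch (ServletException e) {
                    sendStackTraceMail(e);   // hypothetical helper reusing the tag handler's mail code
                    throw e;
                } catch (IOException e) {
                    sendStackTraceMail(e);
                    throw e;
                } catch (RuntimeException e) {
                    sendStackTraceMail(e);
                    throw e;
                }
            }

            private void sendStackTraceMail(Throwable t) {
                // e-mail t's stack trace here
            }
        }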

    Read the article

  • Using java.util.regex in Android apps - are there issues with this?

    - by johnrock
    In an Android app I have a utility class that I use to parse strings for 2 regexes. I compile the 2 patterns in a static initializer so they only get compiled once, then activities can use the parsing methods statically. This works fine except that the first time the class is accessed and loaded, and the static initializer compiles the pattern, the UI hangs for close to a MINUTE while it compiles the pattern! After the first time, it flies on all subsequent calls to parseString(). The regex I am using is rather large - 847 characters - but in a normal Java webapp this is lightning fast. I am testing this so far only in the emulator with a 1.5 AVD. Could this just be an emulator issue, or is there some other reason that this pattern is taking so long to compile?

        private static final String exp1 = "(insertratherlong---847character--regexhere)";
        private static Pattern regex1 = null;
        private static final String newLineAndTagsExp = "[<>\\s]";
        private static Pattern regexNewLineAndTags = null;

        static {
            regex1 = Pattern.compile(exp1, Pattern.CASE_INSENSITIVE);
            regexNewLineAndTags = Pattern.compile(newLineAndTagsExp);
        }

        public static String parseString(CharSequence inputStr) {
            String replacementStr = "replaceMentText";
            String resultString = "none";
            try {
                Matcher regexMatcher = regex1.matcher(inputStr);
                try {
                    resultString = regexMatcher.replaceAll(replacementStr);
                } catch (IllegalArgumentException ex) {
                } catch (IndexOutOfBoundsException ex) {
                }
            } catch (PatternSyntaxException ex) {
            }
            return resultString;
        }
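
    Whatever the emulator's share of the blame, one way to keep the UI responsive is to trigger the expensive compilation off the main thread before the patterns are first needed. A rough sketch (the class, warm-up hook and field names are illustrative):

        import java.util.regex.Pattern;

        public final class RegexHolder {
            private static final String EXP1 = "(insertratherlong---847character--regexhere)";
            private static volatile Pattern regex1;

            // Call this early, e.g. from Application.onCreate(), so the pattern is
            // compiled on a worker thread instead of blocking the first UI call.
            public static void warmUpAsync() {
                new Thread(new Runnable() {
                    public void run() {
                        regex1 = Pattern.compile(EXP1, Pattern.CASE_INSENSITIVE);
                    }
                }).start();
            }

            public static String parseString(CharSequence input) {
                Pattern p = regex1;
                if (p == null) {   // fall back if the warm-up has not finished yet
                    p = Pattern.compile(EXP1, Pattern.CASE_INSENSITIVE);
                    regex1 = p;
                }
                return p.matcher(input).replaceAll("replacementText");
            }
        }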

    Read the article

  • Best way to randomly select columns from random rows of SQL results.

    - by LesterDove
    A search of SO yields many results describing how to select random rows of data from a database table. My requirement is a bit different, though, in that I'd like to select individual columns from across random rows in the most efficient/random/interesting way possible.

    To better illustrate: I have a large Customers table, and from that I'd like to generate a bunch of fictitious demo Customer records that aren't real people. I'm thinking of just querying randomly from the Customers table, and then randomly pairing FirstNames with LastNames, Address, City, State, etc. So if this is my real Customer data (simplified):

        FirstName  LastName  State
        ===========================
        Sally      Simpson   SD
        Will       Warren    WI
        Mike       Malone    MN
        Kelly      Kline     KS

    Then I'd generate several records that look like this:

        FirstName  LastName  State
        ===========================
        Sally      Warren    MN
        Kelly      Malone    SD

    Etc. My initial approach works, but it lacks the elegance that I'm hoping the final answer will provide. (I'm particularly unhappy with the repetitiveness of the subqueries, and the fact that this solution requires a known/fixed number of fields and therefore isn't reusable.)

        SELECT
            FirstName = (SELECT TOP 1 FirstName FROM Customer ORDER BY newid()),
            LastName  = (SELECT TOP 1 LastName  FROM Customer ORDER BY newid()),
            State     = (SELECT TOP 1 State     FROM Customer ORDER BY newid())

    Thanks!
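
    One sketch of a less repetitive approach (assuming SQL Server, since the original uses newid() and TOP): shuffle each column independently with ROW_NUMBER() OVER (ORDER BY NEWID()) and join the shuffles back together on row number. Column and table names follow the simplified example above:

        WITH f AS (SELECT FirstName, ROW_NUMBER() OVER (ORDER BY NEWID()) AS rn FROM Customer),
             l AS (SELECT LastName,  ROW_NUMBER() OVER (ORDER BY NEWID()) AS rn FROM Customer),
             s AS (SELECT State,     ROW_NUMBER() OVER (ORDER BY NEWID()) AS rn FROM Customer)
        SELECT TOP 10 f.FirstName, l.LastName, s.State
        FROM f
        JOIN l ON l.rn = f.rn
        JOIN s ON s.rn = f.rn;

    Each column still has to be listed once, but the shuffling logic is written only one way and extends to more columns by adding another CTE.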

    Read the article

  • Access cost of dynamically created objects with dynamically allocated members

    - by user343547
    I'm building an application which will have dynamically allocated objects of type A, each with a dynamically allocated member (v), similar to the class below:

        class A
        {
            int a;
            int b;
            int* v;
        };

    where:

    - The memory for v will be allocated in the constructor.
    - v will be allocated once when an object of type A is created and will never need to be resized.
    - The size of v will vary across all instances of A.

    The application will potentially have a huge number of such objects and mostly needs to stream a large number of these objects through the CPU, but only needs to perform very simple computations on the member variables.

    - Could having v dynamically allocated mean that an instance of A and its member v are not located together in memory?
    - What tools and techniques can be used to test if this fragmentation is a performance bottleneck?
    - If such fragmentation is a performance issue, are there any techniques that could allow A and v to be allocated in a contiguous region of memory? Or are there any techniques to aid memory access, such as a pre-fetching scheme? For example: get an object of type A, operate on the other member variables whilst pre-fetching v.
    - If the size of v, or an acceptable maximum size, could be known at compile time, would replacing v with a fixed-size array like int v[max_length] lead to better performance?

    The target platforms are standard desktop machines with x86/AMD64 processors, Windows or Linux OSes, and compiled using either GCC or MSVC compilers.
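
    On the contiguity question, one common workaround is to make a single allocation that holds both the object header and its variable-length payload, so the data v points at always sits right after a and b. A C++ sketch under that assumption (this is the classic trailing-array idiom; names are illustrative, and strictly portable C++ frowns on indexing past v[0]):

        #include <cstdlib>
        #include <new>

        struct A
        {
            int a;
            int b;
            int n;        // number of elements in v
            int v[1];     // trailing array; the allocation below reserves the real size
        };

        A* make_A(int n)
        {
            void* mem = std::malloc(sizeof(A) + (n - 1) * sizeof(int));
            if (!mem) throw std::bad_alloc();
            A* p = static_cast<A*>(mem);
            p->n = n;
            return p;     // p->v[0..n-1] is contiguous with a and b
        }

        void destroy_A(A* p) { std::free(p); }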

    Read the article

  • NULL-keys for key/value table

    - by user72185
    (Using Oracle) I have a table with key/value pairs like this:

        create table MESSAGE_INDEX
        (
            KEY        VARCHAR2(256)  not null,
            VALUE      VARCHAR2(4000) not null,
            MESSAGE_ID NUMBER         not null
        )

    I now want to find all the messages where key = 'someKey' and value is 'val1', 'val2' or 'val3' - OR value is null, in which case there will be no entry in the table at all. This is to save space; there would be a large number of keys with null values if I stored them all. I think this works:

        SELECT message_id
        FROM   message_index idx
        WHERE  ((key = 'someKey' AND value IN ('val1', 'val2', 'val3'))
               OR NOT EXISTS (SELECT 1
                              FROM   message_index
                              WHERE  key = 'someKey'
                              AND    idx.message_id = message_id))

    But it is extremely slow. It takes 8 seconds with 700K records in message_index, and there will be many more records and more search criteria when moving outside of my test environment. The primary key is (key, value, message_id):

        add constraint PK_KEY_VALUE primary key (KEY, VALUE, MESSAGE_ID)

    And I added another index for message_id, to speed up searching for missing keys:

        create index IDX_MESSAGE_ID on MESSAGE_INDEX (MESSAGE_ID)

    I will be doing several of these key/value lookups in every search, not just one as shown above. So far I am doing them nested, where the output ids of one level are the input to the next. E.g.:

        SELECT message_id from message_index
        WHERE (key/value compare)
        AND message_id IN (SELECT ... and so on)

    What can I do to speed this up?
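
    One direction worth sketching, assuming there is (or could be) a base table holding every message id - called MESSAGES here, which is an assumption, since the question only shows MESSAGE_INDEX: drive from that table and use an outer join, so the "key absent" case falls out of the join instead of a correlated NOT EXISTS per row:

        -- MESSAGES(message_id) is assumed; it stands for "every message that exists".
        SELECT DISTINCT m.message_id
        FROM   messages m
        LEFT JOIN message_index idx
               ON  idx.message_id = m.message_id
               AND idx.key = 'someKey'
        WHERE  idx.value IN ('val1', 'val2', 'val3')
               OR idx.message_id IS NULL;   -- no 'someKey' row at all means "value is null"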

    Read the article

  • Use a non-coalescing parser in Axis2

    - by Nathan
    Does anyone know how I can get Axis2 to use a non-coalescing XMLStreamReader when it parses a SOAP message? I am writing code that reads a large base64 binary text element. Coalescing is the default behaviour, and this causes the default XMLStreamReader to load the entire text into memory rather than returning multiple CHARACTERS events. The upshot of this is that I run out of heap space when running the following code:

        reader = element.getTextAsStream( true );

    The OutOfMemory error occurs in com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next:

        java.lang.OutOfMemoryError: Java heap space
            at com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:208)
            at com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:226)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanContent(XMLDocumentFragmentScannerImpl.java:1552)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2864)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:607)
            at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:116)
            at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(XMLStreamReaderImpl.java:558)
            at org.apache.axiom.util.stax.wrapper.XMLStreamReaderWrapper.next(XMLStreamReaderWrapper.java:225)
            at org.apache.axiom.util.stax.dialect.DisallowDoctypeDeclStreamReaderWrapper.next(DisallowDoctypeDeclStreamReaderWrapper.java:34)
            at org.apache.axiom.util.stax.wrapper.XMLStreamReaderWrapper.next(XMLStreamReaderWrapper.java:225)
            at org.apache.axiom.util.stax.dialect.SJSXPStreamReaderWrapper.next(SJSXPStreamReaderWrapper.java:138)
            at org.apache.axiom.om.impl.builder.StAXOMBuilder.parserNext(StAXOMBuilder.java:668)
            at org.apache.axiom.om.impl.builder.StAXOMBuilder.next(StAXOMBuilder.java:214)
            at org.apache.axiom.om.impl.llom.SwitchingWrapper.updateNextNode(SwitchingWrapper.java:1098)
            at org.apache.axiom.om.impl.llom.SwitchingWrapper.<init>(SwitchingWrapper.java:198)
            at org.apache.axiom.om.impl.llom.OMStAXWrapper.<init>(OMStAXWrapper.java:73)
            at org.apache.axiom.om.impl.llom.OMContainerHelper.getXMLStreamReader(OMContainerHelper.java:67)
            at org.apache.axiom.om.impl.llom.OMContainerHelper.getXMLStreamReader(OMContainerHelper.java:40)
            at org.apache.axiom.om.impl.llom.OMElementImpl.getXMLStreamReader(OMElementImpl.java:790)
            at org.apache.axiom.om.impl.llom.OMElementImplUtil.getTextAsStream(OMElementImplUtil.java:114)
            at org.apache.axiom.om.impl.llom.OMElementImpl.getTextAsStream(OMElementImpl.java:826)
            at org.example.UploadFileParser.invokeBusinessLogic(UploadFileParser.java:160)
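
    For reference, the StAX property that controls this behaviour at the parser level is javax.xml.stream.isCoalescing. A plain StAX sketch of turning it off looks like the following; how to make Axis2/Axiom build its OM tree from such a factory is exactly the open part of the question:

        import java.io.InputStream;
        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamReader;

        public class NonCoalescingExample {
            public static XMLStreamReader openReader(InputStream in) throws Exception {
                XMLInputFactory factory = XMLInputFactory.newInstance();
                // FALSE = large text content arrives as multiple CHARACTERS events
                // instead of one buffered string.
                factory.setProperty(XMLInputFactory.IS_COALESCING, Boolean.FALSE);
                return factory.createXMLStreamReader(in);
            }
        }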

    Read the article

  • Bizarre problem with WPF XAML file.

    - by paxdiablo
    I've just started a very simple WPF application which consists of a main large image and four smaller images. In order to assist with the layout, I created some JPEGs in MS Paint containing the images -2, -1, 0, +1 and +2 and just copied them into the top level of the project directory. The XAML segment contains, for the five images:

        <Image Grid.Column="1" Grid.Row="2" Grid.ColumnSpan="4" Grid.RowSpan="1" Margin="0,0,0,0"
               Name="imgPicture" Stretch="Fill" VerticalAlignment="Top"
               Source="file:///C:/DAndS/Pax/MyDocs/VS2008/Projects/MyProj/zero.jpg" />
        <Image Grid.Column="1" Grid.Row="4" Grid.ColumnSpan="1" Grid.RowSpan="1" Margin="0,0,0,0"
               Name="imgPicMinus2" Stretch="Fill" VerticalAlignment="Top"
               Source="file:///C:/DAndS/Pax/MyDocs/VS2008/Projects/MyProj/minus2.jpg" />
        <Image Grid.Column="2" Grid.Row="4" Grid.ColumnSpan="1" Grid.RowSpan="1" Margin="0,0,0,0"
               Name="imgPicMinus1" Stretch="Fill" VerticalAlignment="Top"
               Source="file:///C:/DAndS/Pax/MyDocs/VS2008/Projects/MyProj/minus1.jpg" />
        <Image Grid.Column="3" Grid.Row="4" Grid.ColumnSpan="1" Grid.RowSpan="1" Margin="0,0,0,0"
               Name="imgPicPlus1" Stretch="Fill" VerticalAlignment="Top"
               Source="file:///C:/DAndS/Pax/MyDocs/VS2008/Projects/MyProj/plus1.jpg" />
        <Image Grid.Column="4" Grid.Row="4" Grid.ColumnSpan="1" Grid.RowSpan="1" Margin="0,0,0,0"
               Name="imgPicPlus2" Stretch="Fill" VerticalAlignment="Top"
               Source="file:///C:/DAndS/Pax/MyDocs/VS2008/Projects/MyProj/plus2.jpg" />

    When I try to set the source property for the plus2 image, it complains with a dialog box stating:

        Property value is not valid.
        Details: The file plus2.jpg is not part of the project or its 'Build Action' property is not set to 'Resource'.

    Yet if I rename the file to plus3.jpg or plus2x.jpg, I don't have that problem. Why is it complaining about plus2.jpg specifically?

    Read the article

  • Mac vs. Windows Browser Font Height Rendering Issue

    - by cdmckay
    I'm using a custom font and the @font-face rule. In Windows, everything looks great, regardless of whether it's Firefox, Chrome, or IE. On Mac, it's a different story. For some reason, the Mac font renderer thinks the font is a lot shorter than it is. For example, consider this test code (live example here):

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html>
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
            <title>Webble</title>
            <style type="text/css">
                @font-face {
                    font-family: "Bubbleboy 2";
                    src: url("bubbleboy-2.ttf") format('truetype');
                }
                body {
                    font-family: "Bubbleboy 2";
                    font-size: 30px;
                }
                div {
                    background-color: maroon;
                    color: yellow;
                    height: 100px;
                    line-height: 100px;
                }
            </style>
        </head>
        <body>
            <div>The quick brown fox jumped over the lazy dog.</div>
        </body>
        </html>

    Open it on Windows Firefox and on Mac Firefox. Use your mouse to select the text. On Windows, you'll notice it fully selects the font. On Mac, it only selects about half the font. If you look at what it is selecting, you'll see that that part has been centered, instead of covering the full height of the font. Is there any way to fix this rather large discrepancy?

    Read the article

  • How does CDI injection work in MDBs and @Scheduled beans?

    - by Nils-Petter Nilsen
    I'm working on a large Java EE 6 application that is deployed on JBoss 6 Final. My current tasks involve using @Inject consistently instead of @EJB, but I'm running into some problems on some types of beans, specifically @MessageDriven beans and beans with @Schedule methods. What happens is that if I'm unlucky with the timing (for @Schedule), or if there are messages in the MDBs' queues at startup, instantiation of the beans will fail because the injected resources (which are EJBs themselves) are not bound yet. Because I use @Inject, I'm guessing that the EJB container considers my beans to be ready, since the container itself does not care about @Inject; it probably simply assumes that since there are no @EJB injections, the beans are ready for use. The injected CDI proxies will then fail because the resources to inject aren't actually bound yet. Tiny example:

        @Stateless
        @LocalBean
        public class MySupportingBean {
            public void doSomething() { ... }
        }

        @Singleton
        public class MyScheduledBean {
            @Inject
            private MySupportingBean supportingBean;

            @Schedule(second = "*/1", hour = "*", minute = "*", persistent = false)
            public void onTimeout() {
                supportingBean.doSomething();
            }
        }

    The above example will probably not fail often because there are only two beans, but the project I'm working on binds lots of EJBs, which will amplify the problem. Still, it might fail, because there is no guarantee that MySupportingBean is bound first, and if onTimeout is invoked before MySupportingBean is bound, then instantiation of MyScheduledBean will fail. If I used @EJB instead, MyScheduledBean wouldn't be bound until the dependency to MySupportingBean was satisfied. Note that the example will not fail in onTimeout itself, but when CDI attempts to inject MySupportingBean.

    I've read a lot of posts on different forums where many people argue that @Inject is always better. Generally, I agree, but how do they handle @Schedule or @MessageDriven combined with @Inject? In my experience, it comes down to dumb luck whether the beans will work or not in those cases, and the beans will fail arbitrarily, depending on which order the EJBs are deployed in and when @Schedule or onMessage are invoked.
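
    One pragmatic sketch, given the ordering behaviour described above: keep @EJB for the dependencies of timer and message-driven beans, so the EJB container delays binding them until their collaborators exist, and reserve @Inject for everything else. Applied to the tiny example (this simply leans on the @EJB behaviour the question itself describes):

        @Singleton
        public class MyScheduledBean {
            // @EJB (rather than @Inject) makes the container wait for MySupportingBean
            // before this bean is considered ready, so a timer firing at startup
            // cannot observe a missing dependency.
            @EJB
            private MySupportingBean supportingBean;

            @Schedule(second = "*/1", hour = "*", minute = "*", persistent = false)
            public void onTimeout() {
                supportingBean.doSomething();
            }
        }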

    Read the article

  • How do I get rid of these warnings?

    - by Brian Postow
    This is really several questions, but anyway... I'm working with a big project in Xcode, relatively recently ported from Metrowerks (yes, really), and there's a bunch of warnings that I want to get rid of. Every so often an IMPORTANT warning comes up, but I never look at them because there are too many garbage ones. So, if I can either figure out how to get Xcode to stop giving each warning, or actually fix the problem, that would be great. Here are the warnings:

    1. It claims that <map.h> is antiquated. However, when I replace it with <map> my files don't compile. Evidently, there's something in map.h that isn't in map...

    2. "this decimal constant is unsigned only in ISO C90" - This is a large number being compared to an unsigned long. I have even cast it, with no effect.

    3. "enumeral mismatch in conditional expression: <anonymous enum> vs <anonymous enum>" - This appears to be from a ?: operator. Possibly the then and else branches don't evaluate to the same type? Except that in at least one case, it's (matchVp == NULL ? noErr : dupFNErr), and since those are both of type OSErr, which is Mac-defined... I'm not sure what's up. It also seems to come up when I have other pairs of Mac constants...

    4. "multi-character character constant" - This one is obvious. The problem is that I actually NEED multi-character constants...

    5. "-fwritable-strings not compatible with literal CF/NSString" - I unchecked the "Strings are Read-Only" box in both the project and target settings... and it seems to have had no effect...

    Read the article

  • Is there a modern free D?VCS that can ignore mainframe sequence numbers?

    - by Brent.Longborough
    I'm looking at migrating a large suite of IBM Assembler Language programs, from a vcs based on "filenames include version numbers", to a modern vcs which will give me, among other things, the ability to branch and merge. These files have 80-column records, the last 8 columns being an almost-meaningless sequence number. For a number of reasons which I don't really want to waste space by going into, I need the vcs to ignore (but hopefully preserve in some well-defined manner) the sequence number columns, and to diff and patch based only on the contents of the first 72 columns. Any ideas? Just to clarify "ignore but preserve": I accept it's a bit vague, as I haven't fully collected my ideas yet. It would be something along the lines of this: "When merging/patching, if one side has sequence numbers, output them; if more-than-one side has sequence numbers, use those present in file (1|2|3)" Why do I want to preserve sequence numbers? First, they really are sequence numbers. Second, I want to reintegrate this stuff back onto the mainframe, where sequence numbers can be terribly significant. (Those of you who know what "SMP/E" means will understand. Those who don't, be happy, but tremble...) I've just realised I hadn't accepted an answer. Difficult choice, but @Noldorin comes closest to where I have to go.
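
    One concrete direction along those lines is a VCS content filter: for example, git's clean/smudge filter drivers can strip columns 73-80 from what gets hashed and diffed. A rough sketch (the filter name and file pattern are made up, and note this ignores rather than fully preserves the sequence numbers, so it only covers part of the requirement):

        # .gitattributes - run assembler sources through the "seqnum" filter
        #   *.asm  filter=seqnum

        # Register the filter: "clean" (what git stores and diffs) keeps only
        # columns 1-72; "smudge" leaves checked-out content untouched.
        git config filter.seqnum.clean  'cut -c1-72'
        git config filter.seqnum.smudge 'cat'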

    Read the article

  • Programmatically controlled virtual drive

    - by Robert Lin
    How would I go about creating a virtual drive with which I can programmatically and dynamically change the contents? For instance, program A starts running and creates a virtual drive. When program B looks in the drive, it sees an error log and starts reading/processing it. In the middle of all this, program A gets a signal from somewhere and decides to add to the log. I want program B to be unaware of the change and just keep on going. Program B should continue reading as if nothing happened. Program A would just report a ridiculously large file size for the log and then fill it in as appropriate. Program A would fill the log with tags if program B tries to read past the last entry. I know this is a weird request, but there's really no other way to do this... I basically can't rewrite program B, so I need to fool it. How do I do this in Windows? How about OS X?

    Read the article

  • Is it safe to use random Unicode for complex delimiter sequences in strings?

    - by ccomet
    Question: in terms of program stability and ensuring that the system will actually operate, how safe is it to use chars like ¦, § or ‡ for complex delimiter sequences in strings? Can I reliably believe that I won't run into any issues with a program reading these incorrectly?

    I am working in a system, using C# code, in which I have to store a fairly complex set of information within a single string. The readability of this string is only necessary on the computer side; end-users should only ever see the information after it has been parsed by the appropriate methods. Because some of the data in these strings will be collections of variable size, I use different delimiters to identify what parts of the string correspond to a certain tier of organization. There are enough cases that the standard set of ;, |, and similar ilk has been exhausted. I considered two-char delimiters, like ;# or ;|, but I felt that it would be very inefficient. There probably isn't that large of a performance difference in storing with one char versus two chars, but when I have the option of picking the smaller option, it just feels wrong to pick the larger one. So finally, I considered using characters like the double dagger and section sign. They only take up one char, and they are definitely not going to show up in the actual text that I'll be storing, so they won't be confused for anything.

    But character encoding is finicky. While the visibility to the end user is meaningless (since they, in fact, won't see it), I recently became concerned about how the programs in the system will read it. The string is stored in one database, while a separate program is responsible for both encoding and decoding the string into different object types for the rest of the application to work with. And if something is expected to be written one way, and is possibly written another, then maybe the whole system will fail, and I can't really let that happen. So is it safe to use this kind of chars for background delimiters?
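
    A small C# sketch of the round trip in question, here using the ASCII unit and record separator control characters (U+001F and U+001E) as one alternative worth weighing, since they are single chars, stay single-byte in UTF-8, and never occur in normal text; the field values and structure are placeholders:

        using System;

        class DelimiterDemo
        {
            const char RecordSep = '\u001E';   // separates records
            const char UnitSep   = '\u001F';   // separates fields within a record

            static void Main()
            {
                string packed = string.Join(RecordSep.ToString(), new[] {
                    "alpha" + UnitSep + "beta"  + UnitSep + "gamma",
                    "one"   + UnitSep + "two"   + UnitSep + "three" });

                foreach (string record in packed.Split(RecordSep))
                {
                    string[] fields = record.Split(UnitSep);
                    Console.WriteLine(string.Join(" | ", fields));
                }
            }
        }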

    Read the article

  • spring - constructor injection and overriding parent definition of nested bean

    - by mdma
    I've read the Spring 3 reference on inheriting bean definitions, but I'm confused about what is possible and not possible. For example, here is a bean that takes a collaborator bean, configured with the value 12:

        <bean name="beanService12" class="SomeService">
            <constructor-arg index="0">
                <bean name="beanBaseNested" class="SomeCollaborator">
                    <constructor-arg index="0" value="12"/>
                </bean>
            </constructor-arg>
        </bean>

    I'd then like to be able to create similar beans, with slightly different configured collaborators. Can I do something like this?

        <bean name="beanService13" parent="beanService12">
            <constructor-arg index="0">
                <bean>
                    <constructor-arg index="0" value="13"/>
                </bean>
            </constructor-arg>
        </bean>

    I'm not sure this is possible and, even if it were, it feels a bit clunky. Is there a nicer way to override small parts of a large nested bean definition? It seems the child bean has to know quite a lot about the parent, e.g. the constructor index. I'd prefer not to change the structure - the parent beans use collaborators to perform their function - but I can add properties and use property injection if that helps. This is a repeated pattern; would creating a custom schema help? Thanks for any advice!

    Read the article

  • How to replace custom IDs in the order of their appearance with a shell script?

    - by Péter Török
    I have a pair of rather large log files with very similar content, except that some identifiers are different between the two. A couple of examples:

        UnifiedClassLoader3@19518cc  | UnifiedClassLoader3@d0357a
        JBossRMIClassLoader@13c2d7f  | JBossRMIClassLoader@191777e

    That is, wherever the first file contains UnifiedClassLoader3@19518cc, the second contains UnifiedClassLoader3@d0357a, and so on. I want to replace these with identical IDs so that I can spot the really important differences between the two files. I.e. I want to replace all occurrences of both UnifiedClassLoader3@19518cc in file1 and UnifiedClassLoader3@d0357a in file2 with UnifiedClassLoader3@1; all occurrences of both JBossRMIClassLoader@13c2d7f in file1 and JBossRMIClassLoader@191777e in file2 with JBossRMIClassLoader@2; etc.

    Using the Cygwin shell, so far I have managed to list all the different identifiers occurring in one of the files with

        grep -o -e 'ClassLoader[0-9]*@[0-9a-f][0-9a-f]*' file1.log | sort | uniq

    However, now the original order is lost, so I don't know which is the pair of which ID in the other file. With grep -n I can get the line number, so the sort would preserve the order of appearance, but then I can't weed out the duplicate occurrences. Unfortunately grep cannot print only the first match of a pattern.

    I figured I could save the list of identifiers produced by the above command into a file, then iterate over the patterns in the file with grep -n | head -n 1, concatenate the results and sort them again. The result would be something like

        2   ClassLoader3@19518cc
        137 ClassLoader@13c2d7f
        563 ClassLoader3@1267649
        ...

    Then I could (either manually or with sed itself) massage this into a sed command like

        sed -e 's/ClassLoader3@19518cc/ClassLoader3@2/g' \
            -e 's/ClassLoader@13c2d7f/ClassLoader@137/g' \
            -e 's/ClassLoader3@1267649/ClassLoader3@563/g' file1.log > file1_processed.log

    and similarly for file2. However, before I start, I would like to verify that my plan is the simplest possible working solution to this. Is there any flaw in this approach? Is there a simpler way?
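
    A sketch of a shortcut for the "in order of first appearance, without duplicates" step, and for generating the sed scripts mechanically; it assumes the paired identifiers appear in the same relative order in both files, as the examples above suggest:

        # Unique identifiers in order of first appearance (awk drops repeats, keeps order).
        grep -o -e 'ClassLoader[0-9]*@[0-9a-f][0-9a-f]*' file1.log | awk '!seen[$0]++' > ids1.txt
        grep -o -e 'ClassLoader[0-9]*@[0-9a-f][0-9a-f]*' file2.log | awk '!seen[$0]++' > ids2.txt

        # Build one substitution per identifier; the Nth ID in each file maps to the same tag.
        awk '{ printf "s/%s/ID_%d/g\n", $0, NR }' ids1.txt > fix1.sed
        awk '{ printf "s/%s/ID_%d/g\n", $0, NR }' ids2.txt > fix2.sed

        sed -f fix1.sed file1.log > file1_processed.log
        sed -f fix2.sed file2.log > file2_processed.log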

    Read the article

  • How do the sizes of standard libraries compare for different languages?

    - by Roman A. Taycher
    Someone was recently raving about how great jQuery was, how it made JavaScript a pleasure, and also how the whole source code was so small (and one file). I looked it up on www.ohloh.net/ and it said it was about 30,000 lines of JavaScript, but when I tried curl piped to wc it said about 5,000 lines (a strange discrepancy - maybe test suites, etc.?). I thought, well, it isn't that strange, since JavaScript from what I've heard has a lot of fun dynamic tricks, so you can probably get away with a small library.

    But then I thought: what about other high-level languages, the ones with large standard libraries? I wondered how big the standard libraries are for Python/Ruby/Haskell/Pharo (Smalltalk)/*ML/etc. (libraries, not VM stuff, to the degree it's possible to separate them). Anybody know? Any details (comment/blank/code lines, test code lines, lines in the language vs lines in FFI/bytecode) are appreciated!

    Edit: P.S. Since this started with me asking about jQuery, as a bonus could you please also list the size of mega frameworks? A megaframework provides so much that people using an X megaframework in language Y might sometimes refer to programming in XY, or even X, rather than in Y (e.g. Qt, jQuery, etc.).

    Read the article

  • Need guidelines for optimizing WebGL performance by minimizing shader changes

    - by brainjam
    I'm trying to get an idea of the practicality of WebGL for rendering large architectural interior scenes, consisting of 100K's of triangles. These triangles are distributed over many objects, and there are many materials in the scene. On the other hand, there are no moving parts. And the materials tend to be fairly simple, mostly based on texture maps. There is a lot of texture map sharing... for example, all the chairs in a scene will share a common map. There is also some multitexturing - up to three textures overlaid in a material.

    I've been doing a little experimentation and reading, and gather that frequently switching materials during a rendering pass will slow things down. For example, a scene with 200K triangles will have significant performance differences, depending on whether there are 10 or 1000 objects, assuming that each time an object is displayed a new material is set up. So it seems that if performance is important the scene should be sorted by materials so as to minimize material switching. What I'm looking for is guidelines on how to think of the overhead of various state changes, and where do I get the biggest bang for the buck. For example:

    - What are the relative performance costs of, say, gl.useProgram(), gl.uniformMatrix4fv(), gl.drawElements()?
    - Should I try to write ubershaders to minimize shader switching?
    - Should I try to aggregate geometry to minimize the number of gl.drawElements() calls?

    I realize that mileage may vary depending on browser, OS, and graphics hardware. And I'm also not looking for heroic measures. Just some guidelines from people who have already had some experience in making scenes fast. I'll add that while I've had some experience with fixed-pipeline OpenGL programming in the past, I'm rather new to the WebGL/OpenGL ES 2.0 way of doing things.
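
    A sketch of the sort-by-state idea in WebGL-flavoured JavaScript; the batch object shape (program, texture, index buffer, counts) is invented for illustration, and vertex attribute setup is omitted:

        // Sort once (nothing moves): group by shader program first, then by texture,
        // so gl.useProgram and gl.bindTexture are called as rarely as possible.
        batches.sort(function (a, b) {
            return (a.programId - b.programId) || (a.textureId - b.textureId);
        });

        var currentProgram = null;
        var currentTexture = null;
        for (var i = 0; i < batches.length; i++) {
            var batch = batches[i];
            if (batch.program !== currentProgram) {
                gl.useProgram(batch.program);
                currentProgram = batch.program;
            }
            if (batch.texture !== currentTexture) {
                gl.bindTexture(gl.TEXTURE_2D, batch.texture);
                currentTexture = batch.texture;
            }
            gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, batch.indexBuffer);
            gl.drawElements(gl.TRIANGLES, batch.indexCount, gl.UNSIGNED_SHORT, 0);
        }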

    Read the article

  • How do I create a python module from a fortran program with f2py?

    - by Lars Hellemo
    I am trying to read some SMPS files with Python, and found a Fortran implementation, so I thought I would give f2py a shot. The problem is that I have no experience with Fortran. I have successfully installed gfortran and f2py on my Linux box and ran the example on the f2py page, but I have some trouble compiling and running the large program. There are two files, one with a file reader wrapper and one with all the logic. They seem to call each other, but when I compile and link, or try f2py, I get errors that they somehow can't find each other:

        f95 -c FILEWR~1.F
        f95 -c SMPSREAD.F90
        f95 -o smpsread SMPSREAD.o FILEWR~1.o

        FILEWR~1.o: In function `file_wrapper_':
        FILEWR~1.F(.text+0x3d): undefined reference to `chopen_'
        /usr/lib/gcc/i486-linux-gnu/4.4.1/libgfortranbegin.a(fmain.o): In function `main':
        (.text+0x27): undefined reference to `MAIN__'
        collect2: ld returned 1 exit status

    I also tried changing the name to FILE_WRAPPER.F, but that did not help. With f2py I found out I had to include a comment to get it to accept free format, saved this as a new file and tried:

        f2py -c -m smpsread smpsread.f90

    I get a lot of output and warnings, but the error seems to be this one:

        getctype: No C-type found in "{'typespec': 'type', 'attrspec': ['allocatable'], 'typename': 'node', 'dimension': [':']}", assuming void.

    The Fortran 90 SMPS reader can be found here. Any help or suggestions appreciated.

    Read the article

  • Moving inserted container element if possible

    - by doublep
    I'm trying to achieve the following optimization in my container library: when inserting an lvalue-referenced element, copy it to internal storage; but when inserting an rvalue-referenced element, move it if supported. The optimization is supposed to be useful e.g. if the contained element type is something like std::vector, where moving if possible would give substantial speedup. However, so far I was unable to devise any working scheme for this.

    My container is quite complicated, so I can't just duplicate insert() code several times: it is large. I want to keep all "real" code in some inner helper, say do_insert() (may be templated), and various insert()-like functions would just call that with different arguments. My best bet code for this (a prototype, of course, without doing anything real):

        #include <iostream>
        #include <utility>

        struct element
        {
            element () { };
            element (element&&) { std::cerr << "moving\n"; }
        };

        struct container
        {
            void insert (const element& value)
            { do_insert (value); }

            void insert (element&& value)
            { do_insert (std::move (value)); }

        private:
            template <typename Arg>
            void do_insert (Arg arg)
            { element x (arg); }
        };

        int main ()
        {
            {
                // Shouldn't move.
                container c;
                element x;
                c.insert (x);
            }
            {
                // Should move.
                container c;
                c.insert (element ());
            }
        }

    However, this doesn't work at least with GCC 4.4 and 4.5: it never prints "moving" on stderr. Or is what I want impossible to achieve, and that's why emplace()-like functions exist in the first place?
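
    For what it's worth, the usual C++0x/C++11 device for this is perfect forwarding: take do_insert's parameter as a deduced Arg&& and pass it through std::forward, so an lvalue argument stays an lvalue (copied) and an rvalue stays an rvalue (moved). A sketch of just the helper, under that assumption:

        struct container
        {
            void insert (const element& value) { do_insert (value); }
            void insert (element&& value)      { do_insert (std::move (value)); }

        private:
            template <typename Arg>
            void do_insert (Arg&& arg)
            {
                // std::forward<Arg> preserves the value category of the original
                // argument, so element's move constructor runs for rvalues only.
                element x (std::forward<Arg> (arg));
            }
        };

    In the prototype above, do_insert takes Arg by value and then constructs from the named parameter arg, which is itself an lvalue, so the copy constructor is chosen even on the rvalue path - which is why "moving" never prints.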

    Read the article

  • Using Doctrine to abstract CRUD operations

    - by TomWilsonFL
    This has bothered me for quite a while, but now it is a necessity that I find the answer. We are working on quite a large project using CodeIgniter plus Doctrine. Our application has a front end and also an admin area for the company to check/change/delete data. When we designed the front end, we simply consumed most of the Doctrine code right in the controller:

        // In semi-pseudocode
        function register()
        {
            $data = get_post_data();

            if (count($data) && isValid($data)) {
                $U = new User();
                $U->fromArray($data);
                $U->save();

                $C = new Customer();
                $C->fromArray($data);
                $C->user_id = $U->id;
                $C->save();

                redirect_to_next_step();
            }
        }

    Obviously, when we went to do the admin views, code duplication began, and considering we were in "get it DONE" mode, it now stinks with code bloat. I have moved a lot of functionality (business logic) into the model using model methods, but the basic CRUD does not fit there. I was going to attempt to place the CRUD into static methods, i.e. Customer::save($array) [would perform both insert and update, depending on whether the primary key is present in the array], Customer::delete($id), Customer::getObj($id = false) [if false, get all data]. This is going to become painful, though, for 32 model objects (and growing). Also, at times models need to interact (as in the interaction above between user data and customer data), which can't be done in a static method without breaking encapsulation.

    I envision adding another layer to this (exposing web services), so knowing there are going to be 3 "controllers" at some point, I need to encapsulate this CRUD somewhere (obviously). But are static methods the way to go, or is there another road? Your input is much appreciated.

    Read the article

  • Long running transactions with Spring and Hibernate?

    - by jimbokun
    The underlying problem I want to solve is running a task that generates several temporary tables in MySQL, which need to stay around long enough to fetch results from Java after they are created. Because of the size of the data involved, the task must be completed in batches. Each batch is a call to a stored procedure called through JDBC. The entire process can take half an hour or more for a large data set. To ensure access to the temporary tables, I run the entire task, start to finish, in a single Spring transaction with a TransactionCallbackWithoutResult. Otherwise, I could get a different connection that does not have access to the temporary tables (this would happen occasionally before I wrapped everything in a transaction). This worked fine in my development environment. However, in production I got the following exception: java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction This happened when a different task tried to access some of the same tables during the execution of my long running transaction. What confuses me is that the long running transaction only inserts or updates into temporary tables. All access to non-temporary tables are selects only. From what documentation I can find, the default Spring transaction isolation level should not cause MySQL to block in this case. So my first question, is this the right approach? Can I ensure that I repeatedly get the same connection through a Hibernate template without a long running transaction? If the long running transaction approach is the correct one, what should I check in terms of isolation levels? Is my understanding correct that the default isolation level in Spring/MySQL transactions should not lock tables that are only accessed through selects? What can I do to debug which tables are causing the conflict, and prevent those tables from being locked by the transaction?
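
    For reference, one place to make the isolation level and timeout explicit, rather than relying on the platform defaults, is a TransactionTemplate wrapped around the whole batch job. A sketch (the class, manager wiring and callback body are placeholders):

        import org.springframework.transaction.PlatformTransactionManager;
        import org.springframework.transaction.TransactionDefinition;
        import org.springframework.transaction.TransactionStatus;
        import org.springframework.transaction.support.TransactionCallbackWithoutResult;
        import org.springframework.transaction.support.TransactionTemplate;

        public class TempTableJob {
            private final PlatformTransactionManager txManager;

            public TempTableJob(PlatformTransactionManager txManager) {
                this.txManager = txManager;
            }

            public void runBatches() {
                TransactionTemplate tt = new TransactionTemplate(txManager);
                // Spell out the isolation level instead of taking the default,
                // and give the transaction room to run for the full job.
                tt.setIsolationLevel(TransactionDefinition.ISOLATION_READ_COMMITTED);
                tt.setTimeout(60 * 60);   // seconds

                tt.execute(new TransactionCallbackWithoutResult() {
                    protected void doInTransactionWithoutResult(TransactionStatus status) {
                        // call the stored procedures batch by batch here, on the one
                        // connection that owns the temporary tables
                    }
                });
            }
        }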

    Read the article
