Search Results

Search found 10417 results on 417 pages for 'large'.

Page 337/417 | < Previous Page | 333 334 335 336 337 338 339 340 341 342 343 344  | Next Page >

  • Programmatically choosing JPEG or PNG as the image conversion format for Silverlight display

    - by Otaku
    I have a project where I need to convert a large number of image types to be displayable in a Silverlight app - TIFF, GIF, WMF, EMF, BMP, DIB, etc. I can do these conversions on the server before hydrating the Silverlight app. However, I'm not sure when I should choose to convert to which format, JPEG or PNG. Is there some kind of standard out there, like "TIFF should always become JPEG and GIF should always become PNG"? Or "if a BMP is 24-bit it should be converted to JPEG; any lower and it can be PNG"? Or should everything be PNG, and why? What I usually see in response to this type of question is "Well, if the picture is a photograph, go with JPEG" or "If it has straight lines, PNG is better." Unfortunately, I won't have the luxury of viewing any of the image files at all, and would like a standard way to decide via code, even if that is a zillion if/then statements. Are there any standards or best practices around this subject? P.S. Please don't move to close this question - it actually has no duplicate on SO, because I'm not looking for subjectivity.
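
    For illustration, the "zillion if/then statements" can at least collapse into a single mapping keyed on the source format. The groupings below are one common rule of thumb, not an established standard, and the helper itself is hypothetical:

        static class FormatChooser
        {
            // One possible rule table, keyed on the source file extension:
            // formats that usually hold photographs map to JPEG; palette images
            // and vector metafiles map to PNG; PNG is the lossless fallback.
            public static string ChooseTargetFormat(string sourceExtension)
            {
                switch (sourceExtension.TrimStart('.').ToLowerInvariant())
                {
                    case "tif":
                    case "tiff":
                        return "jpeg";  // typically scanned or photographic content
                    case "gif":
                    case "wmf":
                    case "emf":
                    case "dib":
                    case "bmp":
                        return "png";   // or JPEG for 24-bit BMP photos, per the question
                    default:
                        return "png";   // lossless is the safer default
                }
            }
        }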

    Read the article

  • Best way to randomly select columns from random rows of SQL results.

    - by LesterDove
    A search of SO yields many results describing how to select random rows of data from a database table. My requirement is a bit different, though, in that I'd like to select individual columns from across random rows in the most efficient/random/interesting way possible. To better illustrate: I have a large Customers table, and from that I'd like to generate a bunch of fictitious demo Customer records that aren't real people. I'm thinking of just querying randomly from the Customers table and then randomly pairing FirstNames with LastNames, Addresses, Cities, States, etc. So if this is my real Customer data (simplified):

        FirstName  LastName  State
        ==========================
        Sally      Simpson   SD
        Will       Warren    WI
        Mike       Malone    MN
        Kelly      Kline     KS

    then I'd generate several records that look like this:

        FirstName  LastName  State
        ==========================
        Sally      Warren    MN
        Kelly      Malone    SD

    and so on. My initial approach works, but it lacks the elegance that I'm hoping the final answer will provide. I'm particularly unhappy with the repetitiveness of the subqueries, and with the fact that this solution requires a known/fixed number of fields and therefore isn't reusable:

        SELECT FirstName = (SELECT TOP 1 FirstName FROM Customer ORDER BY newid()),
               LastName  = (SELECT TOP 1 LastName  FROM Customer ORDER BY newid()),
               State     = (SELECT TOP 1 State     FROM Customer ORDER BY newid())

    Thanks!
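
    Purely to illustrate the pairing idea (and not the set-based SQL answer being asked for), the same shuffle-each-column-independently approach can be sketched client-side in C#; the tuple shape is a hypothetical stand-in for the real Customer row:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class DemoData
        {
            // Shuffle each column independently, then zip the columns back into
            // rows: every output row is a random FirstName/LastName/State pairing.
            public static List<(string First, string Last, string State)> Scramble(
                IReadOnlyList<(string First, string Last, string State)> rows)
            {
                var rng = new Random();
                var firsts = rows.Select(r => r.First).OrderBy(_ => rng.Next()).ToList();
                var lasts  = rows.Select(r => r.Last).OrderBy(_ => rng.Next()).ToList();
                var states = rows.Select(r => r.State).OrderBy(_ => rng.Next()).ToList();
                return firsts.Select((f, i) => (f, lasts[i], states[i])).ToList();
            }
        }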

    Read the article

  • TypeScript - separating code output

    - by Andrea Baccega
    I'm trying TypeScript and I find it very useful. I have a quite large project and I was considering rewriting it using TypeScript. The main problem here is the following:

        // file A.ts
        class A extends B {
            // A stuff
        }

        // file B.ts
        class B {
            // B stuff
        }

    If I compile A.ts with the command tsc --out compiledA.js A.ts, I get an error from the compiler because it doesn't know how to treat the "B" after extends. So a "solution" would be to include this as the first line of code in A.ts:

        /// <reference path="./B.ts" />

    Compiling A.ts again with the same command will result in compiledA.js containing both the B.ts and A.ts code (which could be very nice). In my case, though, I only need the A.ts code in compiledA.js and I don't want the B.ts stuff to be in there. Indeed, what I want is:

        tsc --out A.js A.ts   (compile only the A.ts stuff)
        tsc --out B.js B.ts   (compile only the B.ts stuff)

    I can do it by removing the "extends" keyword, but doing that I'll lose most of the TypeScript goodness. Can someone tell me if there's a way to do this?

    Read the article

  • Programmatically controlled virtual drive

    - by Robert Lin
    How would I go about creating a virtual drive with which I can programmatically and dynamically change the contents? For instance, program A starts running and creates a virtual drive. When program B looks in the drive, it sees an error log and starts reading/processing it. In the middle of all this, program A gets a signal from somewhere and decides to add to the log. I want program B to be unaware of the change and just keep on going; program B should continue reading as if nothing happened. Program A would just report a ridiculously large file size for the log and then fill it in as appropriate, filling the log with tags if program B tries to read past the last entry. I know this is a weird request, but there's really no other way to do this... I basically can't rewrite program B, so I need to fool it. How do I do this in Windows? How about OS X?

    Read the article

  • Looking for a bugtracker with specific features

    - by Thorsten Dittmar
    Hi, we're looking into bug tracking systems at our firm. We're quite small (only 4 developers), but on the other hand we have quite a large number of customers we develop individual software for. Most software is built explicitly for one customer, apart from two or three standard tools we ship. To make support easier for us (and to avoid being interrupted by phone calls all the time), we're looking for a bugtracker that supports a specific set of features: we want the customers to report bugs/feature/change requests themselves and be notified about these reports by email. Then we'd like to track what we've done and how much time it took, notifying the customer about that per email (private notes visible only to us must be possible). At the end of the month we'd like to bill all closed reports according to the time it took to solve/implement them. The following must be possible:

    - A web-based interface where the users log in with credentials we provide. The users must not be able to create accounts themselves, or we must be able to turn off such a feature.
    - We must be able to configure projects and assign customer logins to these projects. Customers must only see projects they are assigned to, not any other projects; customers must also not "see" each other. We would name the projects so that our standard tools are listed as separate projects for each customer.
    - A monthly report must be available that we can use to get information about the requests we worked on per customer.

    I'd like to introduce a standard product like Mantis (I've played with that a little, but didn't quite figure out whether it provides all the features listed above). The product should be Open Source and work in a XAMPP environment on Windows Server 2003. Does anybody have any good suggestions?

    Read the article

  • Use a non-coalescing parser in Axis2

    - by Nathan
    Does anyone know how I can get Axis2 to use a non-coalescing XMLStreamReader when it parses a SOAP message? I am writing code that reads a large base64 binary text element. Coalescing is the default behaviour, and this causes the default XMLStreamReader to load the entire text into memory rather than returning multiple CHARACTERS events. The upshot is that I run out of heap space when running the following code:

        reader = element.getTextAsStream(true);

    The OutOfMemoryError occurs in com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next:

        java.lang.OutOfMemoryError: Java heap space
            at com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:208)
            at com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:226)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanContent(XMLDocumentFragmentScannerImpl.java:1552)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2864)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:607)
            at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:116)
            at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(XMLStreamReaderImpl.java:558)
            at org.apache.axiom.util.stax.wrapper.XMLStreamReaderWrapper.next(XMLStreamReaderWrapper.java:225)
            at org.apache.axiom.util.stax.dialect.DisallowDoctypeDeclStreamReaderWrapper.next(DisallowDoctypeDeclStreamReaderWrapper.java:34)
            at org.apache.axiom.util.stax.wrapper.XMLStreamReaderWrapper.next(XMLStreamReaderWrapper.java:225)
            at org.apache.axiom.util.stax.dialect.SJSXPStreamReaderWrapper.next(SJSXPStreamReaderWrapper.java:138)
            at org.apache.axiom.om.impl.builder.StAXOMBuilder.parserNext(StAXOMBuilder.java:668)
            at org.apache.axiom.om.impl.builder.StAXOMBuilder.next(StAXOMBuilder.java:214)
            at org.apache.axiom.om.impl.llom.SwitchingWrapper.updateNextNode(SwitchingWrapper.java:1098)
            at org.apache.axiom.om.impl.llom.SwitchingWrapper.<init>(SwitchingWrapper.java:198)
            at org.apache.axiom.om.impl.llom.OMStAXWrapper.<init>(OMStAXWrapper.java:73)
            at org.apache.axiom.om.impl.llom.OMContainerHelper.getXMLStreamReader(OMContainerHelper.java:67)
            at org.apache.axiom.om.impl.llom.OMContainerHelper.getXMLStreamReader(OMContainerHelper.java:40)
            at org.apache.axiom.om.impl.llom.OMElementImpl.getXMLStreamReader(OMElementImpl.java:790)
            at org.apache.axiom.om.impl.llom.OMElementImplUtil.getTextAsStream(OMElementImplUtil.java:114)
            at org.apache.axiom.om.impl.llom.OMElementImpl.getTextAsStream(OMElementImpl.java:826)
            at org.example.UploadFileParser.invokeBusinessLogic(UploadFileParser.java:160)

    Read the article

  • Is it safe to use random Unicode for complex delimiter sequences in strings?

    - by ccomet
    Question: in terms of program stability and ensuring that the system will actually operate, how safe is it to use chars like ¦, § or ‡ for complex delimiter sequences in strings? Can I reliably believe that I won't run into any issues with a program reading these incorrectly?

    I am working in a system, using C# code, in which I have to store a fairly complex set of information within a single string. The readability of this string only matters on the computer side; end-users should only ever see the information after it has been parsed by the appropriate methods. Because some of the data in these strings will be collections of variable size, I use different delimiters to identify which parts of the string correspond to a certain tier of organization. There are enough cases that the standard set of ;, |, and similar ilk has been exhausted. I considered two-char delimiters, like ;# or ;|, but I felt that would be very inefficient. There probably isn't that large a performance difference in storing one char versus two, but when I have the option of picking the smaller option, it just feels wrong to pick the larger one. So finally, I considered using characters like the double dagger and section sign. They only take up one char, and they are definitely not going to show up in the actual text that I'll be storing, so they won't be confused for anything.

    But character encoding is finicky. While visibility to the end user is meaningless (since they, in fact, won't see it), I have recently become concerned about how the programs in the system will read it. The string is stored in one database, while a separate program is responsible for both encoding and decoding the string into different object types for the rest of the application to work with. If something that is expected to be written one way is possibly written another, then maybe the whole system will fail, and I can't really let that happen. So is it safe to use these kinds of chars for background delimiters?
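
    Both properties the question turns on can be tested directly: to C#, ‡ and its kin are ordinary single UTF-16 chars, and the remaining risk lives in the encoding used by whatever stores and transports the string. A minimal self-contained check (the round trip below assumes UTF-8 storage, which is itself the thing to verify for the actual database column):

        using System;
        using System.Text;

        class DelimiterCheck
        {
            static void Main()
            {
                // '\u2021' is the double dagger: one char to C#, so Split and
                // Join treat it like any other single-character delimiter.
                const char delim = '\u2021';
                string packed = string.Join(delim.ToString(), new[] { "a", "b", "c" });
                string[] parts = packed.Split(delim);    // back to "a", "b", "c"

                // The encoding question: does the char survive a round trip in
                // the encoding the storage layer actually uses?
                byte[] stored = Encoding.UTF8.GetBytes(packed);
                string restored = Encoding.UTF8.GetString(stored);
                Console.WriteLine(parts.Length == 3 && restored == packed);  // True
            }
        }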

    Read the article

  • Need guidelines for optimizing WebGL performance by minimizing shader changes

    - by brainjam
    I'm trying to get an idea of the practicality of WebGL for rendering large architectural interior scenes, consisting of 100K's of triangles. These triangles are distributed over many objects, and there are many materials in the scene. On the other hand, there are no moving parts, and the materials tend to be fairly simple, mostly based on texture maps. There is a lot of texture map sharing - for example, all the chairs in a scene will share a common map. There is also some multitexturing - up to three textures overlaid in a material.

    I've been doing a little experimentation and reading, and gather that frequently switching materials during a rendering pass will slow things down. For example, a scene with 200K triangles will have significant performance differences depending on whether there are 10 or 1000 objects, assuming that each time an object is displayed a new material is set up. So it seems that if performance is important, the scene should be sorted by material so as to minimize material switching. What I'm looking for is guidelines on how to think about the overhead of various state changes, and where I get the biggest bang for the buck. For example:

    - What are the relative performance costs of, say, gl.useProgram(), gl.uniformMatrix4fv(), and gl.drawElements()?
    - Should I try to write ubershaders to minimize shader switching?
    - Should I try to aggregate geometry to minimize the number of gl.drawElements() calls?

    I realize that mileage may vary depending on browser, OS, and graphics hardware, and I'm also not looking for heroic measures - just some guidelines from people who have already had some experience in making scenes fast. I'll add that while I've had some experience with fixed-pipeline OpenGL programming in the past, I'm rather new to the WebGL/OpenGL ES 2.0 way of doing things.

    Read the article

  • Cross-reference js-object variables when creating object

    - by Ivar Bonsaksen
    Summary: I want to know if it is possible to do something like this:

        {a: 'A', b: this.a}

    ...by using some other pointer, like

        {a: 'A', b: self.a}    or    {a: 'A', b: own.a}

    or anything else.

    Full question: I'm trying to extend MyBaseModule using Ext.extend, and need to cross-reference values in the extension object passed to Ext.extend(). Since I'm not yet in the context of MyModule, I'm not able to use this to reference the object (see line 12 in the example below). Is there any other way to reference values like this without creating the object first?

         1  MyModule = Ext.extend(MyBaseModule, {
         2      dataStores: {
         3          myDataStore: new Ext.data.Store({...})
         4      },
         5
         6      myGridDefinition: {
         7          id: 'myGridDefinitionPanel',
         8          bodyBorder: false,
         9          items: [{
        10              xtype: 'grid',
        11              id: 'myGridDefinitionGrid',
        12              store: this.dataStores.myDataStore
        13          }]
        14      }
        15  });

    Or is the following the only solution? I would like to avoid this if possible, as I find it less readable for large extension definitions.

         1  var extensionObject = {
         2      dataStores: {
         3          myDataStore: new Ext.data.Store({...})
         4      },
         5
         6      myGridDefinition: {
         7          id: 'myGridDefinitionPanel',
         8          bodyBorder: false,
         9          items: [{
        10              xtype: 'grid',
        11              id: 'myGridDefinitionGrid'
        12          }]
        13      }
        14  };
        15
        16  extensionObject.myGridDefinition.items[0].store = extensionObject.dataStores.myDataStore;
        17
        18  MyModule = Ext.extend(MyBaseModule, extensionObject);

    Read the article

  • OpenGL Colour Interpolation

    - by Will-of-fortune
    I'm currently working on a little project in C++ and OpenGL, and am trying to implement a colour selection tool similar to the one in Photoshop. However, I am having trouble with the interpolation of the large square. Working on my desktop computer with an 8800 GTS, the result was similar but the blending wasn't as smooth. This is the code I am using:

        GLfloat swatch[] = { 0,0,0,  1,1,1,  mR,mG,mB,  0,0,0 };
        GLint swatchVert[] = { 400,700,  400,500,  600,500,  600,700 };
        glVertexPointer(2, GL_INT, 0, swatchVert);
        glColorPointer(3, GL_FLOAT, 0, swatch);
        glDrawArrays(GL_QUADS, 0, 4);

    Moving to my laptop with Intel HD Graphics 3000, the result was even worse, with no change in code. I thought it was OpenGL splitting the quad into two triangles, so I tried rendering with triangles and interpolating the colour in the middle of the square myself, but it still doesn't quite match the result I was hoping for. Any help would be appreciated. Thanks.

    Read the article

  • .NET System.Net.Mail.MailMessage adding multiple "To" addresses

    - by Matt Dawdy
    I just hit something I think is inconsistent, and wanted to see if I'm doing something wrong, if I'm an idiot, or...

        MailMessage msg = new MailMessage();
        msg.To.Add("[email protected]");
        msg.To.Add("[email protected]");
        msg.To.Add("[email protected]");
        msg.To.Add("[email protected]");

    really only sends this email to one person, the last one. To add multiples I have to do this:

        msg.To.Add("[email protected],[email protected],[email protected],[email protected]");

    I don't get it. I thought I was adding multiple people to the "To" address collection, but what I was doing was replacing it. I think I just realized my error: to add one item to the collection, use .To.Add(new MailAddress("[email protected]")). If you use just a string, it replaces everything the collection had in its list. Ugh. I'd consider this a rather large gotcha! Since I answered my own question, but I think it's of value to have in the Stack Overflow archive, I'll still ask it. Maybe someone even has an idea of other traps you can fall into.
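
    For reference, a minimal sketch of the collection-based pattern the asker settled on; the example.com addresses are hypothetical stand-ins for the redacted ones:

        using System.Net.Mail;

        class MultipleRecipients
        {
            static void Main()
            {
                var msg = new MailMessage();
                // Each MailAddress is appended to the To collection individually.
                msg.To.Add(new MailAddress("alice@example.com"));
                msg.To.Add(new MailAddress("bob@example.com"));
                // The string overload accepts a comma-separated list of addresses.
                msg.To.Add("carol@example.com,dave@example.com");
            }
        }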

    Read the article

  • Long running transactions with Spring and Hibernate?

    - by jimbokun
    The underlying problem I want to solve is running a task that generates several temporary tables in MySQL, which need to stay around long enough to fetch results from Java after they are created. Because of the size of the data involved, the task must be completed in batches. Each batch is a call to a stored procedure called through JDBC. The entire process can take half an hour or more for a large data set. To ensure access to the temporary tables, I run the entire task, start to finish, in a single Spring transaction with a TransactionCallbackWithoutResult. Otherwise, I could get a different connection that does not have access to the temporary tables (this would happen occasionally before I wrapped everything in a transaction).

    This worked fine in my development environment. However, in production I got the following exception:

        java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction

    This happened when a different task tried to access some of the same tables during the execution of my long-running transaction. What confuses me is that the long-running transaction only inserts or updates into temporary tables. All access to non-temporary tables is selects only. From what documentation I can find, the default Spring transaction isolation level should not cause MySQL to block in this case.

    So my first question: is this the right approach? Can I ensure that I repeatedly get the same connection through a Hibernate template without a long-running transaction? If the long-running transaction approach is the correct one, what should I check in terms of isolation levels? Is my understanding correct that the default isolation level in Spring/MySQL transactions should not lock tables that are only accessed through selects? What can I do to debug which tables are causing the conflict, and prevent those tables from being locked by the transaction?

    Read the article

  • How do I get rid of these warnings?

    - by Brian Postow
    This is really several questions, but anyway... I'm working with a big project in Xcode, relatively recently ported from Metrowerks (yes, really), and there are a bunch of warnings that I want to get rid of. Every so often an IMPORTANT warning comes up, but I never look at them because there are too many garbage ones. So if I can either figure out how to get Xcode to stop giving each warning, or actually fix the problem, that would be great. Here are the warnings:

    - It claims that <map.h> is antiquated. However, when I replace it with <map> my files don't compile. Evidently there's something in map.h that isn't in map...
    - "this decimal constant is unsigned only in ISO C90": this is a large number being compared to an unsigned long. I have even cast it, with no effect.
    - "enumeral mismatch in conditional expression: <anonymous enum> vs <anonymous enum>": this appears to come from a ?: operator. Possibly the then and else branches don't evaluate to the same type? Except that in at least one case it's (matchVp == NULL ? noErr : dupFNErr), and since those are both of type OSErr, which is Mac-defined... I'm not sure what's up. It also seems to come up when I have other pairs of Mac constants...
    - "multi-character character constant": this one is obvious. The problem is that I actually NEED multi-character constants...
    - "-fwritable-strings not compatible with literal CF/NSString": I unchecked the "Strings are Read-Only" box in both the project and target settings... and it seems to have had no effect...

    Read the article

  • Branch view for a file that has been split into multiple files

    - by ScottJ
    I have a large source file in Perforce that has been split up into several smaller files in a branch. I want to create a branch view that can handle this, but Perforce (2009.1) only sees the last of the multiple files. For example, I created:

        p4 integrate //depot/original/huge_file.c //depot/new/huge_file.c

    Later I split the huge file into smaller ones:

        p4 integrate //depot/new/huge_file.c //depot/new/small_file_one.c
        p4 integrate //depot/new/huge_file.c //depot/new/small_file_two.c
        p4 integrate //depot/new/huge_file.c //depot/new/small_file_three.c

    Then I edit each of those (including //depot/new/huge_file.c) and submit. Now I make changes to //depot/original/huge_file.c and I want to integrate those changes to //depot/new. If I do this manually, it works fine:

        p4 integrate //depot/original/huge_file.c //depot/new/huge_file.c
        p4 integrate //depot/original/huge_file.c //depot/new/small_file_one.c
        p4 integrate //depot/original/huge_file.c //depot/new/small_file_two.c
        p4 integrate //depot/original/huge_file.c //depot/new/small_file_three.c

    But I don't want to do that every time I integrate; this kind of thing belongs in a branch view. Unfortunately, if the branch view includes the same source file multiple times, the subsequent lines override the earlier ones. How can I create a branch view like this?

        //depot/original/huge_file.c //depot/new/huge_file.c
        //depot/original/huge_file.c //depot/new/small_file_one.c
        //depot/original/huge_file.c //depot/new/small_file_two.c
        //depot/original/huge_file.c //depot/new/small_file_three.c

    When I integrate using this branch spec, only small_file_three.c gets integrated.

    Read the article

  • Java/XML: Good "Stream-based" Alternative to JAXB?

    - by Jan
    Hello experts, JAXB makes working with XML so much easier, but I currently have a big problem: the documents I have to process are too large for the in-memory unmarshalling that JAXB does. The data can be up to 4GB per document. The data structure I have to process is very simple and flat: a root element with millions of "element" children...

        <root>
            <element>
                <sub>foo</sub>
            </element>
            <element>
                <sub>foo</sub>
            </element>
        </root>

    My questions are:

    - Does JAXB somehow support unmarshalling in a "stream-based" way that does not require building the whole object tree in memory, but rather gives me some kind of "iterator" over the elements, element by element? (Maybe I just missed it somehow...)
    - If not, what are your proposals for a good alternative with (a) a flat learning curve, ideally very similar to JAXB, and (b) VERY IMPORTANT: ideally the possibility of (or a tool for) generating the unmarshaller code from an XSD file OR an annotated Java class?
    - I have searched SO, and the two libraries that ended up on my "watchlist" (without comparing them closely) were Apache XMLBeans and XStream... What other libraries might be even better for the purpose, and what are their advantages and disadvantages?

    Thank you very much! Jan

    Read the article

  • Checking for empty arrays: count vs empty

    - by Dan McG
    This question on "How to tell if a PHP array is empty" had me thinking: is there a reason that count should be used instead of empty when determining whether an array is empty or not?

    My personal thought is that if the two are equivalent for the case of empty arrays, you should use empty, because it gives a boolean answer to a boolean question. From the question linked above, it seems that count($var) == 0 is the popular method. To me, while technically correct, that makes no sense. E.g. Q: $var, are you empty? A: 7. Hmmm...

    Is there a reason I should use count == 0 instead, or is it just a matter of personal taste? As pointed out by others in comments on a now-deleted answer, count will have performance impacts for large arrays because it has to count all elements, whereas empty can stop as soon as it knows the array isn't empty. So, if they give the same results in this case, but count is potentially inefficient, why would we ever use count($var) == 0?

    Read the article

  • iOS MapKit: Selected MKAnnotation coordinates.

    - by Oh Danny Boy
    Using the code at the following tutorial, http://www.zenbrains.com/blog/en/2010/05/detectar-cuando-se-selecciona-una-anotacion-mkannotation-en-mapa-mkmapview/, I was able to add an observer to each MKAnnotation and receive a notification of selected/deselected states. I am attempting to add a UIView on top of the selected annotation to display relevant information about the location. This information cannot be conveyed in the two lines (title/subtitle) allowed in the pin's callout.

        - (void)observeValueForKeyPath:(NSString *)keyPath
                              ofObject:(id)object
                                change:(NSDictionary *)change
                               context:(void *)context
        {
            Annotation *a = (Annotation *)object;
            // Alternatively attempted using:
            // Annotation *a = (Annotation *)[mapView.selectedAnnotations objectAtIndex:0];
            NSString *action = (NSString *)context;
            if ([action isEqualToString:ANNOTATION_SELECTED_DESELECTED]) {
                BOOL annotationSelected = [[change valueForKey:@"new"] boolValue];
                if (annotationSelected) {
                    // Actions when annotation selected
                    CGPoint origin = a.frame.origin;
                    NSLog(@"origin (%f, %f) ", origin.x, origin.y);
                    // Test
                    UIView *v = [[UIView alloc] init];
                    [v setBackgroundColor:[UIColor orangeColor]];
                    [v setFrame:CGRectMake(origin.x, origin.y, 300, 300)];
                    [self.view addSubview:v];
                    [v release];
                } else {
                    // Actions when annotation deselected
                }
            }
        }

    Results using Annotation *a = (Annotation *)object:

        origin (154373.000000, 197135.000000)
        origin (154394.000000, 197152.000000)
        origin (154445.000000, 197011.000000)

    Results using Annotation *a = (Annotation *)[mapView.selectedAnnotations objectAtIndex:0]:

        origin (0.000000, 0.000000)
        origin (0.000000, 0.000000)
        origin (0.000000, 0.000000)

    The numbers are large; they are not relative to the view (1024 x 768). I believe they are relative to the entire map. How would I be able to detect the coordinates relative to the visible view so that I can appropriately position my view?

    Read the article

  • How to contain the Deepwater Horizon oil spill? [closed]

    - by Yarin
    This is obviously not programming, but it's important and we're smart people, so let's give it a shot. (BP has actually begun soliciting suggestions for how to deal with the crisis, http://www.deepwaterhorizonresponse.com/go/doc/2931/546759/, confirming that they don't have a clue.) I'll start with my own proposal...

    Anchored Chute: A large-diameter, collapsible, flexible tube/hose with a wide mouth on one end is anchored over the leak. There's no need for a hermetic seal; the opening just needs to be big enough to form a canopy over the leak area. The rest of the tubing can just be dumped on the sea floor. Since oil is less dense than water, the oily water that flows into the mouth eventually inflates the tube and raises the opposite end to the surface, where it can be collected (like those inflatable dancing air socks at car dealerships). Further buoyancy could be added with floats attached to the tube at intervals. I think this method would not be as susceptible to the problems BP had with the containment dome, where a rigid metal casing froze up with crystallized hydrates, as we would not be trying to contain the full pressure of the well but would be using the natural buoyancy of the oil to channel its flow, and with a much larger opening.

    Read the article

  • parse content away from structure in a binary file

    - by Jeff Godfrey
    Using C#, I need to read a packed binary file created using FORTRAN. The file is stored in an "unformatted sequential" format as described here (about half-way down the page, in the "Unformatted Sequential Files" section): http://www.tacc.utexas.edu/services/userguides/intel8/fc/f_ug1/pggfmsp.htm

    As you can see from that page, the file is organized into "chunks" of 130 bytes or less, and includes 2 length bytes (inserted by the FORTRAN compiler) surrounding each chunk. So I need to find an efficient way to parse the actual file payload away from the compiler-inserted formatting. Once I've extracted the actual payload from the file, I'll then need to parse it up into its varying data types. That'll be the next exercise.

    My first thought is to slurp up the entire file into a byte array using File.ReadAllBytes, then just iterate through the bytes, skipping the formatting and transferring the actual data to a second byte array. In the end, that second byte array should contain the actual file contents minus all the formatting, which I'd then need to go back through to get what I need. As I'm fairly new to C#, I thought there might be a better, more accepted way of tackling this. Also, in case it's helpful, these files could be fairly large (say 30MB), though most will be much smaller...
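
    For what it's worth, here is a minimal sketch of the framing-stripping pass described above, streamed rather than slurped so memory use stays flat regardless of file size. The framing layout is an assumption (one leading length byte, the payload, one trailing length byte per chunk); verify it against the format notes at the link before relying on this:

        using System.IO;

        static class FortranReader
        {
            // Strip the compiler-inserted record framing, assuming each chunk is
            // laid out as [1 length byte][payload][1 length byte]. That layout is
            // an assumption here - check it against the linked Intel docs.
            public static byte[] ExtractPayload(string path)
            {
                using (var reader = new BinaryReader(File.OpenRead(path)))
                using (var payload = new MemoryStream())
                {
                    long end = reader.BaseStream.Length;
                    while (reader.BaseStream.Position < end)
                    {
                        int length = reader.ReadByte();           // leading length byte
                        byte[] chunk = reader.ReadBytes(length);  // actual file content
                        payload.Write(chunk, 0, chunk.Length);
                        if (reader.BaseStream.Position < end)
                            reader.ReadByte();                    // trailing length byte
                    }
                    return payload.ToArray();
                }
            }
        }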

    Read the article

  • Using Doctrine to abstract CRUD operations

    - by TomWilsonFL
    This has bothered me for quite a while, but now it is a necessity that I find the answer. We are working on quite a large project using CodeIgniter plus Doctrine. Our application has a front end and also an admin area for the company to check/change/delete data. When we designed the front end, we simply consumed most of the Doctrine code right in the controller:

        // In semi-pseudocode
        function register()
        {
            $data = get_post_data();
            if (count($data) && isValid($data)) {
                $U = new User();
                $U->fromArray($data);
                $U->save();

                $C = new Customer();
                $C->fromArray($data);
                $C->user_id = $U->id;
                $C->save();

                redirect_to_next_step();
            }
        }

    Obviously, when we went to do the admin views, code duplication began, and considering we were in "get it DONE" mode, it now stinks with code bloat. I have moved a lot of functionality (business logic) into the model using model methods, but the basic CRUD does not fit there. I was going to attempt to place the CRUD into static methods, i.e. Customer::save($array) (which would perform both insert and update, depending on whether the primary key is present in the array), Customer::delete($id), and Customer::getObj($id = false) (if false, get all data). This is going to become painful, though, for 32 model objects (and growing). Also, at times models need to interact (as in the interaction above between user data and customer data), which can't be done in a static method without breaking encapsulation. I envision adding another layer to this (exposing web services), so knowing there are going to be 3 "controllers" at some point, I need to encapsulate this CRUD somewhere (obviously). But are static methods the way to go, or is there another road? Your input is much appreciated.

    Read the article

  • How can I visualise a "broken" hierarchical dataset?

    I have a reasonably large data table structured something like this:

        StaffNo  Grade  Direct  Boss2  Boss3  Boss4  Boss5  Boss6
        -------  -----  ------  -----  -----  -----  -----  -----
        10001    1      10002   10002  10057  10094  10043  10099
        10002    2      10057   NULL   10057  10094  10043  10099
        10003    1      10004   10004  10057  10094  10043  10099
        10004    2      10057   NULL   10057  10094  10043  10099
        10057    3      10094   NULL   NULL   10094  10043  10099
        ...

    i.e. a unique ID, their level (grade) in the hierarchy, their boss's ID, and the IDs of the supervisors above (the 2, 3, 4, etc. refer to the boss at that particular grade). The system relies on a strict hierarchy: if you are my boss (/parent), then your boss must be my grandparent. Unfortunately this rule is not enforced within the data model, and the data ultimately comes from other systems which don't even know about the rule, let alone observe it. So you and I may share the same boss, but our bosses' bosses won't be the same. Note: I cannot change the data model, and I cannot fix the data at source, so (for the moment) I have to fix the data once it's in place. Once a fortnight someone will do something which breaks the model, and I'll need to modify the procs slightly to resolve it. Not ideal, but I'm stuck with this for the next six months.

    Anyway, specific queries are easy to produce, but I find it hard to keep track of the bigger picture. The application which sits on this runs without complaint regardless, but navigating around the system becomes extraordinarily confusing. So my question is: can anyone recommend a tool (or technique) for generating some kind of "broken tree" diagram in these circumstances? I don't want something that will fix things for me, or attempt statistical analysis, but at least something that will give a visual indication of how broken the data is at any one time. Note: at the moment this is in a SQL Server database, but I'm open to ideas utilising C#, Perl or Python.
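
    Before drawing anything, the breaks themselves can be enumerated mechanically from the stated rule (your boss's recorded chain above their grade must agree with yours). A minimal C# sketch; the Staff record and its Chain array (indexed by grade, mirroring the Boss2..Boss6 columns) are hypothetical stand-ins for however the rows are loaded:

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical in-memory shape of one row: Chain[g] holds the recorded
        // boss at grade g (the Boss2..Boss6 columns); null where the table has NULL.
        record Staff(int StaffNo, int Grade, int Direct, int?[] Chain);

        static class HierarchyCheck
        {
            // A staff member "breaks" the tree when the chain they record above
            // their direct boss's grade disagrees with the boss's own chain.
            public static IEnumerable<int> FindBreaks(IReadOnlyList<Staff> rows)
            {
                var byId = rows.ToDictionary(s => s.StaffNo);
                foreach (var s in rows)
                {
                    if (!byId.TryGetValue(s.Direct, out var boss)) continue;
                    for (int g = boss.Grade + 1; g < s.Chain.Length; g++)
                    {
                        if (s.Chain[g] != boss.Chain[g])
                        {
                            yield return s.StaffNo;  // one broken link found
                            break;
                        }
                    }
                }
            }
        }

    Feeding the offending IDs into any off-the-shelf graph renderer (Graphviz, for instance) and colouring those nodes gives the "broken tree" picture with very little machinery.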

    Read the article

  • Using a php://memory wrapper causes errors...

    - by HorusKol
    I'm trying to extend the PHP mailer class from Worx by adding a method which allows me to add attachments using string data rather than a path to a file. I came up with something like this:

        public function addAttachmentString($string, $name = '', $encoding = 'base64', $type = 'application/octet-stream')
        {
            $path = 'php://memory/' . md5(microtime());
            $file = fopen($path, 'w');
            fwrite($file, $string);
            fclose($file);
            $this->AddAttachment($path, $name, $encoding, $type);
        }

    However, all I get is a PHP warning:

        PHP Warning: fopen() [function.fopen]: Invalid php:// URL specified

    There aren't any decent examples in the original documentation, but I've found a couple around the internet (including one here on SO), and my usage appears correct according to them. Has anyone had any success with using this? My alternative is to create a temporary file and clean up afterwards, but that will mean writing to disc, and this function will be used as part of a large batch process where I want to avoid slow disc operations (old server) as much as possible. The file is only short, but has different information for each person the script emails.

    Read the article

  • JBoss deployment throws 'java.util.zip.ZipException: error in opening zip file' on Linux?

    - by Kaushalya
    I thought of posting both the question and the answer for others' knowledge. I deployed a large EAR (containing more than ~1024 jars/wars) on JBoss running with Java 6 on Linux, and the deployment process failed, throwing the following exception:

        java.lang.RuntimeException: java.util.zip.ZipException: error in opening zip file
            at org.jboss.deployment.DeploymentException.rethrowAsDeploymentException(DeploymentException.java:53)
            at org.jboss.deployment.MainDeployer.init(MainDeployer.java:901)
            at org.jboss.deployment.MainDeployer.init(MainDeployer.java:895)
            at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:809)
            at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:782)
            ...
        Caused by: java.lang.RuntimeException: java.util.zip.ZipException: error in opening zip file
            at org.jboss.util.file.JarArchiveBrowser.<init>(JarArchiveBrowser.java:74)
            at org.jboss.util.file.FileProtocolArchiveBrowserFactory.create(FileProtocolArchiveBrowserFactory.java:48)
            at org.jboss.util.file.ArchiveBrowser.getBrowser(ArchiveBrowser.java:57)
            at org.jboss.ejb3.EJB3Deployer.hasEjbAnnotation(EJB3Deployer.java:213)
            ...

    This was caused by the limit on the number of open file descriptors in Linux/Unix operating systems; the default is 1024. You can check the current value using:

        ulimit -n

    To increase the number of open file descriptors (say, to 2048):

        ulimit -n 2048

    Check the man page of ulimit for more details.

    Read the article

  • Can a C# method chain be "too long"?

    - by ccornet
    Not in terms of readability, naturally, since you can always arrange the separate methods onto separate lines. Rather, is it dangerous, for any reason, to chain an excessively large number of methods together? I use method chaining primarily to save space on declaring individual one-use variables, traditionally using methods that return values rather than methods that modify the caller. Except for string methods, which I kinda chain mercilessly. In any case, I worry sometimes about the impact of using exceptionally long method chains all in one line. Let's say I need to update the value of one item based on someone's username. Unfortunately, the shortest method to retrieve the correct user looks something like the following:

        SPWeb web = GetWorkflowWeb();
        SPList list2 = web.Lists["Wars"];
        SPListItem item2 = list2.GetItemById(3);
        SPListItem item3 = item2.GetItemFromLookup("Armies", "Allied Army");
        SPUser user2 = item2.GetSPUser("Commander");
        SPUser user3 = user2.GetAssociate("Spouse");
        string username2 = user3.Name;
        item1["Contact"] = username2;

    Everything with a 2 or 3 lasts for only one call, so I might condense it as the following (which also lets me get rid of a would-be-superfluous 1):

        SPWeb web = GetWorkflowWeb();
        item["Contact"] = web.Lists["Armies"]
            .GetItemById(3)
            .GetItemFromLookup("Armies", "Allied Army")
            .GetSPUser("Commander")
            .GetAssociate("Spouse")
            .Name;

    Admittedly, it looks a lot longer when it is all in one line, and when you have int.Parse(ddlArmy.SelectedValue.CutBefore(";#", false)) instead of 3. Nevertheless, this is about the average length of these chains, and I can easily foresee some exceptionally longer ones. Excluding readability, is there anything I should be worried about for these 10+ method chains? Or is there no harm in using really, really long method chains?

    Read the article

  • Database Functional Programming in Clojure

    - by Ralph
    "It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." - Abraham Maslow I need to write a tool to dump a large hierarchical (SQL) database to XML. The hierarchy consists of a Person table with subsidiary Address, Phone, etc. tables. I have to dump thousands of rows, so I would like to do so incrementally and not keep the whole XML file in memory. I would like to isolate non-pure function code to a small portion of the application. I am thinking that this might be a good opportunity to explore FP and concurrency in Clojure. I can also show the benefits of immutable data and multi-core utilization to my skeptical co-workers. I'm not sure how the overall architecture of the application should be. I am thinking that I can use an impure function to retrieve the database rows and return a lazy sequence that can then be processed by a pure function that returns an XML fragment. For each Person row, I can create a Future and have several processed in parallel (the output order does not matter). As each Person is processed, the task will retrieve the appropriate rows from the Address, Phone, etc. tables and generate the nested XML. I can use a a generic function to process most of the tables, relying on database meta-data to get the column information, with special functions for the few tables that need custom processing. These functions could be listed in a map(table name -> function). Am I going about this in the right way? I can easily fall back to doing it in OO using Java, but that would be no fun. BTW, are there any good books on FP patterns or architecture? I have several good books on Clojure, Scala, and F#, but although each covers the language well, none look at the "big picture" of function programming design.

    Read the article
