Search Results

Search found 10417 results on 417 pages for 'large'.


  • Destructors not called when native (C++) exception propagates to CLR component

    - by Phil Nash
    We have a large body of native C++ code, compiled into DLLs. Then we have a couple of DLLs containing C++/CLI proxy code to wrap the C++ interfaces. On top of that we have C# code calling into the C++/CLI wrappers. Standard stuff, so far. But we have a lot of cases where native C++ exceptions are allowed to propagate to the .NET world, and we rely on .NET's ability to wrap these as System.Exception objects; for the most part this works fine. However, we have been finding that destructors of objects in scope at the point of the throw are not being invoked when the exception propagates! After some research we found that this is a fairly well-known issue, but the solutions/workarounds seem less consistent. We did find that if the native code is compiled with /EHa instead of /EHsc the issue disappears (at least it did in our test case). However, we would much prefer to use /EHsc, as we translate SEH exceptions to C++ exceptions ourselves and would rather allow the compiler more scope for optimisation. Are there any other workarounds for this issue - other than wrapping every call across the native-managed boundary in a (native) try-catch-throw (in addition to the C++/CLI layer)?
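
    For reference, a minimal sketch of the try-catch-throw workaround the question mentions, assuming a hypothetical native entry point DoWork(); rethrowing inside the native module forces unwinding (and so destructor execution) to happen under /EHsc before the exception crosses into the CLR:

        int DoWork(); // hypothetical native call whose locals need unwinding

        int CallAcrossBoundary()
        {
            try
            {
                return DoWork();
            }
            catch (...)
            {
                throw; // locals are destroyed during this native unwind
            }
        }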

  • Partial Classes - are they bad design?

    - by dferraro
    Hello, I'm wondering why the 'partial class' concept even exists in .NET. I'm working on an application, and we are reading a (actually very good) book relevant to the development platform we are implementing at work. In the book the author provides a large code base/wrapper around the platform API and explains how he developed it as he teaches different topics about the platform development. Anyway, long story short - he uses partial classes, all over the place, as a way to fake multiple inheritance in C# (IMO). Why he didn't just split the classes up into multiple ones and use composition is beyond me. He will have 3 'partial class' files to make up his base class, each with 3-500 lines of code... And he does this several times in his API. Do you find this justifiable? If it were me, I'd have followed the S.R.P. and created multiple classes to handle different required behaviors, then created a base class that has instances of these classes as members (i.e. composition). Why did MS even put partial class into the framework?? They removed the ability to expand/collapse all code at each scope level in C# (this was allowed in C++) because it was obviously just allowing bad habits - partial class is IMO the same thing. I guess my question is: can you explain to me when there would be a legitimate reason to ever use a partial class? I do not mean this to be a rant/war thread. I'm honestly looking to learn something here. Thanks
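
    For context, the textbook case for partial classes is splitting machine-generated code from hand-written code, so a regenerated file never clobbers your edits - a sketch with hypothetical names:

        // Form1.Designer.cs - regenerated by the WinForms designer
        public partial class Form1
        {
            private void InitializeComponent() { /* designer-owned */ }
        }

        // Form1.cs - hand-written half, safe from regeneration
        public partial class Form1
        {
            private void OnSave() { /* your logic */ }
        }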

  • Is it safe to use random Unicode for complex delimiter sequences in strings?

    - by ccomet
    Question: in terms of program stability and ensuring that the system will actually operate, how safe is it to use chars like ¦, § or ‡ for complex delimiter sequences in strings? Can I reliably assume that I won't run into any issues with a program reading these incorrectly? I am working in a system, using C# code, in which I have to store a fairly complex set of information within a single string. The readability of this string is only necessary on the computer side; end-users should only ever see the information after it has been parsed by the appropriate methods. Because some of the data in these strings will be collections of variable size, I use different delimiters to identify what parts of the string correspond to a certain tier of organization. There are enough cases that the standard set of ;, |, and their ilk has been exhausted. I considered two-char delimiters, like ;# or ;|, but I felt that it would be very inefficient. There probably isn't that large of a performance difference in storing with one char versus two chars, but when I have the option of picking the smaller option, it just feels wrong to pick the larger one. So finally, I considered using the set of characters like the double dagger and section sign. They only take up one char, and they are definitely not going to show up in the actual text that I'll be storing, so they won't be confused for anything. But character encoding is finicky. While the visibility to the end user is meaningless (since they, in fact, won't see it), I became recently concerned about how the programs in the system will read it. The string is stored in one database, while a separate program is responsible for both encoding and decoding the string into different object types for the rest of the application to work with. And if something that is expected to be written one way is possibly written another, then maybe the whole system will fail, and I can't really let that happen. So is it safe to use these kinds of chars for background delimiters?
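
    A short round-trip sketch of the idea, assuming UTF-16 .NET strings end to end: both '§' and '‡' are single chars in C#, so Split/Join treat them like any other character, and the risk only appears at boundaries that re-encode (e.g. a database column in a single-byte codepage). Record and field names here are placeholders:

        // Hypothetical two-tier encoding: '‡' between records, '§' within one.
        string packed = string.Join("‡", new[] { "alpha§1", "beta§2", "gamma§3" });

        foreach (string record in packed.Split('‡'))
        {
            string[] fields = record.Split('§'); // name, value
            Console.WriteLine(fields[0] + " = " + fields[1]);
        }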

  • Memory leak when changing Text field of a Scintilla object.

    - by PlaZmaZ
    I have a relatively large program that I'm optimizing for ASCII input files around 10-80 MB in size. The program reads every line of the file into a StringBuilder and then sets the Text field of the ScintillaNET object to the StringBuilder's contents. The StringBuilder is then set to null.

        private void ReloadFile(string sFile)
        {
            txt_log.ResetText();
            try
            {
                StringBuilder sLine = new StringBuilder("");
                using (StreamReader sr = new StreamReader(sFile))
                {
                    while (true)
                    {
                        string temp = sr.ReadLine();
                        if (temp == null) break;
                        sLine.AppendLine(temp);
                    }
                    sr.Close();
                }
                txt_log.Text = sLine.ToString();
                sLine = null;
            }
            catch (Exception ex)
            {
                MessageBox.Show(this, "An error occurred opening this file.\n\n" + ex.Message,
                    "File Open Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
            }
            GC.Collect();
        }

    The program has an option to reload or open a file. This is irrelevant, as any assignment to txt_log.Text seems to not release the memory previously used for the .Text field. Commenting out the txt_log.Text line gives proper memory behavior. The GC.Collect() line seems pointless, and I have tried both with and without it. Is there something I'm missing here? I HIGHLY doubt it's a problem with the ScintillaNET component itself--rather something in this code.
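
    As an aside, if the line-by-line loop isn't needed for anything else, the whole read could collapse to one call - a sketch, assuming the file fits comfortably in memory (note that ReadAllText keeps the file's own line endings, whereas AppendLine appends Environment.NewLine):

        // Hypothetical simplification of the read loop above
        txt_log.Text = System.IO.File.ReadAllText(sFile);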

  • Using a php://memory wrapper causes errors...

    - by HorusKol
    I'm trying to extend the PHP mailer class from Worx by adding a method which allows me to add attachments using string data rather than a path to a file. I came up with something like this:

        public function addAttachmentString($string, $name = '', $encoding = 'base64', $type = 'application/octet-stream')
        {
            $path = 'php://memory/' . md5(microtime());
            $file = fopen($path, 'w');
            fwrite($file, $string);
            fclose($file);
            $this->AddAttachment($path, $name, $encoding, $type);
        }

    However, all I get is a PHP warning:

        PHP Warning: fopen() [function.fopen]: Invalid php:// URL specified

    There aren't any decent examples in the original documentation, but I've found a couple around the internet (including one here on SO), and my usage appears correct according to them. Has anyone had any success with using this? My alternative is to create a temporary file and clean up - but that will mean having to write to disc, and this function will be used as part of a large batch process and I want to avoid slow disc operations (old server) where possible. This is only a short file but has different information for each person the script emails.
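
    Two things worth noting, as a hedged sketch rather than a definitive fix: php://memory takes no path suffix (hence the warning), and even a plain fopen('php://memory', 'w') would not help here, because every fopen() of php://memory creates a fresh, empty stream - AddAttachment($path, ...) could never reopen the bytes written above. If the mailer only accepts paths, a real temp file is the usual fallback:

        public function addAttachmentString($string, $name = '', $encoding = 'base64', $type = 'application/octet-stream')
        {
            // A real temp file gives AddAttachment() a reopenable path.
            $path = tempnam(sys_get_temp_dir(), 'att');
            file_put_contents($path, $string);
            $this->AddAttachment($path, $name, $encoding, $type);
            unlink($path); // defer this until after sending if attachments are read lazily
        }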

  • Need guidelines for optimizing WebGL performance by minimizing shader changes

    - by brainjam
    I'm trying to get an idea of the practicality of WebGL for rendering large architectural interior scenes, consisting of 100K's of triangles. These triangles are distributed over many objects, and there are many materials in the scene. On the other hand, there are no moving parts. And the materials tend to be fairly simple, mostly based on texture maps. There is a lot of texture map sharing... for example all the chairs in the scene will share a common map. There is also some multitexturing - up to three textures overlaid in a material. I've been doing a little experimentation and reading, and gather that frequently switching materials during a rendering pass will slow things down. For example, a scene with 200K triangles will have significant performance differences, depending on whether there are 10 or 1000 objects, assuming that each time an object is displayed a new material is set up. So it seems that if performance is important, the scene should be sorted by materials so as to minimize material switching. What I'm looking for is guidelines on how to think of the overhead of various state changes, and where I get the biggest bang for the buck. For example:

    - what are the relative performance costs of, say, gl.useProgram(), gl.uniformMatrix4fv(), and gl.drawElements()?
    - should I try to write ubershaders to minimize shader switching?
    - should I try to aggregate geometry to minimize the number of gl.drawElements() calls?

    I realize that mileage may vary depending on browser, OS, and graphics hardware. And I'm also not looking for heroic measures. Just some guidelines from people who have already had some experience in making scenes fast, as in the sketch below. I'll add that while I've had some experience with fixed-pipeline OpenGL programming in the past, I'm rather new to the WebGL/OpenGL ES 2.0 way of doing things.
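
    To make the sorting idea concrete, a sketch of a material-sorted draw loop - drawList, item.program, item.material and bindMaterial() are all hypothetical names, and the cost ordering (program switch > texture/material switch > uniform upload) is a rule of thumb, not a measurement:

        drawList.sort(function (a, b) {
            return (a.program.id - b.program.id) || (a.material.id - b.material.id);
        });
        var curProgram = null, curMaterial = null;
        for (var i = 0; i < drawList.length; i++) {
            var item = drawList[i];
            if (item.program !== curProgram) {     // usually the most expensive switch
                gl.useProgram(item.program.handle);
                curProgram = item.program;
            }
            if (item.material !== curMaterial) {   // rebind textures only on change
                bindMaterial(gl, item.material);
                curMaterial = item.material;
            }
            gl.uniformMatrix4fv(item.mvpLocation, false, item.mvpMatrix);
            gl.drawElements(gl.TRIANGLES, item.count, gl.UNSIGNED_SHORT, item.offset);
        }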

  • NULL-keys for key/value table

    - by user72185
    (Using Oracle) I have a table with key/value pairs like this:

        create table MESSAGE_INDEX (
            KEY        VARCHAR2(256) not null,
            VALUE      VARCHAR2(4000) not null,
            MESSAGE_ID NUMBER not null
        )

    I now want to find all the messages where key = 'someKey' and the value is 'val1', 'val2' or 'val3' - OR the value is null, in which case there will be no entry in the table at all. This is to save space; there would be a large number of keys with null values if I stored them all. I think this works:

        SELECT message_id
          FROM message_index idx
         WHERE ((key = 'someKey' AND value IN ('val1', 'val2', 'val3'))
            OR NOT EXISTS (SELECT 1 FROM message_index
                            WHERE key = 'someKey'
                              AND idx.message_id = message_id))

    But it is extremely slow. It takes 8 seconds with 700K records in message_index, and there will be many more records and more search criteria when moving outside of my test environment. The primary key is (key, value, message_id):

        add constraint PK_KEY_VALUE primary key (KEY, VALUE, MESSAGE_ID)

    And I added another index on message_id, to speed up searching for missing keys:

        create index IDX_MESSAGE_ID on MESSAGE_INDEX (MESSAGE_ID)

    I will be doing several of these key/value lookups in every search, not just one as shown above. So far I am doing them nested, where the output ids of one level are the input to the next. E.g.:

        SELECT message_id FROM message_index
         WHERE (key/value compare)
           AND message_id IN (SELECT ... and so on)

    What can I do to speed this up?
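
    One rewrite worth trying, sketched under the assumption that message ids also live in a driving table (called MESSAGES below, a hypothetical name): turn the "no entry for this key" case into an outer-join anti-join, so the big index table is probed once per message instead of twice:

        SELECT m.id
          FROM messages m
          LEFT JOIN message_index idx
                 ON idx.message_id = m.id
                AND idx.key = 'someKey'
         WHERE idx.value IN ('val1', 'val2', 'val3')
            OR idx.message_id IS NULL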

  • Can a C# method chain be "too long"?

    - by ccornet
    Not in terms of readability, naturally, since you can always arrange the separate methods onto separate lines. Rather, is it dangerous, for any reason, to chain an excessively large number of methods together? I use method chaining primarily to save space on declaring individual one-use variables, and I traditionally use methods that return values instead of methods that modify the caller. Except for string methods - those I kinda chain mercilessly. In any case, I worry sometimes about the impact of using exceptionally long method chains all in one line. Let's say I need to update the value of one item based on someone's username. Unfortunately, the shortest method to retrieve the correct user looks something like the following:

        SPWeb web = GetWorkflowWeb();
        SPList list2 = web.Lists["Wars"];
        SPListItem item2 = list2.GetItemById(3);
        SPListItem item3 = item2.GetItemFromLookup("Armies", "Allied Army");
        SPUser user2 = item2.GetSPUser("Commander");
        SPUser user3 = user2.GetAssociate("Spouse");
        string username2 = user3.Name;
        item1["Contact"] = username2;

    Everything with a 2 or 3 lasts for only one call, so I might condense it as the following (which also lets me get rid of a would-be-superfluous 1):

        SPWeb web = GetWorkflowWeb();
        item["Contact"] = web.Lists["Armies"]
            .GetItemById(3)
            .GetItemFromLookup("Armies", "Allied Army")
            .GetSPUser("Commander")
            .GetAssociate("Spouse")
            .Name;

    Admittedly, it looks a lot longer when it is all in one line and when you have int.Parse(ddlArmy.SelectedValue.CutBefore(";#", false)) instead of 3. Nevertheless, this is an average length for these chains, and I can easily foresee some exceptionally longer ones. Excluding readability, is there anything I should be worried about for these 10+ method chains? Or is there no harm in using really, really long method chains?

  • .Net System.Mail.Message adding multiple "To" addresses

    - by Matt Dawdy
    I just hit something I think is inconsistent, and wanted to see if I'm doing something wrong, if I'm an idiot, or...

        MailMessage msg = new MailMessage();
        msg.To.Add("[email protected]");
        msg.To.Add("[email protected]");
        msg.To.Add("[email protected]");
        msg.To.Add("[email protected]");

    really only sends this email to one person, the last one. To add multiples I have to do this:

        msg.To.Add("[email protected],[email protected],[email protected],[email protected]");

    I don't get it. I thought I was adding multiple people to the "To" address collection, but what I was doing was replacing it. I think I just realized my error - to add one item to the collection, use .To.Add(new MailAddress("[email protected]")). If you use just a string, it replaces everything it had in its list. Ugh. I'd consider this a rather large gotcha! Since I answered my own question, but I think this is of value to have in the stackoverflow archive, I'll still ask it. Maybe someone even has an idea of other traps that you can fall into.
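
    A sketch of the MailAddress form the poster lands on, with placeholder addresses; System.Net.Mail's MailAddressCollection.Add(MailAddress) always appends, so each recipient gets its own call:

        using System.Net.Mail;

        MailMessage msg = new MailMessage();
        msg.To.Add(new MailAddress("first@example.com"));
        msg.To.Add(new MailAddress("second@example.com")); // appended, not replaced
        msg.To.Add(new MailAddress("third@example.com"));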

  • How can I access mainframe data with .Net applications and SQL Queries?

    - by orandov
    We have a large amount of data stored on an IBM mainframe using VSAM files. A lot of this data is dropped on the network every night in the form of text files to be processed and dumped into FoxPro and SQL Server databases. There are also many text files produced nightly by custom applications that get uploaded to the mainframe to keep everything in sync. Keeping everything in sync is very tricky, to say the least. We are not getting rid of the mainframe any time soon, and we would like to replace all the nightly batch processing with real-time access to the mainframe data. We would like to be able to:

    - read data directly from the mainframe and produce reports based on it, possibly using SQL queries;
    - read and write data from custom .Net applications.

    We are not looking for a new platform to interface with the mainframe like Information Builders offers. We don't want to build application modules or reports with new "Business Intelligence" tools. We already know how to generate reports and write custom applications using SQL, .Net, Visual Studio, etc. All we are looking for is some sort of adapter to connect to our mainframe data. Any ideas are appreciated.

  • Branch view for a file that has been split into multiple files

    - by ScottJ
    I have a large source file in Perforce that has been split up into several smaller files in a branch. I want to create a branch view that can handle this, but Perforce (2009.1) only sees the last of the multiple files. For example, I created:

        p4 integrate //depot/original/huge_file.c //depot/new/huge_file.c

    Later I split the huge file into smaller ones:

        p4 integrate //depot/new/huge_file.c //depot/new/small_file_one.c
        p4 integrate //depot/new/huge_file.c //depot/new/small_file_two.c
        p4 integrate //depot/new/huge_file.c //depot/new/small_file_three.c

    Then I edited each of those (including //depot/new/huge_file.c) and submitted. Now I make changes to //depot/original/huge_file.c and I want to integrate those changes to //depot/new. If I do this manually, it works fine:

        p4 integrate //depot/original/huge_file.c //depot/new/huge_file.c
        p4 integrate //depot/original/huge_file.c //depot/new/small_file_one.c
        p4 integrate //depot/original/huge_file.c //depot/new/small_file_two.c
        p4 integrate //depot/original/huge_file.c //depot/new/small_file_three.c

    But I don't want to do that every time I integrate - this kind of thing belongs in a branch view. Unfortunately, if the branch view includes the same source file multiple times, the later lines override the earlier ones. How can I create a branch view like this?

        //depot/original/huge_file.c //depot/new/huge_file.c
        //depot/original/huge_file.c //depot/new/small_file_one.c
        //depot/original/huge_file.c //depot/new/small_file_two.c
        //depot/original/huge_file.c //depot/new/small_file_three.c

    When I integrate using this branch spec, only small_file_three.c gets integrated.
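
    If the view itself can't express the fan-out (as observed above, a later mapping that reuses a source simply wins), one way to keep the repetition in a single place is a tiny wrapper script - a sketch, with the depot paths copied from the question:

        #!/bin/sh
        # Hypothetical helper: fan one source file out to its split-off targets.
        SRC=//depot/original/huge_file.c
        for TGT in huge_file.c small_file_one.c small_file_two.c small_file_three.c
        do
            p4 integrate "$SRC" "//depot/new/$TGT"
        done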

  • How to show that jQuery is working/waiting/loading?

    - by Kim
    How do I show a loading picture and/or message to the user? If possible, I would also like to know how to make it full screen with a grey-dimmed background, kinda like when viewing the large picture in some galleries (sorry, no example link). Currently I add a CSS class which has the image and other formatting, but it's not working for some reason. Results from the search do get inserted into the div.

    HTML:

        <div id="searchresults"></div>

    CSS:

        div.loading {
            background-image: url('../img/loading.gif');
            background-position: center;
            width: 80px;
            height: 80px;
        }

    JS:

        $(document).ready(function(){
            $('#searchform').submit(function(){
                $('#searchresults').addClass('loading');
                $.post('search.php', $('#'+this.id).serialize(), function(result){
                    $('#searchresults').html(result);
                });
                $('#searchresults').removeClass('loading');
                return false;
            });
        });
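
    One likely culprit, offered as a sketch rather than a confirmed diagnosis: $.post() is asynchronous, so the removeClass('loading') above runs immediately, before the server responds - the class is added and removed in the same tick. Moving the removal into the callback keeps the indicator visible for the whole request:

        $(document).ready(function(){
            $('#searchform').submit(function(){
                $('#searchresults').addClass('loading');
                $.post('search.php', $(this).serialize(), function(result){
                    // clear the indicator only once the response has arrived
                    $('#searchresults').html(result).removeClass('loading');
                });
                return false;
            });
        });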

  • JS/CSS include section replacement, Debug vs Release

    - by Bayard Randel
    I'd be interested to hear how people handle conditional markup, specifically in their master pages, between release and debug builds. The particular scenario this is applicable to is handling concatenated JS and CSS files. I'm currently using the .Net port of YUI Compressor to produce a single site.css and site.js from a large collection of separate files. One thought that occurred to me was to place the JS and CSS include section in a user control or collection of panels and conditionally display the <link> and <script> markup based on the Debug or Release state of the assembly. Something along the lines of:

        #if DEBUG
            pnlDebugIncludes.Visible = true;
        #else
            pnlReleaseIncludes.Visible = true;
        #endif

    The panel is really not very nice semantically - wrapping <script> tags in a <div> is a bit gross; there must be a better approach. I would also think that a block-level element like a <div> within <head> would be invalid HTML. Another idea was that this could possibly be handled using web.config section replacements, but I'm not sure how I would go about doing that.
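
    One div-free variant, sketched under the assumption of a master page holding a single asp:Literal in <head> (litIncludes is a hypothetical control name): a Literal renders no wrapper element, so only the link/script tags themselves reach the page:

        protected void Page_Load(object sender, EventArgs e)
        {
        #if DEBUG
            litIncludes.Text =
                "<link rel=\"stylesheet\" href=\"css/part1.css\" />\n" +
                "<script src=\"js/part1.js\"></script>";
        #else
            litIncludes.Text =
                "<link rel=\"stylesheet\" href=\"site.css\" />\n" +
                "<script src=\"site.js\"></script>";
        #endif
        }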

  • Is there a way to easily convert a series of tarballs of a source tree into a git repository?

    - by Hotei
    I'm new to git, and I have a moderately large number of weekly tarballs from a long-running project. Each tarball has on average a few hundred files in it. I'm looking for a git strategy that will allow me to add the expanded contents of each tarball to a new git repository, starting from version 1.001 and going through version 1.650. As of this stage of the project, 99.5% of tarball(n) is just a copy of version(n-1) - in other words, a perfect candidate for git. The desired end result is to have only the master branch remaining at the end of the process. I think I know git well enough to do this "by hand". As I understand it, there is no possibility of a merge conflict, since there will be no opportunity to change the master before the next version is added and committed. A shell script is my first guess, but I'm not sure how well bash will like it when git checkout branch_n gets processed while bash is executing in branch_n-1. For the purposes of this project the host environment is Ubuntu 10.04; resources available are 8 GB RAM, 500 GB of free disk space, and a 4-CPU processor at 3 GHz. I don't need someone else to solve the problem, but I could use a nudge in the right direction as to how a git expert would approach it. Any advice from someone who's "been there, done that" would be appreciated. Hotei PS: I have looked at the site's suggested "related questions" and found nothing relevant.
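
    For what it's worth, no branches (and hence no checkout-inside-a-branch worries) are needed if each tarball simply becomes the next commit on master - a sketch, assuming tarballs that extract into a single top-level directory and sort correctly by name (both assumptions; adjust to taste):

        #!/bin/bash
        git init repo && cd repo
        for tb in ../tarballs/project-*.tar.gz; do
            git rm -r -q --ignore-unmatch .        # drop files deleted upstream
            tar -xzf "$tb" --strip-components=1    # overlay this week's tree
            git add -A
            git commit -q -m "Import $(basename "$tb")"
        done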

  • Row selection based on subtable data in MySQL

    - by Felthragar
    I've been struggling with this selection for a while now. I've been searching around StackOverflow and tried a bunch of stuff, but nothing that helps me with my particular issue. Maybe I'm just missing something obvious. I have two tables: Measurements and MeasurementFlags. "Measurements" contains measurements from a device, and each measurement can have properties/attributes attached to it (commonly known as "flags") to signify that the measurement in question is special in some way or another (for instance, one flag may signify a test or calibration measurement). Note: one record per flag! Right, so a record from the "Measurements" table can theoretically have an unlimited number of MeasurementFlags attached to it, or it can have none. Now, I need to select records from "Measurements" that have an attached "MeasurementFlag" valued 'X', but that must also NOT have a flag valued 'Y' attached. We're talking about a fairly large database with hundreds of millions of rows, which is why I'm trying to keep all of this logic within one query. Splitting it up would create too many queries; but if it's not possible to do in one query, I guess I don't have a choice. Thanks in advance.
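
    This is the classic include-one-flag/exclude-another shape, and it does fit in one query - a sketch, assuming columns named MeasurementId and Flag (hypothetical names, since the real schema isn't shown):

        SELECT m.Id
          FROM Measurements m
          JOIN MeasurementFlags fx
            ON fx.MeasurementId = m.Id AND fx.Flag = 'X'
          LEFT JOIN MeasurementFlags fy
            ON fy.MeasurementId = m.Id AND fy.Flag = 'Y'
         WHERE fy.MeasurementId IS NULL;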

  • Using Doctrine to abstract CRUD operations

    - by TomWilsonFL
    This has bothered me for quite a while, but now it is a necessity that I find the answer. We are working on quite a large project using CodeIgniter plus Doctrine. Our application has a front end and also an admin area for the company to check/change/delete data. When we designed the front end, we simply consumed most of the Doctrine code right in the controller:

        // In semi-pseudocode
        function register()
        {
            $data = get_post_data();
            if (count($data) && isValid($data)) {
                $U = new User();
                $U->fromArray($data);
                $U->save();

                $C = new Customer();
                $C->fromArray($data);
                $C->user_id = $U->id;
                $C->save();

                redirect_to_next_step();
            }
        }

    Obviously, when we went to do the admin views, the code duplication began; and considering we were in "get it DONE" mode, it now stinks with code bloat. I have moved a lot of functionality (business logic) into the model using model methods, but basic CRUD does not fit there. I was going to attempt to place the CRUD into static methods, i.e. Customer::save($array) [which would perform both insert and update, depending on whether the primary key is present in the array], Customer::delete($id), Customer::getObj($id = false) [if false, get all data]. This is going to become painful, though, for 32 model objects (and growing). Also, at times models need to interact (as in the interaction above between user data and customer data), which can't be done in a static method without breaking encapsulation. I envision adding another layer to this (exposing web services), so knowing there are going to be 3 "controllers" at some point, I need to encapsulate this CRUD somewhere (obviously). But are static methods the way to go, or is there another road? Your input is much appreciated.
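
    One non-static alternative, sketched with hypothetical names and the Doctrine 1.x calls from the snippet above: pull the duplicated sequence into a plain service object that every controller (front end, admin, web service) can share, so model interaction stays in instance code:

        class RegistrationService
        {
            public function register(array $data)
            {
                $user = new User();
                $user->fromArray($data);
                $user->save();

                $customer = new Customer();
                $customer->fromArray($data);
                $customer->user_id = $user->id;
                $customer->save();

                return $customer;
            }
        }

        // Any controller then reduces to:
        // $customer = $this->registrationService->register(get_post_data());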

  • Programmatically controlled virtual drive

    - by Robert Lin
    How would I go about creating a virtual drive whose contents I can programmatically and dynamically change? For instance, program A starts running and creates a virtual drive. When program B looks in the drive, it sees an error log and starts reading/processing it. In the middle of all this, program A gets a signal from somewhere and decides to add to the log. I want program B to be unaware of the change and just keep on going. Program B should continue reading as if nothing happened. Program A would just report a ridiculously large file size for the log and then fill it in as appropriate. Program A would fill the log with tags if program B tries to read past the last entry. I know this is a weird request, but there's really no other way to do this... I basically can't rewrite program B, so I need to fool it. How do I do this on Windows? How about OS X?

  • How do I use the 7-zip LZMA SDK 9.x to self-extract?

    - by Christopher
    I am writing an SFX for an installer. I have a number of good reasons for doing this, primarily:

    - The installer is actually a large Python program which uses plugins. Using py2exe or pyinstaller makes doing plugins annoyingly complicated.
    - I want to be able to pass command-line options directly to the Python installer script, as if it were being run directly.
    - Using the existing 7-zip SFX modules is clunky because I cannot pass command-line options directly into the processes I want to start.
    - I need more flexibility than any of the existing SFX modules I have seen provide.

    I have already tried using the SDK to open the file, seek to the 7z archive signature, and run the decompression from there. That fails because the SzArEx_Open() call appears to assume that you are starting at a 0 offset in the file. I am using the File_Seek() call to perform the seeking. It seems like there must be a way to do this, since the 7z archive format itself supports multiple embedded streams. Any pointers to examples would be awesome, but narrative explanation is also quite welcome!
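
    As a starting point, a sketch (plain C, no SDK calls) of locating the embedded archive inside the stub: the 6-byte 7z magic is '7', 'z', 0xBC, 0xAF, 0x27, 0x1C, and once its offset is known, the SDK's seek/read callbacks can be wrapped so that their position 0 maps to that offset - which is what SzArEx_Open effectively assumes:

        #include <stdio.h>
        #include <string.h>

        static const unsigned char kSig[6] = { '7', 'z', 0xBC, 0xAF, 0x27, 0x1C };

        /* Returns the file offset of the 7z signature, or -1 if absent. */
        long FindArchiveOffset(FILE *f)
        {
            unsigned char buf[65536];
            long base = 0;
            size_t n, i;
            while ((n = fread(buf, 1, sizeof buf, f)) >= sizeof kSig) {
                for (i = 0; i + sizeof kSig <= n; i++)
                    if (memcmp(buf + i, kSig, sizeof kSig) == 0)
                        return base + (long)i;
                base += (long)(n - sizeof kSig + 1);   /* rescan across the seam */
                if (fseek(f, base, SEEK_SET) != 0) break;
            }
            return -1;
        }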

  • Question about Reporting and Data Warehousing Software bundled with SQL Server 2005

    - by anonymous user
    We currently use SQL Server 2005 Enterprise for our fairly large application, which has its roots in pre-SQL Server 7.0. The tables are normalized and designed mainly for the application. The developers for the most part have the legacy SQL Server mindset: only using the part of T-SQL that existed back in 7.0, not using any of the new T-SQL features or those bundled with 2005. We're currently trying to build on-demand reports using some crappy third-party software, and will eventually try to build a data warehouse using more of the same crappy third-party software (name removed to protect the guilty; don't ask, I will not tell). The rationale for this was that we didn't want to spend more money to buy this additional software from Microsoft (this was not my decision, I had no input, but it is my problem now). But from what I can tell, Enterprise includes all of these tools - or am I missing something? What comes bundled with SQL Server 2005 Enterprise as far as reporting and data warehousing? Will we need to purchase anything else? Is there actually anything else that can be purchased from Microsoft in this regard?

  • parse content away from structure in a binary file

    - by Jeff Godfrey
    Using C#, I need to read a packed binary file created using FORTRAN. The file is stored in an "Unformatted Sequential" format as described here (about half-way down the page in the "Unformatted Sequential Files" section): http://www.tacc.utexas.edu/services/userguides/intel8/fc/f_ug1/pggfmsp.htm As you can see from the URL, the file is organized into "chunks" of 130 bytes or less and includes 2 length bytes (inserted by the FORTRAN compiler) surrounding each chunk. So, I need to find an efficient way to parse the actual file payload away from the compiler-inserted formatting. Once I've extracted the actual payload from the file, I'll then need to parse it up into its varying data types. That'll be the next exercise. My first thoughts are to slurp up the entire file into a byte array using File.ReadAllBytes. Then, just iterate through the bytes, skipping the formatting and transferring the actual data to a second byte array. In the end, that second byte array should contain the actual file contents minus all the formatting, which I'd then need to go back through to get what I need. As I'm fairly new to C#, I thought there might be a better, more accepted way of tackling this. Also, in case it's helpful, these files could be fairly large (say 30MB), though most will be much smaller...
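
    A sketch of the record walk in C#, under the assumption (taken from the description above) that each chunk carries a single leading and a single trailing length byte; real compilers also emit wider length fields and continuation flags, so verify against a hex dump first:

        using System.IO;

        static byte[] ExtractPayload(string path)
        {
            using (var payload = new MemoryStream())
            using (var reader = new BinaryReader(File.OpenRead(path)))
            {
                while (reader.BaseStream.Position < reader.BaseStream.Length)
                {
                    int length = reader.ReadByte();               // leading length byte
                    payload.Write(reader.ReadBytes(length), 0, length);
                    reader.ReadByte();                            // trailing length byte
                }
                return payload.ToArray();                         // contents minus formatting
            }
        }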

  • Serial: write() throttling?

    - by damian
    Hi everyone, I'm working on a project sending serial data to control animation of LED lights, which need to stay in sync with a sound engine. There seems to be a large serial write buffer (OS X (POSIX) + FTDI-chipset USB serial device), so without manually restricting the transmission rate, the animation system can get several seconds ahead of the serial transmission. Currently I'm manually restricting the serial write speed to the baud rate (8N1 = a 10-bit serial frame per 8 data bits, 19200 bps serial - 1920 bytes per second max), but I am having a problem with the sound drifting out of sync over time - it starts fine, but after 10 minutes there's a noticeable (100 ms+) lag between the sound and the lights. This is the code that's restricting the serial write speed (called once per animation frame; 'elapsed' is the duration of the current frame, 'baudrate' is the bps (19200)):

        void BufferedSerial::update( float elapsed )
        {
            baud_timer += elapsed;
            if ( bytes_written > 1024 )
            {
                // maintain baudrate
                float time_should_have_taken = (float(bytes_written)*10)/float(baudrate);
                float time_actually_took = baud_timer;
                // sleep if we have > 20ms lag between serial transmit and our write calls
                if ( time_should_have_taken-time_actually_took > 0.02f )
                {
                    float sleep_time = time_should_have_taken - time_actually_took;
                    int sleep_time_us = sleep_time*1000.0f*1000.0f;
                    //printf("BufferedSerial::update sleeping %i ms\n", sleep_time_us/1000 );
                    delayUs( sleep_time_us );
                    // subtract 128 bytes
                    bytes_written -= 128;
                    // subtract the time it should have taken to write 128 bytes
                    baud_timer -= (float(128)*10)/float(baudrate);
                }
            }
        }

    Clearly there's something wrong somewhere. A much better approach would be to be able to determine the number of bytes currently in the transmit queue, and try to keep that below a fixed threshold. Any advice appreciated.
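
    One candidate source of the drift, offered as a hedged sketch rather than a diagnosis: the incremental -= 128 bookkeeping lets float rounding error accumulate over thousands of frames. Deriving the pause from absolute totals since the stream opened removes that accumulation (types and delayUs() follow the poster's code; total_bytes_written is a hypothetical counter that is never reset):

        void BufferedSerial::update( float elapsed )
        {
            baud_timer += elapsed;  // total seconds since the stream opened
            // seconds the hardware needs for everything written so far
            float target = ( float(total_bytes_written) * 10.0f ) / float(baudrate);
            if ( target - baud_timer > 0.02f )  // writes are > 20 ms ahead of the wire
                delayUs( int( (target - baud_timer) * 1000000.0f ) );
        }

    If the sound engine's clock and the wall clock disagree, syncing baud_timer to the engine's playback position instead of summing frame durations would address that side as well.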

  • JBoss deployment throws 'java.util.zip.ZipException: error in opening zip file' on Linux?

    - by Kaushalya
    I thought of posting both the question and the answer for others' knowledge. I deployed a large EAR (containing more than ~1024 jars/wars) on JBoss running with Java 6 on Linux, and the deployment process died throwing the following exception:

        java.lang.RuntimeException: java.util.zip.ZipException: error in opening zip file
            at org.jboss.deployment.DeploymentException.rethrowAsDeploymentException(DeploymentException.java:53)
            at org.jboss.deployment.MainDeployer.init(MainDeployer.java:901)
            at org.jboss.deployment.MainDeployer.init(MainDeployer.java:895)
            at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:809)
            at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:782)
            ....
        Caused by: java.lang.RuntimeException: java.util.zip.ZipException: error in opening zip file
            at org.jboss.util.file.JarArchiveBrowser.<init>(JarArchiveBrowser.java:74)
            at org.jboss.util.file.FileProtocolArchiveBrowserFactory.create(FileProtocolArchiveBrowserFactory.java:48)
            at org.jboss.util.file.ArchiveBrowser.getBrowser(ArchiveBrowser.java:57)
            at org.jboss.ejb3.EJB3Deployer.hasEjbAnnotation(EJB3Deployer.java:213)
            ....

    This was caused by the operating system's limit on the number of open file descriptors; on Linux/Unix the default is 1024. You can check the current value using:

        ulimit -n

    To increase the number of open file descriptors (say, to 2048):

        ulimit -n 2048

    Check the man page of ulimit for more details.

  • MySQL - optimising selection across two linked tables

    - by user293594
    I have two MySQL tables, states and trans: states (200,000 entries) looks like: id (INT) - also the primary key energy (DOUBLE) [other stuff] trans (14,000,000 entries) looks like: i (INT) - a foreign key referencing states.id j (INT) - a foreign key referencing states.id A (DOUBLE) I'd like to search for all entries in trans with trans.A 30. (say), and then return the energy entries from the (unique) states referenced by each matching entry. So I do it with two intermediate tables: CREATE TABLE ij SELECT i,j FROM trans WHERE A30.; CREATE TABLE temp SELECT DISTINCT i FROM ij UNION SELECT DISTINCT j FROM ij; SELECT energy from states,temp WHERE id=temp.i; This seems to work, but is there any way to do it without the intermediate tables? When I tried to create the temp table with a single command straight from trans: CREATE TABLE temp SELECT DISTINCT i FROM trans WHERE A30. UNION SELECT DISTINCT j FROM trans WHERE A30.; it took a longer (presumably because it had to search the large trans table twice. I'm new to MySQL and I can't seem to find an equivalent problem and answer out there on the interwebs. Many thanks, Christian

  • Use a non-coalescing parser in Axis2

    - by Nathan
    Does anyone know how I can get Axis2 to use a non-coalescing XMLStreamReader when it parses a SOAP message? I am writing code that reads a large base64 binary text element. Coalescing is the default behaviour, and this causes the default XMLStreamReader to load the entire text into memory rather than returning multiple CHARACTERS events. The upshot of this is that I run out of heap space when running the following code:

        reader = element.getTextAsStream( true );

    The OutOfMemoryError occurs in com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next:

        java.lang.OutOfMemoryError: Java heap space
            at com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:208)
            at com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:226)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanContent(XMLDocumentFragmentScannerImpl.java:1552)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2864)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:607)
            at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:116)
            at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(XMLStreamReaderImpl.java:558)
            at org.apache.axiom.util.stax.wrapper.XMLStreamReaderWrapper.next(XMLStreamReaderWrapper.java:225)
            at org.apache.axiom.util.stax.dialect.DisallowDoctypeDeclStreamReaderWrapper.next(DisallowDoctypeDeclStreamReaderWrapper.java:34)
            at org.apache.axiom.util.stax.wrapper.XMLStreamReaderWrapper.next(XMLStreamReaderWrapper.java:225)
            at org.apache.axiom.util.stax.dialect.SJSXPStreamReaderWrapper.next(SJSXPStreamReaderWrapper.java:138)
            at org.apache.axiom.om.impl.builder.StAXOMBuilder.parserNext(StAXOMBuilder.java:668)
            at org.apache.axiom.om.impl.builder.StAXOMBuilder.next(StAXOMBuilder.java:214)
            at org.apache.axiom.om.impl.llom.SwitchingWrapper.updateNextNode(SwitchingWrapper.java:1098)
            at org.apache.axiom.om.impl.llom.SwitchingWrapper.<init>(SwitchingWrapper.java:198)
            at org.apache.axiom.om.impl.llom.OMStAXWrapper.<init>(OMStAXWrapper.java:73)
            at org.apache.axiom.om.impl.llom.OMContainerHelper.getXMLStreamReader(OMContainerHelper.java:67)
            at org.apache.axiom.om.impl.llom.OMContainerHelper.getXMLStreamReader(OMContainerHelper.java:40)
            at org.apache.axiom.om.impl.llom.OMElementImpl.getXMLStreamReader(OMElementImpl.java:790)
            at org.apache.axiom.om.impl.llom.OMElementImplUtil.getTextAsStream(OMElementImplUtil.java:114)
            at org.apache.axiom.om.impl.llom.OMElementImpl.getTextAsStream(OMElementImpl.java:826)
            at org.example.UploadFileParser.invokeBusinessLogic(UploadFileParser.java:160)
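
    At the StAX level, coalescing is controlled by a standard factory property - a sketch using only the javax.xml.stream API (wiring such a factory into Axis2/Axiom's StAXUtils configuration is a separate step not shown here):

        import java.io.FileInputStream;
        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamReader;

        public class NonCoalescingDemo {
            public static void main(String[] args) throws Exception {
                XMLInputFactory factory = XMLInputFactory.newInstance();
                // IS_COALESCING = "javax.xml.stream.isCoalescing"; false lets the
                // reader report one large text node as several CHARACTERS events
                factory.setProperty(XMLInputFactory.IS_COALESCING, Boolean.FALSE);
                XMLStreamReader reader =
                        factory.createXMLStreamReader(new FileInputStream(args[0]));
            }
        }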

  • how often should the entire suite of a system's unit tests be run?

    - by gerryLowry
    Generally, I'm still very much a unit-testing neophyte. BTW, you may also see this question on other forums like xUnit.net, et cetera, because it's an important question to me. I apologize in advance for my cross-posting; your opinions are very important to me, and not everyone in this forum belongs to the other forums too. I was looking at a large, decade-old legacy system which has had over 700 unit tests written recently (700 is just a small beginning). The tests happen to be written in MSTest, but this question applies to all testing frameworks AFAIK. When I ran "ALL TESTS" via VS2008, the final count was only seven tests. That's about 1% of the total tests that have been written to date. MORE INFORMATION: the ASP.NET MVC 2 RTM source code, including its unit tests, is available on CodePlex; those unit tests are also written in MSTest, even though (an irrelevant fact) Brad Wilson later joined the ASP.NET MVC team as its Senior Programmer. All 2000-plus tests get run, not just a few. QUESTION: given that AFAIK the purpose of unit tests is to identify breakages in the SUT, am I correct in thinking that the "best practice" is to always, or at least very frequently, run all of the tests? Thank you. Regards, Gerry (Lowry)
