Search Results

Search found 10417 results on 417 pages for 'large'.


  • Resize my Google Map API

    - by Jeremy Flaugher
    I am new to JS, and have found the answer to a previous question, which brought up a new question, which brought me here again. I have a Reveal Modal that contains a Google Map. When a button is clicked, the Reveal Modal pops up and displays the map. My problem is that only a third of the map is displayed, because a resize trigger needs to be fired. My question is: how do I implement google.maps.event.trigger(map, 'resize')? Where do I place this little snippet of code?

    The Reveal Modal script:

        <script type="text/javascript">
        $(document).ready(function() {
            $('#myModal1').click(function() {
                $('#myModal').reveal();
            });
        });
        </script>

    My Google Map script:

        <script type="text/javascript">
        function initialize() {
            var mapOptions = {
                center: new google.maps.LatLng(39.739318, -89.266507),
                zoom: 5,
                mapTypeId: google.maps.MapTypeId.ROADMAP
            };
            var map = new google.maps.Map(document.getElementById("map_canvas"), mapOptions);
        }
        </script>

    The div which holds the Google Map:

        <div id="myModal" class="reveal-modal large">
            <h2>How to get here</h2>
            <div id="map_canvas" style="width:600px; height:300px;"></div>
            <a class="close-reveal-modal">&#215;</a>
        </div>
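
    A minimal sketch of one common fix (my addition, not from the post; it assumes map is moved out of initialize() into a shared variable): fire the resize trigger after the modal is revealed, then re-center, since resizing resets the viewport.

        <script type="text/javascript">
        var map; // assumption: initialize() assigns to this instead of a local
        $(document).ready(function() {
            $('#myModal1').click(function() {
                $('#myModal').reveal();
                // the map measured a hidden container; make it re-measure now
                google.maps.event.trigger(map, 'resize');
                map.setCenter(new google.maps.LatLng(39.739318, -89.266507));
            });
        });
        </script>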


  • Valgrind 'noise', what does it mean?

    - by Chris Huang-Leaver
    When I used valgrind to help debug an app I was working on, I noticed a huge amount of noise which seems to be complaining about standard libraries. As a test I did this:

        echo 'int main() {return 0;}' | gcc -x c -o test -

    Then I did this:

        valgrind ./test
        ==1096== Use of uninitialised value of size 8
        ==1096==    at 0x400A202: _dl_new_object (in /lib64/ld-2.10.1.so)
        ==1096==    by 0x400607F: _dl_map_object_from_fd (in /lib64/ld-2.10.1.so)
        ==1096==    by 0x4007A2C: _dl_map_object (in /lib64/ld-2.10.1.so)
        ==1096==    by 0x400199A: map_doit (in /lib64/ld-2.10.1.so)
        ==1096==    by 0x400D495: _dl_catch_error (in /lib64/ld-2.10.1.so)
        ==1096==    by 0x400189E: do_preload (in /lib64/ld-2.10.1.so)
        ==1096==    by 0x4003CCD: dl_main (in /lib64/ld-2.10.1.so)
        ==1096==    by 0x401404B: _dl_sysdep_start (in /lib64/ld-2.10.1.so)
        ==1096==    by 0x4001471: _dl_start (in /lib64/ld-2.10.1.so)
        ==1096==    by 0x4000BA7: (within /lib64/ld-2.10.1.so)

        * large block of similar snipped *

        ==1096== Use of uninitialised value of size 8
        ==1096==    at 0x4F35FDD: (within /lib64/libc-2.10.1.so)
        ==1096==    by 0x4F35B11: (within /lib64/libc-2.10.1.so)
        ==1096==    by 0x4A1E61C: _vgnU_freeres (vg_preloaded.c:60)
        ==1096==    by 0x4E5F2E4: __run_exit_handlers (in /lib64/libc-2.10.1.so)
        ==1096==    by 0x4E5F354: exit (in /lib64/libc-2.10.1.so)
        ==1096==    by 0x4E48A2C: (below main) (in /lib64/libc-2.10.1.so)
        ==1096==
        ==1096== ERROR SUMMARY: 3819 errors from 298 contexts (suppressed: 876 from 4)
        ==1096== malloc/free: in use at exit: 0 bytes in 0 blocks.
        ==1096== malloc/free: 0 allocs, 0 frees, 0 bytes allocated.
        ==1096== For counts of detected errors, rerun with: -v
        ==1096== Use --track-origins=yes to see where uninitialised values come from
        ==1096== All heap blocks were freed -- no leaks are possible.

    You can see the full result here: http://pastebin.com/gcTN8xGp

    I have two questions; firstly, is there a way to suppress all the noise? --show-below-main is set to no by default, but there doesn't appear to be a --show-after-main equivalent.
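
    One standard approach (my addition, not from the post): have valgrind emit ready-made suppression entries for exactly these contexts, then feed them back in on later runs.

        # append a suppression block after each reported error
        valgrind --gen-suppressions=all ./test 2> raw.log
        # copy the { ... } blocks from raw.log into a file, e.g. ld.supp, then:
        valgrind --suppressions=ld.supp ./test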


  • How to show jQuery is working/waiting/loading?

    - by Kim
    How do I show a loading picture and/or message to the user? If possible, I would also like to know how to make it full screen with a grey-dimmed background, kinda like when viewing the large picture in some galleries (sorry, no example link). Currently I add a CSS class which has the image and other formatting, but it's not working for some reason. Results from the search do get inserted into the div.

    HTML:

        <div id="searchresults"></div>

    CSS:

        div.loading {
            background-image: url('../img/loading.gif');
            background-position: center;
            width: 80px;
            height: 80px;
        }

    JS:

        $(document).ready(function(){
            $('#searchform').submit(function(){
                $('#searchresults').addClass('loading');
                $.post('search.php', $('#'+this.id).serialize(), function(result){
                    $('#searchresults').html(result);
                });
                $('#searchresults').removeClass('loading');
                return false;
            });
        });
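
    A likely culprit (my reading, not confirmed in the post): $.post is asynchronous, so removeClass runs immediately, before search.php has answered, and the class is gone the instant it is added. A sketch of the usual fix, moving the cleanup into the callback:

        $(document).ready(function(){
            $('#searchform').submit(function(){
                $('#searchresults').addClass('loading');
                $.post('search.php', $(this).serialize(), function(result){
                    // runs only once the server has responded
                    $('#searchresults').html(result).removeClass('loading');
                });
                return false;
            });
        });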


  • How to replace custom IDs in the order of their appearance with a shell script?

    - by Péter Török
    I have a pair of rather large log files with very similar content, except that some identifiers are different between the two. A couple of examples:

        UnifiedClassLoader3@19518cc  | UnifiedClassLoader3@d0357a
        JBossRMIClassLoader@13c2d7f  | JBossRMIClassLoader@191777e

    That is, wherever the first file contains UnifiedClassLoader3@19518cc, the second contains UnifiedClassLoader3@d0357a, and so on. I want to replace these with identical IDs so that I can spot the really important differences between the two files. I.e. I want to replace all occurrences of both UnifiedClassLoader3@19518cc in file1 and UnifiedClassLoader3@d0357a in file2 with UnifiedClassLoader3@1; all occurrences of both JBossRMIClassLoader@13c2d7f in file1 and JBossRMIClassLoader@191777e in file2 with JBossRMIClassLoader@2; etc.

    Using the Cygwin shell, so far I have managed to list all the different identifiers occurring in one of the files with

        grep -o -e 'ClassLoader[0-9]*@[0-9a-f][0-9a-f]*' file1.log | sort | uniq

    However, now the original order is lost, so I don't know which ID in the other file is the pair of which. With grep -n I can get the line number, so the sort would preserve the order of appearance, but then I can't weed out the duplicate occurrences. Unfortunately, grep cannot print only the first match of a pattern.

    I figured I could save the list of identifiers produced by the above command into a file, then iterate over the patterns in the file with grep -n | head -n 1, concatenate the results and sort them again. The result would be something like

        2 ClassLoader3@19518cc
        137 ClassLoader@13c2d7f
        563 ClassLoader3@1267649
        ...

    Then I could (either manually or with sed itself) massage this into a sed command like

        sed -e 's/ClassLoader3@19518cc/ClassLoader3@2/g' -e 's/ClassLoader@13c2d7f/ClassLoader@137/g' -e 's/ClassLoader3@1267649/ClassLoader3@563/g' file1.log > file1_processed.log

    and similarly for file2. However, before I start, I would like to verify that my plan is the simplest possible working solution. Is there any flaw in this approach? Is there a simpler way?
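
    For what it's worth, a single-pass awk sketch (my addition; it renumbers IDs by first appearance, so the two output files only line up if the ID pairs first occur in the same order in both logs):

        awk '{
            out = ""
            while (match($0, /ClassLoader[0-9]*@[0-9a-f]+/)) {
                id = substr($0, RSTART, RLENGTH)
                if (!(id in num)) num[id] = ++n    # number IDs by first appearance
                stem = id
                sub(/@[0-9a-f]+$/, "", stem)       # keep the name, drop the hex
                out = out substr($0, 1, RSTART - 1) stem "@" num[id]
                $0 = substr($0, RSTART + RLENGTH)
            }
            print out $0
        }' file1.log > file1_processed.log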


  • Is it safe to use random Unicode for complex delimiter sequences in strings?

    - by ccomet
    Question: in terms of program stability and ensuring that the system will actually operate, how safe is it to use chars like ¦, § or ‡ for complex delimiter sequences in strings? Can I reliably assume that I won't run into any issues with a program reading these incorrectly?

    I am working in a system, using C# code, in which I have to store a fairly complex set of information within a single string. The readability of this string is only necessary on the computer side; end-users should only ever see the information after it has been parsed by the appropriate methods. Because some of the data in these strings will be collections of variable size, I use different delimiters to identify what parts of the string correspond to a certain tier of organization. There are enough cases that the standard set of ;, |, and similar ilk has been exhausted. I considered two-char delimiters, like ;# or ;|, but I felt that it would be very inefficient. There probably isn't that large of a performance difference in storing one char versus two, but when I have the option of picking the smaller one, it just feels wrong to pick the larger.

    So finally, I considered using characters like the double dagger and section sign. They only take up one char, and they are definitely not going to show up in the actual text that I'll be storing, so they won't be confused for anything. But character encoding is finicky. While visibility to the end user is meaningless (since they, in fact, won't see it), I recently became concerned about how the programs in the system will read it. The string is stored in one database, while a separate program is responsible for both encoding and decoding the string into different object types for the rest of the application to work with. If something that is expected to be written one way is possibly written another, the whole system could fail, and I can't let that happen. So is it safe to use these kinds of chars for background delimiters?
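
    A quick round-trip check along these lines (my sketch; the actual safety hinges on every hop, database columns included, using the same Unicode encoding, assumed UTF-8 here):

        using System;
        using System.Text;

        class DelimiterCheck
        {
            const char Delim = '\u2021'; // double dagger, '‡'

            static void Main()
            {
                string packed = "fieldA" + Delim + "fieldB";
                byte[] stored = Encoding.UTF8.GetBytes(packed);   // to storage
                string loaded = Encoding.UTF8.GetString(stored);  // back out
                Console.WriteLine(loaded.Split(Delim)[1]);        // "fieldB"
            }
        }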


  • How can I access mainframe data with .Net applications and SQL Queries?

    - by orandov
    We have a large amount of data stored on an IBM mainframe using VSAM files. A lot of this data is dropped on the network every night in the form of text files to be processed and dumped into FoxPro and SQL Server databases. There are also many text files produced nightly by custom applications that get uploaded to the mainframe to keep everything in sync. Keeping everything in sync is very tricky, to say the least.

    We are not getting rid of the mainframe any time soon, and we would like to replace all the nightly batch processing with real-time access to the mainframe data. We would like to be able to:

    Read data directly from the mainframe and produce reports based on it, possibly using SQL queries.
    Read and write data from custom .Net applications.

    We are not looking for a new platform to interface with the mainframe like Information Builders offers. We don't want to build application modules or reports with new "Business Intelligence" tools. We already know how to generate reports and write custom applications using SQL, .Net, Visual Studio, etc. All we are looking for is some sort of adapter to connect to our mainframe data. Any ideas are appreciated.


  • .Net System.Mail.Message adding multiple "To" addresses

    - by Matt Dawdy
    I just hit something I think is inconsistent, and wanted to see if I'm doing something wrong, if I'm an idiot, or...

        MailMessage msg = new MailMessage();
        msg.To.Add("[email protected]");
        msg.To.Add("[email protected]");
        msg.To.Add("[email protected]");
        msg.To.Add("[email protected]");

    really only sends this email to one person, the last one. To add multiples I have to do this:

        msg.To.Add("[email protected],[email protected],[email protected],[email protected]");

    I don't get it. I thought I was adding multiple people to the "To" address collection, but what I was doing was replacing it. I think I just realized my error -- to add one item to the collection, use

        msg.To.Add(new MailAddress("[email protected]"));

    If you use just a string, it replaces everything it had in its list. Ugh. I'd consider this a rather large gotcha! I answered my own question, but since I think this is of value to have in the stackoverflow archive, I'll still ask it. Maybe someone even has an idea of other traps that you can fall into.
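
    A sketch of the post's conclusion, with placeholder addresses (the originals are redacted): wrap each recipient in a MailAddress to append one at a time, or pass a comma-separated string as a batch.

        var msg = new MailMessage();
        msg.To.Add(new MailAddress("user1@example.com"));   // one recipient
        msg.To.Add(new MailAddress("user2@example.com"));   // and another
        msg.To.Add("user3@example.com,user4@example.com");  // batch form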


  • Can a C# method chain be "too long"?

    - by ccornet
    Not in terms of readability, naturally, since you can always arrange the separate methods onto separate lines. Rather, is it dangerous, for any reason, to chain an excessively large number of methods together? I use method chaining primarily to save space on declaring individual one-use variables, traditionally using return methods instead of methods that modify the caller. Except for string methods, those I kinda chain mercilessly.

    In any case, I sometimes worry about the impact of using exceptionally long method chains all on one line. Let's say I need to update the value of one item based on someone's username. Unfortunately, the shortest method to retrieve the correct user looks something like the following:

        SPWeb web = GetWorkflowWeb();
        SPList list2 = web.Lists["Wars"];
        SPListItem item2 = list2.GetItemById(3);
        SPListItem item3 = item2.GetItemFromLookup("Armies", "Allied Army");
        SPUser user2 = item3.GetSPUser("Commander");
        SPUser user3 = user2.GetAssociate("Spouse");
        string username2 = user3.Name;
        item1["Contact"] = username2;

    Everything with a 2 or 3 lasts for only one call, so I might condense it as the following (which also lets me get rid of a would-be-superfluous 1):

        SPWeb web = GetWorkflowWeb();
        item["Contact"] = web.Lists["Armies"]
            .GetItemById(3)
            .GetItemFromLookup("Armies", "Allied Army")
            .GetSPUser("Commander")
            .GetAssociate("Spouse")
            .Name;

    Admittedly, it looks a lot longer when it is all on one line and when you have int.Parse(ddlArmy.SelectedValue.CutBefore(";#", false)) instead of 3. Nevertheless, this is an average length for these chains, and I can easily foresee exceptionally longer ones. Excluding readability, is there anything I should be worried about for these 10+ method chains? Or is there no harm in using really, really long method chains?
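
    One concrete hazard (my addition, not from the post): if any link returns null, the whole statement throws a NullReferenceException that points at the entire line, so you cannot tell which call failed. A hedged middle-ground sketch, breaking the chain at the link most likely to be null:

        SPUser commander = web.Lists["Armies"]
            .GetItemById(3)
            .GetItemFromLookup("Armies", "Allied Army")
            .GetSPUser("Commander");
        if (commander == null)
            throw new InvalidOperationException("Item 3 has no Commander");
        item["Contact"] = commander.GetAssociate("Spouse").Name;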


  • Programmatically controlled virtual drive

    - by Robert Lin
    How would I go about creating a virtual drive with which I can programmatically and dynamically change the contents? For instance, program A starts running and creates a virtual drive. When program B looks in the drive, it sees an error log and starts reading/processing it. In the middle of all this, program A gets a signal from somewhere and decides to add to the log. I want program B to be unaware of the change and just keep on going; program B should continue reading as if nothing happened. Program A would just report a ridiculously large file size for the log and then fill it in as appropriate. Program A would fill the log with tags if program B tries to read past the last entry. I know this is a weird request, but there's really no other way to do this... I basically can't rewrite program B, so I need to fool it. How do I do this on Windows? How about OS X?


  • Sharing storage between servers

    - by El Yobo
    I have a PHP based web application which is currently only using one webserver, but will shortly be scaling up to another. In most regards this is pretty straightforward, but the application also stores a lot of files on the filesystem. It seems that there are many approaches to sharing the files between the two servers, from the very simple to the reasonably complex. These are the options that I'm aware of:

    Simple network storage: NFS, SMB/CIFS
    Clustered filesystems: Lustre, GFS/GFS2, GlusterFS, Hadoop DFS, MogileFS

    What I want is for a file uploaded via one webserver to be immediately available if accessed through the other. The data is extremely important and absolutely cannot be lost, so whatever is implemented needs to a) never lose data and b) have very high availability (as good as, or better than, a local filesystem). It seems like the clustered filesystems will also provide faster data access than local storage (for large files), but that isn't of vital importance at the moment.

    What would you recommend? Do you have any suggestions to add, or anything specifically to look out for with the above options? Any suggestions on how to manage backup of data on the clustered filesystems?
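
    For scale, the "very simple" end of that list looks roughly like this (a sketch with hypothetical paths; note that plain NFS makes the storage host a single point of failure, which may conflict with the availability requirement):

        # /etc/exports on the storage host
        /var/www/uploads  web1(rw,sync,no_subtree_check)  web2(rw,sync,no_subtree_check)

        # on each webserver
        mount -t nfs storagehost:/var/www/uploads /var/www/uploads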


  • How do I use the 7-zip LZMA SDK 9.x to self-extract?

    - by Christopher
    I am writing a SFX for an installer. I have a number of good reasons for doing this, primarily:

    The installer is actually a large Python program which uses plugins. Using py2exe or pyinstaller makes doing plugins annoyingly complicated.
    I want to be able to pass command-line options directly to the Python installer script, as if it were being run directly.
    Using the existing 7-zip SFX modules is clunky because I cannot pass command-line options directly into the processes I want to start.
    I need more flexibility than any of the existing SFX modules I have seen provide.

    I have already tried using the SDK to open the file, seek to the 7z archive signature, and run the decompression from there. That fails because the SzArEx_Open() call appears to assume that you are starting at a 0 offset in the file. I am using the File_Seek() call to perform the seeking. It seems like there must be a way to do this, since the 7z archive format itself supports multiple embedded streams. Any pointers to examples would be awesome, but narrative explanation is also quite welcome!
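
    One way to picture the usual workaround (a hypothetical shim, not LZMA SDK API; wiring it into the SDK's stream interfaces is left out because their signatures vary between SDK versions): wrap the file so every archive-relative seek is shifted by the signature offset, making the embedded archive look like it starts at 0.

        #include <stdio.h>

        typedef struct {
            FILE *f;
            long  base;   /* file offset where the 7z signature was found */
        } OffsetStream;

        /* translate archive-relative SEEK_SET into a physical offset */
        static int offset_seek(OffsetStream *s, long pos, int origin) {
            if (origin == SEEK_SET)
                return fseek(s->f, s->base + pos, SEEK_SET);
            return fseek(s->f, pos, origin);  /* CUR/END need no shift here */
        }

        /* reads continue from wherever the (shifted) seek left the stream */
        static size_t offset_read(OffsetStream *s, void *buf, size_t n) {
            return fread(buf, 1, n, s->f);
        }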


  • Branch view for a file that has been split into multiple files

    - by ScottJ
    I have a large source file in Perforce that has been split up into several smaller files in a branch. I want to create a branch view that can handle this, but Perforce (2009.1) only sees the last of the multiple files. For example, I created:

        p4 integrate //depot/original/huge_file.c //depot/new/huge_file.c

    Later I split the huge file into smaller ones:

        p4 integrate //depot/new/huge_file.c //depot/new/small_file_one.c
        p4 integrate //depot/new/huge_file.c //depot/new/small_file_two.c
        p4 integrate //depot/new/huge_file.c //depot/new/small_file_three.c

    Then I edited each of those (including //depot/new/huge_file.c) and submitted. Now I make changes to //depot/original/huge_file.c and I want to integrate those changes to //depot/new. If I do this manually, it works fine:

        p4 integrate //depot/original/huge_file.c //depot/new/huge_file.c
        p4 integrate //depot/original/huge_file.c //depot/new/small_file_one.c
        p4 integrate //depot/original/huge_file.c //depot/new/small_file_two.c
        p4 integrate //depot/original/huge_file.c //depot/new/small_file_three.c

    But I don't want to do that every time I integrate -- this kind of thing belongs in a branch view. Unfortunately, if the branch view includes the same source file multiple times, the later lines override the earlier ones. How can I create a branch view like this:

        //depot/original/huge_file.c //depot/new/huge_file.c
        //depot/original/huge_file.c //depot/new/small_file_one.c
        //depot/original/huge_file.c //depot/new/small_file_two.c
        //depot/original/huge_file.c //depot/new/small_file_three.c

    When I integrate using this branch spec, only small_file_three.c gets integrated.


  • Using Doctrine to abstract CRUD operations

    - by TomWilsonFL
    This has bothered me for quite a while, but now it is a necessity that I find the answer. We are working on quite a large project using CodeIgniter plus Doctrine. Our application has a front end and also an admin area for the company to check/change/delete data. When we designed the front end, we simply consumed most of the Doctrine code right in the controller:

        //In semi-pseudocode
        function register()
        {
            $data = get_post_data();
            if (count($data) && isValid($data)) {
                $U = new User();
                $U->fromArray($data);
                $U->save();

                $C = new Customer();
                $C->fromArray($data);
                $C->user_id = $U->id;
                $C->save();

                redirect_to_next_step();
            }
        }

    Obviously, when we went to do the admin views, code duplication began, and considering we were in "get it DONE" mode, it now stinks with code bloat. I have moved a lot of functionality (business logic) into the model using model methods, but the basic CRUD does not fit there. I was going to attempt to place the CRUD into static methods, i.e. Customer::save($array) [which would perform both insert and update, depending on whether the primary key is present in the array], Customer::delete($id), Customer::getObj($id = false) [if false, get all data]. This is going to become painful, though, for 32 model objects (and growing). Also, at times models need to interact (as in the interaction above between user data and customer data), which can't be done in a static method without breaking encapsulation.

    I envision adding another layer to this (exposing web services), so knowing there are going to be 3 "controllers" at some point, I need to encapsulate this CRUD somewhere (obviously), but are static methods the way to go, or is there another road? Your input is much appreciated.
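
    One non-static alternative worth sketching (my addition; only fromArray() and save() are Doctrine calls taken from the post, the rest is hypothetical): a small instance-based service that every front end shares, so model interaction stays in one place without static methods.

        <?php
        class CrudService
        {
            // build and persist any model from raw request data
            public function create($modelClass, array $data)
            {
                $record = new $modelClass();
                $record->fromArray($data);
                $record->save();
                return $record;
            }
        }

        // usage sketch inside register():
        // $svc = new CrudService();
        // $user = $svc->create('User', $data);
        // $data['user_id'] = $user->id;
        // $customer = $svc->create('Customer', $data);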


  • JS/CSS include section replacement, Debug vs Release

    - by Bayard Randel
    I'd be interested to hear how people handle conditional markup, specifically in their master pages, between release and debug builds. The particular scenario this applies to is handling concatenated JS and CSS files. I'm currently using the .Net port of YUI Compressor to produce a single site.css and site.js from a large collection of separate files.

    One thought that occurred to me was to place the JS and CSS include section in a user control or collection of panels and conditionally display the <link> and <script> markup based on the Debug or Release state of the assembly. Something along the lines of:

        #if DEBUG
            pnlDebugIncludes.Visible = true;
        #else
            pnlReleaseIncludes.Visible = true;
        #endif

    The panel is really not very nice semantically - wrapping <script> tags in a <div> is a bit gross; there must be a better approach. I would also think that a block-level element like a <div> within <head> would be invalid HTML. Another idea was that this could possibly be handled using web.config section replacements, but I'm not sure how I would go about doing that.
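
    Two details that may help (my additions, not from the post): an asp:PlaceHolder renders no wrapper element at all, which sidesteps the div-in-head problem, and the includes can be emitted from the master page's code-behind so the #if DEBUG lives in one spot. A sketch with hypothetical file names:

        protected void Page_Load(object sender, EventArgs e)
        {
            string css = "site.css", js = "site.js";
        #if DEBUG
            css = "debug/site.css";   // hypothetical un-minified bundles
            js = "debug/site.js";
        #endif
            Page.Header.Controls.Add(new LiteralControl(
                String.Format("<link rel=\"stylesheet\" href=\"{0}\" />" +
                              "<script src=\"{1}\"></script>", css, js)));
        }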


  • MySQL - optimising selection across two linked tables

    - by user293594
    I have two MySQL tables, states and trans.

    states (200,000 entries) looks like:

        id (INT) - also the primary key
        energy (DOUBLE)
        [other stuff]

    trans (14,000,000 entries) looks like:

        i (INT) - a foreign key referencing states.id
        j (INT) - a foreign key referencing states.id
        A (DOUBLE)

    I'd like to search for all entries in trans with trans.A > 30. (say), and then return the energy entries from the (unique) states referenced by each matching entry. So I do it with two intermediate tables:

        CREATE TABLE ij SELECT i,j FROM trans WHERE A > 30.;
        CREATE TABLE temp SELECT DISTINCT i FROM ij UNION SELECT DISTINCT j FROM ij;
        SELECT energy FROM states,temp WHERE id=temp.i;

    This seems to work, but is there any way to do it without the intermediate tables? When I tried to create the temp table with a single command straight from trans:

        CREATE TABLE temp SELECT DISTINCT i FROM trans WHERE A > 30.
        UNION SELECT DISTINCT j FROM trans WHERE A > 30.;

    it took longer (presumably because it had to search the large trans table twice). I'm new to MySQL and I can't seem to find an equivalent problem and answer out there on the interwebs. Many thanks, Christian
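
    A single-statement sketch (my addition): a subquery keeps the whole thing to one query, though it still reads trans twice; the UNION also removes duplicates on its own, so DISTINCT is not needed.

        SELECT energy FROM states
        WHERE id IN (SELECT i FROM trans WHERE A > 30.
                     UNION
                     SELECT j FROM trans WHERE A > 30.);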


  • Serial: write() throttling?

    - by damian
    Hi everyone, I'm working on a project sending serial data to control animation of LED lights, which need to stay in sync with a sound engine. There seems to be a large serial write buffer (OSX (POSIX) + FTDI chipset USB serial device), so without manually restricting the transmission rate, the animation system can get several seconds ahead of the serial transmission.

    Currently I'm manually restricting the serial write speed to the baudrate (8N1 = 10 bytes serial frame per 8 bytes data, 19200 bps serial = 1920 bytes per second max), but I am having a problem with the sound drifting out of sync over time - it starts fine, but after 10 minutes there's a noticeable (100ms+) lag between the sound and the lights.

    This is the code that's restricting the serial write speed (called once per animation frame; 'elapsed' is the duration of the current frame, 'baudrate' is the bps (19200)):

        void BufferedSerial::update( float elapsed )
        {
            baud_timer += elapsed;
            if ( bytes_written > 1024 )
            {
                // maintain baudrate
                float time_should_have_taken = (float(bytes_written)*10)/float(baudrate);
                float time_actually_took = baud_timer;
                // sleep if we have > 20ms lag between serial transmit and our write calls
                if ( time_should_have_taken-time_actually_took > 0.02f )
                {
                    float sleep_time = time_should_have_taken - time_actually_took;
                    int sleep_time_us = sleep_time*1000.0f*1000.0f;
                    //printf("BufferedSerial::update sleeping %i ms\n", sleep_time_us/1000 );
                    delayUs( sleep_time_us );
                    // subtract 128 bytes
                    bytes_written -= 128;
                    // subtract the time it should have taken to write 128 bytes
                    baud_timer -= (float(128)*10)/float(baudrate);
                }
            }
        }

    Clearly there's something wrong somewhere. A much better approach would be to be able to determine the number of bytes currently in the transmit queue, and try to keep that below a fixed threshold. Any advice appreciated.
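
    On the "bytes currently in the transmit queue" idea, a POSIX sketch (my addition; TIOCOUTQ is available on both Linux and OS X, though with a USB FTDI adapter the chip and driver may buffer further downstream, so treat the count as approximate):

        #include <sys/ioctl.h>
        #include <termios.h>
        #include <unistd.h>

        /* fd is the open serial port descriptor */
        static void throttle(int fd)
        {
            int pending = 0;
            /* bytes written but not yet transmitted by the driver */
            if (ioctl(fd, TIOCOUTQ, &pending) == 0 && pending > 1024)
                tcdrain(fd);   /* block until the output queue drains */
        }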


  • Bizarre problem with WPF XAML file.

    - by paxdiablo
    I've just started a very simple WPF application which consists of a main large image and four smaller images. In order to assist with the layout, I created some JPEGs in MsPaint containing the images -2, -1, 0, +1 and +2, and just copied them into the top level of the project directory. The XAML segment contains, for the five images:

        <Image Grid.Column="1" Grid.Row="2" Grid.ColumnSpan="4" Grid.RowSpan="1"
               Margin="0,0,0,0" Name="imgPicture" Stretch="Fill" VerticalAlignment="Top"
               Source="file:///C:/DAndS/Pax/MyDocs/VS2008/Projects/MyProj/zero.jpg" />
        <Image Grid.Column="1" Grid.Row="4" Grid.ColumnSpan="1" Grid.RowSpan="1"
               Margin="0,0,0,0" Name="imgPicMinus2" Stretch="Fill" VerticalAlignment="Top"
               Source="file:///C:/DAndS/Pax/MyDocs/VS2008/Projects/MyProj/minus2.jpg" />
        <Image Grid.Column="2" Grid.Row="4" Grid.ColumnSpan="1" Grid.RowSpan="1"
               Margin="0,0,0,0" Name="imgPicMinus1" Stretch="Fill" VerticalAlignment="Top"
               Source="file:///C:/DAndS/Pax/MyDocs/VS2008/Projects/MyProj/minus1.jpg" />
        <Image Grid.Column="3" Grid.Row="4" Grid.ColumnSpan="1" Grid.RowSpan="1"
               Margin="0,0,0,0" Name="imgPicPlus1" Stretch="Fill" VerticalAlignment="Top"
               Source="file:///C:/DAndS/Pax/MyDocs/VS2008/Projects/MyProj/plus1.jpg" />
        <Image Grid.Column="4" Grid.Row="4" Grid.ColumnSpan="1" Grid.RowSpan="1"
               Margin="0,0,0,0" Name="imgPicPlus2" Stretch="Fill" VerticalAlignment="Top"
               Source="file:///C:/DAndS/Pax/MyDocs/VS2008/Projects/MyProj/plus2.jpg" />

    When I try to set the source property for the plus2 image, it complains with a dialog box stating:

        Property value is not valid.
        Details | V
        The file plus2.jpg is not part of the project or its 'Build Action' property is not set to 'Resource'.

    Yet if I rename the file to plus3.jpg or plus2x.jpg, I don't have that problem. Why is it complaining about plus2.jpg specifically?


  • Use a non-coalescing parser in Axis2

    - by Nathan
    Does anyone know how I can get Axis2 to use a non-coalescing XMLStreamReader when it parses a SOAP message? I am writing code that reads a large base64 binary text element. Coalescing is the default behaviour, and this causes the default XMLStreamReader to load the entire text into memory rather than returning multiple CHARACTERS events. The upshot of this is that I run out of heap space when running the following code:

        reader = element.getTextAsStream( true );

    The OutOfMemoryError occurs in com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next:

        java.lang.OutOfMemoryError: Java heap space
            at com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:208)
            at com.sun.org.apache.xerces.internal.util.XMLStringBuffer.append(XMLStringBuffer.java:226)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanContent(XMLDocumentFragmentScannerImpl.java:1552)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2864)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:607)
            at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:116)
            at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(XMLStreamReaderImpl.java:558)
            at org.apache.axiom.util.stax.wrapper.XMLStreamReaderWrapper.next(XMLStreamReaderWrapper.java:225)
            at org.apache.axiom.util.stax.dialect.DisallowDoctypeDeclStreamReaderWrapper.next(DisallowDoctypeDeclStreamReaderWrapper.java:34)
            at org.apache.axiom.util.stax.wrapper.XMLStreamReaderWrapper.next(XMLStreamReaderWrapper.java:225)
            at org.apache.axiom.util.stax.dialect.SJSXPStreamReaderWrapper.next(SJSXPStreamReaderWrapper.java:138)
            at org.apache.axiom.om.impl.builder.StAXOMBuilder.parserNext(StAXOMBuilder.java:668)
            at org.apache.axiom.om.impl.builder.StAXOMBuilder.next(StAXOMBuilder.java:214)
            at org.apache.axiom.om.impl.llom.SwitchingWrapper.updateNextNode(SwitchingWrapper.java:1098)
            at org.apache.axiom.om.impl.llom.SwitchingWrapper.<init>(SwitchingWrapper.java:198)
            at org.apache.axiom.om.impl.llom.OMStAXWrapper.<init>(OMStAXWrapper.java:73)
            at org.apache.axiom.om.impl.llom.OMContainerHelper.getXMLStreamReader(OMContainerHelper.java:67)
            at org.apache.axiom.om.impl.llom.OMContainerHelper.getXMLStreamReader(OMContainerHelper.java:40)
            at org.apache.axiom.om.impl.llom.OMElementImpl.getXMLStreamReader(OMElementImpl.java:790)
            at org.apache.axiom.om.impl.llom.OMElementImplUtil.getTextAsStream(OMElementImplUtil.java:114)
            at org.apache.axiom.om.impl.llom.OMElementImpl.getTextAsStream(OMElementImpl.java:826)
            at org.example.UploadFileParser.invokeBusinessLogic(UploadFileParser.java:160)
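
    For reference, the standard StAX switch looks like this (my sketch; the open question remains how to make Axis2/Axiom build its reader from a factory configured this way):

        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamReader;

        XMLInputFactory factory = XMLInputFactory.newInstance();
        // ask for raw CHARACTERS events instead of one giant merged text node
        factory.setProperty(XMLInputFactory.IS_COALESCING, Boolean.FALSE);
        XMLStreamReader reader = factory.createXMLStreamReader(inputStream);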


  • Is there a way to easily convert a series of tarballs of a source tree into a git repository?

    - by Hotei
    I'm new to git and I have a moderately large number of weekly tarballs from a long-running project. Each tarball has on average a few hundred files in it. I'm looking for a git strategy that will allow me to add the expanded contents of each tarball to a new git repository, starting from version 1.001 and going through version 1.650. As of this stage of the project, 99.5% of tarball(n) is just a copy of version(n-1) - in other words, a perfect candidate for git. The desired end result is to have only the master branch remaining at the end of the process.

    I think I know git well enough to do this "by hand". As I understand it, there is no possibility of a merge conflict, since there will be no opportunity to change the master before the next version is added and committed. A shell script is my first guess, but I'm not sure how well bash will like it when git checkout branch_n gets processed while bash is executing in branch_n-1. For the purposes of this project the host environment is Ubuntu 10.4; resources available are 8 GB RAM, 500 GB of free disk space, and a 4-CPU processor at 3 GHz.

    I don't need someone else to solve the problem, but I could use a nudge in the right direction as to how a git expert would approach it. Any advice from someone who's "been there, done that" would be appreciated.

    Hotei

    PS: I have looked at the site's suggested "related questions" and found nothing relevant.
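
    A sketch of the straight-line approach (my addition, with hypothetical paths and names; no branches are involved at all, so the branch_n checkout concern disappears):

        #!/bin/bash
        # replay each weekly tarball as one commit on master; git will
        # deduplicate the ~99.5% of files that are unchanged week to week
        set -e
        git init repo && cd repo
        for tb in /path/to/tarballs/project-*.tar.gz; do
            # clear the work tree (but not .git), then unpack this snapshot
            find . -mindepth 1 -maxdepth 1 ! -name .git -exec rm -rf {} +
            tar xzf "$tb"
            git add -A                    # stages adds, edits and deletions
            git commit -m "import $(basename "$tb")" || true  # skip no-change weeks
        done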


  • Question about Reporting and Data Warehousing Software bundled with SQL Server 2005

    - by anonymous user
    We currently use SQL Server 2005 Enterprise for our fairly large application, which has its roots in pre-SQL Server 7.0. The tables are normalized and designed mainly for the application. The developers, for the most part, have the legacy SQL Server mindset: only using the part of TSQL that existed back in 7.0, not using any of the new features of TSQL or those bundled with 2005.

    We're currently trying to build on-demand reports using some crappy third-party software, and will eventually try to build a data warehouse using more of the same crappy third-party software (name removed to protect the guilty; don't ask, I will not tell). The rationale for this was that we didn't want to spend more money to buy this additional software from Microsoft (this was not my decision, I had no input, but it is my problem now). But from what I can tell, Enterprise includes all of these tools, or am I missing something?

    What comes bundled with SQL Server 2005 Enterprise as far as reporting and data warehousing? Will we need to purchase anything else? Is there actually anything else that can be purchased from Microsoft in this regard?


  • Row selection based on subtable data in MySQL

    - by Felthragar
    I've been struggling with this selection for a while now. I've been searching around StackOverflow and tried a bunch of stuff, but nothing that helps with my particular issue. Maybe I'm just missing something obvious.

    I have two tables: Measurements and MeasurementFlags. "Measurements" contains measurements from a device, and each measurement can have properties/attributes attached to it (commonly known as "flags") to signify that the measurement in question is special in some way or another (for instance, one flag may signify a test or calibration measurement). Note: one record per flag!

    Right, so a record from the "Measurements" table can theoretically have an unlimited amount of MeasurementFlags attached to it, or it can have none. Now, I need to select records from "Measurements" that have an attached "MeasurementFlag" valued 'X', but which must also NOT have a flag valued 'Y' attached to them.

    We're talking about a fairly large database with hundreds of millions of rows, which is why I'm trying to keep all of this logic within one query. Splitting it up would create too many queries; however, if it's not possible to do in one query, I guess I don't have a choice. Thanks in advance.
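
    The usual shape of such a query is a join plus an anti-join; a sketch with guessed column names, since the post doesn't give the schema:

        SELECT m.*
        FROM Measurements m
        JOIN MeasurementFlags fx
          ON fx.measurement_id = m.id AND fx.value = 'X'
        LEFT JOIN MeasurementFlags fy
          ON fy.measurement_id = m.id AND fy.value = 'Y'
        WHERE fy.measurement_id IS NULL;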


  • NULL-keys for key/value table

    - by user72185
    (Using Oracle) I have a table with key/value pairs like this:

        create table MESSAGE_INDEX
        (
          KEY        VARCHAR2(256) not null,
          VALUE      VARCHAR2(4000) not null,
          MESSAGE_ID NUMBER not null
        )

    I now want to find all the messages where key = 'someKey' and value is 'val1', 'val2' or 'val3' - OR value is null, in which case there will be no entry in the table at all. This is to save space; there would be a large number of keys with null values if I stored them all. I think this works:

        SELECT message_id
          FROM message_index idx
         WHERE ((key = 'someKey' AND value IN ('val1', 'val2', 'val3'))
            OR NOT EXISTS (SELECT 1 FROM message_index
                            WHERE key = 'someKey' AND idx.message_id = message_id))

    But it is extremely slow. It takes 8 seconds with 700K records in message_index, and there will be many more records and more search criteria when moving outside of my test environment. The primary key is (key, value, message_id):

        add constraint PK_KEY_VALUE primary key (KEY, VALUE, MESSAGE_ID)

    And I added another index for message_id, to speed up searching for missing keys:

        create index IDX_MESSAGE_ID on MESSAGE_INDEX (MESSAGE_ID)

    I will be doing several of these key/value lookups in every search, not just one as shown above. So far I am doing them nested, where the output ids of one level are the input to the next. E.g.:

        SELECT message_id FROM message_index
         WHERE (key/value compare)
           AND message_id IN ( SELECT ... and so on )

    What can I do to speed this up?
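
    One rewrite worth trying (my sketch, untested against this schema): express "no 'someKey' row at all" with set operators instead of the correlated NOT EXISTS, so each branch can be satisfied from an index range scan.

        SELECT message_id FROM message_index
         WHERE key = 'someKey' AND value IN ('val1', 'val2', 'val3')
        UNION
        (SELECT message_id FROM message_index
         MINUS
         SELECT message_id FROM message_index WHERE key = 'someKey');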


  • parse content away from structure in a binary file

    - by Jeff Godfrey
    Using C#, I need to read a packed binary file created using FORTRAN. The file is stored in an "Unformatted Sequential" format as described here (about half-way down the page, in the "Unformatted Sequential Files" section): http://www.tacc.utexas.edu/services/userguides/intel8/fc/f_ug1/pggfmsp.htm

    As you can see from the URL, the file is organized into "chunks" of 130 bytes or less, and includes 2 length bytes (inserted by the FORTRAN compiler) surrounding each chunk. So I need to find an efficient way to parse the actual file payload away from the compiler-inserted formatting. Once I've extracted the actual payload from the file, I'll then need to parse it up into its varying data types. That'll be the next exercise.

    My first thought is to slurp up the entire file into a byte array using File.ReadAllBytes. Then, just iterate through the bytes, skipping the formatting and transferring the actual data to a second byte array. In the end, that second byte array should contain the actual file contents minus all the formatting, which I'd then need to go back through to get what I need.

    As I'm fairly new to C#, I thought there might be a better, more accepted way of tackling this. Also, in case it's helpful, these files can be fairly large (say 30 MB), though most will be much smaller...
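
    A streaming sketch of the iterate-and-skip idea (my addition; it assumes one length byte before and one after each chunk, per the description above - verify against a hex dump of a real file, since compilers differ on the width of the length field):

        using System.IO;

        static byte[] ExtractPayload(string path)
        {
            using (var br = new BinaryReader(File.OpenRead(path)))
            using (var payload = new MemoryStream())
            {
                while (br.BaseStream.Position < br.BaseStream.Length)
                {
                    int len = br.ReadByte();          // leading length byte
                    byte[] chunk = br.ReadBytes(len); // chunk payload
                    payload.Write(chunk, 0, chunk.Length);
                    br.ReadByte();                    // trailing length byte
                }
                return payload.ToArray();
            }
        }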


  • Possible to rank partial matches in Postgres full text search?

    - by Joe
    I'm trying to calculate a ts_rank for a full-text match where some of the terms in the query may not be in the ts_vector against which it is being matched. I would like the rank to be higher in a match where more words match. Seems pretty simple?

    Because not all of the terms have to match, I have to | the operands, giving a query such as to_tsquery('one|two|three') (if it were &, all would have to match). The problem is, the rank value seems to be the same no matter how many words match. In other words, it's maxing rather than multiplying the clauses.

        select ts_rank('one two three'::tsvector, to_tsquery('one'));

    gives 0.0607927.

        select ts_rank('one two three'::tsvector, to_tsquery('one|two|three|four'));

    gives the expected lower value of 0.0455945, because 'four' is not in the vector. But

        select ts_rank('one two three'::tsvector, to_tsquery('one|two'));

    gives 0.0607927, and likewise

        select ts_rank('one two three'::tsvector, to_tsquery('one|two|three'));

    gives 0.0607927. I would like the result of ts_rank to be higher if more terms match. Possible?

    To counter one possible response: I cannot calculate all possible subsequences of the search query as intersections and then union them all in a query, because I am going to be working with large queries. I'm sure there are plenty of arguments against this anyway!

    Edit: I'm aware of ts_rank_cd, but it does not solve the above problem.
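
    One workaround to consider (my sketch, not from the post): rank each term separately and sum, so every matching term adds to the score. This is linear in the number of terms, not the subsequence explosion ruled out above.

        SELECT ts_rank(v, to_tsquery('one'))
             + ts_rank(v, to_tsquery('two'))
             + ts_rank(v, to_tsquery('four')) AS score
        FROM (SELECT 'one two three'::tsvector AS v) t;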


  • JSP: How can I still get the code on my error page to run, even if I can't display it?

    - by Josh Hinman
    I've defined an error-page in my web.xml:

        <error-page>
            <exception-type>java.lang.Exception</exception-type>
            <location>/error.jsp</location>
        </error-page>

    On that error page, I have a custom tag that I created. The tag handler for this tag e-mails me the stack trace of whatever error occurred. For the most part this works great. Where it doesn't work great is if the output has already begun being sent to the client at the time the error occurs. In that case, we get this:

        SEVERE: Exception Processing ErrorPage[exceptionType=java.lang.Exception, location=/error.jsp]
        java.lang.IllegalStateException

    I believe this error happens because we can't redirect a request to the error page after output has already started. The work-around I've used is to increase the buffer size on particularly large JSP pages. But I'm trying to write a generic error handler that I can apply to existing applications, and I'm not sure it's feasible to go through hundreds of JSP pages making sure their buffers are big enough.

    Is there a way to still allow my stack trace e-mail code to execute in this case, even if I can't actually display the error page to the client?
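
    One route that decouples the e-mail from page rendering (my sketch; mailStackTrace() is a hypothetical stand-in for the existing tag handler's code): a servlet filter sees the exception before the container's error-page machinery, so it can report even when the response buffer has already been flushed.

        import java.io.IOException;
        import javax.servlet.*;

        public class ErrorMailFilter implements Filter {
            public void doFilter(ServletRequest req, ServletResponse res,
                                 FilterChain chain)
                    throws IOException, ServletException {
                try {
                    chain.doFilter(req, res);
                } catch (Exception e) {
                    mailStackTrace(e);             // report no matter what
                    throw new ServletException(e); // let the container proceed
                }
            }
            public void init(FilterConfig config) {}
            public void destroy() {}
            private void mailStackTrace(Exception e) { /* existing e-mail code */ }
        }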

