Search Results

Search found 45804 results on 1833 pages for 'large files'.


  • Looking for ANTLR grammar syntax highlighting in VS2010

    - by David Mårtensson
    I am looking for some way to edit ANTLR grammar files directly within VS2010 with syntax highlighting. I have used ANTLRWorks a lot, but it has the drawback that I have to start ANTLRWorks separately, browse to the file I want to edit, make the changes and save. For minor fixes I do not need all the tools in ANTLRWorks, but I would still like the syntax highlighting. However, I have not been able to get VS2010 to open ANTLRWorks with the right file, and I have found no other way to get syntax highlighting directly within the VS2010 editor; grammar files just open as plain text. I can get Visual Studio to open ANTLRWorks, but it opens with only the last set of files it had open, not the one I clicked on. So my questions are:

    1. Is there a way to get ANTLRWorks to open the right file when I double-click it in the Visual Studio project explorer?
    2. Is there any other way to get correct syntax highlighting for ANTLR grammar files within Visual Studio (or with another editor, preferably a free one; if there are no free ones, a commercial one might be an option)?


  • Statically Compiling Qt 4.6.2

    - by geeko
    This is what I did, but it results in errors:

    1. In win32-msvc2008\qmake.conf I set:

           QMAKE_CFLAGS_RELEASE = -O1 -Og -GL -MD

    2. From the MSVC2008 command prompt I ran vcvarsall.bat x86 and vcvars32.bat from "C:\Program Files\Microsoft Visual Studio 9.0\VC\bin".

    3. From the Qt 4.6.2 command prompt I ran the following, and then nmake:

           C:\Qt\4.6.2> configure -release -nomake examples -nomake demos -no-exceptions -no-stl -no-rtti -no-qt3support -no-scripttools -no-openssl -no-opengl -no-webkit -no-phonon -no-style-motif -no-style-cde -no-style-cleanlooks -no-style-plastique -no-sql-sqlite -platform win32-msvc2008 -static -qt-libjpeg -qt-zlib -qt-libpng

    However, I ended up every time with these errors:

        link /LIBPATH:"c:\Qt\4.6.2\lib" /LIBPATH:"c:\Qt\4.6.2\lib" /NOLOGO /INCREMENTAL:NO /MANIFEST /MANIFESTFILE:"tmp\obj\release_static\assistant_adp.intermediate.manifest" /SUBSYSTEM:WINDOWS "/MANIFESTDEPENDENCY:type='win32' name='Microsoft.Windows.Common-Controls' version='6.0.0.0' publicKeyToken='6595b64144ccf1df' language='' processorArchitecture=''" /OUT:..\..\bin\assistant_adp.exe @C:\DOCUME~1\Geeko\LOCALS~1\Temp\nm3F8.tmp
        fontpanel.obj : MSIL .netmodule or module compiled with /GL found; restarting link with /LTCG; add /LTCG to the link command line to improve linker performance
        main.obj : error LNK2001: unresolved external symbol "class QObject * __cdecl qt_plugin_instance_qjpeg(void)" (?qt_plugin_instance_qjpeg@@YAPAVQObject@@XZ)
        ..\..\bin\assistant_adp.exe : fatal error LNK1120: 1 unresolved externals
        NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\link.EXE"' : return code '0x460' Stop.
        NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\nmake.exe"' : return code '0x2' Stop.
        NMAKE : fatal error U1077: 'cd' : return code '0x2' Stop.
        NMAKE : fatal error U1077: 'cd' : return code '0x2' Stop.
        NMAKE : fatal error U1077: 'cd' : return code '0x2' Stop.

    Thank you indeed.


  • Linux PDF/Postscript Optimizing

    - by Sheldon Ross
    So I have a report system built using Java and iText. PDF templates are created using Scribus, and the Java code merges the data into the document using iText. The files are then copied over to an NFS share, and a bash script prints them. I use acroread to convert them to PS, then lpr the PS; the FOSS application pdftops is horribly inefficient. My main problem is that the PDFs generated using iText/Scribus are very large, and I've recently run into the problem where acroread pukes because it hits 4 GB of memory usage on large (300+ page) documents. (Adobe is painfully slow at updating its software to 64-bit.) On Windows I can use Adobe Reader's Create Print PDF option, or whatever it's called, and it greatly (about 10x) reduces the size of the PDF (it appears to remove a lot of metadata about form fields and such) and produces a PDF that is basically a print image. My question is: does anyone know of a good solution/program for doing something similar on Linux? Ideally it would optimize the PDF, reduce its size, and reduce PS complexity so the printer could print faster, as it takes about 15-20 seconds per page to print right now.
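
    For illustration, a minimal sketch of this kind of re-distilling, assuming Ghostscript is available (the /printer quality preset and the file names are placeholders to adjust):

        # Re-distill a large PDF; pdfwrite re-compresses images and drops unused objects
        gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/printer \
           -dNOPAUSE -dBATCH -dQUIET \
           -sOutputFile=report-optimized.pdf report-large.pdf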


  • Plugin from GitHub not installing successfully

    - by JohnMerlino
    Hey all, I tried to install the highcharts-rails plugin from GitHub as specified in the instructions:

        Installation
        Get the plugin:     script/plugin install git://github.com/loudpixel/highcharts-rails.git
        Run the rake setup: rake highcharts_rails:install

    But when I run the script/plugin install... it installs only a couple of files and not all the required files, I presume, because when I run rake highcharts_rails:install I get the following:

        rake aborted!
        Don't know how to build task 'highcharts_rails:install'

    All it installed for me was:

        jquery.js
        jrails.js
        jquery-ui.js

    I noticed on the site http://github.com/loudpixel/highcharts-rails it has all this:

        file       MIT-LICENSE    February 08, 2010   Initial commit [abbottry]
        file       README.md      February 09, 2010   Added installation section to README [jsiarto]
        file       Rakefile       February 08, 2010   Initial commit [abbottry]
        directory  generators/    February 08, 2010   Initial commit [abbottry]
        file       init.rb        February 08, 2010   Initial commit [abbottry]
        directory  javascripts/   February 08, 2010   Added jquery 1.3.2 script [abbottry]
        directory  lib/           February 08, 2010   Initial commit [abbottry]
        directory  tasks/         February 08, 2010   Incorrect path to plugin for rake task [abbottry]
        directory  test/          February 08, 2010   Initial commit [abbottry]
        file       uninstall.rb   February 08, 2010   Initial commit [abbottry]

    So I'm not sure what I'm doing wrong that keeps these files from being installed properly. Thanks for any response.


  • iPhone - dequeueReusableCellWithIdentifier usage

    - by Jukurrpa
    Hi, I'm working on an iPhone app which has a pretty large UITableView with data taken from the web, so I'm trying to optimize its creation and usage. I found out that dequeueReusableCellWithIdentifier is pretty useful, but after seeing many source codes using this, I'm wondering if my usage of this function is the right one. Here is what people usually do:

        UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"Cell"];
        if (cell == nil) {
            cell = [[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:@"Cell"];
            // Add elements to the cell
        }
        return cell;

    And here is the way I did it:

        // Give each row its own identifier
        NSString *identifier = [NSString stringWithFormat:@"Cell %d", indexPath.row];
        UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:identifier];
        if (cell != nil)
            return cell;
        cell = [[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:identifier];
        // Add elements to the cell
        return cell;

    The difference is that people use the same identifier for every cell, so dequeuing one only avoids allocating a new one. For me, the point of queuing was to give each cell a unique identifier, so when the app asks for a cell it has already displayed, neither allocation nor element adding has to be done. In the end I don't know which is best: the "common" method caps the table's memory usage at the exact number of cells it displays, whilst the method I use seems to favor speed, as it keeps all created cells, but can cause large memory consumption (unless there's an inner limit to the queue). Am I wrong to use it this way? Or is it just up to the developer, depending on his needs?


  • Service Oriented Architecture & Domain-Driven Design

    - by Michael
    I've always developed code in an SOA type of way. This year I've been trying to do more DDD, but I keep getting the feeling that I'm not getting it. At work our systems are load balanced and designed not to have state. The architecture is:

        Website == Physical Layer == Main Service == Physical Layer == Server 1 / Service 2 / Service 3 / Service 4

    Only Server 1, Service 2, Service 3 and Service 4 can talk to the database, and the Main Service calls the correct service based on the products ordered. Every physical layer is load balanced too. Now when I develop a new service, I try to think DDD in that service, even though it doesn't really feel like it fits. I use good DDD principles like entities, value types, repositories, aggregates, factories and so on. I've even tried using ORMs, but they just don't seem to fit in a stateless architecture. I know there are ways around it, for example using IStatelessSession instead of ISession with NHibernate; however, ORMs still feel like they don't belong in a stateless architecture. I've noticed I really only use some of the concepts and patterns DDD has taught me, but the overall architecture is still SOA. I am starting to think DDD doesn't fit in large systems, though I do think some of its patterns and concepts do. Like I said, maybe I'm just not grasping DDD, or maybe I'm over-analyzing my designs? Maybe by using the patterns and concepts DDD has taught me I am using DDD? I'm not sure there is really a question in this post; it's more the thoughts I've had while trying to figure out where DDD fits in overall systems and how scalable it truly is. The truth is, I don't think I really even know what DDD is.


  • YSlow Grade F on Add Expires headers - help please

    - by gwmbox
    I am using Joomla for my site and I have included Expires headers in my .htaccess file. However, when checking the site via YSlow, the grade is still F. The code in the .htaccess file for this is:

        <IfModule mod_expires.c>
        # Enable expiration control
        ExpiresActive On

        # Default expiration: immediately after request
        ExpiresDefault "now"

        # CSS and JS expiration: 1 week after request
        ExpiresByType text/css "now plus 1 week"
        ExpiresByType application/javascript "now plus 1 week"
        ExpiresByType application/x-javascript "now plus 1 week"

        # Image files expiration: 1 month after request
        ExpiresByType image/bmp "now plus 1 month"
        ExpiresByType image/gif "now plus 1 month"
        ExpiresByType image/jpeg "now plus 1 month"
        ExpiresByType image/jp2 "now plus 1 month"
        ExpiresByType image/pipeg "now plus 1 month"
        ExpiresByType image/png "now plus 1 month"
        ExpiresByType image/svg+xml "now plus 1 month"
        ExpiresByType image/tiff "now plus 1 month"
        ExpiresByType image/vnd.microsoft.icon "now plus 1 month"
        ExpiresByType image/x-icon "now plus 1 month"
        ExpiresByType image/ico "now plus 1 month"
        ExpiresByType image/icon "now plus 1 month"
        ExpiresByType text/ico "now plus 1 month"
        ExpiresByType application/ico "now plus 1 month"
        ExpiresByType image/vnd.wap.wbmp "now plus 1 month"
        ExpiresByType application/vnd.wap.wbxml "now plus 1 month"
        ExpiresByType application/smil "now plus 1 month"

        # Audio files expiration: 1 month after request
        ExpiresByType audio/basic "now plus 1 month"
        ExpiresByType audio/mid "now plus 1 month"
        ExpiresByType audio/midi "now plus 1 month"
        ExpiresByType audio/mpeg "now plus 1 month"
        ExpiresByType audio/x-aiff "now plus 1 month"
        ExpiresByType audio/x-mpegurl "now plus 1 month"
        ExpiresByType audio/x-pn-realaudio "now plus 1 month"
        ExpiresByType audio/x-wav "now plus 1 month"

        # Movie files expiration: 1 month after request
        ExpiresByType application/x-shockwave-flash "now plus 1 month"
        ExpiresByType x-world/x-vrml "now plus 1 month"
        ExpiresByType video/x-msvideo "now plus 1 month"
        ExpiresByType video/mpeg "now plus 1 month"
        ExpiresByType video/mp4 "now plus 1 month"
        ExpiresByType video/quicktime "now plus 1 month"
        ExpiresByType video/x-la-asf "now plus 1 month"
        ExpiresByType video/x-ms-asf "now plus 1 month"

        # Webfonts
        ExpiresByType font/truetype "access plus 1 month"
        ExpiresByType font/opentype "access plus 1 month"
        ExpiresByType application/x-font-woff "access plus 1 month"
        ExpiresByType image/svg+xml "access plus 1 month"
        ExpiresByType application/vnd.ms-fontobject "access plus 1 month"
        </IfModule>

    Can someone please tell me why this is still failing the YSlow check? Thanks, GW


  • Give back full control to a user on a disk from another computer

    - by Foghorn
    I have my friend's hard drive mounted externally. After messing with the permissions with TAKEOWN so I could fix some viruses, I have full control over their drive. The problem is, now it's stuck in an "autochk not found" reboot sequence. I think the problem is that the boot sector is invisible to the drive now. So my question is: how can I use icacls to give back full ownership when the user I am giving it to is not on my machine? I ran the TAKEOWN command from my Windows 7 laptop; their machine is a Windows XP Professional box with three partitions, and I only altered the one that has the boot sector. Here are the permissions that icacls shows (where my computer is %System%, my username is ME, and the drive is E:\):

        C:\Users\ME> icacls E:\*
        E:\$RECYCLE.BIN %System%\ME:(OI)(CI)(F)
                        Mandatory Label\Low Mandatory Level:(OI)(CI)(IO)(NW)
        E:\ALLDATAW %System%\ME:(I)(OI)(CI)(F)
        E:\alrt_200.data %System%\ME:(OI)(CI)(F)
        E:\AUTOEXEC.BAT %System%\ME:(OI)(CI)(F)
        E:\AZ Commercial %System%\ME:(I)(OI)(CI)(F)
        E:\boot.ini %System%\ME:(OI)(CI)(F)
        E:\Config.Msi %System%\ME:(I)(OI)(CI)(F)
        E:\CONFIG.SYS %System%\ME:(OI)(CI)(F)
        E:\Documents and Settings %System%\ME:(I)(OI)(CI)(F)
        E:\IO.SYS %System%\ME:(OI)(CI)(F)
        E:\Mitchell1 %System%\ME:(I)(OI)(CI)(F)
        E:\MSDOS.SYS %System%\ME:(OI)(CI)(F)
        E:\MSOCache %System%\ME:(I)(OI)(CI)(F)
        E:\NTDClient.log %System%\ME:(OI)(CI)(F)
        E:\NTDETECT.COM %System%\ME:(OI)(CI)(F)
        E:\ntldr %System%\ME:(OI)(CI)(F)
        E:\pagefile.sys %System%\ME:(OI)(CI)(F)
        E:\Program Files %System%\ME:(I)(OI)(CI)(F)
        E:\RECYCLER %System%\ME:(I)(OI)(CI)(F)
        E:\RHDSetup.log %System%\ME:(OI)(CI)(F)
        E:\System Volume Information %System%\ME:(I)(OI)(CI)(F)
        E:\WINDOWS %System%\ME:(I)(OI)(CI)(F)
        Successfully processed 22 files; Failed processing 0 files
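
    For illustration, icacls can name accounts that do not exist on the local machine by raw SID using the * prefix, so the XP account's SID could be used directly. A sketch with a hypothetical SID; the real one would have to come from the XP installation:

        :: Placeholder SID; substitute the actual SID of the XP user
        icacls E:\* /setowner *S-1-5-21-XXXXXXXXXX-1000 /T /C
        icacls E:\* /grant *S-1-5-21-XXXXXXXXXX-1000:(OI)(CI)F /T /C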


  • Obtaining information about executable code from exe/pdb

    - by Miro Kropacek
    Hello, I need to extract code (but not data!) from classic Win32 exe/dll files. Clearly I can't do this just by extracting the code segment's content (because the code segment also contains data, jump tables for example), so I need some help from the compiler. *.map files are nice, but they only contain the addresses of functions, i.e. the safest thing I can do is start at such an address and process until the first return/jump instruction (because part of the function could be the mentioned data). *.pdb files are better, but I'm not sure which tools to use to extract information like this. I took a look at DbgHelp and the DIA SDK; the latter seems to be the right tool, but it doesn't look very simple. So my questions:

    1. To your knowledge, is it possible to extract information about code/data positions (address + length) via DbgHelp alone?
    2. If the DIA SDK is the only way, any idea what I should call to get information like that? (That COM stuff is pretty heavy.)
    3. Is there any other way?

    Of course my concern is Visual Studio and C/C++ source compilation in the first place. Thanks for any hint.


  • Windows Azure DownloadtoStream EOF error

    - by david
    Hello, I've been using Windows Azure to create a document management system, and things have gone well so far. I've been able to upload and download files to BLOB storage through an ASP.NET front end. What I'm attempting to do now is allow users to upload a .zip file, then take the files out of that .zip and save them as individual files. Problem is, I'm getting "ZipException was unhandled: EOF in header" and I don't know why. I'm using the ICSharpCode.SharpZipLib library, which I've used for many other tasks, and it has worked great. Here's the basic code:

        CloudBlob ZipFile = container.GetBlobReference(blobURI);
        MemoryStream MemStream = new MemoryStream();
        ZipFile.DownloadToStream(MemStream);
        ....
        while ((theEntry = zipInput.GetNextEntry()) != null)

    and it's on the line that starts with while that I get the error. I added a sleep duration of 10 seconds just to make sure enough time had gone by. MemStream has a length if I debug it; zipInput sometimes does and sometimes doesn't, but it always fails. Thanks for any help. Dave


  • JSF Render response programmatically

    - by Shamik
    I have one parent page with a parentManagedBean (in session scope). On click of a button on this parent page, a popup comes up which has a childManagedBean (in request scope). ChildManagedBean holds a reference to parentManagedBean through JSF's managed-property facility. On this popup window, the user selects some option, which populates a large value object. I use the managed property of childManagedBean to set the values from this large object onto parentManagedBean. The problem is this: the parent page shows a link, on click of which the popup comes up; on selection in the popup, the popup disappears and sets the values onto parentManagedBean. So far so good, but the newly set values need to appear on the parent page, and this is where I am stuck. How do I programmatically re-render the parent page when I am in the child managed bean? Is there a way I can get a handle on the parent page and refresh it? Note: I'm using JSF 1.1. EDIT: After following the suggested solution of re-submitting the form from JavaScript, I am seeing that the old form is getting resubmitted, which overwrites all of my changed values.


  • How to generate comments in hbm2java created POJO?

    - by jschoen
    My current Hibernate setup uses the hibernate.reveng.xml file to generate the various hbm.xml files, which are then turned into POJOs using hbm2java. While designing our schema we spent some time placing pretty decent descriptions on the tables and their columns. I am able to pull these descriptions into the hbm.xml files when generating them using hbm2hbmxml, so I get something similar to this:

        <class name="test.Person" table="PERSONS">
          <comment>The comment about the PERSONS table.</comment>
          <property name="firstName" type="string">
            <column name="FIRST_NAME" length="100" not-null="true">
              <comment>The first name of this person.</comment>
            </column>
          </property>
          <property name="middleInitial" type="string">
            <column name="MIDDLE_INITIAL" length="1">
              <comment>The middle initial of this person.</comment>
            </column>
          </property>
          <property name="lastName" type="string">
            <column name="LAST_NAME" length="100">
              <comment>The last name of this person.</comment>
            </column>
          </property>
        </class>

    So how do I tell hbm2java to pull these comments and place them in the generated Java files? I have read about editing the FreeMarker templates to change the way code is generated. I understand the concept, but the documentation is not very detailed about what else you can do with it beyond the example of pre- and post-conditions.
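
    For illustration, the goal would be for hbm2java to turn those <comment> elements into Javadoc on the generated POJO, along these lines (a sketch of the desired output, not actual hbm2java output; names come from the mapping above):

        /**
         * The comment about the PERSONS table.
         */
        public class Person {

            /** The first name of this person. */
            private String firstName;

            /** The middle initial of this person. */
            private String middleInitial;

            /** The last name of this person. */
            private String lastName;

            // getters and setters omitted
        }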


  • Visual Studio not detecting that exe is out of date after Perforce revert

    - by CHaskell2
    This is a bit of an odd situation. Here's what's happening: we have a VS2008 project which outputs a number of files under Perforce control. These files have the always-writable flag set. I compile the project in VS, which gives me up-to-date binaries on my machine. If I then revert those binaries via Perforce, I have the version of the binaries that was up on Perforce (i.e., old ones). Despite this, compiling the project again at this point detects no changes and will not remake those binaries. In a way this makes sense, since none of the code or obj files have changed, but it's not really what I want to happen. This comes up in an edge case on our automated build server. I can think of tons of little hacks I could do to fix this, but I'm thinking I could be missing something fundamental here. The actual build process uses the Unreal build tool, so there is a bit of magic going on behind the scenes that I'm not entirely familiar with, too. Edit: This is a C/C++ project; I forgot to mention that.


  • Understanding how rpmbuild works

    - by ereOn
    Hi, for my work I have to create documentation on "How to create an RPM package on Red Hat 5". I'm used to Debian and its derivatives (Ubuntu, and so on) and thus to Debian packages (aka .deb files). It seems that the RPM logic is quite different from what I already know, and I am having some issues understanding it. From what I read, it seems that one needs to be root to create an RPM package. While I understand why root could be required to install a package, I still don't understand why elevated privileges should be needed just to create one. If I try to create an RPM package as a user, changing the buildroot, it fails on the %install step because I don't have permission to write files into /usr/bin. Fair enough, but... why does it want to copy my files into /usr/bin at this step?! I just want to create the package, not install it! I'm sure I'm missing something here. Is there anyone who could give me at least a basic understanding of how rpmbuild works, and why? Thank you very much!
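
    For illustration, a minimal sketch of the usual non-root setup, assuming a stock rpmbuild: point %_topdir at a directory you own via ~/.rpmmacros, and have the spec's %install section copy into the build root (%{buildroot}/usr/bin) rather than the real /usr/bin. The spec file name below is a placeholder:

        # ~/.rpmmacros: keep the whole build under your home directory
        %_topdir %(echo $HOME)/rpmbuild

        # Create the expected layout and build as an ordinary user
        mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
        rpmbuild -ba ~/rpmbuild/SPECS/mypackage.spec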


  • BITS client fails to specify HTTP Range header

    - by user256890
    Our system is designed to deploy to regions with unreliable and/or insufficient network connections, so we built our own fault-tolerant data replication services on top of BITS. Due to some security and maintenance requirements, we implemented our own ASP.NET file download service on the server side instead of just letting IIS serve up the files. When the BITS client makes an HTTP download request with a specified range of the file, our ASP.NET page pulls the demanded file segment into memory and serves that up as the HTTP response. That is the theory. ;) This theory fails in artificial lab scenarios, and I would not let the system deploy in real-life scenarios unless we can overcome that. Lab scenario: I have the BITS client and IIS on the same developer machine, so practically I have enormous network "bandwidth", and BITS is intelligent enough to detect that. As the BITS client discovers the unlimited bandwidth, it gets more and more "greedy". With each HTTP request, BITS wants to grasp a greater and greater file range, demanding 20-40 MB inside a single HTTP request, a size that I am not comfortable pulling into memory on the server side in one go. I can overcome that simply by giving less than demanded; that is OK. However, BITS then gets really "confident" and "arrogant", demanding files WITHOUT specifying the download range, i.e. it wants the entire file in a single request, and this is where things go wrong. I do not know how to answer that request in the case of a 600 MB file. If I just provide the starting 1 MB range of the file, the BITS client keeps sending HTTP requests for the same file without a download range, hammering its point that it wants the entire file in one go. Since I am reluctant to provide the entire file, BITS gives up after several trials and reports an error. Any thoughts?


  • Replacing a Namespace with XSLT

    - by er4z0r
    Hi, I want to work around a 'bug' in certain RSS feeds which use an incorrect namespace for the Media RSS module. I tried to do it by manipulating the DOM programmatically, but using XSLT seems more flexible to me. Example:

        <media:thumbnail xmlns:media="http://search.yahoo.com/mrss" url="http://www.suedkurier.de/storage/pic/dpa/infoline/brennpunkte/4311018_0_merkelxI_24280028_original.large-4-3-800-199-0-3131-2202.jpg" />
        <media:thumbnail url="http://www.suedkurier.de/storage/pic/dpa/infoline/brennpunkte/4311018_0_merkelxI_24280028_original.large-4-3-800-199-0-3131-2202.jpg" />

    where the namespace must be http://search.yahoo.com/mrss/ (mind the slash). This is my stylesheet:

        <?xml version="1.0" encoding="UTF-8"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="//*[namespace-uri()='http://search.yahoo.com/mrss']">
            <xsl:element name="{local-name()}" namespace="http://search.yahoo.com/mrss/">
              <xsl:apply-templates select="@*|*|text()" />
            </xsl:element>
          </xsl:template>
        </xsl:stylesheet>

    Unfortunately the result of the transformation is invalid XML, and my RSS parser (the ROME library) does not parse the feed anymore:

        java.lang.IllegalStateException: Root element not set
            at org.jdom.Document.getRootElement(Document.java:218)
            at com.sun.syndication.io.impl.RSS090Parser.isMyType(RSS090Parser.java:58)
            at com.sun.syndication.io.impl.FeedParsers.getParserFor(FeedParsers.java:72)
            at com.sun.syndication.io.WireFeedInput.build(WireFeedInput.java:273)
            at com.sun.syndication.io.WireFeedInput.build(WireFeedInput.java:251)
            ... 8 more

    What is wrong with my stylesheet?


  • HTML layout with <div> works in HTML editor but not in browser

    - by DomingoSL
    Hello, I made this layout:

        <div id="todo" align="center">
          <form method="post">
            <div id="cabeza" style="width:850px;height:100px">
            </div>
            <div id="contenido" style="width:420px;height:220px;background-image: url(IMG/cuadrologin.png); margin-top: 1px">
              <div id="usuario" style="width:348px; height:35px; margin-top: 58px">
                <input name="username" type="text" style="width: 250px; height: 30px;background-color: transparent;border: 0px solid #000000;font-size:x-large;color: #222; font-family: 'Trebuchet MS', 'Lucida Sans Unicode', 'Lucida Grande', 'Lucida Sans', Arial, sans-serif; font-weight: bold;" size="299" />
              </div>
              <div id="clave" style="width:348px; height:35px; margin-top: 22px">
                <input name="clave" type="text" style="width: 250px; height: 30px;background-color: transparent;border: 0px solid #000000;font-size:x-large;color: #222; font-family: 'Trebuchet MS', 'Lucida Sans Unicode', 'Lucida Grande', 'Lucida Sans', Arial, sans-serif; font-weight: bold;" size="299" />
              </div>
            </div>
          </form>
        </div>

    In my HTML editor it looks just fine, but when I view it in the browser (Chrome & Firefox) it looks wrong. I'm very new to layout with div tags; any idea what I'm doing wrong?


  • How to convert an InputStream to a DataHandler?

    - by pcorey
    I'm working on a Java web application in which files will be stored in a database. Originally we retrieved files already in the DB by simply calling getBytes on our result set:

        byte[] bytes = resultSet.getBytes(1);
        ...

    This byte array was then converted into a DataHandler using the obvious constructor:

        dataHandler = new DataHandler(bytes, "application/octet-stream");

    This worked great until we started trying to store and retrieve larger files. Dumping the entire file contents into a byte array and then building a DataHandler out of it simply requires too much memory. My immediate idea is to retrieve a stream of the data from the database with getBinaryStream and somehow convert that InputStream into a DataHandler in a memory-efficient way. Unfortunately, there doesn't seem to be a direct way to convert an InputStream into a DataHandler. Another idea I've been playing with is reading chunks of data from the InputStream and writing them to the OutputStream of the DataHandler. But... I can't find a way to create an "empty" DataHandler that returns a non-null OutputStream when I call getOutputStream. Has anyone done this? I'd appreciate any help you can give me, or leads in the right direction.
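
    One possibility, sketched here under stated assumptions: DataHandler has a constructor that takes a javax.activation.DataSource, so a thin wrapper around the InputStream avoids materializing the byte array. Caveat, part of the assumption: a well-behaved DataSource should return a fresh stream on every getInputStream() call, which this one-shot wrapper cannot do.

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import javax.activation.DataHandler;
        import javax.activation.DataSource;

        // Minimal one-shot DataSource wrapping an InputStream for streaming reads.
        public class InputStreamDataSource implements DataSource {
            private final InputStream stream;
            private final String contentType;

            public InputStreamDataSource(InputStream stream, String contentType) {
                this.stream = stream;
                this.contentType = contentType;
            }

            public InputStream getInputStream() throws IOException {
                return stream; // valid for a single read only
            }

            public OutputStream getOutputStream() throws IOException {
                throw new IOException("read-only DataSource");
            }

            public String getContentType() {
                return contentType;
            }

            public String getName() {
                return "InputStreamDataSource";
            }
        }

        // Hypothetical usage with the JDBC stream mentioned above:
        // DataHandler dataHandler = new DataHandler(
        //         new InputStreamDataSource(resultSet.getBinaryStream(1),
        //                                   "application/octet-stream"));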


  • Difference in performance between Stax and DOM parsing

    - by Fazal
    I have been using DOM for a long time, and DOM parsing performance has been pretty good; even when dealing with XML of about 4-7 MB, parsing has been fast. The issue we face with DOM is the memory footprint, which becomes huge as soon as we start dealing with large XMLs. Lately I tried moving to StAX (streaming parsers for XML), which is supposed to be a second-generation parser (reading about StAX, it is said to be the fastest parser now). When I tried the StAX parser on a large XML of about 4 MB, the memory footprint definitely dropped drastically, but the time taken to parse the entire XML and create Java objects out of it increased almost 5 times over DOM. I used the sjsxp.jar implementation of StAX. I can deduce to some extent, logically, that performance may not be extremely good due to the streaming nature of the parser, but a 5x slowdown (e.g., DOM takes about 8 seconds to build the objects for this XML, whereas StAX parsing took about 40 seconds on average) is definitely not going to be acceptable. Am I missing some point here completely, as I am not able to come to terms with these performance numbers?
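
    For reference, a minimal sketch of cursor-style StAX parsing (the element name is hypothetical). One common cause of slow StAX runs, worth ruling out, is creating a new XMLInputFactory for every parse instead of reusing one, as done here:

        import java.io.FileInputStream;
        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamConstants;
        import javax.xml.stream.XMLStreamReader;

        public class StaxExample {
            // Factory creation is expensive; create once and reuse across parses.
            private static final XMLInputFactory FACTORY = XMLInputFactory.newInstance();

            public static void main(String[] args) throws Exception {
                XMLStreamReader reader =
                        FACTORY.createXMLStreamReader(new FileInputStream(args[0]));
                try {
                    while (reader.hasNext()) {
                        if (reader.next() == XMLStreamConstants.START_ELEMENT
                                && "record".equals(reader.getLocalName())) { // hypothetical element
                            // build the corresponding Java object here
                        }
                    }
                } finally {
                    reader.close();
                }
            }
        }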


  • What Is The Proper Location For One-Offs In VCS Repos?

    - by Joe Clark
    I have recently started using Mercurial as our VCS. Over the years I have used RCS, CVS, and, for the last 5 years, SVN. Back 13 years ago, when I primarily used CVS and RCS, large projects went into CVS and one-offs were edited in place on the specific server and stored in RCS. This worked well, as the one-offs were usually specific to the server, and the servers were backed up nightly. Jump forward a decade, and a lot of the one-off scripts became less centralized: they might be needed on any server at some random time. This was also OK, because by then I was a begrudging SVN user and everything (except for docs) got dumped into one repo. Jump to 2010. Now I am using Mercurial and am putting large projects in their own repos again. But what to do with the one-offs? The options as I see them:

    1. A repo for each script. It seems a bit cluttered to create a repo for every one-page script that might get run once a year.
    2. RCS. Not an option: there are many possible servers that might need a specific script.
    3. Continuing to use SVN just for one-offs. No; there is no advantage I can see over the next option.
    4. Create a repo in Mercurial named "one-offs". This seems the most workable.

    The last option seems the best to me; however, is there a best practice regarding this? You might also be wondering whether these scripts are truly one-offs if they will be reused. Some of them may be reused 6 months or a year from now; some, never. However, nearly all of them involve several man-hours of work, due to either complex logic or extensive error checking. Simply discarding them is not efficient.


  • Need advice on OOP philosophy

    - by David Jenings
    I'm trying to get the wheels turning on a large project in C#. My previous experience is in Delphi, where by default every form was created at application startup and form references were held in (gasp) global variables. So I'm trying to adapt my thinking to a 100% object-oriented environment, and my head is spinning just a little. My app will have a large collection of classes, and most of these classes will only really need one instance. So I was thinking: static classes. I'm not really sure why, but much of what I've read here says that if my class is going to hold state, which I take to mean any property values at all, I should use a singleton structure instead. Okay. But there are people out there who, for reasons that escape me, think that singletons are evil too. None of these classes is in danger of being used anywhere except in this program, so they could certainly work fine as regular objects (vs. singletons or static classes). Then there's the issue of interaction between objects. I'm tempted to create a Global class full of public static properties referencing the single instances of many of these classes. I've also considered just making them properties (static or instance, not sure which) of the MainForm. Then I'd have each of my classes be aware of the MainForm as Owner, and the various objects could refer to each other as Owner.Object1, Owner.Object2, etc. I fear I'm running out of electronic ink, or at least taxing the patience of anyone kind enough to have stuck with me this long. I hope I have clearly explained my state of utter confusion. I'm just looking for some advice on best practices in my situation. All input is welcome and appreciated. Thanks in advance, David Jennings


  • Simple Select Statement on MySQL Database Hanging

    - by AlishahNovin
    I have a very simple SQL select statement on a very large table that is non-normalized. (Not my design at all; I'm just trying to optimize while simultaneously trying to convince the owners of a redesign.) Basically, the statement is like this:

        SELECT FirstName, LastName, FullName, State
        FROM Activity
        WHERE (FirstName=@name OR LastName=@name OR FullName=@name) AND State=@state;

    FirstName, LastName, FullName and State are all indexed as B-trees, but without prefixes; the whole column is indexed. The State column is a 2-letter state code. What I'm finding is this:

    - When @name = 'John Smith' and @state = '%', the search is really fast and yields results immediately.
    - When @name = 'John Smith' and @state = 'FL', the search takes 5 minutes (and usually this means the web service times out...).
    - When I remove the FirstName and LastName comparisons and only use FullName and State, both cases above work very quickly.
    - When I keep the FirstName, LastName, FullName, and State comparisons but use LIKE for each one, it is fast for @name='John Smith%' and @state='%', but slow for @name='John Smith%' and @state='FL'.
    - When I search against 'John Sm%' and @state='FL', the search finds results immediately.
    - When I search against 'John Smi%' and @state='FL', the search takes 5 minutes.

    Now, just to reiterate: the table is not normalized. John Smith appears many, many times, as do many other users, because there is no reference to any form of users/people table. I'm not sure how many times a single user may appear, but the table itself has 90 million records. Again, not my design... What I'm wondering is this: though there are many problems with this design, what is causing this specific problem? My guess is that the index trees (FirstName, LastName, FullName) are just so large that traversing them takes a very long time. Anyway, I appreciate anyone's help with this. Like I said, I'm working on convincing them of a redesign, but in the meantime, if someone could help me figure out what the exact problem is, that'd be fantastic.
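
    One concrete thing to test, sketched here as an assumption rather than a confirmed fix: a composite index per OR branch that includes State, so each branch can be satisfied by a single index instead of intersecting a huge name index with the State filter (standard MySQL syntax; the index names are made up):

        -- Hypothetical composite indexes pairing each name column with State
        ALTER TABLE Activity
          ADD INDEX idx_first_state (FirstName, State),
          ADD INDEX idx_last_state  (LastName,  State),
          ADD INDEX idx_full_state  (FullName,  State);

        -- EXPLAIN shows whether the OR now resolves via an index_merge union
        EXPLAIN SELECT FirstName, LastName, FullName, State
        FROM Activity
        WHERE (FirstName='John Smith' OR LastName='John Smith' OR FullName='John Smith')
          AND State='FL';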


  • Adapting existing HTML/Javascript model to Titanium's latest release (v 0.9)

    - by Alan Neal
    In pre-0.9 versions of Titanium, one could simply specify an .html file (local or remote) in the tiapp.xml file and interact with it in the same manner as one would on a website. As of version 0.9, that is no longer the case: one creates the entire app dynamically. Unfortunately, this broke my previous implementation, and, other than an updated Kitchen Sink, much of the new model and API calls are not covered in the documentation (e.g., createLabel). So my question is this: what are the simplest steps for re-creating the previous effect (knowingly forgoing some of the advantages of Titanium's latest approach if necessary)? My previous implementation functioned exactly as it does on the website: the website has a single index.html file with no content other than links to JavaScript and style files. The document body's onload event called the first JavaScript function (located in the main script), and from that point forth the entire content was dynamically created. How can I set up the latest version of Titanium so that I am poised to do the exact same thing? BTW: whereas I previously had the choice to keep the files local or remote, I don't believe that remote access (e.g., simply using the webView widget to point to the website) is viable. That's because pages displayed via the webView do not have access to most of the API. Since the iPhone and Safari browsers do not support the file input type, the only means for uploading files (something my app requires) is calling Titanium's function. Thanks in advance.


  • Force full garbage collection when memory occupation goes beyond a certain threshold

    - by Silvio Donnini
    I have a server application that, on rare occasions, can allocate large chunks of memory. It's not a memory leak, as these chunks can be claimed back by the garbage collector by executing a full garbage collection. Normal garbage collection frees amounts of memory that are too small; it is not adequate in this context. The garbage collector executes these full GCs when it deems appropriate, namely when the memory footprint of the application nears the allotted maximum specified with -Xmx. That would be OK, if it weren't for the fact that these problematic memory allocations come in bursts and can cause OutOfMemoryErrors, because the JVM is not able to perform a GC quickly enough to free the required memory. If I manually call System.gc() beforehand, I can prevent this situation. Anyway, I'd prefer not having to monitor my JVM's memory allocation myself (or insert memory management into my application's logic); it would be nice if there were a way to run the virtual machine with a memory threshold, above which full GCs would be executed automatically, in order to release early the memory I'm going to need. Long story short: I need a way (a command-line option?) to configure the JVM to release a good amount of memory early (i.e. perform a full GC) when memory occupation reaches a certain threshold. I don't care if this slows my application down every once in a while. All I've found till now are ways to modify the sizes of the generations, but that's not what I need (at least not directly). I'd appreciate your suggestions. Silvio. P.S. I'm working on a way to avoid the large allocations, but it could take a long time, and meanwhile my app needs a little stability.
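
    The closest stock HotSpot knob appears to be the concurrent collector's initiating occupancy threshold, which starts collection cycles once old-generation occupancy passes a given percentage instead of waiting until the heap is nearly full. A sketch, assuming the CMS collector is acceptable (the heap size and jar name are placeholders):

        # Kick off (concurrent) old-generation collections at 60% occupancy
        java -Xmx2g \
             -XX:+UseConcMarkSweepGC \
             -XX:CMSInitiatingOccupancyFraction=60 \
             -XX:+UseCMSInitiatingOccupancyOnly \
             -jar server.jar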


  • PHP gettext function only returns original untranslated string

    - by Camsoft
    I'm trying to use gettext to add localisation support to my website. I've followed various guides on how to set up gettext and have done the following. I created these files and directories in the root of my project dir:

        test.php
        locale/
            de_DE/
                LC_MESSAGES/
                    messages.mo
                    messages.po
            en_GB/
                LC_MESSAGES/
                    messages.mo
                    messages.po

    I used Poedit to create the above .po and .mo files, making sure it uses Unix line endings and UTF-8, and set the language and country accordingly. I then created a PHP script called test.php with the following code:

        <?php
        define('LOCALE', 'de_DE');

        // Set up environmental variables
        putenv("LC_ALL=" . LOCALE);
        setlocale(LC_ALL, LOCALE);
        bindtextdomain("messages", "./locale");
        bind_textdomain_codeset("messages", LOCALE . ".utf8");
        textdomain("messages");

        die(gettext('This is a test.'));
        ?>

    I imported the text "This is a test." into Poedit, supplied the translation and saved it. But for some reason the test.php script will only output the original text, untranslated; it refuses to load the translated version from the message files. It's worth noting that the server is running Linux (Ubuntu), Apache 2.2.11 and PHP 5.2.6-3ubuntu4.5. I've checked phpinfo() and gettext is enabled. Can someone help me? Thanks.

