Search Results

Search found 11409 results on 457 pages for 'large teams'.


  • Remove duplicates from a list of nested dictionaries

    - by user2924306
    I'm writing my first Python program to manage users in Atlassian On Demand using their RESTful API. I call the users/search?username= API to retrieve lists of users, which returns JSON. The result is a list of complex dictionary types that look something like this:

        [
            {
                "self": "http://www.example.com/jira/rest/api/2/user?username=fred",
                "name": "fred",
                "avatarUrls": {
                    "24x24": "http://www.example.com/jira/secure/useravatar?size=small&ownerId=fred",
                    "16x16": "http://www.example.com/jira/secure/useravatar?size=xsmall&ownerId=fred",
                    "32x32": "http://www.example.com/jira/secure/useravatar?size=medium&ownerId=fred",
                    "48x48": "http://www.example.com/jira/secure/useravatar?size=large&ownerId=fred"
                },
                "displayName": "Fred F. User",
                "active": false
            },
            {
                "self": "http://www.example.com/jira/rest/api/2/user?username=andrew",
                "name": "andrew",
                "avatarUrls": {
                    "24x24": "http://www.example.com/jira/secure/useravatar?size=small&ownerId=andrew",
                    "16x16": "http://www.example.com/jira/secure/useravatar?size=xsmall&ownerId=andrew",
                    "32x32": "http://www.example.com/jira/secure/useravatar?size=medium&ownerId=andrew",
                    "48x48": "http://www.example.com/jira/secure/useravatar?size=large&ownerId=andrew"
                },
                "displayName": "Andrew Anderson",
                "active": false
            }
        ]

    I'm calling this multiple times and thus getting duplicate people in my results. I have been searching and reading but cannot figure out how to deduplicate this list. I figured out how to sort the list using a lambda function, and I realize I could sort it, then iterate and delete duplicates, but I'm thinking there must be a more elegant solution. Thank you!
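
    A minimal sketch of one approach, assuming the "name" field uniquely identifies a user (that assumption is mine, not from the API docs): track seen names in a set and keep the first occurrence of each, which deduplicates in one pass without sorting.

        # Hypothetical helper: dedupe by the "name" key, keeping first occurrence.
        def dedupe_users(users):
            seen = set()
            unique = []
            for user in users:
                if user["name"] not in seen:  # "name" assumed unique per user
                    seen.add(user["name"])
                    unique.append(user)
            return unique

        all_users = []
        # after each API call: all_users.extend(page_of_users)
        all_users = dedupe_users(all_users)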

    Read the article

  • Getting started with character and text processing (encoding, regular expressions)

    - by TK
    I'd like to learn the foundations of encodings, characters and text. Understanding these is important for dealing with large sets of text, whether they are log files or text sources for building collective-intelligence algorithms. My current knowledge is pretty basic: something like "As long as I use UTF-8, I'm okay." I don't say I need to learn about advanced topics right away. But I need to know:

      - Bit- and byte-level knowledge of encodings.
      - Characters and alphabets not used in English.
      - Multi-byte encodings. (I understand some Chinese and Japanese, and parsing them is important.)
      - Regular expressions.
      - Algorithms for text processing.
      - Parsing natural languages.

    I also need an understanding of mathematics and corpus linguistics. The current and future web (semantic, intelligent, real-time web) needs processing, parsing and analyzing of large amounts of text. I'm looking for some resources (maybe books?) that get me started on some of these bullets. (I find many helpful discussions on regular expressions here on Stack Overflow, so you don't need to suggest resources on that topic.)
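
    On the bit-and-byte level, a tiny Python 3 illustration of a multi-byte encoding may help (the byte values shown are the actual UTF-8 encoding of these characters):

        >>> "日本語".encode("utf-8")    # 3 characters take 9 bytes in UTF-8
        b'\xe6\x97\xa5\xe6\x9c\xac\xe8\xaa\x9e'
        >>> b'\xe6\x97\xa5\xe6\x9c\xac\xe8\xaa\x9e'.decode("utf-8")
        '日本語'
        >>> "abc".encode("utf-8")      # ASCII characters stay 1 byte each in UTF-8
        b'abc'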

    Read the article

  • One Model to Rule Them All - VS2010 UML, ADO.NET Entity Data Model, and T4

    - by Eric J.
    I worked on a fairly large project a while back where we modeled the classes in Enterprise Architect and generated the (partial) POCO classes (complete with model-driven business rule validations), the persistence layer (NHibernate mapping file) and the DDL. Based on certain model attributes we could flag alternate generation strategies or indicate that a particular portion would be entirely hand-coded. There was a good deal of initial investment, but it paid large dividends over the lifetime of a 15-developer, 3-year project. I'm investigating doing something similar with the current Microsoft technology stack. The place I'm stuck is that class modeling is done with the VS 2010 UML tools, but logical data modeling is done with the Entity Data Modeler. Is it a reasonable path to use VS 2010 UML as the "single source of truth" and code-generate the .edmx files based on the class model? That's the inverse of the common path, which is to create the entity model and use a POCO generator to generate classes. However, a good class model can be used to generate much more than just the properties, so I tend to view it as a better choice than the entity model.

    Read the article

  • iPhone - dequeueReusableCellWithIdentifier usage

    - by Jukurrpa
    Hi, I'm working on an iPhone app which has a pretty large UITableView with data taken from the web, so I'm trying to optimize its creation and usage. I found out that dequeueReusableCellWithIdentifier is pretty useful, but after seeing many source codes using it, I'm wondering if my usage of this function is the right one. Here is what people usually do:

        UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"Cell"];
        if (cell == nil) {
            cell = [[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:@"Cell"];
            // Add elements to the cell
        }
        return cell;

    And here is the way I did it:

        NSString *identifier = [NSString stringWithFormat:@"Cell %d", indexPath.row]; // the cell row
        UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:identifier];
        if (cell != nil)
            return cell;
        cell = [[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:identifier];
        // Add elements to the cell
        return cell;

    The difference is that people use the same identifier for every cell, so dequeuing one only avoids allocating a new one. For me, the point of queuing was to give each cell a unique identifier, so when the app asks for a cell it has already displayed, neither allocation nor element adding has to be done. In the end I don't know which is best: the "common" method caps the table's memory usage at the exact number of cells it displays, whilst the method I use seems to favor speed, as it keeps all calculated cells, but can cause large memory consumption (unless there's an inner limit to the queue). Am I wrong to use it this way? Or is it just up to the developer, depending on his needs?

    Read the article

  • Linux PDF/Postscript Optimizing

    - by Sheldon Ross
    So I have a report system built using Java and iText. PDF templates are created using Scribus. The Java code merges the data into the document using iText. The files are then copied over to an NFS share, and a BASH script prints them. I use acroread to convert them to PS, then lpr the PS. The FOSS application pdftops is horribly inefficient. My main problem is that the PDFs generated using iText/Scribus are very large, and I've recently run into the problem where acroread pukes because it hits 4 GB of memory usage on large (300+ page) documents. (Adobe is painfully slow at updating stuff to 64-bit.) Now I can use Adobe Reader on Windows and use the Create Print PDF option (or whatever it's called), which greatly (~10x) reduces the size of the PDF (it appears to remove a lot of metadata about form fields and such) and produces a PDF that is basically a print image. My question is: does anyone know of a good solution/program for doing something similar on Linux? Ideally, it would optimize the PDF, reduce its size, and reduce PS complexity so the printer could print faster, as it takes about 15-20 seconds a page to print right now.
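
    One avenue worth checking (my suggestion, not from the post): Ghostscript can re-distill a PDF into a flattened, print-oriented one. A sketch of the invocation; the file names are illustrative, the flags are real Ghostscript options:

        # Rewrite the PDF; the /printer preset targets 300 dpi print output
        gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/printer \
           -dNOPAUSE -dBATCH -dQUIET -sOutputFile=report-optimized.pdf report.pdf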

    Read the article

  • JSF Render response programmatically

    - by Shamik
    I have one parent page with a parentManagedBean (attached to session scope). On click of a button on this parent page, a popup comes up which has a childManagedBean (attached to request scope). ChildManagedBean holds a reference to parentManagedBean through JSF's managed property facility. On this popup window, the user selects some option which populates a large value object. I use the managed property of childManagedBean to set the values from this large object onto parentManagedBean. The problem is: the parent page shows a link, on click of which the popup comes up; on selection in the popup, the popup disappears and sets the values on parentManagedBean. So far so good, but the newly set values need to appear on the parent page. This is where I am stuck. How do I programmatically re-render the parent page when I am in the child managed bean? Is there a way I can get a handle on the parent page and refresh it? Note: I'm using JSF 1.1. EDIT: After following the suggested solution of resubmitting the form from JavaScript, I am seeing that the old form is getting resubmitted, which overwrites all of my changed values.

    Read the article

  • Service Oriented Architecture & Domain-Driven Design

    - by Michael
    I've always developed code in an SOA type of way. This year I've been trying to do more DDD, but I keep getting the feeling that I'm not getting it. At work our systems are load balanced and designed not to have state. The architecture is:

        Website ==Physical Layer== Main Service ==Physical Layer== Service 1 / Service 2 / Service 3 / Service 4

    Only Service 1, Service 2, Service 3 and Service 4 can talk to the database, and the Main Service calls the correct service based on the products ordered. Every physical layer is load balanced too. Now when I develop a new service, I try to think DDD in that service, even though it doesn't really feel like it fits. I use good DDD principles like entities, value types, repositories, aggregates, factories and so on. I've even tried using ORMs, but they just don't seem like they fit in a stateless architecture. I know there are ways around it, for example using IStatelessSession instead of ISession with NHibernate; however, ORMs just feel like they don't fit in a stateless architecture. I've noticed I really only use some of the concepts and patterns DDD has taught me, but the overall architecture is still SOA. I am starting to think DDD doesn't fit in large systems, though I do think some of its patterns and concepts do. Like I said, maybe I'm just not grasping DDD, or maybe I'm over-analyzing my designs? Maybe by using the patterns and concepts DDD has taught me I am using DDD? Not sure if there is really a question in this post; it's more the thoughts I've had while trying to figure out where DDD fits in overall systems and how scalable it truly is. The truth is, I don't think I really even know what DDD is.

    Read the article

  • HTML layout with <div> works in HTML editor but not in browser

    - by DomingoSL
    Hello, I made this layout:

        <div id="todo" align="center" >
          <form method="post">
            <div id="cabeza" style="width:850px;height:100px">
            </div>
            <div id="contenido" style="width:420px;height:220px;background-image: url(IMG/cuadrologin.png); margin-top: 1px" >
              <div id="usuario" style="width:348px; height:35px; margin-top: 58px">
                <input name="username" type="text" style="width: 250px; height: 30px;background-color: transparent;border: 0px solid #000000;font-size:x-large;color: #222; font-family: 'Trebuchet MS', 'Lucida Sans Unicode', 'Lucida Grande', 'Lucida Sans', Arial, sans-serif; font-weight: bold;" size="299" />
              </div>
              <div id="clave" style="width:348px; height:35px; margin-top: 22px">
                <input name="clave" type="text" style="width: 250px; height: 30px;background-color: transparent;border: 0px solid #000000;font-size:x-large;color: #222; font-family: 'Trebuchet MS', 'Lucida Sans Unicode', 'Lucida Grande', 'Lucida Sans', Arial, sans-serif; font-weight: bold;" size="299" />
              </div>
            </div>
          </form>
        </div>

    In my HTML editor it looks just fine, but when I view it in the browser (Chrome & Firefox) it looks wrong. I'm very new to layout with tags; any idea what I'm doing wrong?

    Read the article

  • Need advice on OOP philosophy

    - by David Jenings
    I'm trying to get the wheels turning on a large project in C#. My previous experience is in Delphi, where by default every form was created at application startup and form references were held in (gasp) global variables. So I'm trying to adapt my thinking to a 100% object-oriented environment, and my head is spinning just a little. My app will have a large collection of classes. Most of these classes will only really need one instance, so I was thinking: static classes. I'm not really sure why, but much of what I've read here says that if my class is going to hold state, which I take to mean any property values at all, I should use a singleton structure instead. Okay. But there are people out there who, for reasons that escape me, think that singletons are evil too. None of these classes is in danger of being used anywhere except in this program, so they could certainly work fine as regular objects (vs. singletons or static classes). Then there's the issue of interaction between objects. I'm tempted to create a Global class full of public static properties referencing the single instances of many of these classes. I've also considered just making them properties (static or instance, not sure which) of the MainForm. Then I'd have each of my classes be aware of the MainForm as Owner, and the various objects could refer to each other as Owner.Object1, Owner.Object2, etc. I fear I'm running out of electronic ink, or at least taxing the patience of anyone kind enough to have stuck with me this long. I hope I have clearly explained my state of utter confusion. I'm just looking for some advice on best practices in my situation. All input is welcome and appreciated. Thanks in advance, David Jennings

    Read the article

  • Replacing a Namespace with XSLT

    - by er4z0r
    Hi, I want to work around a 'bug' in certain RSS feeds which use an incorrect namespace for the mediaRSS module. I tried to do it by manipulating the DOM programmatically, but using XSLT seems more flexible to me. Example:

        <media:thumbnail xmlns:media="http://search.yahoo.com/mrss" url="http://www.suedkurier.de/storage/pic/dpa/infoline/brennpunkte/4311018_0_merkelxI_24280028_original.large-4-3-800-199-0-3131-2202.jpg" />
        <media:thumbnail url="http://www.suedkurier.de/storage/pic/dpa/infoline/brennpunkte/4311018_0_merkelxI_24280028_original.large-4-3-800-199-0-3131-2202.jpg" />

    where the namespace must be http://search.yahoo.com/mrss/ (mind the slash). This is my stylesheet:

        <?xml version="1.0" encoding="UTF-8"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
            <xsl:template match="//*[namespace-uri()='http://search.yahoo.com/mrss']">
                <xsl:element name="{local-name()}" namespace="http://search.yahoo.com/mrss/" >
                    <xsl:apply-templates select="@*|*|text()" />
                </xsl:element>
            </xsl:template>
        </xsl:stylesheet>

    Unfortunately the result of the transformation is invalid XML, and my RSS parser (the ROME library) does not parse the feed anymore:

        java.lang.IllegalStateException: Root element not set
            at org.jdom.Document.getRootElement(Document.java:218)
            at com.sun.syndication.io.impl.RSS090Parser.isMyType(RSS090Parser.java:58)
            at com.sun.syndication.io.impl.FeedParsers.getParserFor(FeedParsers.java:72)
            at com.sun.syndication.io.WireFeedInput.build(WireFeedInput.java:273)
            at com.sun.syndication.io.WireFeedInput.build(WireFeedInput.java:251)
            ... 8 more

    What is wrong with my stylesheet?

    Read the article

  • Difference in performance between Stax and DOM parsing

    - by Fazal
    I have been using DOM for a long time, and as such DOM parsing performance has been pretty good. Even when dealing with XML of about 4-7 MB the parsing has been fast. The issue we face with DOM is the memory footprint, which becomes huge as soon as we start dealing with large XMLs. Lately I tried moving to StAX (streaming parsers for XML), which is supposed to be a second-generation parser (reading about StAX, it is said to be the fastest parser now). When I tried the StAX parser on a large XML of about 4 MB, the memory footprint definitely reduced drastically, but the time taken to parse the entire XML and create Java objects out of it increased almost 5x over DOM. I used the sjsxp.jar implementation of StAX. I can deduce to some extent logically that performance may not be extremely good due to the streaming nature of the parser, but a slowdown of 5x (e.g. DOM takes about 8 seconds to build the object for this XML, whereas StAX parsing took about 40 seconds on average) is definitely not going to be acceptable. Am I missing some point here completely, as I am not able to come to terms with these performance numbers?

    Read the article

  • What Is The Proper Location For One-Offs In VCS Repos?

    - by Joe Clark
    I have recently started using Mercurial as our VCS. Over the years, I have used RCS, CVS, and - for the last 5 years - SVN. Back 13 years ago, when I primarily used CVS and RCS, large projects went into CVS and one-offs were edited in place on the specific server and stored in RCS. This worked well, as the one-offs were usually specific to the server, and the servers were backed up nightly. Jump forward a decade and a lot of the one-off scripts became less centralized - they might be needed on any server at some random time. This was also OK, because by then I was a begrudging SVN user: everything (except for docs) got dumped into one repo. Jump to 2010. Now I am using Mercurial and am putting large projects in their own repos again. But what to do with the one-offs? The options as I see them:

      - A repo for each script. It seems a bit cluttered to create a repo for every one-page script that might get run once a year.
      - RCS. Not an option: there are many possible servers that might need a specific script.
      - Continuing to use SVN just for one-offs. No: there is no advantage I see over the next option.
      - Create a repo in Mercurial named "one-offs". This seems the most workable.

    The last option seems the best to me - however, is there a best practice regarding this? You also might be wondering whether these scripts are truly one-offs if they will be reused. Some of them may be reused 6 months or a year from now - some, never. However, nearly all of them involve several man-hours of work, due to either complex logic or extensive error checking. Simply discarding them is not efficient.

    Read the article

  • Why does my Delphi program's memory continue to grow?

    - by lkessler
    I am using Delphi 2009, which has the FastMM4 memory manager built into it. My program reads in and processes a large dataset. All memory is freed correctly whenever I clear the dataset or exit the program; it has no memory leaks at all. Using the CurrentMemoryUsage routine given in spenwarr's answer to http://stackoverflow.com/questions/437683/how-to-get-the-memory-used-by-a-delphi-program, I have displayed the memory used by FastMM4 during processing. What seems to be happening is that memory use is growing after every process-and-release cycle, e.g.:

      - 1,456 KB used after starting my program with no dataset.
      - 218,455 KB used after loading a large dataset.
      - 71,994 KB after clearing the dataset completely. If I exit at this point (or at any point in this example), no memory leaks are reported.
      - 271,905 KB used after loading the same dataset again.
      - 125,443 KB after clearing the dataset completely.
      - 325,519 KB used after loading the same dataset again.
      - 179,059 KB after clearing the dataset completely.
      - 378,752 KB used after loading the same dataset again.

    It seems that my program's memory use is growing by about 53,400 KB upon each load/clear cycle. Task Manager confirms that this is actually happening. I have heard that FastMM4 does not always release all of the program's memory back to the operating system when objects are freed, so that it can keep some memory around for when it needs more. But this continual growth bothers me. Since no memory leaks are reported, I can't identify a problem. Does anyone know why this is happening, whether it is bad, and whether there is anything I can or should do about it?

    Read the article

  • Force full garbage collection when memory occupation goes beyond a certain threshold

    - by Silvio Donnini
    I have a server application that, on rare occasions, can allocate large chunks of memory. It's not a memory leak, as these chunks can be claimed back by the garbage collector by executing a full garbage collection. Normal garbage collection frees amounts of memory that are too small: it is not adequate in this context. The garbage collector executes these full GCs when it deems appropriate, namely when the memory footprint of the application nears the allotted maximum specified with -Xmx. That would be OK, if it wasn't for the fact that these problematic memory allocations come in bursts and can cause OutOfMemoryErrors, because the JVM is not able to perform a GC quickly enough to free the required memory. If I manually call System.gc() beforehand, I can prevent this situation. Anyway, I'd prefer not having to monitor my JVM's memory allocation myself (or insert memory management into my application's logic); it would be nice if there was a way to run the virtual machine with a memory threshold, over which full GCs would be executed automatically, in order to release the memory I'm going to need very early. Long story short: I need a way (a command-line option?) to configure the JVM to release a good amount of memory early (i.e. perform a full GC) when memory occupation reaches a certain threshold; I don't care if this slows my application down every once in a while. All I've found till now are ways to modify the sizes of the generations, but that's not what I need (at least not directly). I'd appreciate your suggestions.

    Silvio

    P.S. I'm working on a way to avoid the large allocations, but it could take a long time, and meanwhile my app needs a little stability.
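
    One direction worth checking (my suggestion, not from the post): the HotSpot concurrent (CMS) collector has flags for exactly this kind of occupancy threshold. A sketch; the 60% value and the class name are illustrative, the flags themselves are real:

        # With the CMS collector, begin a concurrent old-generation collection
        # once occupancy exceeds 60%. UseCMSInitiatingOccupancyOnly disables the
        # adaptive heuristic, so the configured fraction is always honored.
        java -Xmx1024m \
             -XX:+UseConcMarkSweepGC \
             -XX:CMSInitiatingOccupancyFraction=60 \
             -XX:+UseCMSInitiatingOccupancyOnly \
             MyServerApp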

    Read the article

  • Simple Select Statement on MySQL Database Hanging

    - by AlishahNovin
    I have a very simple SQL select statement on a very large table that is non-normalized. (Not my design at all; I'm just trying to optimize while simultaneously trying to convince the owners of a redesign.) Basically, the statement is like this:

        SELECT FirstName, LastName, FullName, State
        FROM Activity
        WHERE (FirstName=@name OR LastName=@name OR FullName=@name) AND State=@state;

    Now, FirstName, LastName, FullName and State are all indexed as BTrees, but without prefix - the whole column is indexed. The State column is a 2-letter state code. What I'm finding is this:

      - When @name = 'John Smith' and @state = '%', the search is really fast and yields results immediately.
      - When @name = 'John Smith' and @state = 'FL', the search takes 5 minutes (and usually this means the web service times out...).
      - When I remove the FirstName and LastName comparisons and only use FullName and State, both cases above work very quickly.
      - When I keep the FirstName, LastName, FullName and State searches but use LIKE for each one, it works fast for @name='John Smith%' and @state='%', but slow for @name='John Smith%' and @state='FL'.
      - When I search against 'John Sm%' and @state='FL', the search finds results immediately.
      - When I search against 'John Smi%' and @state='FL', the search takes 5 minutes.

    Now, just to reiterate: the table is not normalized. John Smith appears many, many times, as do many other users, because there is no reference to some form of users/people table. I'm not sure how many times a single user may appear, but the table itself has 90 million records. Again, not my design... What I'm wondering is: though there are many problems with this design, what is causing this specific problem? My guess is that the index trees (FirstName, LastName, FullName) are just so large that traversing them takes a very long time. Anyway, I appreciate anyone's help with this. Like I said, I'm working on convincing them of a redesign, but in the meantime, if someone could help me figure out what the exact problem is, that'd be fantastic.
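
    One thing that may be worth trying (my suggestion, not from the post, and the index name is illustrative): a composite index covering both predicates, so the State filter does not have to be applied row by row after the name lookup. The OR across three name columns complicates matters, but the FullName branch at least could then be answered from a single index traversal:

        -- Covers the FullName=@name AND State=@state combination in one index.
        CREATE INDEX idx_fullname_state ON Activity (FullName, State);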

    Read the article

  • Data in two databases, eager spool resulting in query

    - by Valkyrie
    I have two databases in SQL2k5: one that holds a large amount of static data (SQL Database 1) (never updated but frequently inserted into) and one that holds relational data (SQL Database 2) related to the static data. They're separated mainly because of corporate guidelines and business requirements: assume for the following problem that combining them is not practical. There are places in SQLDB2 where PKs in SQLDB1 are referenced; triggers control the referential integrity, since cross-database relationships are troublesome in SQL Server. BUT, because of the large amount of data in SQLDB1, I'm getting eager spools on queries that join from the Id in SQLDB2 that references the data in SQLDB1. (With me so far? Maybe an example will help:)

        SELECT t.Id, t.Name, t2.Company
        FROM SQLDB1.table t
        INNER JOIN SQLDB2.table t2 ON t.Id = t2.FKId

    This query results in an eager spool that's 84% of the load of the query; the table in SQLDB1 has 35M rows, so it's completely choking this query. I can't create a view on the table in SQLDB1 and use that as my FK/index; it doesn't want me to create a constraint based on a view. Does anyone have any idea how I can fix this huge bottleneck? (Short of putting the static data in the first db: believe me, I've argued that one until I'm blue in the face, to no avail.) Thanks! valkyrie

    Edit: I also can't create an indexed view, because you can't put schemabinding on a view that references a table outside the database where the view resides. Dang it.

    Read the article

  • Scaling Literate Programming?

    - by Tetha
    Greetings. I have been looking at literate programming a bit now, and I do like the idea behind it: you basically write a little paper about your code and write down as many of the design decisions as possible: the code probably surrounding the module, the inner workings of the module, assumptions and conclusions resulting from the design decisions, potential extensions - all this can be written down in a nice way using TeX. Granted, the first point: it is documentation. It must be kept up to date, but that should not be that bad, because a change should have a justification, and you can write that down. However, how does literate programming scale to a larger degree? Overall, literate programming is still just text. Very human-readable text, of course, but still text, and thus it is hard to follow large systems. For example, I reworked large parts of my compiler to use some magic to chain compile steps together, because chains of "x.register_follower(y); y.register_follower(z); y.register_follower(a); ..." got really unwieldy, and changing that made it a bit better, even though it is at its breaking point too. So, how does literate programming scale to larger systems? Does anyone try to do that? My thought would be to use LP to specify components that communicate with each other using event streams and to chain all of these together using a subset of graphviz. This would be a fairly natural extension to LP, as you can extract documentation - a dataflow diagram - from the net and also generate code from it really well. What do you think of it? -- Tetha.

    Read the article

  • Caching a column in a polymorphic relationship

    - by Brendon Muir
    I have a content management system application that uses a polymorphic tree table as the core of its arrangement. I've run into a problem where, once the tree grows quite large, and because we have quite a few different modules (about 25), just doing :include => :instance doesn't cut the mustard. Instance is the name of our polymorphic relationship. The funny part is that in most cases, when I want a large list of these items, all I really want is their name from the associated table (for the purposes of an index bar, for example); all the rest is in the central table. So I thought that I should probably implement some sort of column cache for the name in the central table (like the counter cache that Rails already provides). I was just wondering if a plugin already exists to manage this? If not, I was just going to add a 'name' column to the central table and, because all the polymorphic models inherit off a superclass, just add a callback that pushes the name across to the central table whenever an item is created or updated. I'd then just do a big migration to populate it in the first place. Any flaws in that design? I suppose to be more flexible the column could be some kind of serialised cache where I could store other things later on if need be? Gah! :D

    Read the article

  • Big-O of PHP functions?

    - by Kendall Hopkins
    After using PHP for a while now, I've noticed that not all PHP built-in functions are as fast as expected. Consider the two possible implementations below of a function that finds whether a number is prime, using a cached array of primes.

        // very slow for large $prime_array
        $prime_array = array( 2, 3, 5, 7, 11, 13, /* ... */ 104729 /* ... */ );
        $result_array = array();
        foreach ( $array_of_number as $number ) {
            $result_array[$number] = in_array( $number, $prime_array );
        }

        // still decent performance for large $prime_array
        $prime_array = array( 2 => NULL, 3 => NULL, 5 => NULL, 7 => NULL,
                              11 => NULL, 13 => NULL, /* ... */ 104729 => NULL /* ... */ );
        foreach ( $array_of_number as $number ) {
            $result_array[$number] = array_key_exists( $number, $prime_array );
        }

    This is because in_array is implemented with a linear search, O(n), which slows down linearly as $prime_array grows, whereas array_key_exists is implemented with a hash lookup, O(1), which does not slow down unless the hash table gets extremely populated (in which case it's only O(log n)). So far I've had to discover the big-Os via trial and error, and occasionally by looking at the source code. Now for the question... is there a list of the theoretical (or practical) big-O times for all* the PHP built-in functions? (*Or at least the interesting ones.) For example, I find it very hard to predict the big-O of the following functions, because the possible implementations depend on unknown core data structures of PHP: array_merge, array_merge_recursive, array_reverse, array_intersect, array_combine, str_replace (with array inputs), etc.

    Read the article

  • Can pdflatex (or any TeX package) automatically rescale included images which have been reduced in size?

    - by drfrogsplat
    I'm writing my thesis in LaTeX, generating it with pdflatex. I have a large number of figures, many of which are bitmaps (as opposed to SVG) in PNG/JPEG format. I've generally created them to be fairly high resolution (say 1600x1200-ish) to ensure that whatever size they end up at in the document, they'll be at least 300 dpi when printed. As I'm writing/laying out the document, I'm including graphics (using \includegraphics from the graphicx package) and setting widths/heights as appropriate (e.g. subfigures are quite small). I don't need the images to be any more than about 300 dpi at best, so where I have shrunk a 1600x1200 image down to, say, 5 cm, the image is now at 800 dpi. So despite including some very small (on the page) images, the PDF is becoming quite large. Is there a way to tell pdflatex or graphicx (or something else involved?) to convert all images to a maximum of 300 dpi, based on the dimensions I'm setting with, say, \includegraphics[width=2in]{filename}? I.e. so it scales the image to a max of 600x600 pixels as it includes it in the PDF (leaving the original file untouched). I know I can resize the original images with various command-line applications and include the pre-resized versions, but given that the images vary in size considerably, it wouldn't be as simple as making sure they're all 300 dpi for a constant printed size. It'd also be nice to be able to easily create different versions of the PDF (web vs. final print) without resizing images manually, so that the 'web' PDF caps images at say 72-100 dpi while the final print one could cap at 600 (if at all).
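
    A post-processing sketch, not something pdflatex does by itself: Ghostscript can downsample the images embedded in the finished PDF, which would make the web-vs-print variants a matter of one flag. The resolution value and file names are illustrative; the flags are real Ghostscript/Distiller parameters:

        # 'Web' variant: downsample color images to roughly 100 dpi
        gs -sDEVICE=pdfwrite -dNOPAUSE -dBATCH -dQUIET \
           -dDownsampleColorImages=true -dColorImageResolution=100 \
           -sOutputFile=thesis-web.pdf thesis.pdf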

    Read the article

  • Bloated PDF created by TCPDF

    - by Yogi Yang 007
    In a web app developed in PHP, we are generating quotations and invoices (which are very simple, single-page documents) using the TCPDF lib. The lib is working just great, but it seems to generate very large PDF files; for example, in our case it is generating PDF files as large as 4 MB (+/- a few KB). How can I reduce this bloating of the PDF files generated by TCPDF? Here is the code snippet I am using:

        ob_start();
        include('quote_view_bag_pdf.php'); // This file is a valid HTML file with PHP code to insert data from the DB
        $quote = ob_get_contents();        // Capture the content of 'quote_view_bag_pdf.php' and store it in a variable
        ob_end_clean();

        // Code to generate the PDF file for this quote
        $k_path_url = ''; // This line is to fix a few errors in TCPDF
        require_once('tcpdf/config/lang/eng.php');
        require_once('tcpdf/tcpdf.php');

        // create new PDF document
        $pdf = new TCPDF();

        // remove default header/footer
        $pdf->setPrintHeader(false);
        $pdf->setPrintFooter(false);

        // add a page
        $pdf->AddPage();

        // print HTML-formatted text (insert variable contents here)
        $pdf->writeHtml($quote, true, 0, true, 0);

        // Build output file name
        $pdf_out_file = "pdf/Quote_".$_POST['quote_id']."_.pdf";

        // Close and output PDF document
        $pdf->Output($pdf_out_file, 'F');
        $pdf->Output($pdf_out_file, 'I');

    I hope this code fragment gives some idea of what I'm doing.

    Read the article

  • Positioning / Scrolling problem with Flex popup.

    - by user284163
    Hi all, I'm trying to work out a specific problem I'm having with positioning in Flex using the PopUpManager. Basically I'm wanting to create a popup which will scroll with the parent container - this is necessary because the parent container is large, and if the user's browser window isn't large enough (this will be the case the majority of the time) they will have to use the scrollbar of the container to scroll down. The problem is that the popup is positioned relative to another component, and it needs to stay by that component.

        <?xml version="1.0" encoding="utf-8"?>
        <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">
            <mx:Script>
                <![CDATA[
                    import mx.core.UITextField;
                    import mx.containers.TitleWindow;
                    import mx.managers.PopUpManager;

                    private function clickeroo(event:MouseEvent):void
                    {
                        var popup:TitleWindow = new TitleWindow();
                        popup.width = 250;
                        popup.height = 300;
                        popup.title = "Example";

                        var tf:UITextField = new UITextField();
                        tf.wordWrap = true;
                        tf.width = popup.width - 30;
                        tf.text = "This window stays put and doesn't scroll when the hbox is scrolled (even when using the hbox as parent in the addPopUp method); I need the popup to be local to the HBox.";
                        popup.addChild(tf);

                        PopUpManager.addPopUp(popup, hbox, false);
                        PopUpManager.centerPopUp(popup);
                    }
                ]]>
            </mx:Script>
            <mx:HBox width="100%" height="2000" id="hbox">
                <mx:Button label="Click Me" click="clickeroo(event)"/>
            </mx:HBox>
        </mx:Application>

    Could anyone give me any pointers in the right direction? Thanks.

    Read the article

  • [Cocoa] Can't find leak in my code.

    - by ryyst
    Hi, I've been spending the last few hours trying to find the memory leak in my code. Here it is:

        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

        // expression is an NSString object.
        expression = [expression stringByTrimmingCharactersInSet:
                         [NSCharacterSet whitespaceAndNewlineCharacterSet]];

        NSArray *arguments = [NSArray arrayWithObjects:expression,
                                 [@"~/Desktop/file.txt" stringByExpandingTildeInPath],
                                 @"-n", @"--line-number", nil];

        NSPipe *outPipe = [[NSPipe alloc] init];

        NSTask *task = [[NSTask alloc] init];
        [task setLaunchPath:@"/usr/bin/grep"];
        [task setArguments:arguments];
        [task setStandardOutput:outPipe];
        [outPipe release];

        [task launch];

        NSData *data = [[outPipe fileHandleForReading] readDataToEndOfFile];
        [task waitUntilExit];
        [task release];

        NSString *string = [[NSString alloc] initWithBytes:[data bytes]
                                                    length:[data length]
                                                  encoding:NSUTF8StringEncoding];
        string = [string stringByReplacingOccurrencesOfString:@"\r" withString:@""];

        int linesNum = 0;
        NSMutableArray *possibleMatches = [[NSMutableArray alloc] init];
        if ([string length] > 0) {
            NSArray *lines = [string componentsSeparatedByString:@"\n"];
            linesNum = [lines count];
            for (int i = 0; i < [lines count]; i++) {
                NSString *currentLine = [lines objectAtIndex:i];
                NSArray *values = [currentLine componentsSeparatedByString:@"\t"];
                if ([values count] == 20)
                    [possibleMatches addObject:currentLine];
            }
        }
        [string release];
        [pool release];
        return [possibleMatches autorelease];

    I tried to follow the few basic rules of Cocoa memory management, but somehow there still seems to be a leak; I believe it's an array that's leaking. It's noticeable if possibleMatches is large. You can try the code by using any large file as "~/Desktop/file.txt" and, as expression, something that yields many results when grep-ing. What's the mistake I'm making? Thanks for any help! -- Ry

    Read the article

  • Unit Testing in the real world

    - by Malfist
    I manage a rather large application (50k+ lines of code) by myself, and it manages some rather critical business actions. To describe the program simply, I would say it's a fancy UI with the ability to display and change data from the database, and it manages around 1,000 rental units, about 3k tenants, and all the finances. When I make changes, because the code base is so large, I sometimes break something somewhere else. I typically test by going through the things I changed at the functional level (i.e. I run the program and work through the UI), but I can't test every situation. That is why I want to get started with unit testing. However, this isn't a true three-tier program with a database tier, a business tier, and a UI tier: a lot of the business logic is performed in the UI classes, and many things are done on events. To complicate matters, everything is database-driven, and I've not seen (so far) good suggestions on how to unit test database interactions. What would be a good way to get started with unit testing for this application? Keep in mind, I've never done unit testing or TDD before. Should I rewrite it to remove the business logic from the UI classes (a lot of work)? Or is there a better way?

    Read the article

  • Can an OOM be caused by not finding enough contiguous memory?

    - by raticulin
    I start some Java code with -Xmx1024m, and at some point I get an hprof dump due to an OOM. The hprof shows just 320 MB, and gives me a stack trace:

        at java.util.Arrays.copyOfRange([CII)[C (Arrays.java:3209)
        at java.lang.String.<init>([CII)V (String.java:215)
        at java.lang.StringBuilder.toString()Ljava/lang/String; (StringBuilder.java:430)
        ...

    This comes from a large string I am copying. I remember reading somewhere (I cannot find where) that what happens in these cases is:

      - the process still has not consumed 1 GB of memory; it is way below that;
      - even though the heap is still below 1 GB, it needs some amount of memory, and for copyOfRange() it has to be contiguous memory;
      - so even if it is not over the limit yet, if it cannot find a large enough contiguous piece of memory on the host, it fails with an OOM.

    I have tried to look for documentation on this (copyOfRange() needing a block of contiguous memory) but could not find any. The other possible culprit would be not enough permgen memory. Can someone confirm or refute the contiguous-memory hypothesis? Any pointer to some documentation would help too.

    Read the article
