Search Results

Search found 10417 results on 417 pages for 'large'.

Page 75/417 | < Previous Page | 71 72 73 74 75 76 77 78 79 80 81 82  | Next Page >

  • Git-Based Source Control in the Enterprise: Suggested Tools and Practices?

    - by Bob Murphy
    I use git for personal projects and think it's great. It's fast, flexible, powerful, and works great for remote development. But now it's mandated at work and, frankly, we're having problems. Out of the box, git doesn't seem to work well for centralized development in a large (20+ developer) organization with developers of varying abilities and levels of git sophistication - especially compared with other source-control systems like Perforce or Subversion, which are aimed at that kind of environment. (Yes, I know, Linus never intended it for that.) But - for political reasons - we're stuck with git, even if it sucks for what we're trying to do with it. Here are some of the things we're seeing:

    - The GUI tools aren't mature.
    - Using the command-line tools, it's far too easy to screw up a merge and obliterate someone else's changes.
    - It doesn't offer per-user repository permissions beyond global read-only or read-write privileges. If you have permission to ANY part of a repository, you can do the same thing to EVERY part of the repository, so you can't do something like make a small-group tracking branch on the central server that other people can't mess with.
    - Workflows other than "anything goes" or "benevolent dictator" are hard to encourage, let alone enforce.
    - It's not clear whether it's better to use a single big repository (which lets everybody mess with everything) or lots of per-component repositories (which make for headaches trying to synchronize versions). With multiple repositories, it's also not clear how to replicate all the sources someone else has by pulling from the central repository, or to do something like get everything as of 4:30 yesterday afternoon.

    However, I've heard that people are using git successfully in large development organizations. If you're in that situation - or if you generally have tools, tips and tricks for making it easier and more productive to use git in a large organization where some folks are not command-line fans - I'd love to hear what you have to suggest. BTW, I've asked a version of this question already on LinkedIn, and got no real answers but lots of "gosh, I'd love to know that too!"

    Read the article

  • Alternative or successor to GDBM

    - by Anon Guy
    We have a GDBM key-value database as the backend to a load-balanced, web-facing application that is implemented in C++. The data served by the application has grown very large, so our admins have moved the GDBM files from "local" storage (on the webservers, or very close by) to a large, shared, remote, NFS-mounted filesystem. This has affected performance. Our performance tests (in a test environment) show page load times jumping from hundreds of milliseconds (for local disk) to several seconds (over NFS, local network), and sometimes getting as high as 30 seconds. I believe a large part of the problem is that the application makes lots of random reads from the GDBM files, and that these are slow over NFS; this will be even worse in production (where the front-end and back-end have even more network hardware between them) and as our database gets even bigger. While this is not a critical application, I would like to improve performance, and have some resources available, including application developer time and Unix admins. My main constraint is time: I only have the resources for a few weeks. As I see it, my options are:

    - Improve NFS performance by tuning parameters. My instinct is we won't get much out of this, but I have been wrong before, and I don't really know very much about NFS tuning.
    - Move to a different key-value database, such as memcachedb or Tokyo Cabinet.
    - Replace NFS with some other protocol (iSCSI has been mentioned, but I am not familiar with it).

    How should I approach this problem?

    Read the article

  • CSS call from code behind not working

    - by SmartestVEGA
    I have the following entries in the CSS file:

        a.intervalLinks { font-size:11px; font-weight:normal; color:#003399; text-decoration:underline; margin:0px 16px 0px 0px; }
        a.intervalLinks:link { text-decoration:underline; }
        a.intervalLinks:hover { text-decoration:none; }
        a.intervalLinks:visited { text-decoration:underline; }
        a.selectedIntervalLink { font-size:12px; font-weight:bold; color:#003399; text-decoration:none; margin:0px 16px 0px 0px; }
        a.intervalLinks:active { text-decoration:underline; font-size:large; }

    Whenever I click one of the links (not shown) embedded in the web page, I can see the change from the a.intervalLinks:active rule (the font of the link becomes large), but after the click the page refreshes and the change goes away. I want that change to stick to the clicked link, even across a page refresh. I understand this can only be achieved through the ASP.NET code-behind. The following code should work, but unfortunately it doesn't:

        protected override void OnInit(EventArgs e)
        {
            rptDeptList.ItemDataBound += new RepeaterItemEventHandler(rptDeptList_ItemDataBound);
        }

        void rptDeptList_ItemDataBound(object sender, RepeaterItemEventArgs e)
        {
            if (e.Item.DataItem == null) return;
            LinkButton btn = (LinkButton)e.Item.FindControl("LinkButton1");
            btn.Attributes.Add("class", "intervalLinks");
        }

    The current HTML for the links is shown below:

        <ItemTemplate>
            <div class='dtilsDropListTxt'><div class='rightArrow'></div>
                <asp:LinkButton ID="LinkButton1" runat="server" Text=<%#DataBinder.Eval(Container.DataItem, "WORK_AREA")%> CssClass="intervalLinks" OnClick="LinkButton1_Click"></asp:LinkButton>
            </div>
        </ItemTemplate>

    Could anyone help?
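    For reference, a hypothetical sketch of one way to persist the highlight: remember which link was clicked in ViewState and re-apply the CSS classes (selectedIntervalLink / intervalLinks, from the stylesheet above) on every postback. The control names are taken from the markup above; this survives postbacks, but a genuine fresh GET of the page would need Session or a cookie instead.

        // Untested sketch - names from the question's markup are assumed.
        protected void LinkButton1_Click(object sender, EventArgs e)
        {
            ViewState["SelectedInterval"] = ((LinkButton)sender).Text;   // remember the clicked link
        }

        protected void Page_PreRender(object sender, EventArgs e)
        {
            HighlightSelected();   // runs on every postback, so the highlight sticks
        }

        void HighlightSelected()
        {
            string selected = ViewState["SelectedInterval"] as string;
            foreach (RepeaterItem item in rptDeptList.Items)
            {
                LinkButton btn = item.FindControl("LinkButton1") as LinkButton;
                if (btn == null) continue;   // skip header/footer items
                btn.CssClass = (btn.Text == selected) ? "selectedIntervalLink" : "intervalLinks";
            }
        }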

    Read the article

  • mmap only needed pages of kernel buffer to user space

    - by axeoth
    See also this answer: http://stackoverflow.com/a/10770582/1284631

    I need something similar, but without having to allocate a buffer: the buffer is large, in theory, but the user-space program only needs to access some parts of it (it mocks some registers of a hardware device). As I cannot allocate such a large buffer with vmalloc_user() (32-bit kernel, embedded environment, no swap...), I followed the same approach as in the quoted answer, trying to allocate only those pages that are really requested by user space. So: I use a my_mmap() function for the device file (actually, it is the .mmap field of a struct uio_info) to set up the fields of the vma; then, in the vm_area_struct's .fault handler (here named my_fault()), I should return a page. Except that in the my_fault() method I cannot obtain a page through:

        vmf->page = vmalloc_to_page(my_buf + (vmf->pgoff << PAGE_SHIFT));

    since there is no allocated buffer:

        my_buf = vmalloc_user(MY_BUF_SIZE);

    fails with "allocation failed: out of vmalloc space - use vmalloc= to increase size." (and there is no room or swap to increase that vmalloc= parameter). So, I would need to get a page from the kernel and fill in the vmf->page field:

    - How do I allocate a page (I assume that the offset of the page is known, as it is vmf->pgoff)?
    - What base memory should I use instead of my_buf?

    PS: I also did set vma->flags |= VM_NORESERVE; (in my_mmap()), but I'm not sure it helps. Is there any vmalloc_user_unreserved()-like function (let's say, lazy allocation)? Also, writing 1 to /proc/sys/vm/overcommit_memory and large values (e.g. 500) to /proc/sys/vm/overcommit_ratio before trying my_buf = vmalloc_user(<large_size>) didn't work.

    Read the article

  • Problems with Threading in Python 2.5, KeyError: 51, Help debugging?

    - by vignesh-k
    I have a python script which runs a particular script a large number of times (for Monte Carlo purposes), and the way I have scripted it is that I queue up the script the desired number of times it should be run, then I spawn threads and each thread runs the script once and again when it's done. Once the script in a particular thread is finished, the output is written to a file by accessing a lock (so my guess was that only one thread accesses the lock at a given time). Once the lock is released by one thread, the next thread accesses it and adds its output to the previously written file and rewrites it. I am not facing a problem when the number of iterations is small, like 10 or 20, but when it's large, like 50 or 150, python returns a KeyError: 51 telling me an element doesn't exist, and the error it points to is within the lock access, which puzzles me since only one thread should access the lock at once and I do not expect an error. This is the class I use:

        class errorclass(threading.Thread):

            def __init__(self, queue):
                self.__queue = queue
                threading.Thread.__init__(self)

            def run(self):
                while 1:
                    item = self.__queue.get()
                    if item is None: break
                    result = myfunction()
                    lock = threading.RLock()
                    lock.acquire()
                    # ADD entries from current thread to entries in file and REWRITE FILE
                    lock.release()

        queue = Queue.Queue()
        for i in range(threads):
            errorclass(queue).start()

        for i in range(desired_iterations):
            queue.put(i)
        for i in range(threads):
            queue.put(None)

    Python returns KeyError: 51 for a large number of desired iterations during the add/write file operation after the lock is acquired. I am wondering if this is the correct way to use the lock, since every thread creates its own lock rather than all threads sharing one lock? What would be the way to rectify this?
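    The usual pattern is to create one lock up front and have every worker share it, instead of constructing a new RLock inside the loop. A rough, untested Python 2 sketch of that shape; myfunction, write_results, threads and iterations are stand-ins from the question:

        import threading
        import Queue  # Python 2.x module name

        write_lock = threading.Lock()  # one lock shared by every worker

        class Worker(threading.Thread):
            def __init__(self, queue):
                threading.Thread.__init__(self)
                self.__queue = queue

            def run(self):
                while True:
                    item = self.__queue.get()
                    if item is None:
                        break
                    result = myfunction()          # the script being run (from the question)
                    write_lock.acquire()
                    try:
                        write_results(result)      # placeholder for the merge-and-rewrite step
                    finally:
                        write_lock.release()

        queue = Queue.Queue()
        workers = [Worker(queue) for _ in range(threads)]
        for w in workers:
            w.start()
        for i in range(iterations):
            queue.put(i)
        for _ in workers:
            queue.put(None)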

    Read the article

  • Application Code Redesign to reduce no. of Database Hits from Performance Perspective

    - by Rachel
    Scenario: I want to parse a large CSV file and insert the data into the database; the CSV file has approximately 100K rows of data. Currently I am using fgetcsv to parse through the file row by row and insert data into the database, so right now I am hitting the database for each line of data present in the CSV file. The current database hit count is therefore 100K, which is not good from a performance point of view. Current code:

        public function initiateInserts()
        {
            // Open large CSV file (min 100K rows) for parsing.
            $this->fin = fopen($file, 'r') or die('Cannot open file');

            // Parse the large CSV file to get data and initiate insertion into the schema.
            while (($data = fgetcsv($this->fin, 5000, ";")) !== FALSE) {
                $query = "INSERT INTO dt_table (id, code, connectid, connectcode)
                          VALUES (:id, :code, :connectid, :connectcode)";
                $stmt = $this->prepare($query);
                // Then, for each line: bind the parameters
                $stmt->bindValue(':id', $data[0], PDO::PARAM_INT);
                $stmt->bindValue(':code', $data[1], PDO::PARAM_INT);
                $stmt->bindValue(':connectid', $data[2], PDO::PARAM_INT);
                $stmt->bindValue(':connectcode', $data[3], PDO::PARAM_INT);
                // Execute the statement
                $stmt->execute();
                $this->checkForErrors($stmt);
            }
        }

    I am looking for a way wherein, instead of hitting the database for every row of data, I can prepare the query and then hit it once, populating the database with the inserts. Any suggestions!

    Note: this is the exact sample code that I am using, but the CSV file has more fields, not only id, code, connectid and connectcode; I wanted to make sure that I am able to explain the logic, and so have used this sample code here. Thanks!
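    One common direction, sketched roughly and untested against the same hypothetical dt_table schema: prepare the statement once, outside the loop, and wrap the inserts in batched transactions so each row no longer pays the per-statement commit cost (for a literal single round trip per batch one can also build multi-row INSERT statements). It assumes, as the original appears to, that the class extends PDO, so beginTransaction()/commit() are available on $this.

        public function initiateInserts()
        {
            $this->fin = fopen($file, 'r') or die('Cannot open file');

            $query = "INSERT INTO dt_table (id, code, connectid, connectcode)
                      VALUES (:id, :code, :connectid, :connectcode)";
            $stmt = $this->prepare($query);   // prepare once, reuse for every row

            $batchSize = 1000;                // assumption: tune to taste
            $count = 0;
            $this->beginTransaction();
            while (($data = fgetcsv($this->fin, 5000, ";")) !== FALSE) {
                $stmt->bindValue(':id', $data[0], PDO::PARAM_INT);
                $stmt->bindValue(':code', $data[1], PDO::PARAM_INT);
                $stmt->bindValue(':connectid', $data[2], PDO::PARAM_INT);
                $stmt->bindValue(':connectcode', $data[3], PDO::PARAM_INT);
                $stmt->execute();
                $this->checkForErrors($stmt);
                if (++$count % $batchSize === 0) {
                    $this->commit();          // flush a batch
                    $this->beginTransaction();
                }
            }
            $this->commit();                  // flush the final partial batch
        }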

    Read the article

  • How do I efficiently parse a CSV file in Perl?

    - by Mike
    I'm working on a project that involves parsing a large CSV-formatted file in Perl and am looking to make things more efficient. My approach has been to split() the file by lines first, and then split() each line again by commas to get the fields. But this is suboptimal since at least two passes on the data are required (once to split by lines, then once again for each line). This is a very large file, so cutting processing in half would be a significant improvement to the entire application. My question is, what is the most time-efficient means of parsing a large CSV file using only built-in tools? Note: each line has a varying number of tokens, so we can't just ignore lines and split by commas only. Also we can assume fields will contain only alphanumeric ASCII data (no special characters or other tricks). Also, I don't want to get into parallel processing, although it might work effectively.

    Edit: It can only involve built-in tools that ship with Perl 5.8. For bureaucratic reasons, I cannot use any third-party modules (even if hosted on CPAN).

    Another edit: Let's assume that our solution is only allowed to deal with the file data once it is entirely loaded into memory.

    Yet another edit: I just grasped how stupid this question is. Sorry for wasting your time. Voting to close.
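    For what it's worth, a minimal sketch using only core Perl (which fits the 5.8, built-ins-only constraint): Perl 5.8 can open a filehandle on an in-memory scalar, so even with the whole file already loaded into a string you can still make a single pass over it, splitting each line as you go rather than splitting the whole file into a line array first.

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Assume $csv already holds the entire file contents (per the last edit).
        open(my $fh, '<', \$csv) or die "Cannot open in-memory handle: $!";

        my @rows;
        while (my $line = <$fh>) {
            chomp $line;
            # Fields are plain alphanumeric ASCII (per the question), so a bare
            # split on commas is safe here - no quoting or escaping to handle.
            my @fields = split /,/, $line;
            push @rows, \@fields;
        }
        close $fh;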

    Read the article

  • How to handle 30k files in a project which requires them?

    - by Jeremiah
    Visual Studio 2010 RC - Silverlight application. We have a library of images that we need to have access to. They are given to us by a vendor (through an installer) and they are not in a database; they are files in a folder (a very large monster of a folder). We do not control when the images change, so the vendor needs to be able to override them individually. We get updates frequently enough from this vendor to say that these images change "randomly" and without our (programmer) knowledge. The problem: I don't want 30K images in SVN. Heck, I don't even want to imagine them in my Solution. However, our application requires them in order to run properly. So, our build/staging servers need access to these images (we have two build servers). The question: how would you handle it when your application will not work as specified without access to each of 30k images and you don't control when those images change? I do not want a crazy large SVN repository. Because I don't know when any of these images change, I really don't want them in my solution (and definitely do not want a large solution, either). I also don't want a bunch of manual steps to do every time these images change. Our mantra, up to this point, has always been: any developer can download from SVN, compile and run our app. These images are going to kill that mantra. I'm tempted to make a WCF service that will return images if they exist and a dummy image if they don't. This way all dev boxes will return a dummy image and our build/staging/production boxes will return real images (the ones that actually have the vendor's image installer run on them). This has to be a solved problem. What have other people done to handle these types of problems? I'm open to suggestions.

    Read the article

  • jQuery/Ajax IE7 - Long requests fail

    - by iQ
    Hi guys, I have a problem with IE7 regarding an ajax call that is made by the jQuery.load function. Basically the request works in cases where the URL string is not too long, but as soon as the URL gets very large it fails. Doing some debugging on the Ajax call I found this error:

        URL: <blanked out for security reasons, but it's very long>
        Content Type:
        Headers size (bytes): 0
        Data size (bytes): 0
        Total size (bytes): 0
        Transferred data size (bytes): 0
        Cached data: No
        Error result: 0x800c0005
        Error constant: INET_E_RESOURCE_NOT_FOUND
        Error description: The server or proxy was not found
        Extended error result: 0x7a
        Extended error description: The data area passed to a system call is too small.

    As you can see, it looks like nothing is being sent. Now this only happens on IE7 but not other browsers; with IE8 there is a small delay but it still works. The same request works fine when the URL string is relatively small. I need this working on IE7 for compatibility reasons and I cannot find workarounds, so any help is greatly appreciated. The actual ajax call is like this:

        $("ID").load("url?lotsofparams", callback_func(){});

    "lotsofparams" can vary, sometimes being small or very large. It's when the string is very large that I get the above error, for IE7 only.
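    If the root cause is simply URL length (IE caps URLs at roughly 2,083 characters), one hedged workaround is to move the parameters out of the query string and into a POST body. jQuery's .load() does this automatically when the data argument is an object rather than a string - a rough sketch, with made-up parameter names:

        // Passing a data *object* (not a string) makes .load() use POST,
        // so the parameters travel in the request body instead of the URL.
        var params = { filter: longFilterValue, ids: longIdList };  // placeholder names
        $("#target").load("url", params, function () {
            // optional callback once the returned HTML has been inserted
        });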

    Read the article

  • Why does reusing arrays increase performance so significantly in c#?

    - by Willem
    In my code, I perform a large number of tasks, each requiring a large array of memory to temporarily store data. I have about 500 tasks. At the beginning of each task, I allocate memory for an array:

        double[] tempDoubleArray = new double[M];

    M is a large number depending on the precise task, typically around 2,000,000. Now, I do some complex calculations to fill the array, and in the end I use the array to determine the result of this task. After that, tempDoubleArray goes out of scope. Profiling reveals that the calls to construct the arrays are time-consuming. So, I decided to try and reuse the array, by making it static and reusing it. It requires some additional juggling to figure out the minimum size of the array, requiring an extra pass through all tasks, but it works. Now, the program is much faster (from 80 sec to 22 sec for execution of all tasks).

        double[] tempDoubleArray = staticDoubleArray;

    However, I'm a bit in the dark about why precisely this works so well. I'd say that in the original code, when tempDoubleArray goes out of scope, it can be collected, so allocating a new array should not be that hard, right? I ask this because understanding why it works might help me figure out other ways to achieve the same effect, and because I would like to know in what cases allocation gives performance issues.

    Read the article

  • jquery displaying table rows dynamically in dom belonging to one/more particular class

    - by whatf
    I have the basic HTML structure:

        <table>
          <tbody>
            <tr class="1">
              <td>
                <p style="font-size:large">
                  <span class="muted">This is the first object</span>
                </p>
              </td>
            </tr>
            <tr class="2">
              <td>
                <p style="font-size:large">
                  <span class="muted">This is second object</span>
                </p>
              </td>
            </tr>
            <tr class="3">
              <td>
                <p style="font-size:large">
                  <span class="muted">this is the third object</span>
                </p>
              </td>
            </tr>
          </tbody>
        </table>

    And then I have checkboxes. The functionality I want is: if checkbox 1 is checked, only the tr with class 1 is displayed. If checkboxes 2 and 3 are checked, the tr with class 1 gets hidden and 2 and 3 show in the DOM. Again, if checkboxes 2 and 3 are unchecked and 1 is checked, the trs with classes 2 and 3 do not show and 1 is shown. How can this be done in jQuery?
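    A minimal, untested sketch of one way to wire this up. It assumes each checkbox carries the matching row class in a value attribute (markup not shown in the question), and uses an attribute-equals selector because class names that start with a digit are awkward in plain CSS selectors:

        <!-- assumed markup for the checkboxes -->
        <input type="checkbox" class="row-toggle" value="1"> Row 1
        <input type="checkbox" class="row-toggle" value="2"> Row 2
        <input type="checkbox" class="row-toggle" value="3"> Row 3

        <script>
        $(function () {
            $(".row-toggle").change(function () {
                $("table tbody tr").hide();                      // hide every row first
                $(".row-toggle:checked").each(function () {
                    // show rows whose class attribute matches a checked box's value
                    $('tr[class="' + $(this).val() + '"]').show();
                });
            });
        });
        </script>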

    Read the article

  • jQuery - Show id, based on selected items class?

    - by Jon Hadley
    I have a layout roughly as follows:

        <div id="foo">
          <!-- a bunch of content -->
        </div>
        <div id="thumbnails">
          <div class="thumb-content1"></div>
          <div class="thumb-content2"></div>
          <div class="thumb-content3"></div>
        </div>
        <div id="content-1">
          <!-- some text and pictures, including large-pic1 -->
        </div>
        <div id="content-2">
          <!-- some text and pictures, including large-pic2 -->
        </div>
        <div id="content-3">
          <!-- some text and pictures, including large-pic3 -->
        </div>
        etc ....

    On page load I want to show 'foo' and 'thumbnails' and hide the three content divs. As the user clicks each thumbnail, I want to hide foo, and replace it with the matching 'content-x'. I can get my head round jQuery show, hide and replace (although, bonus points if you want to include that in your example!). But how would I extract and construct the appropriate content id from the thumbnail class, then pass it to the show/hide code?
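    A rough, untested sketch of the id-building part, keyed off the markup above; it assumes the mapping stays 'thumb-contentN' -> '#content-N':

        $(function () {
            $("div[id^='content-']").hide();              // start with foo + thumbnails only

            $("#thumbnails div").click(function () {
                // "thumb-content2" -> "2"
                var n = this.className.match(/thumb-content(\d+)/)[1];
                $("#foo").hide();
                $("div[id^='content-']").hide();          // hide any previously shown panel
                $("#content-" + n).show();                // show the matching one
            });
        });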

    Read the article

  • form validation with jquery and livevalidation

    - by ImpY
    I'm trying to do some form validation with LiveValidation & jQuery. I have a form with an input field like this:

        <div id="prenameDiv" class="control-group">
            <input id="prename" name="prename" class="input-large" placeholder="Max">
        </div>

    If there's an error on validation, LiveValidation adds the class 'LV_invalid_field' to the input, so it looks like this:

        <div id="prenameDiv" class="control-group">
            <input id="prename" name="prename" class="input-large LV_invalid_field" placeholder="Max">
        </div>

    That's ok, but now I want to add another class 'error' to the div 'prenameDiv' when the DOM changes like that, so it ends up as:

        <div id="prenameDiv" class="control-group error">
            <input id="prename" name="prename" class="input-large LV_invalid_field" placeholder="Max">
        </div>

    I tried it this way:

        if ($("#prenameDiv").bind("DOMSubtreeModified")) {
            if ($("#prename").hasClass("LV_invalid_field")) {
                $("#prenameDiv").addClass("error");
            }
        }

    But nothing changes. Do you have any ideas?
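    For what it's worth, bind() expects a handler function rather than being wrapped in an if. An untested sketch of that shape is below; note that DOMSubtreeModified is deprecated and unevenly supported, so re-checking on the field's blur/keyup events is a common alternative:

        // Untested sketch: attach a handler instead of calling bind() inside an if.
        $("#prenameDiv").bind("DOMSubtreeModified", function () {
            var invalid = $("#prename").hasClass("LV_invalid_field");
            // Only touch the class when it actually needs to change,
            // otherwise this handler could keep re-triggering itself.
            if (invalid !== $(this).hasClass("error")) {
                $(this).toggleClass("error", invalid);
            }
        });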

    Read the article

  • How can I create an array of random numbers in C++

    - by Nick
    Instead of ELEMENTS being 25, is there a way to randomly generate a large array of elements - 10000, 100000, or even 1000000 elements - and then use my insertion sort algorithms? I am trying to have a large array of elements, use insertion sort to put them in order, and then also in reverse order. Next I use clock() from time.h to figure out the run time of each algorithm. I am trying to test with a large amount of numbers.

        #define ELEMENTS 25

        void insertion_sort(int x[], int length);
        void insertion_sort_reverse(int x[], int length);

        int main()
        {
            clock_t tStart = clock();
            int B[ELEMENTS] = {4,2,5,6,1,3,17,14,67,45,32,66,88,
                               78,69,92,93,21,25,23,71,61,59,60,30};
            int x;

            cout << "Not Sorted: " << endl;
            for (x = 0; x < ELEMENTS; x++)
                cout << B[x] << endl;

            insertion_sort(B, ELEMENTS);
            cout << "Sorted Normal: " << endl;
            for (x = 0; x < ELEMENTS; x++)
                cout << B[x] << endl;

            insertion_sort_reverse(B, ELEMENTS);
            cout << "Sorted Reverse: " << endl;
            for (x = 0; x < ELEMENTS; x++)
                cout << B[x] << endl;

            double seconds = clock() / double(CLK_TCK);
            cout << "This program has been running for " << seconds << " seconds." << endl;
            system("pause");
            return 0;
        }
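    A minimal sketch of the random-fill part (untested; it keeps the style of the code above and just swaps the hard-coded initializer for rand(), allocating on the heap because a million ints is too big for a local array):

        #include <cstdlib>   // rand, srand
        #include <ctime>     // time

        // inside main(), replacing the fixed initializer:
        const int ELEMENTS = 1000000;           // 10000, 100000, 1000000, ...
        int *B = new int[ELEMENTS];             // heap allocation - too big for the stack

        srand(static_cast<unsigned>(time(0)));  // seed once
        for (int x = 0; x < ELEMENTS; x++)
            B[x] = rand() % 1000000;            // random values in [0, 999999]

        insertion_sort(B, ELEMENTS);            // note: insertion sort is O(n^2),
        // ... time it, sort in reverse, etc.   // so the largest sizes will take a while
        delete[] B;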

    Read the article

  • Throttle network bandwidth per application in Mac OS X

    - by Kio Dane
    I notice that iTunes seems to suck up all my bandwidth and doesn’t play nice with other applications that use the web when it's downloading. In fact, it doesn't even give itself enough bandwidth when browsing the iTunes Store while downloading large or many files (podcasts, TV shows, large apps, etc). I'm not concerned with getting all my downloads as soon as possible, they're really low priority, and I'd rather not have to do this while I'm awake, but I can't hit the refresh button if I'm in bed and forgot it already. Is there an application or tool via the Terminal to limit the download bandwidth that iTunes gets without also hindering web browsers or other applications? FOSS/GPL software is preferable, but pay software might be acceptable too.

    Read the article

  • Minimizing SQL transaction log file size on developer box running simple recovery model

    - by Anders Rask
    We have a lot of SQL servers in our development environment where we never take backups of the databases (TFS for code is enough). The (SharePoint) databases are all set to the simple recovery model, but the log files, especially for the SharePoint configuration database, are growing quite large and filling up our data drive on the SQL server. Since these log files are never used for anything, I would like advice on how to best minimize the size of these log files - or even disable them if possible. I'm not completely sure why the log files grow so large even with simple recovery (I checked for long-running transactions with DBCC OPENTRAN but found none). I guess the reason the log files are not being truncated is that we don't take any backups, and hence checkpoints aren't reached. The autogrowth for the log files is set to 10%, restricted to 2 GB, so I guess that is why the checkpoint threshold (70%) isn't reached here either. What would be the best strategy to keep log files small (best case 0) without sacrificing performance (e.g. VLF fragmentation)?
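    A hedged sketch of the usual one-off cleanup under the simple recovery model; the database and logical file names below are assumptions, so check them against sys.database_files first:

        -- Find the logical name of the log file (assumed names below).
        USE SharePoint_Config;
        SELECT name, size, type_desc FROM sys.database_files;

        -- Force a checkpoint so inactive log records can be reused, then shrink.
        CHECKPOINT;
        DBCC SHRINKFILE (N'SharePoint_Config_log', 100);   -- target size in MB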

    Read the article

  • MOSS Upload Size?

    - by littlegeek
    Hi. In MOSS we have done all of the below to increase the file upload size, and reset IIS, but it still doesn't want to play - anyone have any advice?

    UPDATE: Just seen the Scott Gu article - http://weblogs.asp.net/pscott/archive/2009/02/26/404-errors-with-fileupload-with-iis7.aspx and JS example http://msmvps.com/blogs/cgross/archive/2009/02/25/large-files-in-sbs-2008-s-companyweb.aspx - so I need to say this is IIS 6 on Win2k3.

    UPDATE 2: Still not working; at a loss - can anyone help?

    What we have done so far:

    - In SharePoint 3.0 Central Administration, on the Application Management tab, under Web application general settings, configure the Maximum upload size to a maximum of 2047 MB. We set ours to 250 MB.
    - In Internet Information Services, on the properties of the virtual server, increase the Connection Timeout beyond the default 120 seconds, depending on the time needed to upload large files in your environment (for example, 360 seconds). We set ours to 600 seconds.
    - On the SharePoint server, change C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\LAYOUTS\web.config to use an executionTimeout appropriate for the file size you are uploading, for example:

          <httpRuntime executionTimeout="999999" maxRequestLength="2097151" />

    Read the article

  • What is a ‘best practice’ backup plan for a website?

    - by HollerTrain
    I have a website which is very large and has a large user base. I am trying to think of a 'best practice' way to create a backup or mirror website, so if something happens on domain.com, I can quickly point the site to backup.domain.com via a redirect. This would give me time to troubleshoot domain.com while everyone is viewing backup.domain.com and not knowing the difference. Is my method the ideal method, or have you enacted better methods for creating a backup site? I don't want to have the site go down and then get yelled at every minute while I'm trying to fix it. Ideally I would just 'flip the switch' and it would redirect the user to a backup. Any insight would be greatly appreciated.

    Read the article

  • Converting a GMail Label Mailbox to a Set of PDFs

    - by Aldrin Leal
    I have a rather large task to do: I need to convert a folder in my GMail with lots of tagged messages either to one large PDF (which Adobe Acrobat can do from Outlook - except Outlook crashes while loading this mailbox) or to individual PDFs (which I plan to link from a wiki database). While it doesn't strictly need to be PDF (as long as I can, say, outsource to someone else the conversion of each .eml file to a .pdf file), I do need the messages split into individual files so I can cross-reference them on a wiki. What would you suggest to accomplish this task? Thank you.
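    One way to at least get the label's messages out as individual .eml files is over IMAP; below is a rough, untested Python sketch. It assumes IMAP access is enabled on the account and that "MyLabel" stands in for the actual label (GMail exposes labels as IMAP folders); converting each saved file to PDF would then be a separate per-file step.

        import imaplib

        LABEL = "MyLabel"                      # the GMail label to export (assumption)

        imap = imaplib.IMAP4_SSL("imap.gmail.com")
        imap.login("user@gmail.com", "password")
        imap.select('"%s"' % LABEL, readonly=True)   # labels appear as folders over IMAP

        status, data = imap.search(None, "ALL")
        for num in data[0].split():
            status, msg_data = imap.fetch(num, "(RFC822)")
            raw = msg_data[0][1]               # the raw RFC 822 message
            with open("message-%s.eml" % num.decode(), "wb") as f:
                f.write(raw)

        imap.logout()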

    Read the article

  • disparity between `top`'s given CPU % and process CPU usage total

    - by intuited
    I've noticed that there are sometimes (large) differences between the reported total CPU usage and a summation of the per-process CPU utilization given by apps like top and wmtop. As an example: I recently ran a git filter-branch --index-filter on a fairly large repo, with the index-filter command piping git ls-files through a grep filter and into xargs git rm --cached. This took a few minutes to run; while it was going I noticed that both wmtop and top were displaying a high (above 50% on my 2-core machine) total CPU usage, but that neither showed any individual processes which were using a significant amount of CPU time. Are some processes not shown in the process list? What sorts of processes are these, and is there a way to find out how much CPU time they are using?

    Read the article

  • DNS Round-robin failover and load balancing

    - by Tom O'Connor
    Having read all of the questions and answers (1 2 3 and so on) on here relating to DNS load balancing and round-robin DNS, there are still a number of unanswered questions. Large companies - and I'm looking at Google, Facebook and Twitter here - do present multiple A records.

    1) If DNS load balancing/failover is so dodgy, why do large organisations do it?

    2) There seems to be very little mention of "DNS pinning", despite this (PDF) paper about it. Why is DNS pinning so seldom mentioned?

    3) Are there any concrete examples of which ISPs and so on actually do rewrite DNS TTLs?

    That said, I'm not entirely backing the side for using DNS for failover or any form of load balancing. For most networks, BGP diverse routing still seems to be a better fit. DNS rears its ugly head again. :(

    Read the article

  • Vim is spellchecking in XML files where I don't want it to, and only there

    - by Kazark
    I'm trying to use Vim's built-in spellchecking in some XML documents. This happens merely by having the XML syntax loaded, as a minimal example shows (and it reproduces what I also see in large XML documents): given two buffers with exactly the same content, when the filetype is text, the spellchecking works; when it is xml, it does not. 'spell' is set in both buffers. However, looking at the top few lines of a large XML document, the spellchecking is certainly on - but it is only checking attributes. The nuisance is that none of the things it is actually flagging are misspelled, and it isn't finding any of the numerous misspellings in the document. At a minimum I would like it to find the spelling errors in the body of the document, and being able to turn off the checking on attributes would be a nice option. I've searched for @NoSpell in the xml.vim file, but that returns no hits.

    Read the article

  • Automating the choice between JPEG and PNG with a script

    - by MHC
    Choosing the right format to save your images in is crucial for preserving image quality and reducing artifacts. Different formats follow different compression methods and come with their own sets of advantages and disadvantages. JPG, for instance, is suited for real-life photographs that are rich in color gradients. The lossless PNG, on the other hand, is far superior when it comes to schematic figures. Picking the right format can be a chore when working with a large number of files. That's why I would love to find a way to automate it.

    A little bit of background on my particular use case: I am working on a number of handouts for a series of lectures at my university. The handouts are rich in figures, which I have to extract from PDF-formatted slides. Extracting these images gives me lossless PNGs, which are needlessly large at times. Converting these particular files to JPEG can reduce their size to less than 20% of the original, while maintaining the same quality. This is important, as working with hundreds of large images in word processors is pretty crash-prone. Batch converting all extracted PNGs to JPEGs is not an option I am willing to follow, as many if not most images are better suited to stay PNGs. Converting those would result in insignificant size reductions and sometimes even increases in file size - that's at least what my test runs showed.

    What we can take from this is that file size after compression can serve as an indicator of which format is suited best for a particular image. It's not a particularly accurate predictor, but it works well enough. So why not use it in the form of a script? I included inotifywait because I would prefer the script to be executed automatically as soon as I drag an extracted image into a folder. This is a simpler version of the script that I've been using for the last couple of weeks:

        #!/bin/bash
        inotifywait -m --format "%w%f" --exclude '.jpg' -r -e create -e moved_to --fromfile '/home/MHC/.scripts/Workflow/Conversion/include_inotifywait' |
        while read file; do
            mogrify -format jpg -quality 92 "$file"
        done

    The advanced version of the script would have to be able to:

    - handle spaces in file names and directory names
    - preserve the original file names
    - flatten PNG images if an alpha value is set
    - compare the file size between the temporary converted image and its original
    - determine if the difference is greater than a given percentage
    - act accordingly

    The actual conversion could be done with the ImageMagick tools:

        convert -quality 92 -flatten -background white file.png file.jpg

    Unfortunately, my bash skills aren't even close to advanced enough to convert the scheme above into an actual script, but I am sure many of you can. My reputation points on here are pretty low, but I will gladly award the most helpful answer with the highest bounty I can set.

    References: http://www.formortals.com/introducing-cnb-imageguide/, http://www.turnkeylinux.org/blog/png-vs-jpg

    Edit: Also see my comments below for some more information on why I think this script would be the best solution to the problem I am facing.
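    A rough, untested sketch of the keep-or-discard decision step, which could slot into the while-read loop above: the 70% threshold is an assumption, and it leans on ImageMagick's convert plus GNU stat/mktemp (the tools already implied by the question).

        #!/bin/bash
        # Sketch: convert a PNG to a temporary JPEG and keep the JPEG only if it is
        # smaller than THRESHOLD percent of the original; otherwise keep the PNG.
        THRESHOLD=70                              # assumed cut-off, tune to taste

        png="$1"                                  # e.g. called as: ./choose.sh "some file.png"
        jpg="${png%.png}.jpg"
        tmp="$(mktemp --suffix=.jpg)"

        convert -quality 92 -flatten -background white "$png" "$tmp"

        png_size=$(stat -c%s "$png")
        jpg_size=$(stat -c%s "$tmp")

        if [ $(( jpg_size * 100 / png_size )) -lt "$THRESHOLD" ]; then
            mv "$tmp" "$jpg" && rm "$png"         # JPEG wins: keep it, drop the PNG
        else
            rm "$tmp"                             # PNG wins: leave the original untouched
        fi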

    Read the article

  • Copying files from NAS to NAS drives

    - by user1001421
    Very simple question. I've got 2 NAS drives that are wired to a router. If I have a wireless laptop and request a large amount of data to be copied from one NAS drive to the other, does the traffic go directly from one drive to the other over the wired network, or does it go via my laptop, if you see what I mean - i.e. from the NAS drives' wired network, to the wireless network, and then back to the wired network? Is this a common bottleneck when copying a large amount of data? And if so, is there a way to avoid it? Thanks.

    Read the article

  • When Citrix desktop disconnects SAP Client holds session and can't log back in

    - by Stradas
    We have a fairly large Citrix implementation and have just pushed out the SAP desktop client to all of the desktops. Everything is working fine except for the following problem: if a user disconnects their session while it is running the SAP client (logging off works fine), the user cannot reconnect and log back in. We have a script on the server that terminates the session as a workaround. We can see on the server that it is the SAP client that is holding on and running. This is at a large office, but the SAP servers are in another hemisphere. As is the custom, Citrix says it's SAP and SAP says it's Citrix. I don't like using a PowerShell script as a substitute for a system configuration solution.

    Read the article
