Search Results

Search found 2417 results on 97 pages for 'mb'.

Page 70/97 | < Previous Page | 66 67 68 69 70 71 72 73 74 75 76 77  | Next Page >

  • Algorithm for non-contiguous netmask match

    - by Gianluca
    Hi, I have to write a really, really fast algorithm to match an IP address to a list of groups, where each group is defined using a notation like 192.168.0.0/252.255.0.255. As you can see, the bitmask can contain zeros even in the middle, so the traditional "longest prefix match" algorithms won't work. If an IP matches two groups, it will be assigned to the group containing the most 1s in the netmask. I'm not working with many entries (let's say < 1000) and I don't want to use a data structure requiring a large memory footprint (let's say 1-2 MB), but it really has to be fast (of course I can't afford a linear search). Do you have any suggestions? Thanks guys. UPDATE: I found something quite interesting at http://www.cse.usf.edu/~ligatti/papers/grouper-conf.pdf, but it's still too memory-hungry for my utopian use case
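
    Not from the post, but as a baseline worth measuring before anything exotic: each group reduces to a (value, mask) pair of 32-bit ints, a match is one AND plus one compare, and pre-sorting by the mask's bit count makes the first hit the best hit. A minimal Java sketch (names are illustrative):

        import java.util.Arrays;
        import java.util.Comparator;

        final class MaskGroup {
            final int value;    // e.g. 192.168.0.0 packed into an int
            final int mask;     // e.g. 252.255.0.255 packed into an int
            final int groupId;

            MaskGroup(int value, int mask, int groupId) {
                this.value = value & mask;  // normalize: only masked bits matter
                this.mask = mask;
                this.groupId = groupId;
            }

            // Pre-sort by descending popcount so the first match is the best match.
            static void sortByMaskBits(MaskGroup[] groups) {
                Arrays.sort(groups, Comparator.comparingInt(
                        (MaskGroup g) -> -Integer.bitCount(g.mask)));
            }

            static int match(MaskGroup[] groups, int ip) {
                for (MaskGroup g : groups) {
                    if ((ip & g.mask) == g.value) {
                        return g.groupId;
                    }
                }
                return -1;  // no group matched
            }
        }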

    Read the article

  • Tuning the JVM (GC) for a highly responsive server application

    - by elgcom
    I am running an application server on Linux 64-bit with 8 CPU cores and 6 GB of memory. The server must be highly responsive. After some inspection I found that the application creates a huge number of short-lived objects and holds only about 200~400 MB of long-lived objects (as long as there is no memory leak). After reading http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html I use these JVM options:

        -Xms2g -Xmx2g -XX:MaxPermSize=256m -XX:NewRatio=1 -XX:+UseConcMarkSweepGC

    Result: minor GC takes 0.01~0.02 sec, major GC takes 1~3 sec, and minor GCs happen constantly. How can I further improve or tune the JVM?

    - A larger heap size? But will GC then take more time?
    - Larger NewSize and MaxNewSize (for the young generation)?
    - Another collector? Parallel GC?
    - Is it a good idea to let major GC take place more often, and how?
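
    One direction to experiment with, given that most garbage is short-lived: grow the young generation so those objects die in cheap minor collections. A sketch only; every value below is an assumption to be load-tested, not a verified setting, and some HotSpot 6 builds are reported to ignore -XX:NewRatio when CMS is enabled, which is why -Xmn is set explicitly here:

        -Xms2g -Xmx2g
        -Xmn1g                                  # size the young gen explicitly
        -XX:MaxPermSize=256m
        -XX:+UseConcMarkSweepGC                 # concurrent old-gen collection, shorter pauses
        -XX:+UseParNewGC                        # parallel minor collections alongside CMS
        -XX:CMSInitiatingOccupancyFraction=70   # start CMS before the old gen fills
        -XX:+PrintGCDetails -XX:+PrintGCTimeStamps   # measure every change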

    Read the article

  • iPhone SDK: My server doesn't support range header requests; does that mean it's impossible for me to support resumable downloads?

    - by Jessica
    I am currently developing an iPhone app which involves downloads of up to 300 MB. I have been told by my hosting service that my server does not support range header requests. However, when I download a file from my server using a download client, like Safari's download manager, resume options are available and work. Does this mean they have a workaround for servers that don't support range header requests, one that I could implement in my iPhone app? Or are they using a technique too complex for the iPhone? If you know of a technique, code samples would be greatly appreciated.
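
    For reference, this is roughly what a resume looks like on the wire when the server does support ranges (a sketch; host, path, and sizes are made up). If a server ignores the Range header it simply replies 200 OK with the full body, so a client can only fake a resume by re-downloading and discarding the bytes it already has, which may be what the download manager is doing:

        GET /files/video.zip HTTP/1.1
        Host: example.com
        Range: bytes=104857600-

        HTTP/1.1 206 Partial Content
        Content-Range: bytes 104857600-314572799/314572800
        Content-Length: 209715200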

    Read the article

  • Internet Explorer is unresponsive while loading a large page

    - by kdhamane
    We have an HTML page rendered in the browser (IE) that causes the browser to hang. The page is generated through a server-side script (ASP.NET, with viewstate disabled). The page takes a long time to load (it's not a bandwidth issue, since we can reproduce it on a local machine) and sometimes produces a script-unresponsive error. On debugging the issue we found that the HTML size on the client side is 4.73 MB. There's also a lot of DOM traversal (using jQuery) after the document is ready (jQuery's document.ready). Even after loading, the page simply hangs on any user interaction (scroll, mouseover, etc.). A CPU usage spike (25-50%) is seen during loading and on any user interaction.

    Read the article

  • Huge page buffer vs. multiple simultaneous processes

    - by Andrei K.
    One of our customers has a 35 GB database with an average active connection count of about 70-80. Some tables in the database have more than 10M records each. Now they have bought a new server: 4 × 6 cores = 24 CPU cores, 48 GB RAM, 2 RAID controllers (256 MB cache each) with 8 SAS 15K HDDs on each, 64-bit OS. I'm wondering which would be the faster configuration: 1) FB 2.5 SuperServer with a huge buffer of 8192 bytes × 3,500,000 pages = 29 GB, or 2) FB 2.5 Classic with a small buffer of 1000 pages. Maybe someone has tested such a case before and will save me days of work :) Thanks in advance.

    Read the article

  • Uploadify and Image Compression

    - by Ilya Biryukov
    Hi, I am using Uploadify on one of my client's web sites to let them upload a large number of pictures at once to their photo gallery. I have been seeing issues lately: they tend to upload large photographs (3 MB and above). I am wondering, is it possible to compress the images (reduce their size) on the client side instead of doing it on the server (just like Facebook does)? I know I could easily do it on the server, but I am working on another project right now where I am expecting a large flow of photo uploads, and it would require a significant amount of CPU time to process them all. So I thought I'd ask about client-side processing. Thanks.

    Read the article

  • CPU Usage relative to number of users? - ASP.Net Application

    - by soldieraman
    My ASP.NET application uses 25-30% of the CPU on a test server which has 600 MB of RAM. I can see the aspnet_wp process taking that much of the CPU. This is when I am testing with one user. How many users can the server handle before falling over? Is there a relationship between CPU usage and the number of users, i.e., if there are 2 users, will my application skyrocket to 60% usage? Or does/should/how does the server handle this?

    Read the article

  • Unable to upload large files to Google Docs

    - by Preeti
    Hi, I am uploading a document to Google Docs like this:

        DocumentsService myService = new DocumentsService("");
        myService.setUserCredentials("[email protected]", password);
        DocumentEntry newEntry = myService.UploadDocument(@"C:\Sample.txt", "Sample.txt");

    But when I try to upload a file of 3 MB it results in an exception:

        An unhandled exception of type 'Google.GData.Client.GDataRequestException'
        occurred in Google.GData.Client.dll
        Additional information: Execution of request failed:
        http://docs.google.com/feeds/documents/private/full

    How can I upload large files to Google Docs? I am using Google API ver 2. Thanks

    Read the article

  • Expanding a varchar column is very slow. Why?

    - by francs
    Hi, we need to modify a column of a big product table. Usually such DDL statements execute fast, but this one takes about 10 minutes, and I'd like to know why. I just want to expand a varchar column. The details follow.

    Table size:

        wapreader_log=> select pg_size_pretty(pg_relation_size('log_foot_mark'));
         pg_size_pretty
        ----------------
         5441 MB
        (1 row)

    Table DDL:

        wapreader_log=> \d log_foot_mark
              Table "wapreader_log.log_foot_mark"
           Column    |            Type             | Modifiers
        -------------+-----------------------------+-----------
         id          | integer                     | not null
         create_time | timestamp without time zone |
         sky_id      | integer                     |
         url         | character varying(1000)     |
         refer_url   | character varying(1000)     |
         source      | character varying(64)       |
         users       | character varying(64)       |
         userm       | character varying(64)       |
         usert       | character varying(64)       |
         ip          | character varying(32)       |
         module      | character varying(64)       |
         resource_id | character varying(100)      |
         user_agent  | character varying(128)      |
        Indexes:
            "pk_log_footmark" PRIMARY KEY, btree (id)

    The ALTER:

        wapreader_log=> \timing
        Timing is on.
        wapreader_log=> ALTER TABLE wapreader_log.log_foot_mark
                        ALTER COLUMN user_agent TYPE character varying(256);
        ALTER TABLE
        Time: 603504.835 ms

    Read the article

  • IE downloads and installs CAB dialog popup upon every page refresh

    I have a signed CAB on an ASPX page and I am seeing the following inconsistent behavior; any insights would be highly appreciated. On some machines, the CAB is downloaded and installed on every page refresh. On a few of those machines, the IE "install CAB" dialog pops up on every page refresh, while on the others it pops up only once. Additional info:

    - The CAB contains a .NET DLL
    - The CAB is fairly large (around 30 MB), so the recurring download behavior is a pain
    - Target browsers are IE6 and IE7, and the behavior is common to both!

    Read the article

  • WiX: .NET 3.5 prerequisite

    - by Mike Pateras
    I have a WiX installer that I would like to check for .NET 3.5 and install it if it is not present. I have the following lines in my wixproj file:

        <BootstrapperFile Include="Microsoft.Net.Framework.3.5">
            <ProductName>.NET Framework 3.5</ProductName>
        </BootstrapperFile>
        <BootstrapperFile Include="Microsoft.Windows.Installer.3.1">
            <ProductName>Windows Installer 3.1</ProductName>
        </BootstrapperFile>

    When I create the installer, a DotNetFX35 folder is created, and in it are 4 different versions of .NET (including 3.5) and an installer file. I have two questions:

    1. How do I have it bring in only version 3.5 (so that the user doesn't have to install 100+ MB of files)?
    2. How do I tell WiX to package these files into the MSI file, so that the user only has to download 1 file?

    Read the article

  • Pre-load audio files at the client-side for later use

    - by awj
    I'm building an online test which plays audio (MP3) using the native audio player (i.e., not Flash-based). The test shows one question at a time and loads each subsequent question asynchronously. Some questions have an accompanying audio file, others don't, and the audio files can be several MB in size. So what I'm hoping to do is preload the audio files client-side at the start of the test and then move them into place when the relevant question comes up. So far I've tried loading an audio file into a QuickTime player; when that question comes up I use jQuery's clone(true) method to copy it into a part of the page which is displayed. However, when I do this the QuickTime player has to reload the audio file from source. The same is true for Windows Media Player. Does anyone have any suggestions as to how I can preload the audio client-side and then call it forward when needed?

    Read the article

  • Control SQL Server CLR Reserved Memory

    - by Ryu
    I've recently enabled CLR on my 64-bit SQL Server 2005 machine for about 3 procs. When I run the following query to gather some info on memory usage...

        select single_pages_kb + multi_pages_kb + virtual_memory_committed_kb as TotalMemoryUsage,
               virtual_memory_reserved_kb
        from sys.dm_os_memory_clerks
        where type = 'MEMORYCLERK_SQLCLR'

    ...I get 129 MB TotalMemoryUsage and 6.3 GB virtual memory reserved. The total memory of the machine is 21 GB. What does reserved virtual memory mean exactly, and how can I control the size that is allocated? 6 GB is overkill for what we're doing and the memory would be much better utilized by the sproc cache. I'm concerned this reserved memory will cause swapping to the page file. Please help me take back control of the memory! Thanks

    Read the article

  • Make Sphinx quiet (non-verbose)

    - by J. Pablo Fernández
    I'm using Sphinx through Thinking Sphinx in a Ruby on Rails project. When I create seed data, and all the time, it's quite verbose, printing this:

        using config file '/Users/pupeno/projectx/config/development.sphinx.conf'...
        indexing index 'user_delta'...
        collected 7 docs, 0.0 MB
        collected 0 attr values
        sorted 0.0 Mvalues, 100.0% done
        sorted 0.0 Mhits, 99.6% done
        total 7 docs, 159 bytes
        total 0.042 sec, 3749.29 bytes/sec, 165.06 docs/sec
        Sphinx 0.9.8.1-release (r1533)
        Copyright (c) 2001-2008, Andrew Aksyonoff

    for every record that is created or so. Is there a way to suppress that output?
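
    For what it's worth, that output comes from Sphinx's standalone indexer binary, which does have a quiet switch; whether a given Thinking Sphinx version forwards it is another question. A sketch of running it by hand:

        indexer --quiet --config /Users/pupeno/projectx/config/development.sphinx.conf user_delta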

    Read the article

  • Is there a free private Git repository?

    - by saturngod
    Currently I use http://www.codaset.com/ for a private repository. It's free, but it can't stay free forever. Codaset is a nice git repo and we can write blog and wiki entries there. I want to use a private repo for my private project. This isn't a commercial project or a big project. I also found http://www.projectlocker.com, but the user interface is quite poor. So, I want something like the Codaset or GitHub repos, free for at least 1 user and a 100 MB git repo.

    Read the article

  • What is an efficient way to erase substrings?

    - by Legend
    I have a long string and a list of <end-index, string> pairs like the following:

        long_sentence = "This is a long long long long sentence"
        indices = [[6, "is"], [8, "is a"], [18, "long"], [23, "long"]]

    An element [6, "is"] indicates that 6 is the end index of the word "is" in the string. I want to get the following string in the end:

        >>> print long_sentence
        This .... long ......... long sentence

    I tried an approach like this:

        temp = long_sentence
        for i in indices:
            temp = temp[:int(i[0]) - len(i[1])] + '.'*(len(i[1])+1) + temp[i[0]+1:]

    While this seems to be working, it is taking exceptionally long (more than 6 hours on 5000 strings inside a 300 MB file). Is there a way to speed this up?

    Read the article

  • On Solaris, what is the difference between cut and gcut?

    - by Chris J
    I recently came across this crazy script bug on one of my Solaris machines. I found that cut on Solaris skips lines from the files that it processes (or at least very large ones: 800 MB in my case).

        > cut -f 1 test.tsv | wc -l
        457030
        > gcut -f 1 test.tsv | wc -l
        840571
        > cut -f 1 test.tsv > temp_cut_1.txt
        > gcut -f 1 test.tsv > temp_gcut_1.txt
        > diff temp_cut_1.txt temp_gcut_1.txt | grep '[<]' | wc -l
        0

    My question is what the hell is going on with Solaris cut? My solution is updating my scripts to use gcut but... what the hell?

    Read the article

  • malloc hangs in Linux

    - by Rahul
    I am using Linux on a 16 GB machine with 2 quad-core CPUs. There are 8 processes doing some work (CPU-intensive/network I/O), of which 4 have a memory leak (these are test conditions, so no problem in having leaks here). The total space occupied by all processes is around 15.4 GB; only 200 MB is free in the system. Things are fine for some hours, but after that malloc hangs (in a process which doesn't have a memory leak). It's stuck for more than 4 minutes (note: the CPU is not at 100%, but I/O has gone up significantly). There is no problem in the hanged process itself (it has not corrupted memory). What is malloc doing? Is it trying to defragment, or building up swap space? I am using SUSE 10. Any pointers?

    Read the article

  • Dealing with a fat webservice in .NET 3.5 C#

    - by Chris M
    I'm dealing with an obese 3rd-party webservice that returns about 3 MB of data for a simple search; about 50% of the data in that response is junk. Would it make sense to remap this data to my own result object and ditch the response, so I'm storing 1-2 MB in memory for filtering and sorting rather than using the web response's own objects and using 2-4, or am I missing a point? So far I've been accessing the webservice from a separate project, using a new class to provide the interaction and to handle the persistence, so my project looks like this:

        |- Web (mvc2 proj)
        |- DAL (database/storage fluent-nhibernate)
        |- SVCGateway (interaction layer + webservice related models)
        |- Services
        --------------
        |- Tests
        |- Specs

    I'm trying to make the application behave fast, and I also need to store the result set temporarily in case a customer goes to view a product and wants to go back to the results (the service returns only 500 of a possible 14K results). So basically I'm looking for confirmation that I'm doing the right thing in pushing the results into my own objects, or if I'm breaking some rule, or even if there's a better way of handling it. Thanks

    Read the article

  • Comparing two xml files

    - by Ragini
    I have two large XML files, almost 1.4 MB each. I want to compare them and see the differing parts. I am using Linux. Is there any free tool which can do this for me, or any other technique? I used the "diff" command in Linux and tried to output the result to another file (diff file1.xml file2.xml > result.xml), but the resulting file showed "Could not parse the xml". However, it showed something on screen. I would like the differing part to be stored somewhere if possible (or at least I should be able to see it properly). Thanks, Ragini

    Read the article

  • Adding or reading contact fields in Symbian using J2ME

    - by learn
    I want to add or read contact fields. For example, I get the home telephone number like this:

        ContactList cList;
        Contact con;
        String no;
        if (cList.isSupportedAttribute(Contact.TEL, Contact.ATTR_HOME)) {
            con.addString(Contact.TEL, Contact.ATTR_HOME, no);
        }

    and the mobile number like this:

        if (cList.isSupportedAttribute(Contact.TEL, Contact.ATTR_MOBILE)) {
            con.addString(Contact.TEL, Contact.ATTR_MOBILE, mb);
        }

    Now I want to get fields such as internet telephone, push to talk, mobile (home), mobile (business), DTMF, shareview, SIP, children, spouse, and some more. Please help me. Thanks in advance.
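
    Since several of those fields are vendor extensions rather than standard PIM fields, one way to see exactly what a given Symbian device exposes is to ask the contact list itself. A sketch using the JSR-75 PIM API (class and output names are illustrative; not tested on a device):

        import javax.microedition.pim.ContactList;
        import javax.microedition.pim.PIM;
        import javax.microedition.pim.PIMException;

        public class FieldProbe {
            // Print every field and attribute this device's contact list supports.
            static void listSupportedFields() throws PIMException {
                ContactList cList = (ContactList) PIM.getInstance()
                        .openPIMList(PIM.CONTACT_LIST, PIM.READ_ONLY);
                int[] fields = cList.getSupportedFields();
                for (int i = 0; i < fields.length; i++) {
                    System.out.println(cList.getFieldLabel(fields[i]));
                    int[] attrs = cList.getSupportedAttributes(fields[i]);
                    for (int j = 0; j < attrs.length; j++) {
                        System.out.println("  attr: " + cList.getAttributeLabel(attrs[j]));
                    }
                }
                cList.close();
            }
        }

    Field constants the device reports beyond the standard Contact ones are vendor-specific extended fields, which is where entries like SIP or push-to-talk usually live.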

    Read the article

  • Where does Subversion physically store its database?

    - by Mika Jacobi
    After reading many introductions, getting-started guides, and documentation on SVN, I still cannot figure out where my versioning data is stored. I mean physically. I have over 3 GB of code checked in, and the repo is just a few MB large. This is still voodoo for me, and as a coder I don't really believe in magic. EDIT: A contributor stated that not all the code was stored in the repo. Is that true? I mean, if I delete my local working copy, I can still get back my source code from the repository... If so, I still can't understand how such compression can occur on my code...

    Read the article

  • Does multithreading in Java take this much time for task completion?

    - by Geeta
    I have to search for a string in 10 large files (in zip format, 70 MB each) and print the lines containing the search string to 10 corresponding output files (i.e., file1's output should go to output_file1, file2's to output_file2, and so on). The program takes 15 minutes for a single file. But if I use 10 threads to read the 10 files and write to the 10 different files, it should still complete in about 15 minutes, yet it takes 40 minutes. How can I solve this? Or does multithreading simply take this much time?
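
    For reference, a minimal sketch of the fan-out described above (file names and the search routine are placeholders, and the pool size is an assumption). If threads make the job slower, the usual suspect is that all ten threads compete for the same disk and decompression CPU, so capping the pool below the file count is worth trying:

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class ZipGrep {
            public static void main(String[] args) throws InterruptedException {
                // One task per input file; the pool caps how many run at once.
                ExecutorService pool = Executors.newFixedThreadPool(4);
                for (int i = 1; i <= 10; i++) {
                    final int n = i;
                    pool.execute(new Runnable() {
                        public void run() {
                            // placeholder: scan file<n>.zip, write matches to output_file<n>
                            searchFile("file" + n + ".zip", "output_file" + n);
                        }
                    });
                }
                pool.shutdown();
                pool.awaitTermination(2, TimeUnit.HOURS);
            }

            static void searchFile(String zipName, String outName) {
                // hypothetical: stream the zip entries line by line and
                // print lines containing the search string to outName
            }
        }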

    Read the article

  • Advantage to parsing Excel Spreadsheet data vs. CSV?

    - by john
    I have tabulated data in an Excel spreadsheet (the file will likely never be larger than 1 MB). I want to use PHP to parse the data and insert it into a MySQL database. Is there any advantage to keeping the file as .xls/.xlsx and parsing it with a PHP Excel-parsing library? If so, what are some good libraries to use? Obviously, I can save the .xls/.xlsx as a CSV and handle the file that way. Thanks!

    Read the article

  • Full GC real time is much greater than user+sys time

    - by Stas
    Hi. We have a Java-based web application running on JBoss with a maximum heap size of about 1.2 GB (total machine physical memory is 2 GB). At some point the application stops responding (to clients) for several minutes. After some analysis we found that the culprit is the Full GC. Here's an excerpt from the verbose GC log:

        74477.402: [Full GC [PSYoungGen: 3648K->0K(332160K)]
            [PSOldGen: 778476K->589497K(819200K)] 782124K->589497K(1151360K)
            [PSPermGen: 102671K->102671K(171328K)], 646.1546860 secs]
            [Times: user=3.84 sys=3.72, real=646.17 secs]

    What I don't understand is how the real time spent on the Full GC can be about 11 minutes (646 seconds) while the user+sys time is just 7.5 seconds. 7.5 seconds sounds like a much more logical amount of time for cleaning <200 MB from the old generation. Where does all the other time go? Thanks a lot.
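
    A hedged reading, not a confirmed diagnosis: real time far exceeding user+sys means the collector spent its time waiting rather than computing, and on a 2 GB machine running JBoss with a 1.2 GB heap the classic culprit is paging: if part of the heap has been swapped out, a Full GC touches every page of it and stalls on the disk. One quick check, assuming Linux, is to watch swap activity while the pause happens:

        vmstat 5    # nonzero si/so columns during the Full GC point to swapping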

    Read the article
