Search Results

Search found 4726 results on 190 pages for 'ram jaiswal'.

  • Huge page buffer vs. multiple simultaneous processes

    - by Andrei K.
    One of our customers has a 35 GB database with an average active connection count of about 70-80. Some tables in the database have more than 10M records per table. Now they have bought a new server: 4 * 6 cores = 24 CPU cores, 48 GB RAM, 2 RAID controllers with 256 MB cache and 8 SAS 15K HDDs each, 64-bit OS. I'm wondering what would be the fastest configuration: 1) FB 2.5 SuperServer with a huge buffer of 8192 * 3,500,000 pages = 29 GB, or 2) FB 2.5 Classic with a small buffer of 1000 pages. Maybe someone has tested such a case before and will save me days of work :) Thanks in advance.
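
    For what it's worth, option 1 comes down to one setting; a hedged sketch of the relevant firebird.conf line (the value is the page count from the question, not a recommendation). Note that SuperServer shares its page cache across all attachments, while Classic allocates a cache per connection, so a large value multiplies per connection.

        # firebird.conf (SuperServer): one shared page cache for all attachments
        # 3,500,000 pages x 8 KB page size is roughly 29 GB
        DefaultDbCachePages = 3500000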

  • SQL Query - 20mil records - Best practice to return information

    - by eqiz
    I have a SQL database with the following table: PhoneRecords, with columns ID (identity seed), FirstName, LastName, PhoneNumber and ZipCode. A very simple, straightforward table, but it has over 20 million records. I am looking for the best way to run queries that pull records out of the table by area code. For instance, here is an example query that I have done:

        SELECT phonenumber, firstname
        FROM [PhoneRecords]
        WHERE (phone LIKE '2012042%')
           OR (phone LIKE '2012046%')
           OR (phone LIKE '2012047%')
           OR (phone LIKE '2012083%')
           OR (phone LIKE '2012088%')
           OR (phone LIKE '2012841%')

    As you can see this is an ugly query, but it would get the job done (if I weren't running into timeout issues). Can anyone tell me the best way, for speed/optimization, to do the above query and display the results? Currently the query above takes around 2 hours to complete on 9 GB of 1600 MHz RAM with an i7 930 quad core OC'd to 4.01 GHz. I obviously have the computing power required for such a query, but it still takes too long.
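
    Since every pattern here is a leading prefix, LIKE stays sargable; a hedged sketch (index and column names follow the table above, and it assumes the query's "phone" means the PhoneNumber column):

        -- One index seek per prefix instead of a 20M-row scan; INCLUDE makes
        -- the index covering, so no key lookups are needed.
        CREATE NONCLUSTERED INDEX IX_PhoneRecords_PhoneNumber
            ON dbo.PhoneRecords (PhoneNumber)
            INCLUDE (FirstName);

        SELECT PhoneNumber, FirstName
        FROM dbo.PhoneRecords
        WHERE PhoneNumber LIKE '2012042%'
           OR PhoneNumber LIKE '2012046%'
           OR PhoneNumber LIKE '2012047%';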

  • Python json memory bloat

    - by Anoop
        import json
        import time
        from itertools import count

        def keygen(size):
            for i in count(1):
                s = str(i)
                yield '0' * (size - len(s)) + str(s)

        def jsontest(num):
            keys = keygen(20)
            kvjson = json.dumps(dict((keys.next(), '0' * 200) for i in range(num)))
            kvpairs = json.loads(kvjson)
            del kvpairs  # Not required. Just to check if it makes any difference
            print 'load completed'

        jsontest(500000)

        while 1:
            time.sleep(1)

    Linux top indicates that the Python process holds ~450 MB of RAM after completion of the 'jsontest' function. If the call to 'json.loads' is omitted, this issue is not observed. A gc.collect() after the function's execution does release the memory. It looks like the memory is not held in any caches or in Python's internal memory allocator, since an explicit call to gc.collect() releases it. Is this happening because the threshold for garbage collection (700, 10, 10) was never reached? I did put some code after jsontest to simulate the threshold, but it didn't help.
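
    A small hedged experiment along the lines the question describes (Python 2, matching the code above): force a collection right after the load and inspect the collector's counters to see whether the thresholds were ever in play.

        import gc

        jsontest(500000)
        print gc.collect()        # number of unreachable objects reclaimed
        print gc.get_count()      # current allocation counts per generation
        print gc.get_threshold()  # defaults to (700, 10, 10)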

  • Debug Linux kernel pre-decompression stage

    - by Shawn J. Goff
    I am trying to use GDB to debug a Linux kernel zImage before it is decompressed. The kernel is running on an ARM target and I have a JTAG debugger connected to it with a GDB server stub. The target has to load a boot loader. The boot loader reads the kernel image from flash and puts it in RAM at 0x20008000, then branches to that location. I have started GDB and connected to the remote target, then I use GDB's add-symbol-file command like so:

        add-symbol-file arch/arm/boot/compressed/vmlinux 0x20008000 -readnow

    When I set a breakpoint for that address, it does trap at the correct place - right when it branches to the kernel. However, GDB shows the wrong line from the source of arch/arm/boot/compressed/head.S. It's 4 lines behind. How can I fix this? I have also tried adding the "-s section addr" option to add-symbol-file, with -s .start 0x20008000; this results in exactly the same problem.

  • How to store images efficiently (memory-wise) while still being able to process them

    - by Sheeo
    I'm working on a Silverlight project where users get to create their own collages. The problem: when loading images into memory, I'm using BitmapImage so that they can be displayed directly with the Image control, but the images are locked in afterwards. So I've tried storing them separately as well, but that just sucks up huge amounts of RAM. In short: is there a class that will let me store JPEG images, show them with the Image control, and still export them afterwards? All this needs to be efficient, i.e. I'd rather avoid copying to ARGB arrays or using WriteableBitmap to copy them over. I need to work with large collections of images, up to 300 at most. Any help appreciated!
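
    One hedged approach (illustrative class and member names, not a known Silverlight class): keep only the compressed JPEG bytes as the source of truth and build a BitmapImage on demand for display; the same byte array is what gets exported later.

        using System.IO;
        using System.Windows.Media.Imaging;

        // Sketch: decode only when a bitmap is actually needed on screen.
        public class CollageImage
        {
            private readonly byte[] jpegBytes;   // compressed source of truth

            public CollageImage(byte[] jpegBytes) { this.jpegBytes = jpegBytes; }

            public BitmapImage CreateBitmap()
            {
                var bmp = new BitmapImage();
                using (var ms = new MemoryStream(jpegBytes))
                    bmp.SetSource(ms);           // Silverlight's BitmapSource.SetSource
                return bmp;
            }

            public byte[] ExportBytes() { return jpegBytes; }
        }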

  • CPU Usage relative to number of users? - ASP.Net Application

    - by soldieraman
    My ASP.NET application uses 25-30% of the CPU on a test server that has 600 MB of RAM. I can see the asp_wb process taking that percentage of CPU, and this is when I am testing with a single user. How many users can the server handle before falling over? Is there a relationship between CPU usage and the number of users, i.e. if there are 2 users, will my application's CPU usage sky-rocket to 60%? Or how does (or should) the server handle this?

  • Need help with Drupal bulk mail low open rate for legitimate mailing list

    - by Ron Williams
    I've moved from Constant Contact to Drupal Simplenews/Mimemail/SMTP. Previously the open rate was around 50% with Constant Contact, but now it's 4-5% for the same list via the mentioned setup. Mail is getting out from the server, but it's having an issue anyway. Here's the setup:
    - The e-mail list consists of approximately 80,000 addresses, queued at 10,000 e-mails per cron run (which runs hourly).
    - The server is a Dual Core2Quad machine with 2 GB of RAM.
    - When mail is being sent, the mail queue will usually go up to ~1000 at the beginning of the hour before reducing to ~250 by the time the next cron run occurs.
    - The newsletter is themed to display a custom newsletter style on send.
    - The newsletter is received by some, but appears to be bounced by many (based on the low open rate).
    - I've added SPF, DomainKeys, and a PTR record to the DNS.
    - The server hostname (listed in the PTR) is different from the hosted domain.
    - Very low spam number via SpamAssassin.
    - IP and domain are not blacklisted.
    - Mail goes out via the SMTP module on delivery.
    Any ideas?
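
    One thing worth double-checking, given the hostname/domain mismatch mentioned above: receivers often compare the sending IP's PTR, the HELO name, and the SPF record. A hedged zone-file sketch with illustrative names and addresses:

        ; the sending IP, its reverse record, and SPF should all agree
        mail.example.com.            IN A    203.0.113.25
        25.113.0.203.in-addr.arpa.   IN PTR  mail.example.com.
        example.com.                 IN TXT  "v=spf1 a mx ip4:203.0.113.25 ~all"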

  • Which is faster when animating the UI: a Control or a Picture?

    - by Christopher Walker
    I'm working with and testing on a computer built with the following: 1 GB RAM (now 1.5 GB), 1.7 GHz Intel Pentium processor, ATI Mobility Radeon X600 graphics. I need to scale / transform controls and make them flow smoothly. Currently I'm manipulating the size and location of a control every 24-33 ms (30 fps), ±3 px. When I add a 'fade' effect to an image, it fades in and out smoothly, but it is only 25x25 px in size. The control is 450x75 px to 450x250 px in size. In 2D games such as Bejeweled 3, the sprites animate with no choppy animation. So, as the title would suggest: which is easier/faster on the processor, animating a bitmap (rendering it to the parent control during animation) or animating the control itself?

  • Why does tokyo tyrant slow down exponentially even after adjusting bnum?

    - by HenryL
    Has anyone successfully used Tokyo Cabinet / Tokyo Tyrant with large datasets? I am trying to upload a subgraph of the Wikipedia datasource. After hitting about 30 million records, I get exponential slowdown. This occurs with both the HDB and BDB databases. I adjusted bnum to 2-4x the expected number of records for the HDB case with only a slight speedup. I also set xmsiz to 1 GB or so, but ultimately I still hit a wall. It seems that Tokyo Tyrant is basically an in-memory database, and after you exceed xmsiz or your RAM, you get a barely usable database. Has anyone else encountered this problem before? Were you able to solve it?
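
    For reference, both tunings can be applied when the database is opened; a hedged example of a ttserver invocation with illustrative values (bnum is fixed when a hash database file is created, so it has to be in effect before the load or applied with an optimize pass):

        ttserver -port 1978 "casket.tch#bnum=120000000#xmsiz=2147483648"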

  • [Ruby] How can I randomly iterate through a large Range?

    - by void
    I would like to randomly iterate through a range. Each value will be visited only once and all values will eventually be visited. For example:

        (0..9).sort_by{rand}.map{|x| f(x)}

    where f(x) is some function that operates on each value. A Fisher-Yates shuffle could be used to increase efficiency, but this code is sufficient for many purposes. My problem is that sort_by will transform the range into an array, which is not cool because I am working with astronomically large numbers. Ruby will quickly consume a large amount of RAM trying to create a monstrous array. This is also why the following code will not work:

        tried = {} # store previous attempts
        bigint = 99**99
        bigint.times {
          x = rand(bigint)
          redo if tried[x]
          tried[x] = true
          f(x) # some function
        }

    This code is very naive and quickly runs out of memory as tried obtains more entries. What sort of algorithm can accomplish what I am trying to do?
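
    One memory-constant alternative is to iterate a full-period linear congruential generator over the range, which visits every value exactly once in a scrambled order. A minimal sketch (constants chosen to satisfy the Hull-Dobell full-period conditions for a power-of-two modulus; values outside the range are skipped by "cycle walking"):

        def each_in_random_order(n)
          m = 1
          m <<= 1 while m < n          # smallest power of two >= n
          a = 6364136223846793005      # a % 4 == 1 -> full period mod 2^k
          c = 1442695040888963407      # any odd increment works
          x = rand(m)                  # random starting point
          m.times do
            yield x if x < n           # skip values outside 0...n
            x = (a * x + c) % m
          end
        end

        each_in_random_order(10) { |x| f(x) }   # calls f on a permutation of 0..9

    The ordering is nowhere near cryptographically random, but memory use stays O(1) no matter how large the range is.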

  • How Two Programs Can Talk To Each Other In Java?

    - by Arnon
    My first time here... I want to reduce the CPU usage / ROM usage / RAM usage (in general, all the system resources that my app uses; who doesn't? :)). For this reason I want to split the preferences window from the rest of the application, and let the preferences window run as an independent program. The preferences program should write to a property file (not a problem at all) and send an "update signal" to the main program, meaning it should call the update method (that I wrote) found in the Main class. How can I call the update method in the main program from the preferences program? Or, on the other hand, is there a way to build a preferences window that takes system resources only while it is shown? Is this approach of separating programs and letting them talk to each other (somehow) the right approach for speeding up my programs? tnx
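
    A minimal sketch of one common answer: a loopback socket. The main program listens on a local port and calls its existing update method when the preferences program pokes it; the port, class, and method names here are illustrative.

        import java.io.*;
        import java.net.*;

        // Main-program side: a tiny listener thread.
        class UpdateListener extends Thread {
            public void run() {
                try {
                    ServerSocket server = new ServerSocket(54321, 0,
                            InetAddress.getByName("127.0.0.1"));
                    while (true) {
                        Socket s = server.accept();
                        BufferedReader in = new BufferedReader(
                                new InputStreamReader(s.getInputStream()));
                        if ("update".equals(in.readLine()))
                            Main.update();   // the existing update method
                        s.close();
                    }
                } catch (IOException e) { e.printStackTrace(); }
            }
        }

        // Preferences-program side, after saving the property file.
        class UpdateNotifier {
            static void send() throws IOException {
                Socket s = new Socket("127.0.0.1", 54321);
                new PrintWriter(s.getOutputStream(), true).println("update");
                s.close();
            }
        }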

  • Creating huge images

    - by David Rutten
    My program has a feature to export a hi-res image of the working canvas to disk. Users will frequently try to export images of about 20,000 x 10,000 pixels @ 32 bpp, which equals about 800 MB. Add that to the serious memory consumption already going on in your average 3D CAD program and you'll pretty much guarantee an out-of-memory crash on 32-bit platforms. So now I'm exporting tiles of 1000x1000 pixels, which the user has to stitch together afterwards in a pixel editor. Is there a way I can solve this problem without the user doing any work? I figured I could probably write a small exe that gets command-lined into the process and performs the stitching automatically. It would be a separate process and would thus have 2 GB of RAM all to itself. Or is there a better way still? I'd like to support jpg, png and bmp, so writing the image as a byte stream to disk is not really possible.
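
    For the bmp case specifically, stitching can be done without ever holding the full image: BMP pixel data is uncompressed, so the output can be streamed one band (one row of tiles) at a time. A hedged C# sketch, with hypothetical tile file names, that writes a top-down 32 bpp BMP by using a negative height:

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.IO;
        using System.Runtime.InteropServices;

        class TileStitcher
        {
            static void Main()
            {
                int tile = 1000, cols = 20, rows = 10;        // 20000 x 10000 output
                int w = tile * cols, h = tile * rows;
                uint pixelBytes = (uint)((long)w * h * 4);
                using (var bw = new BinaryWriter(File.Create("out.bmp")))
                {
                    bw.Write((ushort)0x4D42);                 // 'BM'
                    bw.Write(54 + pixelBytes);                // file size
                    bw.Write(0u); bw.Write(54u);              // reserved, data offset
                    bw.Write(40u); bw.Write(w); bw.Write(-h); // info header, top-down rows
                    bw.Write((ushort)1); bw.Write((ushort)32);
                    bw.Write(0u); bw.Write(pixelBytes);       // BI_RGB, image size
                    bw.Write(0); bw.Write(0); bw.Write(0u); bw.Write(0u);

                    var line = new byte[w * 4];               // one output scanline
                    for (int br = 0; br < rows; br++)         // one band of tiles at a time
                    {
                        var bmps = new Bitmap[cols];
                        var data = new BitmapData[cols];
                        for (int c = 0; c < cols; c++)
                        {
                            bmps[c] = new Bitmap(string.Format("tile_{0}_{1}.png", br, c));
                            data[c] = bmps[c].LockBits(new Rectangle(0, 0, tile, tile),
                                ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
                        }
                        for (int y = 0; y < tile; y++)        // emit the band's scanlines
                        {
                            for (int c = 0; c < cols; c++)
                                Marshal.Copy(IntPtr.Add(data[c].Scan0, y * data[c].Stride),
                                    line, c * tile * 4, tile * 4);
                            bw.Write(line);
                        }
                        for (int c = 0; c < cols; c++)
                        {
                            bmps[c].UnlockBits(data[c]);
                            bmps[c].Dispose();
                        }
                    }
                }
            }
        }

    Peak memory is one decoded band (about 80 MB here) instead of 800 MB. For jpg and png you would need an encoder that accepts scanlines incrementally, so for those the separate stitcher process from the question remains the simpler route.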

  • What information do you capture when your software crashes in the field?

    - by Russ
    I am working on rewriting my unexpected-error-handling process, and I would like to ask the community: what information do you capture, both automatically and manually, when software you have written crashes? Right now, I capture a few items, some of which are:

    Automatic:
    - Name of the app that crashed
    - Version of the app that crashed
    - Stack trace
    - Operating system version
    - RAM used by the application
    - Number of processors
    - Screen shot (only on non-public applications)
    - User name and contact information (from Active Directory)

    Manual:
    - What context the user is in (i.e. what company, tech support call number, RA number, etc...)
    - What did the user expect to happen? (Typical response: "Not to crash")
    - Steps to reproduce

    What other bits of information do you capture that help you discover the true cause of an application's problem, especially given that most users simply mash the keyboard when asked to tell you what happened? For the record I'm using C#, WPF and .NET version 4, but I don't necessarily want to limit myself to those.
    Related: http://stackoverflow.com/questions/1226671/what-to-collect-information-when-software-crashes
    Related: http://stackoverflow.com/questions/701596/what-should-be-included-in-the-state-of-the-art-error-and-exception-handling-stra
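
    A hedged sketch of collecting the automatic items with standard .NET APIs (where the report gets sent is left out; writing to stderr stands in for that):

        using System;
        using System.Diagnostics;
        using System.Reflection;

        static class CrashReporter
        {
            public static void Install()
            {
                AppDomain.CurrentDomain.UnhandledException += (s, e) =>
                {
                    var ex  = (Exception)e.ExceptionObject;
                    var app = Assembly.GetEntryAssembly().GetName();
                    Console.Error.WriteLine("App:     " + app.Name);
                    Console.Error.WriteLine("Version: " + app.Version);
                    Console.Error.WriteLine("OS:      " + Environment.OSVersion);
                    Console.Error.WriteLine("CPUs:    " + Environment.ProcessorCount);
                    Console.Error.WriteLine("RAM:     " +
                        Process.GetCurrentProcess().WorkingSet64 + " bytes");
                    Console.Error.WriteLine("User:    " + Environment.UserName);
                    Console.Error.WriteLine("Stack:   " + ex);
                };
            }
        }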

  • Windows Azure local development environment speed

    - by Paperjam
    I've started porting an existing ASP.NET web app to Windows Azure and have noticed that the development process is really slow. Each time I make a change to my code and want to view it, I have to effectively redeploy it to the local dev cloud (using Start Debugging (F5) or Start Without Debugging (Ctrl-F5)). The process itself takes over a minute, during which time Visual Studio is completely unresponsive. Am I doing something wrong, or is that simply how things are when developing for Azure? My specs:
    - Visual Studio 2008 9.0.30729.1 SP
    - 5 projects running on .NET 3.5 SP1
    - Azure SDK 1.1 (February 2010)
    - Single instance of a single web role
    - Dual-core AMD 64 machine with 8 GB RAM, 64-bit Windows 7, fully patched
    The main project itself is quite large (3k files, ~200k lines) but compiles normally in 10-15 seconds.

  • Updating table takes very long time

    - by rrejc
    Hi all, I have a table in MS SQL Server 2008 (SP2) containing 30 million rows, with a table size of 150 GB. There are a couple of int columns and two nvarchar(max) columns: one containing text (1-30,000 characters) and one containing XML (up to 100,000 characters). The table doesn't have any primary keys or indexes (it is a staging table). So at the moment I am running the query:

        UPDATE [dbo].[stage_table]
        SET [column2] = SUBSTRING([column1], 1, CHARINDEX('.', [column1]) - 1);

    The query has been running for 3 hours (and is still not completed), which I think is too long. Is it? I can see a constant read rate of 5 MB/s and a write rate of 10 MB/s to the .mdf file. How can I find out why the query is running so long? The "server" is an i7 with 24 GB of RAM and SATA disks in RAID 10. Many thanks!
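
    One hedged variant worth trying, assuming [column2] starts out NULL: batch the update so each transaction (and its log usage) stays small. The extra predicate also guards against rows with no '.' at all, which would make the original SUBSTRING call fail with an invalid-length error.

        WHILE 1 = 1
        BEGIN
            UPDATE TOP (50000) [dbo].[stage_table]
            SET [column2] = SUBSTRING([column1], 1, CHARINDEX('.', [column1]) - 1)
            WHERE [column2] IS NULL
              AND CHARINDEX('.', [column1]) > 0;

            IF @@ROWCOUNT = 0 BREAK;
        END

    With no index at all, each batch still has to scan for remaining work, so even a temporary index on a key column helps the loop find its next batch quickly.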

  • A 110 KB .NET 4.0 app needs 10 seconds for a cold start - that's not acceptable!

    - by msfanboy
    Hello, I am using the .NET 4.0 client profile for my app, and I run a dual core with 4 GB RAM and a fast hard disk. Nothing big is done at the start, just showing a generic List in a WPF ListView. How can I make the cold start of my assembly faster? I have now done another cold start, running the windowsapplication.exe in my \obj\x86\Debug folder; my hard disk ran like hell and it took 10.5 seconds. What is wrong? The warm start after the cold one took 1 second. Java 6 apps don't have that problem, not at all, just to compare...
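
    One thing that usually helps a .NET cold start is pre-compiling the assembly with NGen so JIT work is off the startup path; a hedged example using the standard .NET 4 framework location (run from an elevated prompt):

        %WINDIR%\Microsoft.NET\Framework\v4.0.30319\ngen.exe install windowsapplication.exe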

  • Allocated Private Bytes keeps going up in one computer but not the other

    - by Jacob
    OK, this may sound weird, but here goes. There are two computers, A (Pentium D) and B (Quad Core), with almost the same amount of RAM. If I run the same code on both computers, the allocated private bytes on A never go down, resulting in a crash later on. On B it looks like the private bytes are constantly deallocated and everything looks fine. On both computers, the working set is deallocated and allocated similarly. Could this be an issue with manifests or system DLLs? I'm clueless. Note: I observed the utilized memory with Process Explorer.

  • Mono 2.10.5 Runtime error on Ubuntu 11.10

    - by johnluetke
    I've installed mono-runtime via apt in order to run my Mono console application on Ubuntu via SSH. However, when I run the command "mono myapp.exe", it exits with no message and my program does nothing. If I pass the -v switch to Mono, as in "mono -v myapp.exe", I get about 10k lines of output (as expected, -v is verbose), with the first few lines being:

        converting method System.OutOfMemoryException:.ctor (string)
        Method System.OutOfMemoryException:.ctor (string) emitted at 0xb7052c28 to 0xb7052c4b (code length 35) [myapp.exe]
        converting method (wrapper runtime-invoke) <Module>:runtime_invoke_void__this___object (object,intptr,intptr,intptr)
        Method (wrapper runtime-invoke) <Module>:runtime_invoke_void__this___object (object,intptr,intptr,intptr) emitted at 0xb7052c68 to 0xb7052cf6 (code length 142) [myapp.exe]
        converting method System.SystemException:.ctor (string)

    I read this as the runtime throwing an OutOfMemory exception, but the machine is under no intense load, has plenty of available RAM, and is running nothing other than system processes. I've removed and reinstalled Mono countless times, and have even run the executable on other machines perfectly fine. Am I missing something completely obvious here?

  • Android -- Object Creation/Memory Allocation vs. Performance

    - by borg17of20
    Hello all, this is probably an easy one. I have about 20 TextViews/ImageViews in my current project that I access like this:

        ((TextView)multiLayout.findViewById(R.id.GameBoard_Multi_Answer1_Text)).setText("");
        // or
        ((ImageView)multiLayout.findViewById(R.id.GameBoard_Multi_Answer1_Right)).setVisibility(View.INVISIBLE);

    My question is this: am I better off, from a performance standpoint, just assigning these to object variables? Further, am I losing some performance to the constant "search" process that goes on as part of the findViewById(...) method? (i.e. does findViewById(...) use some sort of hashtable/hashmap for look-ups, or does it implement an iterative search over the view hierarchy?) At present, my program never uses more than 2.5 MB of RAM, so will assigning 20 or so more object variables drastically affect this? I don't think so, but I figured I'd ask. Thanks.
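
    A minimal sketch of the caching approach (findViewById does walk the view hierarchy rather than consult a hash table, so the usual pattern is one lookup per view in onCreate; the activity and layout names here are illustrative):

        import android.app.Activity;
        import android.os.Bundle;
        import android.view.View;
        import android.widget.ImageView;
        import android.widget.TextView;

        public class GameBoardActivity extends Activity {
            private TextView answerText;     // looked up once, reused everywhere
            private ImageView answerRight;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.gameboard_multi);   // hypothetical layout
                answerText  = (TextView)  findViewById(R.id.GameBoard_Multi_Answer1_Text);
                answerRight = (ImageView) findViewById(R.id.GameBoard_Multi_Answer1_Right);
            }

            private void clearAnswer() {
                answerText.setText("");
                answerRight.setVisibility(View.INVISIBLE);
            }
        }

    Twenty extra object references cost a few hundred bytes, so the memory impact is negligible next to the repeated tree searches saved.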

  • How to calculate the number of address bits needed for memory?

    - by leon
    Hi, I am preparing for my computer systems exam. I don't quite understand how to calculate the number of address bits needed for a memory. For example: suppose that a 1G x 32-bit main memory is built using 256M x 4-bit RAM chips, and this memory is word-addressable. What is the number of address bits needed for a memory module? What is the number of address bits needed for the full memory? And if the memory were byte-addressable instead, what would the solutions be? Many thanks.
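
    A worked sketch of the usual approach (assuming a module is one bank of eight 4-bit chips side by side, giving 256M 32-bit words):

        per module:  256M words = 2^28 words  ->  28 address bits
        full memory: 1G words   = 2^30 words  ->  30 address bits
                     (the extra 2 bits select one of the 1G/256M = 4 modules)

        byte-addressable: each 32-bit word is 4 bytes
        per module:  2^28 words x 2^2 bytes = 2^30 bytes  ->  30 address bits
        full memory: 2^30 words x 2^2 bytes = 2^32 bytes  ->  32 address bits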

  • Tomcat application: Frequent OutOfMemory PermGen exception while image uploads

    - by rabbit
    Hi, I have a Tomcat 6 application for which I have set the parameters -Xms512m -Xmx1024m. I thought 1 GB of memory on a machine with 4 GB of RAM would be enough, but that is not the case. On stopping/starting the application multiple times (from the Tomcat manager), and also sometimes on image uploads, I run into the "java.lang.OutOfMemoryError: PermGen space" error and the site stops responding. Should I increase the memory still more? Is there anything else I can do from the Tomcat side so that it does not run into the PermGen space issue? Thanks in advance for pointers/tips etc.
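
    PermGen is sized separately from the heap set by -Xms/-Xmx, so raising it explicitly is the usual first step; a hedged example for $CATALINA_HOME/bin/setenv.sh with illustrative values:

        CATALINA_OPTS="-Xms512m -Xmx1024m -XX:PermSize=128m -XX:MaxPermSize=256m"
        export CATALINA_OPTS

    Repeated stop/start cycles that leak the webapp's classloader are a classic cause of PermGen exhaustion, so restarting Tomcat itself between redeploys, rather than only the application, also helps.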

  • UITableView with contact image problems.

    - by prathumca
    As per my app requirement, I'm showing the contact images in a UITableView as shown below:

        ABRecordRef contact = [self getContact];
        if (contact && ABPersonHasImageData(contact)) {
            UIImage *contactImage = [UIImage imageWithData:(NSData *)ABPersonCopyImageData(contact)];
            callImage.image = contactImage;
        }

    I have two problems if I use the above code segment:
    1. Table scrolling is too slow. If I comment out the above code, the table responds very fast.
    2. Memory management: my app starts using 25-30 MB of RAM.
    Is there any better way to avoid the above two problems?
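
    One concrete leak hiding in that snippet: ABPersonCopyImageData() follows the Core Foundation "Copy" rule, so the returned data must be released, or a full image is leaked per row. A hedged fix (caching the decoded UIImage per contact, so it is not rebuilt on every scroll pass, would be the follow-up step):

        NSData *imageData = (NSData *)ABPersonCopyImageData(contact);
        callImage.image = [UIImage imageWithData:imageData];
        [imageData release];   // ownership was transferred by the Copy function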

  • If my application doesn't use a lot of memory, can I ignore viewDidUnload:?

    - by iPhoneToucher
    My iPhone app generally uses under 5 MB of live memory, and even in the most extreme conditions stays under 8 MB. The iPhone 2G has 128 MB of RAM, and from what I've read an app should only expect to have 20-30 MB to use. Given that I never expect to get anywhere near the memory limit, do I need to care about memory warnings and setting objects to nil in viewDidUnload:? The only way I see my app getting memory warnings is if something else on the phone is messing with the memory, in which case the entire phone would be acting silly. I built my app without ever using viewDidUnload:, so there are more than a hundred classes that I'd need to inspect and add code to if I did need to implement it.

  • Amazon EC2 prices for Windows Instance?

    - by Abhishek Gupta
    Hello guys, I want to ask some Amazon cloud experts: is it profitable to deploy our web application on the Amazon cloud compared to a normal server? Currently there are micro, small, large and other instance types available. If we start from a micro instance and then realize that our app needs more CPU cycles and RAM, how can we dynamically move to the next, more powerful instance automatically at runtime? What is the approximate minimum yearly cost for a single EC2 Windows small instance? I want to deploy a simple online quiz application (ASP.NET-based) on the Amazon cloud, which can have a maximum of 500 users at a time. Please advise, as I am very new to the cloud. Should I go for Azure or Amazon?
