Search Results

Search found 4715 results on 189 pages for 'ram bhat'.

Page 142 of 189

  • How does functional programming work?

    - by Headcrab
    I'm used to imperative/OO programming (I know C, C++, Python, PHP, etc.). I want to get into functional programming, but some things are unclear to me. Take, for example, F# and Haskell: how do you implement loops? By recursion? Eew. What about conditions? How can you get by without variables? I mean... what do we have RAM for... storing variables, right?
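
    A minimal sketch of the idea, written in C# (one of the imperative languages the poster knows) rather than F# or Haskell, where this style is the default: the loop becomes a recursive call, the condition becomes an expression, and the "variables" become parameters that receive new values on each call instead of mutable slots.

        using System;

        class RecursionAsLoop
        {
            // Imperative equivalent: int acc = 0; for (int i = n; i > 0; i--) acc += i;
            // Functional style: state lives in parameters, not in mutable slots.
            static int SumTo(int n, int acc)
            {
                // The "condition" is just an expression; the "loop" is the tail call.
                return n == 0 ? acc : SumTo(n - 1, acc + n);
            }

            static void Main()
            {
                Console.WriteLine(SumTo(10, 0)); // prints 55
            }
        }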

    Read the article

  • Code throws std::bad_alloc: not enough memory, or can it be a bug?

    - by Andreas
    I am parsing with a pretty large grammar (1.1 GB; it's data-oriented parsing). The parser I use (bitpar) is said to be optimized for highly ambiguous grammars. I'm getting this error:

        terminate called after throwing an instance of 'std::bad_alloc'
          what(): St9bad_alloc
        dotest.sh: line 11: 16686 Aborted bitpar -p -b 1 -s top -u unknownwordsm -w pos.dfsa /tmp/gsyntax.pcfg /tmp/gsyntax.lex arbobanko.test arbobanko.results

    Is there hope? Does it mean that it has run out of memory? It uses about 15 GB before it crashes. The machine I'm using has 32 GB of RAM, plus swap. It crashes before outputting a single parse tree. The parser is an efficient CYK chart parser using bit-vector representations; I presume it is already near the limit of memory efficiency. If it really requires too much memory I could sample from the grammar rules, but that will of course decrease parse accuracy.

    Read the article

  • PagedDataSource does not support serialization - how can I enforce this?

    - by Darkyo
    It sounds like I want to override a law of physics, but it is the most reasonable (CPU-, disk-, and RAM-efficient) solution for my ASP.NET project. I have a PagedDataSource and a custom data reader that supports paginated data. The truth is my data live in a ViewState variable, because they are re-used in an UpdatePanel. When I try to use it in my PagedDataSource, ASP.NET 3.5 kills me with: 'System.Web.UI.WebControls.PagedDataSource' in Assembly 'System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' is not marked as serializable. Cool exception... So I'd rather not offend Newton, because I know he'll always win, but I need some help enforcing this PagedDataSource law that seems so unbelievable, unless someone has an explanation.
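
    A common workaround, sketched here with hypothetical names (MyPage, grid) rather than the poster's code: keep only the serializable rows in ViewState and rebuild the non-serializable PagedDataSource on every request, which is cheap to do.

        using System.Collections.Generic;
        using System.Web.UI.WebControls;

        public partial class MyPage : System.Web.UI.Page
        {
            protected GridView grid;   // hypothetical control, normally declared in markup

            // Only the serializable rows live in ViewState, never the PagedDataSource.
            private List<string> CachedRows
            {
                get { return (List<string>)ViewState["rows"]; }
                set { ViewState["rows"] = value; }
            }

            private void BindGrid(int pageIndex)
            {
                PagedDataSource pds = new PagedDataSource(); // cheap; rebuilt per request
                pds.DataSource = CachedRows;
                pds.AllowPaging = true;
                pds.PageSize = 10;
                pds.CurrentPageIndex = pageIndex;
                grid.DataSource = pds;
                grid.DataBind();
            }
        }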

    Read the article

  • What knowledge/expertise is required to port Android to a custom ARM device?

    - by Sunny
    Hi friends, I am working on a system that currently runs a Linux kernel with the Microwindows windowing system. The code for the current Linux system drivers is available to me. I want to port Android to it, just as a hobby project. Can you please tell me what understanding of the Linux kernel is required to port it? Please give me references (books, tutorials) to build up that understanding. Thanks, Sunny. P.S. I have a basic understanding of Linux. The device configuration is a 450 MHz ARM9, 64 MB RAM, 256 MB NAND, 480x272 resolution.

    Read the article

  • Finding the loading performance of a website

    - by pandora
    How do I measure site performance? There are tools like YSlow and Google's Speed Tracer that show the speed of a website. I have built a PHP LMS project with the Zend Framework, and everything is live. When a user posts content for a subject (maybe 200 KB in size) and submits it to the server, it is very slow, and sometimes the server goes down. I logged in to the server (PuTTY) and found that something was occupying a lot of resources: it was using the full memory of the server. When I cleared the resources, the site loaded fine. The site is on a dedicated server (4 GB RAM) alongside 3 more domains, and because of this LMS website all the websites go down. I need to check what is wrong with my website. How do I start?

    Read the article

  • Apache Prefork Configuration

    - by user1618606
    I'm a newbie at VPS configuration. I've installed Apache, PHP, and MySQL, and now I need to know how to configure prefork to optimize Apache. The system configuration is:

        CPU: 2 cores x 2 GHz
        RAM: 2304 MB DDR3 (burst memory 3 GB DDR3)
        Disk space: 30 GB SSD
        Bandwidth: 3 TB
        Switch port: 1 Gbps

    After Linux, MySQL, Apache, and PHP are running, 250 MB of memory is in use. I have no idea how to calculate the values. I saw on some websites variables like:

        KeepAlive On
        KeepAliveTimeout 1
        MaxKeepAliveRequests 100
        StartServers 15
        MinSpareServers 15
        MaxSpareServers 15
        MaxClients 20
        MaxRequestsPerChild 0

    or

        StartServers 2
        MaxClients 150
        MinSpareThreads 25
        MaxSpareThreads 75
        ThreadsPerChild 25
        MaxRequestsPerChild 0

    What should I do: prefork or worker? Where and how are the variables placed? In httpd.conf? Big hug, Claudio.
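
    For Apache with mod_php (the usual result of installing Apache and PHP together), prefork is the safe choice; the ThreadsPerChild block belongs to the worker MPM, which only makes sense when PHP runs outside Apache (e.g., via FastCGI). The directives go in httpd.conf or a file it includes. A rough sketch, under the assumption of roughly 2 GB left for Apache and a typical 30-50 MB per mod_php child; measure real per-child memory in top and adjust:

        # httpd.conf (or an included conf file), prefork MPM section.
        # Sizing assumption: ~2000 MB free / ~40 MB per child ≈ 50 children; start lower.
        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients           40
            MaxRequestsPerChild 2000   # recycle children so leaked memory is returned
        </IfModule>

        KeepAlive On
        KeepAliveTimeout 2             # short timeout so busy children free up quickly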

    Read the article

  • Convert a string data array to a list

    - by prince23
    Hi, I have a string array which contains data like this:

        5~kiran
        2~ram
        1~arun
        6~rohan

    A method returns the data as a string[]:

        public string[] Names()
        {
            return data.ToArray();
        }

        public class Person
        {
            public string Name { get; set; }
            public int Age { get; set; }
        }

        List<Person> persons = new List<Person>();
        string[] names = Names();

    Now I need to copy all the data from the string array into the list and finally bind it to a grid view: gridview.DataSource = persons. How can I do it? Is there any built-in method to do it? Thanks in advance, prince
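
    A minimal sketch of the conversion, assuming .NET 3.5+ for LINQ and reusing the poster's Person class, Names() method, and gridview control: split each "age~name" entry, project it into a List<Person>, then bind.

        using System.Collections.Generic;
        using System.Linq;

        // Inside the class that already defines Names(), Person, and gridview:
        string[] names = Names();
        List<Person> persons = names
            .Select(s => s.Split('~'))                  // "5~kiran" -> ["5", "kiran"]
            .Select(p => new Person { Age = int.Parse(p[0]), Name = p[1] })
            .ToList();

        gridview.DataSource = persons;                  // bind the typed list
        gridview.DataBind();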

    Read the article

  • Visual Studio 2010 "Not enough storage is available to process this command"

    - by Daniel Perez
    I'm fighting with VS 2010 and this error, which seems to have been very common in previous versions, though it looks like not everyone is having it in the latest one. I've got VS 2010 SP1 and I'm getting this error quite often. The problem is that restarting VS isn't even enough to make it go away; I usually have to restart my PC, and I'm losing a lot of time doing this (it's quite frequent). I've got Windows 7 32-bit (I can't upgrade to 64-bit; the company doesn't allow it), and I can't do things like creating another solution (please don't suggest that :) ). I've used the command to make devenv.exe LARGEADDRESSAWARE, but the error keeps happening. My virtual memory size is set to automatic, and the weird thing is that VS doesn't even take 2 GB of RAM, so I don't know if the error is really because it's lacking memory, or if it's some bug in the program. Any ideas, things to try, something?

    Read the article

  • Cassandra performance slows down with counter columns

    - by tubcvt
    I have a cluster (4 nodes), and each node has 16 cores and 24 GB of RAM:

        192.168.23.114 datacenter1 rack1 Up Normal 44.48 GB 25.00%
        192.168.23.115 datacenter1 rack1 Up Normal 44.51 GB 25.00%
        192.168.23.116 datacenter1 rack1 Up Normal 44.51 GB 25.00%
        192.168.23.117 datacenter1 rack1 Up Normal 44.51 GB 25.00%

    We use about 10 column families (counter columns) to produce some system statistics reports. The problem here is that when I set the replication_factor of this keyspace (which contains the 10 counter column families) from 1 to 2, the CPU usage of every node increases from 10% (with replication factor 1) to 90%. :( Who can help me work around this? Why do counter columns consume so much CPU time? Thanks, all.

    Read the article

  • Unable to install MySQL completely on Debian 5.0

    - by austin powers
    Hi, I've been trying for a couple of days to install MySQL on my VPS, which runs Debian 5.0 with 256 MB RAM. I've also installed Webmin. Here are the symptoms: after installing MySQL using either Webmin or apt-get, I try to connect to MySQL to change the root password, but every time I run into this error:

        ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

    So I started to investigate, and I found that there is no root user inside the mysql database. When I use:

        UPDATE user SET password=PASSWORD('newpassword') WHERE user="root";

    it says 0 rows affected. I have reinstalled MySQL several times, but the same problem still exists. Please help me install mysql-server as well as mysql-client correctly. Regards.

    Read the article

  • Unicorn: Which number of worker processes to use?

    - by blackbird07
    I am running a Ruby on Rails app on a virtual Linux server that is capped at 1 GB of RAM. Currently I am constantly hitting the limit and would like to optimize memory utilization. One option I am looking at is reducing the number of Unicorn workers. So what is the best way to determine the number of Unicorn workers to use? The current setting is 10 workers, but the maximum number of requests per second I have seen in Google Analytics Real-Time is 3 (hit only once, at a peak time; 99% of the time it does not go above 1 request per second). So is it a safe assumption that I can, for now, go with 4 workers, leaving room for unexpected bursts of requests? What metrics should I look at to determine the number of workers, and what tools can I use for that on my Ubuntu machine?

    Read the article

  • When compiling programs to run inside a VM, what should march and mtune be set to?

    - by Russ
    With a VM being a slave to whatever the host machine provides, what compiler flags should be passed to gcc? I would normally think that -march=native is what you would use when compiling for a dedicated box, but the level of fine detail that -march=native infers, as indicated in this article, makes me extremely wary of using it. So... what should -march and -mtune be set to inside a VM? For a specific example: my case right now is compiling Python (and more) in a Linux guest inside a KVM-based "cloud" host where I have no real control over the host hardware (aside from "simple" stuff like CPU GHz, CPU count, and available RAM). Currently, cpuinfo tells me I've got an "AMD Opteron(tm) Processor 6176", but I honestly don't know (yet) whether that is reliable, or whether the guest can get moved to different architectures on me to meet the host's infrastructure-shuffling needs (sounds hairy/unlikely). All I can really guarantee is my OS, which is a 64-bit Linux kernel where uname -m yields x86_64.

    Read the article

  • mounting without -o loop

    - by jumpinjoe
    Hi, I have written a dummy (RAM disk) block device driver for the Linux kernel. When the driver is loaded, I can see it as /dev/mybd. I can successfully transfer data onto it using the dd command and compare the copied data successfully. The problem is that when I create an ext2/3 filesystem on it, I have to use the -o loop option with the mount command. Otherwise mount fails with the following result:

        mount: wrong fs type, bad option, bad superblock on mybd, missing codepage or helper program, or other error

    What could be the problem? Please help. Thanks.

    Read the article

  • Huge page buffer vs. multiple simultaneous processes

    - by Andrei K.
    One of our customers has a 35 GB database with an average active connection count of about 70-80. Some tables in the database have more than 10M records per table. Now they have bought a new server: 4 x 6-core = 24 CPU cores, 48 GB RAM, 2 RAID controllers with 256 MB cache and 8 SAS 15K HDDs on each, and a 64-bit OS. I'm wondering what the fastest configuration would be: 1) FB 2.5 SuperServer with a huge buffer of 8192 * 3,500,000 pages = 29 GB, or 2) FB 2.5 Classic with a small buffer of 1000 pages. Maybe someone has tested such a case before and will save me days of work :) Thanks in advance.

    Read the article

  • How to estimate the memory needed by XPathDocument for a specific XML file

    - by bill seacham
    Is there any way to estimate the memory required to create an XPathDocument instance, based on the file size of the XML?

        XPathDocument xdoc = new XPathDocument(xmlfile);

    Is there any way to programmatically stop the creation of the XPathDocument if memory drops to a very low level? Since it loads the entire XML into memory, it would be nice to know ahead of time whether the XML is too big. What I have found is that when I create a new XPathDocument from a big XML file, an OutOfMemoryException is never thrown; instead the process slows to a crawl, only 5 MB of memory remains available, and Task Manager reports that it is not responding. This happened with a 266 MB XML file when there was 584 MB of RAM; I was able to load a 150 MB file with no problems in 18. After loading the XML, I want to run XPath queries using an XPathNavigator and an XPathNodeIterator. I am using .NET 2.0 on XP SP3.
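
    One way to refuse a load before it starves the machine is System.Runtime.MemoryFailPoint, available in .NET 2.0, which throws InsufficientMemoryException up front if the requested headroom is unlikely to be there. A minimal sketch; the 4x file-size estimate is an assumption to calibrate against your own documents:

        using System;
        using System.IO;
        using System.Runtime;
        using System.Xml.XPath;

        static class SafeLoader
        {
            public static XPathDocument LoadChecked(string xmlfile)
            {
                // Rough estimate: the in-memory document is some multiple of the
                // file size; the 4x factor here is an assumption, not a measurement.
                long fileMb = new FileInfo(xmlfile).Length / (1024 * 1024);
                int neededMb = (int)(fileMb * 4) + 1;
                try
                {
                    // Throws InsufficientMemoryException up front instead of
                    // letting the load grind the machine into the swap file.
                    using (new MemoryFailPoint(neededMb))
                    {
                        return new XPathDocument(xmlfile);
                    }
                }
                catch (InsufficientMemoryException)
                {
                    return null; // refuse the load; caller can fall back to XmlReader
                }
            }
        }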

    Read the article

  • Python json memory bloat

    - by Anoop
        import json
        import time
        from itertools import count

        def keygen(size):
            for i in count(1):
                s = str(i)
                yield '0' * (size - len(s)) + str(s)

        def jsontest(num):
            keys = keygen(20)
            kvjson = json.dumps(dict((keys.next(), '0' * 200) for i in range(num)))
            kvpairs = json.loads(kvjson)
            del kvpairs  # Not required. Just to check if it makes any difference
            print 'load completed'

        jsontest(500000)

        while 1:
            time.sleep(1)

    Linux top indicates that the Python process holds ~450 MB of RAM after completion of the jsontest function. If the call to json.loads is omitted, the issue is not observed. A gc.collect() after the function's execution does release the memory. It looks like the memory is not held in any caches or in Python's internal memory allocator, since an explicit call to gc.collect() releases it. Is this happening because the garbage-collection threshold (700, 10, 10) was never reached? I did put some code after jsontest to simulate the threshold, but it didn't help.

    Read the article

  • SQL query on 20 million records - best practice to return information

    - by eqiz
    I have a SQL database with the following table:

        Table: PhoneRecords
        ID (identity seed)
        FirstName
        LastName
        PhoneNumber
        ZipCode

    A very simple, straightforward table, but it has over 20 million records. I am looking for the best way to run queries that pull records out of the table by area code. For instance, here is an example query I have run:

        SELECT phonenumber, firstname
        FROM [PhoneRecords]
        WHERE (phonenumber LIKE '2012042%')
           OR (phonenumber LIKE '2012046%')
           OR (phonenumber LIKE '2012047%')
           OR (phonenumber LIKE '2012083%')
           OR (phonenumber LIKE '2012088%')
           OR (phonenumber LIKE '2012841%')

    As you can see, this is an ugly query, but it would get the job done (if I weren't running into timeout issues). Can anyone tell me the best way, for speed/optimization, to do the above query and display the results? Currently the query above takes around 2 hours to complete on a machine with 9 GB of 1600 MHz RAM and an i7 930 quad-core overclocked to 4.01 GHz. I obviously have the computing power required for such a query, but it still takes too long.

    Read the article

  • CPU usage relative to number of users? - ASP.NET application

    - by soldieraman
    My ASP.NET application uses 25-30% of the CPU on a test server that has 600 MB of RAM. I can see the aspnet_wp process taking that percentage of CPU, and this is when I am testing with one user. How many users can the server handle before falling over? Is there a relationship between CPU usage and the number of users, i.e., if there are 2 users, will my application skyrocket to 60% usage? And how does (or should) the server handle this?

    Read the article

  • How to store images efficiently (memory-wise) while still being able to process them

    - by Sheeo
    I'm working on a Silverlight project where users get to create their own collages. The problem: when loading images into memory, I'm using BitmapImage so that they can be displayed directly with the Image control, but the images are locked afterwards. I've tried storing them separately as well, but that just sucks up huge amounts of RAM. So, in short, is there a class that will let me store JPEG images, show them with the Image control, and still export them afterwards? All this needs to be efficient, i.e., I'd rather avoid any copying to ARGB arrays or using WriteableBitmap to copy them over. I need to work with large collections of images, up to 300 at most. Any help appreciated!

    Read the article

  • Debug Linux kernel pre-decompression stage

    - by Shawn J. Goff
    I am trying to use GDB to debug a Linux kernel zImage before it is decompressed. The kernel is running on an ARM target and I have a JTAG debugger connected to it with a GDB server stub. The target has to load a boot loader. The boot loader reads the kernel image from flash and puts it in RAM at 0x20008000, then branches to that location. I have started GDB and connected to the remote target, then I use GDB's add-symbol-file command like so: add-symbol-file arch/arm/boot/compressed/vmlinux 0x20008000 -readnow When I set a breakpoint for that address, it does trap at the correct place - right when it branches to the kernel. However, GDB shows the wrong line from the source of arch/arm/boot/compressed/head.S. It's 4 lines behind. How can I fix this? I also have tried adding the -s section addr option to add-symbol-file with -s .start 0x20008000; this results in exactly the same problem.

    Read the article

  • Which is faster when animating the UI: a Control or a Picture?

    - by Christopher Walker
    I'm working with and testing on a computer built with the following: 1 GB RAM (now 1.5 GB), a 1.7 GHz Intel Pentium processor, and an ATI Mobility Radeon X600 graphics chip. I need to scale/transform controls and make them flow smoothly. Currently I'm manipulating the size and location of a control every 24-33 ms (30 fps), ±3 px. When I add a fade effect to an image, it fades in and out smoothly, but it is only 25x25 px in size; the control is 450x75 px to 450x250 px in size. In 2D games such as Bejeweled 3, the sprites animate with no choppiness. So, as the title suggests: which is easier/faster on the processor, animating a bitmap (rendering it onto the parent control during animation) or animating the control itself?

    Read the article

  • Need help with Drupal bulk mail: low open rate for a legitimate mailing list

    - by Ron Williams
    I've moved from Constant Contact to Drupal Simplenews/Mime Mail/SMTP. Previously the open rate was around 50% with Constant Contact, but now it's 4-5% for the same list via the mentioned setup. Mail is getting out from the server, but something is wrong anyway. Here's the setup:

    - The e-mail list consists of approximately 80,000 addresses, queued at 10,000 e-mails per cron run (which runs hourly).
    - The server is a dual Core2Quad machine with 2 GB of RAM.
    - When mail is being sent, the mail queue usually goes up to ~1000 at the beginning of the hour before dropping to ~250 by the time the next cron run occurs.
    - The newsletter is themed to display a custom newsletter style on send.
    - The newsletter is received by some but appears to be bounced by many (based on the low open rate).
    - I've added SPF, DomainKeys, and a PTR record to the DNS.
    - The server hostname (listed in the PTR) is different from the hosted domain.
    - Very low spam score via SpamAssassin.
    - The IP and domain are not blacklisted.
    - Mail goes out via the SMTP module on delivery.

    Any ideas?

    Read the article

  • What information do you capture when your software crashes in the field?

    - by Russ
    I am working on rewriting my unexpected-error-handling process, and I would like to ask the community: what information do you capture, both automatically and manually, when software you have written crashes? Right now, I capture a few items, some of which are:

    Automatic:
    - Name of the app that crashed
    - Version of the app that crashed
    - Stack trace
    - Operating system version
    - RAM used by the application
    - Number of processors
    - Screenshot (only on non-public applications)
    - User name and contact information (from Active Directory)

    Manual:
    - What context is the user in (i.e., what company, tech-support call number, RA number, etc.)?
    - What did the user expect to happen? (Typical response: "Not to crash.")
    - Steps to reproduce.

    What other bits of information do you capture that help you discover the true cause of an application's problem, especially given that most users simply mash the keyboard when asked to tell you what happened? For the record, I'm using C#, WPF, and .NET 4, but I don't necessarily want to limit myself to those.

    Related: http://stackoverflow.com/questions/1226671/what-to-collect-information-when-software-crashes
    Related: http://stackoverflow.com/questions/701596/what-should-be-included-in-the-state-of-the-art-error-and-exception-handling-stra
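
    For the automatic items, most of the list can be captured in one place in .NET. A minimal sketch using standard framework APIs (AppDomain.UnhandledException, Environment, Assembly); the names here are illustrative, not the poster's code, and persisting the report is left as a stub:

        using System;
        using System.Reflection;
        using System.Text;

        static class CrashReporter
        {
            // Call once at startup, e.g. from the WPF App constructor.
            public static void Install()
            {
                AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
                {
                    Exception ex = e.ExceptionObject as Exception;
                    AssemblyName app = Assembly.GetEntryAssembly().GetName();
                    StringBuilder report = new StringBuilder()
                        .AppendLine("App:      " + app.Name)
                        .AppendLine("Version:  " + app.Version)
                        .AppendLine("Stack:    " + ex)              // message + full trace
                        .AppendLine("OS:       " + Environment.OSVersion)
                        .AppendLine("RAM used: " + Environment.WorkingSet + " bytes")
                        .AppendLine("CPUs:     " + Environment.ProcessorCount)
                        .AppendLine("User:     " + Environment.UserDomainName + @"\" + Environment.UserName);
                    // Write 'report' to disk or a collection service here;
                    // prompt the user for the manual items (context, steps) separately.
                };
            }
        }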

    Read the article

  • Why does tokyo tyrant slow down exponentially even after adjusting bnum?

    - by HenryL
    Has anyone successfully used Tokyo Cabinet / Tokyo Tyrant with large datasets? I am trying to upload a subgraph of the Wikipedia data source. After hitting about 30 million records, I get an exponential slowdown. This occurs with both the HDB and BDB databases. I adjusted bnum to 2-4x the expected number of records for the HDB case, with only a slight speed-up. I also set xmsiz to 1 GB or so, but ultimately I still hit a wall. It seems that Tokyo Tyrant is basically an in-memory database, and after you exceed xmsiz or your RAM, you get a barely usable database. Has anyone else encountered this problem before? Were you able to solve it?

    Read the article

  • How can two programs talk to each other in Java?

    - by Arnon
    My first time here... I want to reduce the CPU/ROM/RAM usage (generally speaking, all the system resources that my app uses), and who doesn't? :) For this reason I want to split the preferences window from the rest of the application and let the preferences window run as an independent program. The preferences program should write to a property file (not a problem at all) and send an "update signal" to the main program, which means calling the update method (that I wrote) found in the Main class. How can I call the update method in the main program from the preferences program? Or, on the other hand, is there a way to build a preferences window that takes system resources only when it appears? Is this approach (separating programs and letting them talk to each other somehow) the right approach for speeding up my programs? tnx

    Read the article
