Search Results

Search found 4715 results on 189 pages for 'ram bhat'.


  • Most efficient way of creating tree from adjacency list

    - by Jeff Meatball Yang
    I have an adjacency list of objects (rows loaded from a SQL database with each key and its parent key) that I need to use to build an unordered tree. It's guaranteed to have no cycles. This is taking way too long (it processed only ~3K out of 870K nodes in about 5 minutes), running on my workstation, a Core 2 Duo with plenty of RAM. Any ideas on how to make this faster?

        public class StampHierarchy {
            private StampNode _root;
            private SortedList<int, StampNode> _keyNodeIndex;

            // takes a list of nodes and builds a tree
            // starting at _root
            private void BuildHierarchy(List<StampNode> nodes)
            {
                Stack<StampNode> processor = new Stack<StampNode>();
                _keyNodeIndex = new SortedList<int, StampNode>(nodes.Count);

                // find the root
                _root = nodes.Find(n => n.Parent == 0);

                // find children...
                processor.Push(_root);
                while (processor.Count != 0)
                {
                    StampNode current = processor.Pop();

                    // keep a direct link to the node via the key
                    _keyNodeIndex.Add(current.Key, current);

                    // add children
                    current.Children.AddRange(nodes.Where(n => n.Parent == current.Key));

                    // queue the children
                    foreach (StampNode child in current.Children)
                    {
                        processor.Push(child);
                        nodes.Remove(child); // thought this might help the Where above
                    }
                }
            }
        }

        public class StampNode {
            // properties: int Key, int Parent, string Name, List<StampNode> Children
        }
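
    The slow part in a loop like this is almost certainly the repeated nodes.Where(...) scan plus nodes.Remove(child), both of which are O(n) per node, giving roughly O(n²) work over 870K rows. A minimal sketch of one way around that, assuming the StampNode shape described above: group the children by parent key once up front, then look them up in constant time while walking the tree (the method name and the ILookup approach are illustrative, not the author's code).

        // Requires: using System.Linq; using System.Collections.Generic;
        private void BuildHierarchyFast(List<StampNode> nodes)
        {
            _keyNodeIndex = new SortedList<int, StampNode>(nodes.Count);

            // Single O(n) pass: children grouped by their parent key.
            ILookup<int, StampNode> byParent = nodes.ToLookup(n => n.Parent);

            _root = nodes.Find(n => n.Parent == 0);

            Stack<StampNode> processor = new Stack<StampNode>();
            processor.Push(_root);
            while (processor.Count != 0)
            {
                StampNode current = processor.Pop();
                _keyNodeIndex.Add(current.Key, current);

                // Constant-time lookup instead of scanning the whole list per node.
                foreach (StampNode child in byParent[current.Key])
                {
                    current.Children.Add(child);
                    processor.Push(child);
                }
            }
        }

    A Dictionary<int, List<StampNode>> built by hand works just as well if LINQ is not available.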

    Read the article

  • Terrible DotNetNuke performance

    - by Peter Bridger
    I'm involved with a project using DotNetNuke version 05.01.04 Community Edition. We are building our new intranet with it, but performance is terrible. We have five people adding pages and content to it, and every 15-30 seconds they experience a pause of 10 seconds or longer before the system continues and the next screen loads. The server is Windows 2003, 3.8 GHz with 1 GB of RAM. I'm told by our server admin that CPU and memory don't appear to be the bottleneck. We currently have 350 pages in the system and plan to add 1000, so we need to resolve this performance problem so that we can enter content and go live. I just can't see where the bottleneck is. Is there a good way to determine the bottleneck when using DotNetNuke?
    Modules installed: Publish:Engage (not currently in use), Page Blaster (doesn't appear to provide caching when users are logged in using Integrated Authentication), SimpleGallery, XMod, Content Manager.
    IIS setup: application recycling completely disabled (apart from a 2am recycle).
    New findings, 18th March 2010: the main bottleneck was due to version 5.1.4 having a bug which caused 1300 database round trips on an average page, due to broken database in-memory caching. We've upgraded to 5.2.4, which has resolved this bottleneck. Now the next biggest bottleneck is the navigation. We've used both DDR:Menu and DDN:Nav, but both have a major impact on performance. Is there a navigation interface out there that doesn't drain performance so badly?

    Read the article

  • Internet Explorer 8 timeout too quick on page POSTs

    - by cdm9002
    We have an ASP.NET site running which has been working fine for some time, but recently I have been experiencing some issues with IE8. On posting some pages - mainly on our development server, although on staging too - we get an occasional "Internet Explorer cannot display the webpage" error along with the button asking to diagnose connection problems. IE only seems to wait 10 seconds before timing out. I know that the page itself may take longer to load the first time (on dev and staging); press F5 and everything then works fine. Is there anything that should be done in the aspx page to tell IE to wait a bit longer? I thought I had read that the default timeout was supposed to be 90 seconds or so for browsers. A bit more info: it mostly happens on POSTing a signup page, but that is just because I test that page and it starts the IIS app, makes the first connection to SQL and pre-caches some information. That first time, the page can take 10-15 seconds to come back, and IE8 times out after 10 seconds as it has had nothing back. This happens on a dev W7 x64 machine with 8 GB RAM, as well as on a staging server running Windows 2008. Having googled around a bit, some people are seeing the same problem, but there are no conclusive pointers to the cause or a solution. It isn't a connection problem; everything works fine in Firefox, Chrome and even IE7, and I have tried with add-ons disabled and after resetting IE settings - it still happens. Ideas welcome.

    Read the article

  • How to setup matlabpool for multiple processors?

    - by JohnIdol
    I just set up an Extra Large Heavy Computation EC2 instance to throw at my Genetic Algorithms problem, hoping to speed things up. This instance has 8 Intel Xeon processors (around 2.4 GHz each) and 7 GB of RAM. On my own machine I have an Intel Core Duo, and MATLAB is able to work with my two cores just fine by running: matlabpool open 2. On the EC2 instance, though, MATLAB is only capable of detecting 1 out of 8 processors, and if I try running: matlabpool open 8, I get an error saying that the ClusterSize is 1 since there's only 1 core on my CPU. True, there is only 1 core on each CPU, but I have 8 CPUs on the given EC2 instance! So the difference between my machine and the EC2 instance is that I have my 2 cores on a single processor locally, while the EC2 instance has 8 distinct processors. My question is: how do I get MATLAB to work with those 8 processors? I found this paper, but it seems related to setting up MATLAB with multiple EC2 instances (not to multiple processors on the same instance, EC2 or not), which is not my problem. Any help appreciated!

    Read the article

  • Static Data Structures on Embedded Devices (Android in particular)

    - by Mark
    I've started working on some Android applications and have a question regarding how people normally deal with situations where you have a static data set and an application where that data is needed in memory as one of the standard Java collections or as an array. In my current case I have a spreadsheet with some pre-calculated data. It consists of ~100 rows and 3 columns: one column is a string, one is a float, and one is an integer. I need access to this data as an array in Java. It seems like I could:
    1) Encode it in XML - this would be CPU-intensive to decode, in my experience.
    2) Build it into an SQLite database - seems like a lot of overhead for static data I only need array-style access to in RAM.
    3) Build it into a binary blob and read it in (never done this in Java, I miss void *).
    4) Build a Python script to take the CSV version of my data and spit out a Java function that adds the values to my desired structure with hard-coded values.
    5) Store a string array via Android's resource mechanism and compute the other 2 columns on application load. In my case the computation would require a lot of calls to Math.log, Math.pow and Math.floor, which I'd rather not do for load-time and battery-usage reasons.
    I mostly work in low-power embedded applications in C, and as such #4 is what I'm used to doing in these situations. It just seems like it should be far easier to gain access to static data structures in Java/Android. Perhaps I'm just being too battery-usage conscious, and in my single case I imagine the answer is that it doesn't matter much, but if every application took that stance it could begin to matter. What approaches do people usually take in this situation? Anything I missed?

    Read the article

  • MVC and repository pattern data efficiency

    - by Shawn Mclean
    My project is structured as follows:

    DAL:

        public IQueryable<Post> GetPosts()
        {
            var posts = from p in context.Post
                        select p;
            return posts;
        }

    Service:

        public IList<Post> GetPosts()
        {
            var posts = repository.GetPosts().ToList();
            return posts;
        }

        // Returns a list of the latest feeds, restricted by the count.
        public IList<PostFeed> GetPostFeeds(int latestCount)
        {
            List<Post> post = GetPosts();
            // CODE TO CREATE FEEDS HERE
            return feeds;
        }

    Let's say GetPostFeeds(5) is supposed to return the 5 latest feeds. By going up the list, doesn't it pull down every single post from the database via GetPosts(), just to extract 5 from it? If each post is, say, 5 KB in the database and there are 1 million records, won't that be 5 GB of RAM being used per call to GetPostFeeds()? Is this the way it happens? Should I go back to my DAL and write queries that return only what I need?
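
    For what it's worth, the usual answer is to keep the query as an IQueryable and compose the ordering and Take onto it before materializing, so the LINQ provider translates the whole thing to SQL and only the requested rows come back. A minimal sketch under that assumption (the PostedOn ordering column is hypothetical - use whatever date or key the real Post entity has):

        public IList<Post> GetLatestPosts(int latestCount)
        {
            // Nothing executes until ToList(): OrderByDescending and Take are
            // composed onto the IQueryable and translated to SQL (roughly
            // SELECT TOP(latestCount) ... ORDER BY PostedOn DESC), so only
            // latestCount rows are ever pulled from the database.
            return repository.GetPosts()
                             .OrderByDescending(p => p.PostedOn)
                             .Take(latestCount)
                             .ToList();
        }

    The feeds can then be built from that short list exactly as before.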

    Read the article

  • Flash video slooow in AIR 2 HTMLLoader component

    - by shane
    I am working on a full-screen kiosk application in Flex 4/AIR 2 using Flash Builder 4. We have a company training website which staff can access via the kiosk, and the main content is interactive Flash training videos. Our target machines are by no means 'beefy': they are Atom N270s @ 1.6 GHz with 1 GB RAM. As it stands the videos are all but unusable when used from within the AIR application; the application becomes completely unresponsive (100% CPU usage, click events take approx 5-10 seconds to register). So far I have tried:
    - increasing the default frame rate from 24 fps to 60 (nativeWindow.stage.frameRate = 60;) - no improvement;
    - running the videos in a stripped-down version of my app, just a full-screen HTMLLoader component pointed at the training website - no better than before;
    - disabling hyper-threading: the Atom CPU is split into two virtual cores, and the AIR app was only able to use one thread, so it maxed out at 50% CPU usage; since the kiosk will only run the AIR app I am happy to lose hyper-threading to increase the performance of the AIR app - marginal improvement.
    The same website with the same videos is responsive if viewed in IE7 on the same machine, although Internet Explorer takes advantage of the CPU's hyper-threading. The Flash videos are built with Adobe Captivate and from what I understand employ JavaScript to relay results back to the server. I will add more information about the video content asap as the training guru is back in the office later this week.

    Read the article

  • Cloud-aware programming and help choosing a good framework

    - by Shoaibi
    How can I write a cloud-aware application, i.e. an application that takes advantage of being deployed on a cloud? Is it the same as an application that runs on a VPS/dedicated server? If not, what are the differences? Are there any design changes? What steps do I need to take if I am to migrate an application to being cloud-aware? Also, I am about to implement a web application idea which would need features like security, performance, caching, and, more importantly, be free. I have been comparing some frameworks and found that Django has the least RAM/CPU usage and works great in prefork+threaded mode, but I have also read that Django-based sites stop responding under a heavy load of connections. Other frameworks that I have seen or know are Zend, CakePHP, Lithium/Cake3, CodeIgniter, Symfony, Ruby on Rails... So I would leave this to your opinion as well: suggest a good free framework based on my needs. Finally, thanks for reading the essay ;)

    Read the article

  • Debugging strategy to find the cause of bad_alloc

    - by SalamiArmi
    I have a fairly serious bug in my program: occasional calls to new() throw a bad_alloc. From the documentation I can find on bad_alloc, it seems to be thrown for these reasons:
    - When the computer runs out of memory (which definitely isn't happening; I have 4 GB of RAM, and the program throws bad_alloc when using less than 5 MB (checked in Task Manager) with nothing serious running in the background).
    - If the memory becomes too fragmented to allocate new blocks (which, again, is unlikely - the largest block I ever allocate would be about 1 KB, and that doesn't get done more than 100 times before the crash occurs).
    Based on these descriptions, I don't really have anywhere in which a bad_alloc could be thrown. However, the application I am running uses more than one thread, which could possibly be contributing to the problem. By testing all of the objects on a single thread, everything seems to be working smoothly. The only other thing that I can think of that is going on here could be some kind of race condition caused by calling new() in more than one place at the same time, but I've tried adding mutexes to prevent that behaviour, to no effect. Because the program is several hundred lines and I have no idea where the problem actually lies, I'm not sure of what, if any, code snippets to post. Instead, I was wondering if there are any tools that will help me test for this kind of thing, or if there are any general strategies that can help me with this problem. I'm using Microsoft Visual Studio 2008, with Poco for threading.

    Read the article

  • Huge mysql table with Zend Framework

    - by Uffo
    I have a MySQL table with over 4 million rows of data; the problem is that SOME queries WORK and SOME DON'T, depending on the search term. If the search term has a big volume of data in the table, then I get the following error:

        Fatal error: Allowed memory size of 1048576000 bytes exhausted (tried to allocate 75 bytes) in /home/****/public_html/Zend/Db/Statement/Pdo.php on line 290

    I currently have the Zend Framework cache for metadata enabled, and I have indexes on all the fields of that table. The site is running on a dedicated server with 2 GB of RAM. I've also set the memory limit to: ini_set("memory_limit","1000M"); Are there any other things that I can optimize? These are the types of queries that I'm currently using:

        $do = $this->select()
                   ->where('branche LIKE ?', '%'.mysql_escape_string($branche).'%')
                   ->order('premium DESC');
        }

        // For name
        if (empty($branche) && empty($plz)) {
            $do = $this->select("MATCH(`name`) AGAINST ('{$theString}') AS score")
                       ->where('MATCH(`name`) AGAINST( ? IN BOOLEAN MODE)', $theString)
                       ->order('premium DESC, score');
        }

    And a few others, but they are pretty much the same. Best Regards

    Read the article

  • My OpenCL kernel is slower on faster hardware.. But why?

    - by matdumsa
    Hi folks, as I was finishing coding my project for a multicore programming class I came upon something really weird I wanted to discuss with you. We were asked to create any program that would show significant improvement when programmed for a multi-core platform. I've decided to try and code something on the GPU to try out OpenCL. I've chosen the matrix convolution problem since I'm quite familiar with it (I've parallelized it before with open_mpi with great speedup for large images). So here it is: I select a large GIF file (2.5 MB) [2816x2112], I run the sequential version (original code), and I get an average of 15.3 seconds. I then run the new OpenCL version I just wrote on my MBP's integrated GeForce 9400M and I get timings of 1.26 s on average. So far so good, it's a speedup of 12x!! But now I go into my Energy Saver panel to turn on "Graphic Performance Mode". That mode turns off the GeForce 9400M and turns on the GeForce 9600M GT my system has. Apple says this card is twice as fast as the integrated one. Guess what, my timings using the kick-ass graphics card are 3.2 seconds on average... My 9600M GT seems to be more than two times slower than the 9400M. For those of you that are OpenCL-inclined, I copy all data to remote buffers before starting, so the actual computation doesn't require round trips to main RAM. Also, I let OpenCL determine the optimal local work size, as I've read they've done a pretty good job at figuring that parameter out. Anyone have a clue?
    Edit: full source code with makefiles here: http://www.mathieusavard.info/convolution.zip

        cd gimage
        make
        cd ../clconvolute
        make

    Put a large input.gif in clconvolute and run it to see the results.

    Read the article

  • Bare-metal virtualisation for the desktop

    - by Andrew Taylor
    Hi, does anyone have any knowledge about bare-metal virtualisation products? I'm interested in building a new desktop machine for home. I've been looking at the Intel quad-core processors and I'd like to put 8 GB of RAM in there, but it got me thinking about making the most of the available resources. I thought if I could get a good 64-bit machine and put some bare-metal virtualisation on it, then have a primary system, I'd also be able to bring up some extra virtualised systems as and when I needed them. I know most of the bare-metal systems are designed for the server market, but is there anything out there that works well for a desktop? What are the caveats? I presume I won't be able to make the most of any video card I could buy; what about just getting a decent screen resolution, will this be a problem? I run a single 24" screen. What about DVD/CD writing - is this possible? I'd like to re-rip my CD collection, and I was hoping the quad 64-bit goodness would help me out with the encoding. I currently use a Mac and couldn't go back to Windows, so that leaves Linux; I was thinking of a primary OS of Ubuntu. Does this make a difference? Thanks, Andrew

    Read the article

  • 2 dimensional array

    - by ankit
    We have these arrays:

        $cities = array("nagpur","kanpur","delhi","chd","Noida","mumbai","nagpur");
        $names = array("munish","imteyaz","ram","shyam","ankit","Rahul","mohan");

    Now I want a two-dimensional array with the name of the city as the key and all the corresponding names as its values.

        <?php
        $cities = array("nagpur","kanpur","nagpur","delhi","kanpur");
        $names = array("ankit","atul","aman","amit","manu");
        foreach ($cities as $i => $value) {
            echo "\n";
            echo $value;
            $city = $value;
            $k = 0;
            foreach ($cities as $ii => $m) {
                if ($city == $m) {
                    echo $names[$ii];
                    $final[$i][$k] = $names[$ii];
                    $arr = array($city => array($k => $names[$ii]));
                    $k++;
                }
            }
            echo "\n<tr></tr>";
        }

    What I tried is this, but it doesn't work. Help me.
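
    The underlying operation is a single-pass group-by: walk the two arrays together and append each name to a bucket keyed by its city, rather than re-scanning the city array for every entry. A minimal sketch of that idea (written as a standalone C# program with the sample data from the question; the variable names are illustrative):

        using System;
        using System.Collections.Generic;

        class CityGrouping
        {
            static void Main()
            {
                string[] cities = { "nagpur", "kanpur", "nagpur", "delhi", "kanpur" };
                string[] names  = { "ankit", "atul", "aman", "amit", "manu" };

                // One pass: append each name to the bucket for its city.
                var byCity = new Dictionary<string, List<string>>();
                for (int i = 0; i < cities.Length; i++)
                {
                    if (!byCity.ContainsKey(cities[i]))
                        byCity[cities[i]] = new List<string>();
                    byCity[cities[i]].Add(names[i]);
                }

                // e.g. byCity["kanpur"] now holds ["atul", "manu"].
                foreach (var pair in byCity)
                    Console.WriteLine(pair.Key + ": " + string.Join(", ", pair.Value.ToArray()));
            }
        }

    The same append-to-bucket idea is a one-liner in PHP ($result[$city][] = $name inside a single foreach).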

    Read the article

  • Cassandra random read speed

    - by Jody Powlette
    We're still evaluating Cassandra for our data store. As a very simple test, I inserted a value for 4 columns into the Keyspace1/Standard1 column family on my local machine, amounting to about 100 bytes of data. Then I read it back as fast as I could by row key. I can read it back at 160,000/second. Great. Then I put in a million similar records, all with keys in the form of X.Y where X in (1..10) and Y in (1..100,000), and I queried for a random record. Performance fell to 26,000 queries per second. This is still well above the number of queries we need to support (about 1,500/sec). Finally I put ten million records in, from 1.1 up through 10.1000000, and randomly queried for one of the 10 million records. Performance is abysmal at 60 queries per second and my disk is thrashing around like crazy. I also verified that if I ask for a subset of the data, say the 1,000 records between 3,000,000 and 3,001,000, it returns slowly at first and then, as they cache, it speeds right up to 20,000 queries per second and my disk stops going crazy. I've read all over that people are storing billions of records in Cassandra and fetching them at 5-6k per second, but I can't get anywhere near that with only 10 million records. Any idea what I'm doing wrong? Is there some setting I need to change from the defaults? I'm on an overclocked Core i7 box with 6 GB of RAM, so I don't think it's the machine. Here's my code to fetch records, which I'm spawning into 8 threads to ask for one value from one column via row key:

        ColumnPath cp = new ColumnPath();
        cp.Column_family = "Standard1";
        cp.Column = utf8Encoding.GetBytes("site");
        string key = (1+sRand.Next(9)) + "." + (1+sRand.Next(1000000));
        ColumnOrSuperColumn logline = client.get("Keyspace1", key, cp, ConsistencyLevel.ONE);

    Thanks for any insights

    Read the article

  • image analysis and 64bit OS

    - by picciopiccio
    I developed a C# application that makes use of the Cognex vision library (VPro). My application is developed with Visual Studio 2008 Pro on a 32-bit Windows PC with 3 GB of RAM. During startup of the application I see that a large amount of memory is allocated. So far so good, but as I add more and more vision elaborations, the memory allocation increases and part of the application (only the Cognex OCX) stops working well. The rest of the application still works (working threads, com on socket...). I did whatever I could to save memory, but when the allocated memory reaches about 700 MB I begin to have the problems. A note in the documentation of the Cognex library says that /LARGEADDRESSAWARE is not supported. Anyway, I'm thinking of trying to migrate my app to Win64, but what do I have to do? Can I simply use a 64-bit processor and 64-bit Windows without recompiling my application, which would remain a 32-bit application, and still take advantage of 64-bit? Or should I recompile my application? If I don't need to recompile my application, can I link it with the 64-bit Cognex library? If I have to recompile my application, is it possible to cross-compile the application so that my development suite stays on a 32-bit PC? Any help will be very appreciated!! Thanks in advance
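
    One detail worth pinning down first is what bitness the process actually runs as, since an AnyCPU .NET executable becomes a 64-bit process on 64-bit Windows and can then no longer load a 32-bit OCX in-process. A minimal sketch for checking this at runtime (works on .NET 2.0/3.5, so it fits a VS2008 build; the class name is illustrative):

        using System;

        class BitnessCheck
        {
            static void Main()
            {
                // IntPtr.Size is 4 in a 32-bit process and 8 in a 64-bit process.
                Console.WriteLine("{0}-bit process", IntPtr.Size * 8);

                // PROCESSOR_ARCHITEW6432 is only defined for a 32-bit process
                // running under WOW64 on 64-bit Windows.
                bool wow64 = Environment.GetEnvironmentVariable("PROCESSOR_ARCHITEW6432") != null;
                Console.WriteLine("Running under WOW64: {0}", wow64);
            }
        }

    If the Cognex OCX stays 32-bit, the usual route is to force the .NET project to x86 (Project Properties > Build > Platform target) so it keeps running as a 32-bit process under WOW64 on 64-bit Windows.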

    Read the article

  • Apache with JBOSS using AJP (mod_jk) giving spikes in thread count.

    - by Beginner
    We used Apache with JBoss for hosting our application, but we found some issues related to the thread handling of mod_jk. Our website falls in the low-traffic category and has at most 200-300 concurrent users during the website's peak activity time. As the traffic grows (not in terms of concurrent users, but in terms of cumulative requests coming to our server), the server stopped serving requests for long periods; it didn't crash, but could not serve requests for up to 20 minutes. The JBoss server console showed that 350 threads were busy on both servers, although there was enough free memory, say more than 1-1.5 GB (2 servers were used for JBoss, both 64-bit, with 4 GB RAM allocated to JBoss). In order to check the problem we were using the JBoss and Apache web consoles, and we were seeing that the threads stayed in S state for minutes at a time, although our pages take around 4-5 seconds to be served. We took a thread dump and found that the threads were mostly in WAITING state, which means that they were waiting indefinitely. These threads were not from our application classes but from the AJP 8009 port. Could somebody help me with this, as somebody else might also have hit this issue and solved it somehow? In case any more information is required, let me know. Also, is mod_proxy better than using mod_jk, or are there some other problems with mod_proxy which could be fatal for me if I switch to mod_proxy? The versions I used are as follows: Apache 2.0.52, JBoss 4.2.2, mod_jk 1.2.20, JDK 1.6, operating system RHEL 4. Thanks for the help.

    Read the article

  • Performance of java on different hardware?

    - by tangens
    In another SO question I asked why my Java programs run faster on AMD than on Intel machines, but it seems that I'm the only one who has observed this. Now I would like to invite you to share the numbers of your local Java performance with the SO community. I observed a big performance difference when watching the startup of JBoss on different hardware, so I set this program as the base for this comparison. To participate, please download JBoss 5.1.0.GA and run: jboss-5.1.0.GA/bin/run.sh (or run.bat). This starts a standard configuration of JBoss without any extra applications. Then look for the last line of the start procedure, which looks like this:

        [ServerImpl] JBoss (Microcontainer) [5.1.0.GA (build: SVNTag=JBoss_5_1_0_GA date=200905221634)] Started in 25s:264ms

    Please repeat this procedure until the printed time is somewhat stable and post this line together with some comments on your hardware (I used CPU-Z to get the info) and operating system, like this:

        java version: 1.6.0_13
        OS: Windows XP
        Board: ASUS M4A78T-E
        Processor: AMD Phenom II X3 720, 2.8 GHz
        RAM: 2*2 GB DDR3 (labeled 1333 MHz)
        GPU: NVIDIA GeForce 9400 GT
        disc: Seagate 1.5 TB (ST31500341AS)

    Use your votes to bring the fastest configuration to the top. I'm very curious about the results.
    EDIT: Up to now only a few members have shared their results. I'd really be interested in the results obtained on some other architectures. If someone works with a Mac (desktop) or runs an Intel i7 with less than 3 GHz, please start JBoss once and share your results. It will only take a few minutes.

    Read the article

  • Eclipse (Aptana) Typing Lag

    - by Zack
    Hello SO, I've been using Aptana for some time now, and as of late I've been dealing with files that are really, really big (500+ lines of code, which is huge for me, being a novice developer). Whenever I deal with smaller files, I get that weird sensation that I'm "in front of" what's typing, but now I'm quite sure of it: there is a significant lag between when I type something and when I see the text appear on screen. I don't have this issue with Dreamweaver CS3, so I know my computer has the capability to edit these files without this happening, but Eclipse still lags. I also don't see when something is being deleted if I hold down backspace; I see the first few characters get deleted, but then everything "hangs." Once I release the backspace key, the characters that would've been shown deleting instantly vanish all at once. The same thing happens with the forward delete key. I'm beginning to think this is an issue with Java, since I have the same feeling that everything is slightly "behind me" when I'm using any Java application. The computer is an Intel Pentium 4 3.2 GHz Prescott, with 2 GB of DDR400 RAM and a Radeon HD3650 graphics card. If anyone knows how to fix this lagging issue, I'm all ears (eyes?); if anyone can recommend a different IDE with capabilities similar to Aptana (I do Python, HTML, CSS and JS; I use Git for SCM), I'd be glad to give it a try. Thanks!

    Read the article

  • iPhone OS: Strategies for high density image work

    - by Jasconius
    I have a project coming around the bend this summer that is going to involve, potentially, an extremely high volume of image data for display. We are talking hundreds of 640x480-ish images in a given application session (scaled to a smaller resolution when displayed), and handfuls of very large (1280x1024 or higher) images at a time. I've already done some preliminary work and I've found that the typical 640x480-ish image is just a shade under 1 MB in memory when placed into a UIImageView and displayed... but the very large images can be a whopping 5+ MB in some cases. This project is actually targeted at the iPad, which, in my Instruments tests, seems to cap out at about 80-100 MB of addressable physical memory. Details aside, I need to start thinking about how to move huge volumes of image data between virtual and physical memory while preserving the fluidity and responsiveness of the application, which will be highly visible. I'm probably at the higher end of intermediate at Objective-C, so I am looking for some solid articles and advice on the following:
    1) Responsible management of UIImage and UIImageView in the name of conserving physical RAM
    2) Merits of using CGImage over UIImage, particularly for the huge images, and whether there will be any performance gain
    3) Anything dealing with memory paging, particularly as it pertains to images
    I will epilogue by saying that the numbers I have above may be off by about 10 or 15%. Images may or may not end up being bundled into the actual app itself as opposed to being loaded in from an external server.

    Read the article

  • Memory Bandwidth Performance for Modern Machines

    - by porgarmingduod
    I'm designing a real-time system that occasionally has to duplicate a large amount of memory. The memory consists of non-tiny regions, so I expect the copying performance will be fairly close to the maximum bandwidth the relevant components (CPU, RAM, motherboard) can manage. This led me to wonder what kind of raw memory bandwidth a modern commodity machine can muster. My aging Core2Duo gives me 1.5 GB/s if I use 1 thread to memcpy() (and understandably less if I memcpy() with both cores simultaneously). While 1.5 GB is a fair amount of data, the real-time application I'm working on will have something like 1/50th of a second, which means 30 MB. Basically, almost nothing. And perhaps worst of all, as I add multiple cores, I can process a lot more data without any increased performance for the needed duplication step. But a low-end Core2Duo isn't exactly hot stuff these days. Are there any sites with information, such as actual benchmarks, on raw memory bandwidth on current and near-future hardware? Furthermore, for duplicating large amounts of data in memory, are there any shortcuts, or is memcpy() as good as it will get? Given a bunch of cores with nothing to do but duplicate as much memory as possible in a short amount of time, what's the best I can do?
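
    Rough local numbers are easy to get from a small copy micro-benchmark. A minimal sketch (managed C# using Buffer.BlockCopy as a stand-in for memcpy; the 256 MB buffer size and iteration count are arbitrary assumptions, and the result counts only bytes copied, not the doubled read-plus-write bus traffic):

        using System;
        using System.Diagnostics;

        class CopyBandwidth
        {
            static void Main()
            {
                const int size = 256 * 1024 * 1024;     // 256 MB per buffer (assumption)
                const int iterations = 10;

                byte[] src = new byte[size];
                byte[] dst = new byte[size];
                new Random(42).NextBytes(src);          // touch the pages so they are committed

                Buffer.BlockCopy(src, 0, dst, 0, size); // warm-up copy

                Stopwatch sw = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++)
                    Buffer.BlockCopy(src, 0, dst, 0, size);
                sw.Stop();

                double gbPerSec = (double)size * iterations / sw.Elapsed.TotalSeconds / (1L << 30);
                Console.WriteLine("~{0:F2} GB/s copied", gbPerSec);
            }
        }

    Running the same loop from several threads at once shows whether one core already saturates the platform's bandwidth, which is the crux of the multi-core question above.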

    Read the article

  • How to upload files?

    - by Brian Roisentul
    I just wanted to know how to configure FCKEditor to upload files and images to the server where the website is hosted. The relevant part of its config file (I think) looks like this:

        FCKConfig.LinkUpload = true ;
        FCKConfig.LinkUploadURL = FCKConfig.BasePath + 'filemanager/connectors/' + _QuickUploadLanguage + '/upload.' + _QuickUploadExtension ;
        FCKConfig.LinkUploadAllowedExtensions = ".(7z|aiff|asf|avi|bmp|csv|doc|fla|flv|gif|gz|gzip|jpeg|jpg|mid|mov|mp3|mp4|mpc|mpeg|mpg|ods|odt|pdf|png|ppt|pxd|qt|ram|rar|rm|rmi|rmvb|rtf|sdc|sitd|swf|sxc|sxw|tar|tgz|tif|tiff|txt|vsd|wav|wma|wmv|xls|xml|zip)$" ; // empty for all
        FCKConfig.LinkUploadDeniedExtensions = "" ; // empty for no one

        FCKConfig.ImageUpload = true ;
        FCKConfig.ImageUploadURL = FCKConfig.BasePath + 'filemanager/connectors/' + _QuickUploadLanguage + '/upload.' + _QuickUploadExtension + '?Type=Image' ;
        FCKConfig.ImageUploadAllowedExtensions = ".(jpg|gif|jpeg|png|bmp)$" ; // empty for all
        FCKConfig.ImageUploadDeniedExtensions = "" ; // empty for no one

    Could it be a folder permission problem? Is this part of the config.js alright?

    Read the article

  • .NET out of memory troubleshooting

    - by bushman
    After reading a few enlightening articles about memory in .NET, I learned that Out of Memory does not refer to physical memory (597499). I thought I understood why a C# app would throw an out-of-memory exception - until I started experimenting with two servers: both have 2.5 GB of RAM, Windows Server 2003, and identical programs running. The only significant difference between the two is that one has 7% hard drive storage left and the other more than 50%. The server with 7% storage space left is consistently throwing an out-of-memory exception while the other is performing consistently well. My app is a C# web application that processes hundreds of MB of String objects. Why would this difference happen, seeing that the most likely reason for the out-of-memory issue is running out of contiguous virtual address space? What solutions do you propose, and what do you say about the following?
    1. Turn on the /3GB switch to increase the virtual address space.
    2. Instead of using one giant string object, break it up into smaller pieces and collect them in a jagged array (here I have to find a way to return the result to the caller in some other way, as right now the return type is a string).
    Thanks, SO
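
    On the second idea, the point is to keep every individual allocation small enough that the runtime never needs hundreds of MB of contiguous address space in one piece. A minimal sketch of that chunking approach (the 1 MB chunk size and the helper names are assumptions, not part of the original app):

        using System.Collections.Generic;
        using System.IO;

        static class ChunkedText
        {
            const int ChunkSize = 1 << 20; // 1 MB of chars per chunk (assumption)

            // Read text into a list of bounded chunks instead of one giant string,
            // so no single allocation requires a huge contiguous block.
            public static List<string> ToChunks(TextReader source)
            {
                var chunks = new List<string>();
                char[] buffer = new char[ChunkSize];
                int read;
                while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                    chunks.Add(new string(buffer, 0, read));
                return chunks;
            }

            // Callers can stream the pieces to their destination (a file, an
            // HTTP response) without ever materializing the full text at once.
            public static void WriteChunks(IEnumerable<string> chunks, TextWriter destination)
            {
                foreach (string chunk in chunks)
                    destination.Write(chunk);
            }
        }

    Returning an IEnumerable<string> (or writing straight to the response stream) instead of a single string keeps the caller on the same chunked path.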

    Read the article

  • How can I randomly iterate through a large Range?

    - by void
    I would like to randomly iterate through a range. Each value will be visited only once and all values will eventually be visited. For example:

        class Array
          def shuffle
            ret = dup
            j = length
            i = 0
            while j > 1
              r = i + rand(j)
              ret[i], ret[r] = ret[r], ret[i]
              i += 1
              j -= 1
            end
            ret
          end
        end

        (0..9).to_a.shuffle.each{|x| f(x)}

    where f(x) is some function that operates on each value. A Fisher-Yates shuffle is used to efficiently provide random ordering. My problem is that shuffle needs to operate on an array, which is not cool because I am working with astronomically large numbers. Ruby will quickly consume a large amount of RAM trying to create a monstrous array. Imagine replacing (0..9) with (0..99**99). This is also why the following code will not work:

        tried = {} # store previous attempts
        bigint = 99**99
        bigint.times {
          x = rand(bigint)
          redo if tried[x]
          tried[x] = true
          f(x) # some function
        }

    This code is very naive and quickly runs out of memory as tried obtains more entries. What sort of algorithm can accomplish what I am trying to do?
    [Edit1]: Why do I want to do this? I'm trying to exhaust the search space of a hash algorithm for an N-length input string looking for partial collisions. Each number I generate is equivalent to a unique input string, entropy and all. Basically, I'm "counting" using a custom alphabet.
    [Edit2]: This means that f(x) in the above examples is a method that generates a hash and compares it to a constant, target hash for partial collisions. I do not need to store the value of x after I call f(x), so memory should remain constant over time.
    [Edit3/4/5/6]: Further clarification/fixes.
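
    One classic way to get this with O(1) memory is a full-period linear congruential generator: if the constants satisfy the Hull-Dobell conditions, the recurrence x -> (a*x + c) mod m visits every value in [0, m) exactly once before repeating, so no array and no "tried" hash are needed. A minimal sketch of the idea (shown in C# over the modulus 2^32 with well-known constants; for a modulus like 99**99 the same recurrence works with big-integer arithmetic, provided a and c are chosen for that modulus):

        using System;

        class FullCycleLcg
        {
            static void Main()
            {
                // Hull-Dobell conditions for a full period mod m:
                //   gcd(c, m) = 1; (a - 1) divisible by every prime factor of m;
                //   (a - 1) divisible by 4 if m is divisible by 4.
                // These Numerical Recipes constants satisfy them for m = 2^32.
                const uint a = 1664525, c = 1013904223;
                uint x = 12345;                  // arbitrary starting point (seed)

                // The sequence visits every 32-bit value exactly once per cycle,
                // in a scrambled order, using constant memory.
                for (int i = 0; i < 10; i++)     // just print the first few values
                {
                    x = unchecked(a * x + c);    // arithmetic mod 2^32 via uint wrap-around
                    Console.WriteLine(x);
                }
            }
        }

    The resulting order is a permutation, not a cryptographically strong shuffle, but for exhausting a search space the only property that matters is "each value exactly once".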

    Read the article

  • Hadoop Map/Reduce - simple use example to do the following...

    - by alexeypro
    I have a MySQL database where I store the following: a BLOB (which contains a JSON object) and an ID (for this JSON object). The JSON object contains a lot of different information - say, "city:Los Angeles" and "state:California". There are about 500k such records for now, but they are growing, and each JSON object is quite big. My goal is to do searches (real-time) in the MySQL database. Say I want to search for all JSON objects which have "state" set to "California" and "city" set to "San Francisco". I want to utilize Hadoop for the task. My idea is that there will be a "job", which takes chunks of, say, 100 records (rows) from MySQL, verifies them according to the given search criteria, and returns those IDs which qualify. Pros/cons? I understand that one might think that I should utilize plain SQL power for that, but the thing is that the JSON object structure is pretty "heavy"; if I put it into SQL schemas, there will be at least 3-5 table joins, which (I tried, really) creates quite a headache, and building all the right indexes eats RAM faster than one can think. ;-) And even then, every SQL query has to be analyzed to make sure it's utilizing the indexes, otherwise with a full scan it literally is a pain. And with such a structure, the only way "up" is vertical scaling. But I am not sure it's the best option for me, as I see how the JSON objects will grow (the data structure), and I see that the number of them will grow too. :-) Help? Can somebody point me to simple examples of how this can be done? Does it make sense at all? Am I missing something important? Thank you.

    Read the article
