Search Results

Search found 2768 results on 111 pages for 'heap dump'.

Page 21/111

  • Algorithmia Source Code released on CodePlex

    - by FransBouma
    Following the release of our BCL Extensions Library on CodePlex, we have now released the source code of Algorithmia on CodePlex! Algorithmia is an algorithm and data-structures library for .NET 3.5 or higher, and is one of the pillars the LLBLGen Pro v3 designer is built on. The library contains many data structures and algorithms, and the source code is well documented and commented, often with links to the official descriptions and papers of the algorithms and data structures implemented. The source code is shared using Mercurial on CodePlex and is licensed under the friendly BSD2 license. User documentation is not available at the moment but will be added soon.
    One of the main design goals of Algorithmia was to create a library containing implementations of well-known algorithms which weren't already available in .NET itself, so more developers can enjoy the results of many years of computer-science research. Some algorithms and data structures that do exist in .NET are re-implemented because the .NET implementation is inefficient in many situations or lacks features. An example is the linked list in .NET: it doesn't have an O(1) concat operation, as every node refers to the containing LinkedList object it's stored in. This is bad for algorithms which rely on O(1) concat operations, like the Fibonacci heap implementation in Algorithmia. Algorithmia therefore contains a linked list with an O(1) concat feature. The following functionality is available in Algorithmia:
    - Commands and command management. This system can be used to build a fully undo/redo-aware application by building your object graph with command-aware classes. The Command pattern is implemented with transparent undo/redo and command grouping, so you can make a class undo/redo aware, set its properties and use its contents without issuing commands yourself. The Commands namespace is the place to start; classes to look at are CommandifiedMember, CommandifiedList and KeyedCommandifiedList. See the CommandQueueTests in the test project for examples.
    - Graphs and graph algorithms. Algorithmia contains a sophisticated graph class hierarchy with algorithms implemented on top of it: non-directed and directed graphs, as well as a subgraph view class which can be used to create a self-maintaining view onto an existing graph. Algorithms include transitive closure, topological sorting and others. A feature-rich depth-first search (DFS) crawler is available, so DFS-based algorithms can be implemented quickly. All graph classes can be set to be 'commandified', in which case they do their housekeeping through commands, making them fully undo/redo aware: you can add, remove and manipulate the graph and undo/redo the activity automatically without any extra code. If you define the properties of your vertex type using CommandifiedMember, you can also manipulate the properties of vertices and the graph contents with full undo/redo functionality, again without extra code.
    - Heaps. Heaps are data structures which always have the largest or smallest item stored in them as the 'root'. Extracting the root makes the heap determine the next maximum or minimum in line (max-heap vs. min-heap; all heaps in Algorithmia can do both). Algorithmia contains various heaps, among them an implementation of the Fibonacci heap, one of the most efficient heap data structures known today, especially when you want to merge different instances into one.
    - Priority queues. Priority queues are specializations of heaps; Algorithmia contains a couple of them.
    - Sorting. What's an algorithm library without sort algorithms? Algorithmia implements several sort algorithms which sort the data in-place. This is important when you want to sort the elements in a buffer/list/ICollection in place, so all data stays in the data structure it is already stored in.
    - PropertyBag. A re-implementation of Tony Allowatt's original idea in .NET 3.5-specific syntax: a generic property bag which lets you build an object in code at runtime that can be bound to a property grid for editing. This is handy when you have data or settings stored in XML or another format and want an editable form of them without writing many editors.
    - IEditableObject/IDataErrorInfo implementations. Default implementations of IEditableObject and IDataErrorInfo (EditableObjectDataContainer and ErrorContainer, respectively), which make it very easy to implement these interfaces (just a few lines of code) without having to worry about bookkeeping during databinding. They work seamlessly with CommandifiedMember, so your undo/redo-aware code can use them out of the box.
    - EventThrottler. An event throttler which filters duplicate events out of an event stream before they reach an observer. This can greatly enhance UI performance with no work beyond hooking it up between the event source and your real handler. If your UI is flooded with events from data structures observed by your UI or a middle tier, this class filters out the duplicates, avoiding redundant updates to UI elements and observers choking on many redundant events.
    - Small, handy stuff. A MultiValueDictionary, which can store multiple unique values per key instead of the one allowed by the default Dictionary, and which is merge-aware so you can merge two into one; a Pair class to quickly group two elements together; several interfaces for building a de-coupled, observer-based system; and utility extension methods for the defined data structures.
    We regularly update the library with new code. If you have ideas for new algorithms or want to share your contribution, feel free to discuss it on the project's Discussions page or send us a pull request. Enjoy!
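    The O(1) concat point is worth illustrating. Below is a minimal sketch in Java (hypothetical names, not Algorithmia's actual API, which is .NET): because nodes hold no back-reference to an owning list, concatenation only needs to splice the tail of one list onto the head of the other.
        // Minimal sketch of a linked list with O(1) concat.
        // Names are illustrative only; not Algorithmia's API.
        final class ConcatList<T> {
            // Nodes carry no reference to the owning list - unlike
            // .NET's LinkedList<T> nodes - which is what makes O(1)
            // concat possible: no node ever needs to be re-owned.
            private static final class Node<U> {
                final U value;
                Node<U> next;
                Node(U value) { this.value = value; }
            }

            private Node<T> head, tail;

            void add(T value) {
                Node<T> n = new Node<T>(value);
                if (tail == null) { head = n; } else { tail.next = n; }
                tail = n;
            }

            // Appends all of 'other' in O(1); 'other' is emptied.
            void concat(ConcatList<T> other) {
                if (other.head == null) return;
                if (tail == null) { head = other.head; } else { tail.next = other.head; }
                tail = other.tail;
                other.head = null;
                other.tail = null;
            }
        }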

    Read the article

  • Container Options in AWS Elastic Beanstalk

    - by Sangram Anand
    We have deployed a Java web application in Elastic Beanstalk with a minimum instance count of 1 and a maximum instance count of 2 for autoscaling. The custom AMI we are using runs on a c1.medium instance with Sun JDK 6. The environment status changed to yellow and then red. After checking the snapshot logs we found an exception: Caused by: java.lang.OutOfMemoryError: Java heap space. We assume this could be one possible reason for the environment failure. The settings we have configured in the environment's container options are:
    - Initial JVM Heap Size (MB): 256m
    - Maximum JVM Heap Size (MB): 512m - the maximum heap size the Java virtual machine will ever consume, specified on the JVM launch command line using -Xmx
    - Maximum JVM Permanent Generation Size (MB): 512m
    Should I increase the heap size beyond 512m, or is it fine as it is?
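    If it helps while tuning, the JVM can report the limits it actually received; a minimal sketch using the standard java.lang.management API (drop something like this into a startup hook to confirm the container options took effect - class name and placement are illustrative):
        // Sketch: log the JVM's effective heap limits, to confirm
        // the Beanstalk container options (-Xms/-Xmx) were applied.
        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryUsage;

        public class HeapReport {
            public static void main(String[] args) {
                MemoryUsage heap =
                    ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
                long mb = 1024 * 1024;
                System.out.println("heap init: " + heap.getInit() / mb + " MB");
                System.out.println("heap used: " + heap.getUsed() / mb + " MB");
                System.out.println("heap max : " + heap.getMax() / mb + " MB");
                // Runtime reports roughly the same ceiling:
                System.out.println("Runtime.maxMemory: "
                    + Runtime.getRuntime().maxMemory() / mb + " MB");
            }
        }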

    Read the article

  • Java memory mapped files and swap

    - by MarkS
    I'm looking at some memory-mapped files in Java. Let's say I have the heap size set to 2gb, and I memory-map a file that is 50gb - far more than the physical memory on the machine. The OS will cache parts of that 50gb file in the OS file cache, and the Java process will have 2gb of heap space. What I'm curious about is how the OS decides how much of the 50gb file to cache. For instance, if I have another Java process, also with a 2gb heap size, will that 2gb be swapped out to allow the OS to cache parts of the memory-mapped file? Will parts of the heap space of the first process be swapped out to allow the OS to cache? Is there any way to tell the OS not to swap heap space for OS caching? If the OS doesn't swap out main processes, how does it determine how big its file cache should be?
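    For reference, a minimal sketch of the kind of mapping under discussion, using the standard java.nio API ('huge.dat' is a placeholder). The mapped pages live in the OS page cache, not the Java heap, so -Xmx does not bound them; only the small buffer object itself is heap-allocated:
        import java.io.RandomAccessFile;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;

        public class MapDemo {
            public static void main(String[] args) throws Exception {
                try (RandomAccessFile raf = new RandomAccessFile("huge.dat", "r");
                     FileChannel ch = raf.getChannel()) {
                    // A single map() call is capped at 2GB, so a 50gb file
                    // would be covered by several windows like this one.
                    long window = Math.min(ch.size(), Integer.MAX_VALUE);
                    MappedByteBuffer buf =
                        ch.map(FileChannel.MapMode.READ_ONLY, 0, window);
                    // Touching a page faults it into the OS file cache
                    // (assumes the file is non-empty).
                    System.out.println("first byte: " + buf.get(0));
                }
            }
        }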

    Read the article

  • How to use SQLAlchemy to dump an SQL file from query expressions to bulk-insert into a DBMS?

    - by Mahmoud Abdelkader
    Please bear with me as I explain the problem and how I tried to solve it; my question on how to improve it is at the end. I have a 100,000-line CSV file from an offline batch job and I needed to insert it into the database as its proper models. Ordinarily, if this is a fairly straightforward load, it can be trivially done by just munging the CSV file to fit a schema, but I had to do some external processing that requires querying, and it's just much more convenient to use SQLAlchemy to generate the data I want. The data I want here is 3 models that represent 3 pre-existing tables in the database, and each subsequent model depends on the previous one. For example: Model C --> Foreign Key --> Model B --> Foreign Key --> Model A. So the models must be inserted in the order A, B, C. I came up with a producer/consumer approach:
    - instantiate a multiprocessing.Process which contains a thread pool of 50 persister threads that each have a thread-local connection to the database
    - read a line from the file using the csv DictReader
    - enqueue the dictionary to the process, where each thread creates the appropriate models by querying the right values and persists them in the appropriate order
    This was faster than a non-threaded read/persist, but it is way slower than bulk-loading a file into the database: the job finished persisting after about 45 minutes. For fun, I decided to write it as SQL statements; running those took 5 minutes, but writing the SQL statements took me a couple of hours. So my question is, could I have used a faster method to insert rows using SQLAlchemy? As I understand it, SQLAlchemy is not designed for bulk insert operations, so this is less than ideal. This leads to my follow-up question: is there a way to generate the SQL statements using SQLAlchemy, throw them in a file, and then just bulk-load that file into the database? I know about str(model_object), but it does not show the interpolated values. I would appreciate any guidance on how to do this faster. Thanks!

    Read the article

  • How can I dump my MS SQL Server Database Schema to a human readable & printable format?

    - by Kyle West
    I want to generate something like the following:
        LineItems
            Id
            ItemId
            OrderId
            Price
        Orders
            Id
            CustomerId
            DateCreated
        Customers
            Id
            FirstName
            LastName
            Email
    I don't need all the relationships, the diagram that will never print correctly, the metadata, anything. Just a list of the tables and their columns in a simple text format. Has anyone done this before? Is there a simple solution? Thanks, Kyle
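    In case it helps, one simple route is a tiny JDBC program that walks the standard DatabaseMetaData and prints exactly this layout. A sketch, assuming a SQL Server JDBC driver is on the classpath; the connection URL, credentials and "dbo" schema are placeholders:
        import java.sql.Connection;
        import java.sql.DatabaseMetaData;
        import java.sql.DriverManager;
        import java.sql.ResultSet;

        public class SchemaDump {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                        "jdbc:sqlserver://localhost;databaseName=MyDb",
                        "user", "password")) {
                    DatabaseMetaData md = con.getMetaData();
                    // List user tables, then the columns of each.
                    try (ResultSet tables =
                             md.getTables(null, "dbo", "%", new String[] {"TABLE"})) {
                        while (tables.next()) {
                            String table = tables.getString("TABLE_NAME");
                            System.out.println(table);
                            try (ResultSet cols = md.getColumns(null, "dbo", table, "%")) {
                                while (cols.next()) {
                                    System.out.println("    " + cols.getString("COLUMN_NAME"));
                                }
                            }
                        }
                    }
                }
            }
        }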

    Read the article

  • Can I get a dump of all my databases *except one* using mysqldump?

    - by Daniel Magliola
    I'm currently using mysqldump to back up my dev machine and servers. There is one project I just started, however, that has a HUUUUUGE database that I don't really need backed up, and it'll be a big problem to add it to the rest of the backup cycle. I'm currently doing this:
        "c:\Program Files\mysql\MySQL Server 5.1\bin\mysqldump" -u root -pxxxxxx --all-databases > g:\backups\MySQL\mysqlbackup.sql
    Is it possible to somehow specify "except this database(s)"? I wouldn't like to have to specify the list of DBs manually, since that would mean I'd have to remember to update my backup batch file every time I create a new DB, and I know that's not gonna happen. EDIT: As you probably guessed from my command line above, I'm doing this on Windows, so I can't do any kind of fancy bash stuff, only wimpy .bat things. Alternatively, if you have other ideas to solve this same issue, they are more than welcome, of course! Thanks, Daniel

    Read the article

  • Where to add the PHP script and MySQL dump code on the LAMP server?

    - by primal
    Hi, I don't know if this is the right place to ask this question, but as time is limited and I have always gotten the right answers here, I'm asking it anyway. I want to set up a login page on a local server so as to communicate with it via Android. I googled and found the necessary PHP script and MySQL code for it. What I don't know is where to put this code on the local server and how to connect the pieces. :( (This is mainly for testing purposes, as we will develop the website in Django later on.) Any help would be much appreciated.

    Read the article

  • Dump Linq-To-Sql now that Entity Framework 4.0 has been released?

    - by DanM
    The relative simplicity of Linq-To-Sql, as well as all the criticism leveled at version 1 of Entity Framework (especially the vote of no confidence), convinced me to go with Linq-To-Sql "for the time being". Now that EF 4.0 is out, I wonder if it's time to start migrating over to it. Questions:
    - What are the pros and cons of EF 4.0 relative to Linq-To-Sql?
    - Is EF 4.0 finally ready for prime time?
    - Is now the time to switch over?

    Read the article

  • How to sanely read and dump structs to disk when some fields are pointers?

    - by bp
    Hello, I'm writing a FUSE plugin in C. I'm keeping track of data structures in the filesystem through structs like:
        typedef struct {
            block_number_t inode;
            filename_t filename; // char[SOME_SIZE]
            some_other_field_t other_field;
        } fs_directory_table_item_t;
    Obviously, I have to read (write) these structs from (to) disk at some point. I could treat the struct as a sequence of bytes and do something like this:
        read(disk_fd, directory_table_item, sizeof(fs_directory_table_item_t));
    ...except that cannot possibly work, as filename is actually a pointer to the char array. I'd really like to avoid having to write code like:
        read(disk_fd, &directory_table_item->inode, sizeof(block_number_t));
        read(disk_fd, directory_table_item->filename, sizeof(filename_t));
        read(disk_fd, &directory_table_item->other_field, sizeof(some_other_field_t));
    ...for each struct in the code, because I'd have to replicate code and changes in no less than three different places (definition, reading, writing). Any DRYer but still maintainable ideas?

    Read the article

  • How to dump STDIN to a file, using C++ STL?

    - by Jimm Chen
    Hello all, this is a straightforward question, but a straightforward answer cannot be found by just Googling today. I hope someone can show me a concise answer before I dig into those thick C++ books and finally work the solution out. Thank you. I'm writing this program as a workaround for this issue: Why do I get 'Bad file descriptor' when trying sys.stdin.read() in a subversion pre-revprop-change py script? Note: content from STDIN may be arbitrary binary data. Please use C++ STL facilities: iostream, ifstream, etc. If the file creation or writing fails, I'd like to catch the exception to know which case occurred.

    Read the article

  • How do I dump arrays in a nice orderly fashion?

    - by George
    Whenever I use print_r or var_dump, the output comes out all sloppy and on one line, instead of formatted like I see on so many sites and on the actual php.net site. What do I do to get it like this -
        Array
        (
            [a] => apple
            [b] => banana
            [c] => Array
                (
                    [0] => x
                    [1] => y
                    [2] => z
                )
        )
    Instead of this
        Array([a] => apple [b] => banana [c] => Array ( [0] => x [1] => y [2] => z ))
    (Pretty sure it comes out even messier than that; I was just deleting whitespace by hand.)

    Read the article

  • How to dump a JS array... (bookmarklet?)

    - by Soulhuntre
    A page on a site I use is holding some of my data hostage. Once I have logged into the site and navigated to the right page, the data I need is in the array eeData[] - it is 720 elements long (one reading every 2 minutes of a given day). Rather than simulate the requests to the underlying JSON supplier, and since it's only once a day, I am happy to simply develop a bookmarklet to grab the data - preferably as an XML or CSV file. Any pointers to sample code or hints would help. I found a bookmarklet here that is based on this script that does part of this - but I am not up to speed on any potential JS file I/O to see if it is possible to induce a file "download" of the data, or pop it open in a new window I can copy/paste from.

    Read the article

  • Looking for a script/tool to dump a list of installed features and programs on Windows Server 2008 R2

    - by Hamish Grubijan
    Hi, the same compiled .NET / C++ / COM program does different things on two seemingly identical computers. Both have dozens of things installed on them. I would like to figure out the difference between the two by looking at an ASCII diff. Before that, I need to "serialize" the list of installed things in a plain readable format - sorted alphabetically, one item per line. A Python script would be ideal, but I also have Perl and PowerShell installed. Thank you.

    Read the article

  • Skip Corrupt Revisions During SvnAdmin Load

    - by cisellis
    I have a dump file that I am generating from VSS using the VSS2SVN script. I've tested the generated dump file before, and some of the revisions are corrupt for one reason or another (binary data or long path strings seem to be the main culprits). This is fine. In the past I have used svndumpfilter to split the dump file, remove the corrupt revisions, and continue to load the repository. It worked, but took a lot of manual effort to start the load, hit the bad revision, split the dump file, continue loading the repo, etc. This dump file is pretty large (~5GB) and takes several hours to load. I think I know the answer to this, but is there any way to simply tell svnadmin load to keep going and skip corrupt revisions? I know how to verify, back up, etc. the dump file and don't need any of that. I don't care about recovering corrupt revisions. I just want to start the load, walk away, and not worry about checking it every few hours to manually remove the corrupt revisions. Is that possible? Thanks.

    Read the article

  • Subsequent runs of rsync locally don't reduce data transferred

    - by sharakan
    I have an EC2 instance with data I want to sync to a mounted, but remote, volume as a backup. rsync seems like the way to go with this, so as a test I took my test file (a Postgres pg_dump file) and used rsync -v to copy it to the mounted volume:
        [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql
        dump.sql.1
        sent 821704315 bytes  received 31 bytes  3416650.09 bytes/sec
        total size is 821603948  speedup is 1.00
    Then I ran it again, expecting to see minimal sent/received numbers because it would just be checksums. Instead...
        [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql
        dump.sql.1
        sent 821704315 bytes  received 31 bytes  3402502.47 bytes/sec
        total size is 821603948  speedup is 1.00
    I'm new to rsync, so perhaps I'm missing something, but isn't the idea that the source and destination files are checked for differences, and then a patch is generated and applied to the destination? Why is this not reducing the amount of data 'sent' to just the size of the checksums? Some background, if it's relevant: the mounted volume is using s3fs, mounted with s3fs <bucketname> backup.

    Read the article

  • Poor LLVM JIT performance

    - by Paul J. Lucas
    I have a legacy C++ application that constructs a tree of C++ objects. I want to use LLVM to call class constructors to create said tree. The generated LLVM code is fairly straightforward and looks like repeated sequences of:
        ; ...
        %11 = getelementptr [11 x i8*]* %Value_array1, i64 0, i64 1
        %12 = call i8* @T_string_M_new_A_2Pv(i8* %heap, i8* getelementptr inbounds ([10 x i8]* @0, i64 0, i64 0))
        %13 = call i8* @T_QueryLoc_M_new_A_2Pv4i(i8* %heap, i8* %12, i32 1, i32 1, i32 4, i32 5)
        %14 = call i8* @T_GlobalEnvironment_M_getItemFactory_A_Pv(i8* %heap)
        %15 = call i8* @T_xs_integer_M_new_A_Pvl(i8* %heap, i64 2)
        %16 = call i8* @T_ItemFactory_M_createInteger_A_3Pv(i8* %heap, i8* %14, i8* %15)
        %17 = call i8* @T_SingletonIterator_M_new_A_4Pv(i8* %heap, i8* %2, i8* %13, i8* %16)
        store i8* %17, i8** %11, align 8
        ; ...
    where each T_ function is a C "thunk" that calls some C++ constructor, e.g.:
        void* T_string_M_new_A_2Pv( void *v_value ) {
            string *const value = static_cast<string*>( v_value );
            return new string( value );
        }
    The thunks are necessary, of course, because LLVM knows nothing about C++. The T_ functions are added to the ExecutionEngine in use via ExecutionEngine::addGlobalMapping(). When this code is JIT'd, the performance of the JIT'ing itself is very poor. I've generated a call graph using kcachegrind. I don't understand all the numbers (and this PDF seems not to include commas where it should), but if you look at the left fork, the bottom two ovals, Schedule... is called 16K times and setHeightToAtLeas... is called 37K times. On the right fork, RAGreed... is called 35K times. Those are far too many calls to anything for what's mostly a simple sequence of call LLVM instructions. Something seems horribly wrong. Any ideas on how to improve the performance of the JIT'ing?

    Read the article

  • Java Generic Casting Type Mismatch

    - by Kay
        public class MaxHeap<T extends Comparable<T>> implements Heap<T> {
            private T[] heap;
            private int lastIndex;

            public void main(String[] args) {
                int i;
                T[] arr = {1, 3, 4, 5, 2}; // ERROR HERE *******
                foo
            }

            public T[] Heapsort(T[] anArray, int n) {
                // build initial heap
                T[] sortedArray = anArray;
                for (int i = n - 1; i < 0; i--) {
                    // assert: the tree rooted at index is a semiheap
                    heapRebuild(anArray, i, n);
                    // assert: the tree rooted at index is a heap
                }
                // sort the heap array
                int last = n - 1;
                // invariant: Array[0..last] is a heap,
                // Array[last+1..n-1] is sorted
                for (int j = 1; j < n - 1; j++) {
                    sortedArray[0] = sortedArray[last];
                    last--;
                    heapRebuild(anArray, 0, last);
                }
                return sortedArray;
            }

            protected void heapRebuild(T[] items, int root, int size) {
                foo
            }
        }
    The error is on the line with "T[] arr = {1,3,4,5,2}". Eclipse complains that there is a "Type mismatch: cannot convert from int to T". I've tried casting nearly everywhere but to no avail. A simple way out would be to not use generics and instead just use ints, but that's sadly not an option. I've got to find a way to resolve the array of ints "{1,3,4,5,2}" into an array of T so that the rest of my code will work smoothly.
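    A note on the error itself: {1, 3, 4, 5, 2} is an int[] literal, and the compiler cannot convert int to an arbitrary T. A sketch of the usual way out (my suggestion, assuming the foo placeholders in MaxHeap are filled in so it compiles): construct the array at a concrete type such as Integer, where autoboxing handles the int conversion, and pass it into the generic class from outside:
        // Sketch: the int literals can't become T inside the generic
        // class, but they CAN become Integer at a concrete call site,
        // since Integer implements Comparable<Integer>.
        public class HeapDemo {
            public static void main(String[] args) {
                Integer[] arr = {1, 3, 4, 5, 2}; // autoboxed int -> Integer
                MaxHeap<Integer> heap = new MaxHeap<Integer>();
                Integer[] sorted = heap.Heapsort(arr, arr.length);
                for (Integer n : sorted) {
                    System.out.print(n + " ");
                }
            }
        }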

    Read the article

  • Memory allocation patterns in C++

    - by Mahatma
    I am confused about memory allocation in C++ in terms of the memory areas such as the const data area, stack, heap/free store, and global/static area. I would like to understand the memory allocation pattern in the following snippet. Can anyone help me understand this? If anything beyond the variable types mentioned in the example would help explain the concept better, please alter the example.
        class FooBar {
            int n;              // stored on the stack?
        public:
            int pubVar;         // stored on the stack?
            void foo(int param) // param stored on the stack
            {
                int *pp = new int;       // int is allocated on the heap
                n = param;
                static int nStat;        // stored in the static area of memory
                int nLoc;                // stored on the stack?
                string str = "mystring"; // stored on the stack?
                ..
                if (CONDITION) {
                    static int nSIf;     // stored in the static area of memory
                    int loopvar;         // stored on the stack
                    ..
                }
            }
        };

        int main(int) {
            FooBar bar;          // bar stored on the stack? or a part of it?
            FooBar *pBar;        // pBar is stored on the stack
            pBar = new FooBar(); // the object is created on the heap? What part of the object is stored on the heap?
        }
    EDIT: What confuses me is, if pBar = new FooBar(); stores the object on the heap, how come int nLoc; and int pubVar;, which are components of the object, are stored on the stack? That sounds contradictory to me. Shouldn't the lifetimes of pubVar and pBar be the same?

    Read the article
