Search Results

Search found 10215 results on 409 pages for 'ram usage'.

Page 346/409 | < Previous Page | 342 343 344 345 346 347 348 349 350 351 352 353  | Next Page >

  • maintaining a large list in python

    - by Oren
    I need to maintain a large list of Python pickleable objects that will quickly execute the following algorithm. The list will have 4 pointers and must support:
    - Read/Write of each of the 4 pointed nodes
    - Inserting new nodes before the pointed nodes
    - Increasing a pointer by 1
    - Popping nodes from the beginning of the list (without overlapping the pointers)
    - Adding nodes at the end of the list
    The list can be very large (300-400 MB), so it shouldn't be held entirely in RAM. The nodes are small (100-200 bytes) but can contain various types of data. The most efficient way to do it, in my opinion, is to implement a paging mechanism. On the hard drive there would be a database of a linked list of pages, of which at most 5 are loaded in memory at any moment. On insertion, if some max page size is reached, the page is split into 2 smaller pages, and on pointer increment, if a pointer moves beyond its loaded page, the following page is loaded instead. This is a good solution, but I don't have the time to write it, especially if I want to make it generic and implement all the python-list features. Using SQL tables is not good either, since I would need to implement a complex index-key mechanism. I tried ZODB and zc.blist, which implement a BTree-based list that can be stored in a ZODB database file, but I don't know how to configure them so the above operations run in reasonable time. I don't need the multi-threading/transaction features; no one else will touch the database file except my single-threaded program. Can anyone explain how to configure ZODB/zc.blist so the above operations run fast, or show me a different large-list implementation?

    Read the article

  • Incoming connections from 2 different Internet connections

    - by RJClinton
    I am hosting a game server on my 32-bit Windows Server 2003 machine. Due to some limitations of my ISP, I have to take 2 Internet connections in order to satisfy my bandwidth requirement, and I am facing serious problems here. The Internet is supplied over WiMAX: I am given a PoE unit for each connection, the RJ45 cables from both PoE units (2 connections) are plugged into a Netgear 100 Mbps switch, and another RJ45 cable connects the server to the switch. I have tried to configure the IP addresses and gateways of both connections on the single NIC in the server. The IP ranges and the gateways of the two Internet connections are very different. I know that this is not the proper approach, because alternate gateways are meant only as a backup in case of main gateway failure, and the usage of the gateways depends upon the metric (correct me if I am wrong). I am planning to get a second PCI NIC so that I can connect each of the Internet connections to its respective NIC. If I use this approach, will I be able to accept connections on both NICs? Also, please suggest any other alternatives that I might use.

    Read the article

  • Symfony / Doctrine - How to filter form field by property in related model

    - by Dan
    I have a UserForm class which has a select list populated from a related model (specified by a foreign relationship in the yml) like so:

        $this->setWidget('report_id', new sfWidgetFormDoctrineChoice(array('model' => $this->getRelatedModelName('Report'))));

    I'd like to filter the Report objects that come from this relation by one of the Report fields, "active", such that only Reports with active=1 appear in the form. I have a method, ReportTable::GetActiveReports(), that performs the appropriate query and returns the filtered reports. So one option is to populate the widget with the results of that function. Any tips on the syntax to do that? It seems to me the cleaner way is to use the UserFormFilter class to filter the reports by active=1 there. Unfortunately I couldn't find any documentation on how to use form filters (or really what they are), so maybe this is not the right solution. Is a Form Filter the appropriate tool for this job? It seems I should use the Doctrine_Record_Filter_Standard class as defined here: http://www.doctrine-project.org/api/orm/1.2/doctrine/doctrine_record_filter_standard.html But it's not clear to me what the appropriate usage is. Any guidance would be helpful. Thanks! Dan

    Read the article

  • What are good NoSQL and non-relational database solutions for an audit/logging database

    - by Juha Syrjälä
    What would be a suitable database for the following? I am especially interested in your experiences with non-relational NoSQL systems. Are they any good for this kind of usage, which system have you used and would recommend, or should I go with a normal relational database (DB2)? I need to gather audit-trail/logging type information from a bunch of sources to a centralized server where I could generate reports efficiently and examine what is happening in the system. Typically an audit/logging event would always consist of some mandatory fields, for example:
    - globally unique id (somehow generated by the program that generated this event)
    - timestamp
    - event type (i.e. user logged in, error happened, etc.)
    - some information about the source (server1, server2)
    Additionally the event could contain 0-N key-value pairs, where a value might be up to a few kilobytes of text. Requirements:
    - It must run on a Linux server
    - It should work with a high amount of data (100 GB for example)
    - It should support some kind of efficient full text search
    - It should allow concurrent reading and writing
    - It should be flexible to add new event types and add/remove key-value pairs in new events. Flexible = no changes should be required to the database schema; the application generating the events can just add new event types/new fields as needed.
    - It should be efficient to make queries against the database, for reporting and exploring what happened. For example: How many events with type=X occurred in some time period? Get all events where field A has value Y. Get all events with type X where field A has value 1, field B is not 2, and the event occurred in the last 24h.

    Read the article

  • User (MS-Office) generated content - how?

    - by Avi
    How can I allow users to share Microsoft Office generated content on an ASP.Net site? For an example usage, imagine a site similar to StackOverflow. George, writing a question, uses Word, Excel or OneNote to create content, and then inserts the content into the question area (probably copying it into the clipboard and then using some "paste from office" widget). Harry, who doesn't have MS-Office on his computer, can still see in his browser the content George has generated. If Harry wants to add content, he can use the built-in editor, same as on Stackoverflow, and has to be satisfied with less functionality. Sue, who has MS-Office installed, can of course see the content in the browser just like Harry. In addition, she can "export" this content and process it in the application George used to generate it. So, how do I do it? Would a Save/Export to HTML feature work? Any tools? Samples? Articles? Office 2007 or later is OK.

    Read the article

  • Any way for a class to prevent outside code from declaring variables of its type?

    - by supercat
    Is it possible for a class to expose a type for function returns without allowing users of that class to create variables of that type? A couple of usage scenarios:
    - A fluent interface on a large class; a statement like "foo=bar.WithX(5).WithY(9).WithZ(19);" would be inefficient if it had to create three new instances of the class, but could be much more efficient if WithX could create one instance and the other statements could simply reuse it.
    - A class may wish to support a statement like "foo[19].x = 9;" even when foo itself isn't an array and does not hold the data in class instances that can be exposed to the public; one way to do that is to have foo[19] return a struct which holds a reference to 'foo' and the value '19', and has a member property 'x' which could call "foo.SetXValue(19, 9);". Such a struct could have a conversion operator to convert itself to the "apparent" type of foo[19] (a sketch of this idea appears below).
    In both of these scenarios, storing the value returned by a method or property into a variable and then using it more than once would cause strange behavior. It would be desirable if the designer of the class exposing such methods or properties could ensure that callers wouldn't be able to use them more than once. Is there any practical way to accomplish that?
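
    A minimal C# sketch of the second scenario above: an indexer returns a lightweight proxy that forwards assignments back to the owning object. All names here (Foo, ElementProxy, SetXValue) are illustrative, not from the original question, and the proxy is written as a class rather than the struct the question describes, purely to keep the assignment through the indexer result straightforward.

        public class Foo
        {
            // backing data stays private; callers never see the real storage
            private readonly int[] xValues = new int[32];

            public int GetXValue(int index) { return xValues[index]; }
            public void SetXValue(int index, int value) { xValues[index] = value; }

            // foo[19] returns a small proxy instead of exposing internal data
            public ElementProxy this[int index]
            {
                get { return new ElementProxy(this, index); }
            }

            public class ElementProxy
            {
                private readonly Foo owner;
                private readonly int index;

                internal ElementProxy(Foo owner, int index)
                {
                    this.owner = owner;
                    this.index = index;
                }

                // "foo[19].x = 9" ends up calling owner.SetXValue(19, 9)
                public int x
                {
                    get { return owner.GetXValue(index); }
                    set { owner.SetXValue(index, value); }
                }
            }
        }

    With this sketch, "var foo = new Foo(); foo[19].x = 9;" works as the question wants, but nothing prevents a caller from storing the proxy in a variable and reusing it later, which is exactly the gap the question asks about.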

    Read the article

  • MEF = may experience frustration?

    - by Dave
    Well, it's not THAT bad yet. :) But I do have questions after Reed pointed me at MEF as a potential alternative to IoC (and so far it does look pretty good). Consider the following model: As you can see, I have an App, and this app uses Plugins (whoops, missed that association!). Both the App and the Plugins require an object of type CandySettings, which is found in yet another assembly. I first tried to use the ComposeParts method in MEF, but the only way I could get this to work was to do something like this in the plugin code:

        var container = new CompositionContainer();
        container.ComposeParts(this, new CandySettings());

    But this doesn't make any sense, because why would I want to create the instance of CandySettings in the plugin? It should be in the App. But if I put it in the App code, then the Plugin doesn't magically figure out how to get at ICandySettings, even though I am using [Import] in the plugin and [Export] in CandySettings. The way I did it was to use MEF's DirectoryCatalog, because this allows the plugin, when constructed, to scan all of the assemblies in the current folder and automagically import everything that is marked with the [Import] attribute. So it looks like this, potentially in every plugin:

        var catalog = new DirectoryCatalog(".");
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);

    This totally works great, but I can't help but think that this is not how MEF was intended to be used?
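
    A hedged sketch (not from the original post) of the arrangement the question is reaching for: the App owns a single CompositionContainer built from one catalog, exports the CandySettings instance from the settings assembly, and satisfies each plugin's imports from that shared container, so no plugin ever news up its own container. The ICandySettings and Plugin names and the "plugins" folder path are illustrative assumptions.

        using System.ComponentModel.Composition;
        using System.ComponentModel.Composition.Hosting;

        public interface ICandySettings { }

        [Export(typeof(ICandySettings))]
        public class CandySettings : ICandySettings { }

        public class Plugin
        {
            [Import]
            public ICandySettings Settings { get; set; }
        }

        public class App
        {
            public void Compose(Plugin plugin)
            {
                // one catalog/container owned by the App, not by each plugin
                var catalog = new AggregateCatalog(
                    new AssemblyCatalog(typeof(CandySettings).Assembly), // exports ICandySettings
                    new DirectoryCatalog("plugins"));                    // plugin assemblies (illustrative path)

                var container = new CompositionContainer(catalog);

                // satisfy the plugin's [Import]s from the App-owned container
                container.ComposeParts(plugin);
            }
        }

    The plugin itself then only declares [Import] and receives whatever the App composed, which is closer to how MEF containers are normally scoped.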

    Read the article

  • Perl DBI doesn't work with Oracle 11g

    - by John
    I am getting the following error connecting to an Oracle 11g database using a simple perl script:

        failed: ERROR OCIEnvNlsCreate. Check ORACLE_HOME (Linux) env var or PATH (Windows) and or NLS settings, permissions, etc. at

    The script is as follows:

        #!/usr/local/bin/perl
        use strict;
        use DBI;

        if ($#ARGV < 3) {
            print "Usage: perl testDbAccess.pl dataBaseUser dataBasePassword SID dataBasePort\n";
            exit 0;
        }
        my ($user, $pwd, $sid, $port) = @ARGV;
        my $host = `hostname`;
        my $dbh;
        my $sth;
        my $dbname = "dbi:Oracle:HOST=$host;SID=$sid;PORT=$port";
        openDbConnection();
        closeDbConnection();

        sub openDbConnection() {
            $dbh = DBI->connect($dbname, $user, $pwd, { RaiseError => 1 })
                || die "Database connection not made: $DBI::errstr";
        }

        sub closeDbConnection() {
            #$sth->finish();
            $dbh->disconnect();
        }

    Anyone seen this problem before? /john

    Read the article

  • Command-Line Parsing API from TestAPI library - Type-Safe Commands how to

    - by MicMit
    Library at http://testapi.codeplex.com/ Excerpt of usage from http://blogs.msdn.com/ivo_manolov/archive/2008/12/17/9230331.aspx

        A third common approach is forming strongly-typed commands from the command-line parameters. This is common for cases when the command-line looks as follows:

            some-exe COMMAND parameters-to-the-command

        The parsing in this case is a little bit more involved: Create one class for every supported command, which derives from the Command abstract base class and implements an expected Execute method. Pass an expected command along with the command-line arguments to CommandLineParser.ParseCommand - the method will return a strongly-typed Command instance that can be Execute()-d.

            // EXAMPLE #3:
            // Sample for parsing the following command-line:
            //    Test.exe run /runId=10 /verbose
            // In this particular case we have an actual command on the command-line ("run"),
            // which we want to effectively de-serialize and execute.
            public class RunCommand : Command
            {
                bool? Verbose { get; set; }
                int? RunId { get; set; }

                public override void Execute()
                {
                    // Implement your "run" execution logic here.
                }
            }

            Command c = new RunCommand();
            CommandLineParser.ParseArguments(c, args);
            c.Execute();

    I don't get it: if we instantiate the specific class before parsing the arguments, what is the point of the command-line argument "run", which is the very first one? I thought the idea was to instantiate and execute a command/class based on a command-line parameter ("run" becomes an instance of the RunCommand class, "walk" becomes a WalkCommand class, and so on). Can it be done with the latest version?
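
    One hedged reading of how the "run" token is meant to be used, sketched below: the application (not the parser) maps the first argument to a Command subclass, then hands the remaining "/name=value" arguments to the ParseArguments call shown in the excerpt. The dispatch dictionary is an illustrative assumption, not part of the TestAPI library, and WalkCommand is the hypothetical second command the question mentions.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        // Command, CommandLineParser, RunCommand come from TestAPI / the excerpt above;
        // the corresponding using directive is omitted here.

        public static class Program
        {
            public static void Main(string[] args)
            {
                // map the first token ("run", "walk", ...) to a command instance
                var commands = new Dictionary<string, Command>(StringComparer.OrdinalIgnoreCase)
                {
                    { "run",  new RunCommand() },
                    { "walk", new WalkCommand() },
                };

                Command command;
                if (args.Length == 0 || !commands.TryGetValue(args[0], out command))
                {
                    Console.Error.WriteLine("Unknown command");
                    return;
                }

                // fill the command's properties from the remaining arguments,
                // using the ParseArguments call quoted in the excerpt above
                CommandLineParser.ParseArguments(command, args.Skip(1).ToArray());
                command.Execute();
            }
        }

    Whether CommandLineParser.ParseCommand can do this name-to-type lookup by itself is exactly what the question asks, so the sketch keeps that choice outside the library.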

    Read the article

  • Need help troubleshooting why Solr won't start (or why the Solr admin page won't show)

    - by Camran
    I can't get Solr working. I have Jetty, and my server OS is Ubuntu 9.10. It is a VPS server. So, when I execute java -jar start.jar everything seems fine. I even do a netstat to check if there are any listeners on the port before the start and after the start, and it seems Solr is starting. However, I can't access the admin page. I have even turned off the firewall. Here is some info about my server:
    - I have changed DocumentRoot to var/www/SV/
    - I have Apache2, PHP5, MySql installed
    - I have "disabled" the iptables firewall
    - I have removed the htaccess files (I used them to password-protect my site under development)
    - I have installed the JRE (NOT the JDK) on my server
    - I use the "example" which comes with Solr, so I use Jetty as the container on my server
    - My server has 768 MB RAM
    Doing a java -version command shows this:

        java version "1.6.0_15"
        Java(TM) SE Runtime Environment (build 1.6.0_15-b03)
        Java HotSpot(TM) Client VM (build 14.1-b02, mixed mode)

    And in the terminal the last lines when executing start.jar are:

        May 29, 2010 1:30:03 PM org.apache.solr.core.SolrCore registerSearcher
        INFO: [] Registered new searcher Searcher@1dc64a5 main

    NOTE: Also before this last line, there is a line which makes me suspicious:

        Started SocketConnector @ 0.0.0.0:8983 // Should this be with leading zeros?

    Is there any way you know to troubleshoot this? Memory issue maybe? Thanks

    Read the article

  • Books and resources for Java Performance tuning - when working with databases, huge lists

    - by Arvind
    Hi All, I am relatively new to working on huge applications in Java. I am working on a Java web service which is pretty heavily used by various clients. The service basically queries the database (Hibernate) and then works with a lot of Lists (there are adapters to convert the list returned from the DB to the interface which the service publishes), and I am seeing a lot of issues with the service like high CPU usage or high heap space. While I can troubleshoot the performance issues using a profiler, I want to actually learn about what I need to take care of when I write code. Like what kind of List to use, or things like using StringBuilder instead of String, etc... Is there any book or blog which I can refer to that will help me while I write new services? Also my application is multithreaded - each service call from a client is a new thread - and I want to know some best practices around that area as well. I did search the web, but I found many tips which are not relevant in the latest Java 6 releases, so I wanted to know what kind of resources would help a developer starting out now on Java for heavily used applications. Arvind

    Read the article

  • Taking "do the simplest thing that could possible work" too far in TDD: testing for a file-name kno

    - by Support - multilanguage SO
    For TDD you have to:
    - Create a test that fails
    - Do the simplest thing that could possibly work to pass the test
    - Add more variants of the test and repeat
    - Refactor when a pattern emerges
    With this approach you're supposed to cover all the cases (that come to my mind at least), but I wonder if I am being too strict here and if it is possible to "think ahead" to some scenarios instead of simply discovering them. For instance, I'm processing a file and if it doesn't conform to a certain format I am to throw an InvalidFormatException. So my first test was:

        @Test
        void testFormat() {
            // empty doesn't do anything nor throw anything
            processor.validate("empty.txt");
            try {
                processor.validate("invalid.txt");
                assert false : "Should have thrown InvalidFormatException";
            } catch (InvalidFormatException ife) {
                assert "Invalid format".equals(ife.getMessage());
            }
        }

    I run it and it fails because it doesn't throw an exception. So the next thing that comes to my mind is "Do the simplest thing that could possibly work", so I:

        public void validate(String fileName) throws InvalidFormatException {
            if (fileName.equals("invalid.txt")) {
                throw new InvalidFormatException("Invalid format");
            }
        }

    Doh!! (Although the real code is a bit more complicated, I found myself doing something like this several times.) I know that I will eventually have to add another file name and another test that would make this approach impractical and force me to refactor to something that makes sense (which, if I understood correctly, is the point of TDD: to discover the patterns the usage unveils), but: Q: am I taking the "Do the simplest thing..." stuff too literally?

    Read the article

  • Using Hibernate's ScrollableResults to slowly read 90 million records

    - by at
    I simply need to read each row in a table in my MySQL database using Hibernate and write a file based on it. But there are 90 million rows and they are pretty big. So it seemed like the following would be appropriate:

        ScrollableResults results = session.createQuery("SELECT person FROM Person person")
            .setReadOnly(true).setCacheable(false).scroll(ScrollMode.FORWARD_ONLY);
        while (results.next())
            storeInFile(results.get()[0]);

    The problem is the above will try and load all 90 million rows into RAM before moving on to the while loop... and that will kill my memory with OutOfMemoryError: Java heap space exceptions :(. So I guess ScrollableResults isn't what I was looking for? What is the proper way to handle this? I don't mind if this while loop takes days (well I'd love it to not). I guess the only other way to handle this is to use setFirstResult and setMaxResults to iterate through the results and just use regular Hibernate results instead of ScrollableResults. That feels like it will be inefficient though and will start taking a ridiculously long time when I'm calling setFirstResult on the 89 millionth row... UPDATE: setFirstResult/setMaxResults doesn't work, it turns out to take an unusably long time to get to the offsets like I feared. There must be a solution here! Isn't this a pretty standard procedure?? I'm willing to forgo Hibernate and use JDBC or whatever it takes. UPDATE 2: the solution I've come up with which works ok, not great, is basically of the form:

        select * from person where id > <offset> and <other_conditions> limit 1

    Since I have other conditions, even all in an index, it's still not as fast as I'd like it to be... so still open for other suggestions..

    Read the article

  • MySQL performance

    - by kapil.israni
    Hi, I have this LAMP application with about 900k rows in MySQL and I am having some performance issues. Background - apart from the LAMP stack, there's also a Java process (multi-threaded) that runs in its own JVM. So together with LAMP & Java, they form the complete solution. The Java process is responsible for inserts/updates and a few selects as well. These inserts/updates are usually in bulk/batch, anywhere between 5-150 rows. The PHP front-end code only does SELECTs. Issue - the PHP SELECT queries become very slow when the Java process is running. When the Java process is stopped, SELECTs perform alright. I mean the performance difference is huge. When the Java process is running, any action performed on the PHP front-end results in 80% or more CPU usage for the mysqld process. Any help would be appreciated. MySQL is running with default parameters & settings. Software stack:
    - Apache - 2.2.x
    - MySQL - 5.1.37-1ubuntu5
    - PHP - 5.2.10
    - Java - 1.6.0_15
    - OS - Ubuntu 9.10 (karmic)

    Read the article

  • Get most left|right|top|bottom point contained in box

    - by skyman
    I'm storing Points Of Interest (POI) in a PostgreSQL database and retrieving them via a PHP script to an Android application. To reduce Internet usage I want my mobile app to know if there are any points in the neighborhood of the currently displayed area. My idea is to store the bounds of the rectangle containing all points already retrieved (in other words: the nearest point to the west of the westernmost point already retrieved, the nearest point to the north of the northernmost point already retrieved, etc.), and I will make the next query when any edge of the screen goes outside of these bounds. Currently I can retrieve points which are in a "single screen" (in the area covered by the currently displayed map) using:

        SELECT * FROM ch WHERE loc <@ (box '((".-$latSpan.", ".$lonSpan."),(".$latSpan.", ".-$lonSpan."))' + point '".$loc."')

    Now I need to know the four most remote points in each direction, so that I will be able to retrieve the next four "more remote" points. Is there any possibility to get those points (or the box) directly from PostgreSQL (maybe using some "aggregate points to box" function)?

    Read the article

  • Long running stats process - thoughts on language choice?

    - by Josh
    I am on a LAMP stack for a website I am managing. There is a need to roll up usage statistics (a variety of things related to our desktop product), and I initially tackled the problem with PHP (being that I had a bunch of classes to work with the data already). All worked well on my dev box, which was using 5.3. Long story short, 5.1 memory management seems to suck a lot worse, and I've had to do a lot of fooling around to get the long-term roll-up scripts to run in a fixed memory space. Our server guys are unwilling to upgrade PHP at this time. I've since moved my dev server back to 5.1 so I don't run into this problem again... For mining MySQL databases to roll up statistics for different periods and resolutions, potentially running a process that does this all the time in the future (as opposed to on a cron schedule), what language choice do you recommend? I was looking at Python (I know it more or less), Java (I don't know it that well), or sticking it out with PHP (which I know quite well). Thanks for any suggestions. Josh

    Read the article

  • How should I handle persistence in a Java MUD? OptimisticLockException handling

    - by Chase
    I'm re-implementing an old BBS MUD game in Java with permission from the original developers. Currently I'm using Java EE 6 with EJB session facades for the game logic and JPA for the persistence. A big reason I picked session beans is JTA. I'm more experienced with web apps, in which if you get an OptimisticLockException you just catch it and tell the user their data is stale and they need to re-apply/re-submit. Responding with "try again" all the time in a multi-user game would make for a horrible experience. Given that I'd expect several people to be targeting a single monster during a fight, I think the chance of an OptimisticLockException would be high. My view code, the part presenting a telnet CLI, is the EJB client. Should I be catching the PersistenceExceptions and TransactionRolledbackLocalExceptions and just retrying? How do you decide when to stop? Should I switch to pessimistic locking? Is persisting after every user command overkill? Should I be loading the entire world in RAM and dumping the state every couple of minutes? Do I make my session facade an EJB 3.1 singleton, which would function as a choke point and therefore eliminate the need to do any type of JPA locking? EJB 3.1 singletons function with a multiple-reader/single-writer design (you annotate the methods as readers and writers). Basically, what is the best design and Java persistence API for highly concurrent data changes in an application where it is not acceptable to present resubmit/retry prompts to the user?

    Read the article

  • problem disposing a class in a Dictionary: it is still in heap memory although using GC.Collect

    - by Bahgat Mashaly
    Hello, I have a problem disposing a class held in a Dictionary. This is my code:

        private Dictionary<string, MyProcessor> Processors = new Dictionary<string, MyProcessor>();

        private void button1_Click(object sender, EventArgs e)
        {
            if (!Processors.ContainsKey(textBox1.Text))
            {
                Processors.Add(textBox1.Text, new MyProcessor());
            }
        }

        private void button2_Click(object sender, EventArgs e)
        {
            MyProcessor currnt_processor = Processors[textBox1.Text];
            Processors.Remove(textBox2.Text);
            currnt_processor.Dispose();
            currnt_processor = null;
            GC.Collect();
        }

        public class MyProcessor : IDisposable
        {
            private bool isDisposed = false;
            string x = "";

            public MyProcessor()
            {
                for (int i = 0; i < 20000; i++)
                {
                    // this line is only to increase the memory usage, to tell whether the class is disposed or not
                    x = x + "gggggggggggg";
                }
                this.Dispose();
                GC.SuppressFinalize(this);
            }

            public void Dispose()
            {
                this.Dispose(true);
                GC.SuppressFinalize(this);
            }

            public void Dispose(bool disposing)
            {
                if (!this.isDisposed)
                {
                    isDisposed = true;
                    this.Dispose();
                }
            }

            ~MyProcessor()
            {
                Dispose(false);
            }
        }

    I use "ANTS Memory Profiler" to monitor heap memory. The disposing works only when I remove all keys from the dictionary. How can I get the class out of heap memory? Thanks in advance
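
    A small sketch, independent of the code above, of why the profiler can still show the instance: Dispose() only runs cleanup code, it does not free managed memory. The instance becomes collectible only once no live reference to it remains (dictionary entry removed, locals cleared), and a WeakReference can confirm that. The stand-in Holder type and the 10 MB payload are illustrative assumptions.

        using System;
        using System.Collections.Generic;

        class Holder : IDisposable
        {
            // large payload so the instance is easy to spot in a memory profiler
            private readonly byte[] payload = new byte[10 * 1024 * 1024];

            public void Dispose()
            {
                // release unmanaged resources here; managed memory stays until the GC collects the object
            }
        }

        static class Demo
        {
            static void Main()
            {
                var map = new Dictionary<string, Holder> { { "a", new Holder() } };

                var weak = new WeakReference(map["a"]);

                map["a"].Dispose();   // does not free the 10 MB payload by itself
                map.Remove("a");      // now nothing references the instance

                GC.Collect();
                GC.WaitForPendingFinalizers();
                GC.Collect();

                // run a Release build; Debug builds may keep stack temporaries alive longer
                Console.WriteLine(weak.IsAlive ? "still reachable" : "collected");
            }
        }

    The same rule applies to the question's code: removing the entry for the right key (button2_Click looks up textBox1.Text but removes textBox2.Text) and dropping the local reference is what makes the object eligible for collection, not the Dispose() call.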

    Read the article

  • using ghostscript in server mode to convert pdfs to pngs

    - by emh
    While I am able to convert a specific page of a PDF to a PNG like so:

        gs -dSAFER -dBATCH -dNOPAUSE -sDEVICE=png16m -dGraphicsAlphaBits=4 -sOutputFile=gymnastics-20.png -dFirstPage=20 -dLastPage=20 gymnastics.pdf

    I am wondering if I can somehow use Ghostscript's JOBSERVER mode to process several conversions without having to incur the cost of starting up Ghostscript each time. From http://pages.cs.wisc.edu/~ghost/doc/svn/Use.htm:

        -dJOBSERVER
        Define \004 (^D) to start a new encapsulated job used for compatibility with Adobe PS Interpreters that ordinarily run under a job server. The -dNOOUTERSAVE switch is ignored if -dJOBSERVER is specified since job servers always execute the input PostScript under a save level, although the exitserver operator can be used to escape from the encapsulated job and execute as if the -dNOOUTERSAVE was specified. This also requires that the input be from stdin, otherwise an error will result (Error: /invalidrestore in --restore--). Example usage is:
            gs ... -dJOBSERVER - < inputfile.ps
            -or-
            cat inputfile.ps | gs ... -dJOBSERVER -
        Note: The ^D does not result in an end-of-file action on stdin as it may on some PostScript printers that rely on TBCP (Tagged Binary Communication Protocol) to cause an out-of-band ^D to signal EOF in a stream of input data. This means that direct file actions on stdin such as flushfile and closefile will affect processing of data beyond the ^D in the stream.

    The idea is to run Ghostscript in-process. The script would receive a request for a particular page of a PDF and would use Ghostscript to generate the specified image. I'd rather not start up a new Ghostscript process every time.

    Read the article

  • Java and tomcat vs ASP.NET and IIS

    - by Mark Cooper
    Until recently I'd considered myself to be a pretty good web programmer (coming up on 10 years' commercial experience on a variety of e-commerce, static and enterprise applications). I'm self taught and have always used the Microsoft product stack (ASP, ASP.NET)... My applications are always functional and relatively bug free, but have never been lightning quick. As a frequent web user I always found this to be the norm... how fast are the websites from the big tech players (eBay, Facebook, Microsoft, IBM, Dell, Telerik etc etc)? In truth none are particularly fast. I always attributed this to "the way things are with web apps"... ...then I came across a product called Jira from Atlassian and this has stopped me in my tracks... This application is fast, and I mean blindingly fast... too fast to time the switches between pages, fully live content, lots of images and data and cross references etc etc... I run this on an intranet, with a large application DB, and this is running on a very normal server (single processor, SATA HDD, 8GB RAM). Am I missing something?? Are my programming techniques that bad?? I am wondering if this speed gain is down to it being written in Java and running on Tomcat. Does anyone have any benchmarks to compare JSP / ASP or Tomcat / IIS??? Thanks, Mark NOTE: this isn't a blatant plug for Jira. I don't work for them or have any affiliation to them... but I would like to be able to write applications like them :)

    Read the article

  • Most efficient way of creating tree from adjacency list

    - by Jeff Meatball Yang
    I have an adjacency list of objects (rows loaded from a SQL database with the key and its parent key) that I need to use to build an unordered tree. It's guaranteed to not have cycles. This is taking way too long (processed only ~3K out of 870K nodes in about 5 minutes). Running on my workstation Core 2 Duo with plenty of RAM. Any ideas on how to make this faster?

        public class StampHierarchy
        {
            private StampNode _root;
            private SortedList<int, StampNode> _keyNodeIndex;

            // takes a list of nodes and builds a tree starting at _root
            private void BuildHierarchy(List<StampNode> nodes)
            {
                Stack<StampNode> processor = new Stack<StampNode>();
                _keyNodeIndex = new SortedList<int, StampNode>(nodes.Count);

                // find the root
                _root = nodes.Find(n => n.Parent == 0);

                // find children...
                processor.Push(_root);
                while (processor.Count != 0)
                {
                    StampNode current = processor.Pop();

                    // keep a direct link to the node via the key
                    _keyNodeIndex.Add(current.Key, current);

                    // add children
                    current.Children.AddRange(nodes.Where(n => n.Parent == current.Key));

                    // queue the children
                    foreach (StampNode child in current.Children)
                    {
                        processor.Push(child);
                        nodes.Remove(child); // thought this might help the Where above
                    }
                }
            }
        }

        public class StampNode
        {
            // properties: int Key, int Parent, string Name, List<StampNode> Children
        }
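
    For scale: the nodes.Where(...) scan plus the List.Remove call run once per node, so with 870K rows this is on the order of 10^11 element comparisons, which matches the observed slowness. A hedged sketch of the usual fix, using the same StampNode shape as above: group the rows by parent key once, then attach children with O(1) lookups. It assumes each Children list starts out empty, as the AddRange usage implies, and that a Dictionary is an acceptable stand-in for the SortedList index.

        using System.Collections.Generic;
        using System.Linq;

        public class StampHierarchyFast
        {
            private StampNode _root;
            private Dictionary<int, StampNode> _keyNodeIndex;

            private void BuildHierarchy(List<StampNode> nodes)
            {
                // direct key -> node index built in one pass
                _keyNodeIndex = nodes.ToDictionary(n => n.Key);

                // bucket all nodes by their parent key in one pass
                ILookup<int, StampNode> byParent = nodes.ToLookup(n => n.Parent);

                // parent == 0 marks the root, as in the original code
                _root = byParent[0].Single();

                // attach each node's children with constant-time lookups
                foreach (StampNode node in nodes)
                {
                    node.Children.AddRange(byParent[node.Key]);
                }
            }
        }

    This makes two or three linear passes over the 870K rows instead of a scan per node, so it should finish in seconds rather than hours.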

    Read the article

  • How to map IDictionary<string, object> in Fluent NHibernate?

    - by user298221
    I am looking to persist user preferences into a collection of name-value pairs, where the value may be an int, bool, or string. There are a few ways to skin this cat, but the most convenient method I can think of is something like this:

        public class User
        {
            public virtual IDictionary<string, object> Preferences { get; set; }
        }

    with its usage as:

        user.Preferences["preference1"] = "some value";
        user.Preferences["preference2"] = 10;
        user.Preferences["preference3"] = true;
        var pref = (int)user.Preferences["preference2"];

    I'm not sure how to map this in Fluent NHibernate, though I do think it is possible. Generally, you would map a simpler Dictionary<string, string> as:

        HasMany(x => x.Preferences)
            .Table("Preferences")
            .AsMap("preferenceName")
            .Element("preferenceValue");

    But with a type of 'object', NHibernate doesn't know how to deal with it. I imagine a custom UserType could be created that breaks an 'object' down into a string representing its type and a string representing the value. We would have a table that looks kind of like this:

        Table Preferences
            userId (int)
            preferenceName (varchar)
            preferenceValue (varchar)
            preferenceValueType (varchar)

    and the hibernate mapping would look like this:

        <map name="Preferences" table="Preferences">
            <key column="userId"></key>
            <index column="preferenceName" type="String" />
            <element type="ObjectAsStringUserType, Assembly">
                <column name="preferenceValue" />
                <column name="preferenceValueType"/>
            </element>
        </map>

    I'm not sure how you would map this in Fluent NHibernate. Maybe there's a better way to do this, or maybe I should just suck it up and use IDictionary<string, string>. Any ideas?
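
    A hedged sketch of the "suck it up and use IDictionary<string, string>" fallback mentioned above: keep the simple string-map mapping and layer typed helpers on the entity so callers still read and write ints and bools. The helper names are illustrative, not an NHibernate API.

        using System;
        using System.Collections.Generic;
        using System.Globalization;

        public class User
        {
            // mapped with HasMany(x => x.Preferences).Table("Preferences")
            //     .AsMap("preferenceName").Element("preferenceValue")
            public virtual IDictionary<string, string> Preferences { get; set; }

            public virtual void SetPreference<T>(string name, T value) where T : IConvertible
            {
                Preferences[name] = Convert.ToString(value, CultureInfo.InvariantCulture);
            }

            public virtual T GetPreference<T>(string name, T fallback = default(T)) where T : IConvertible
            {
                string raw;
                if (Preferences == null || !Preferences.TryGetValue(name, out raw))
                    return fallback;
                return (T)Convert.ChangeType(raw, typeof(T), CultureInfo.InvariantCulture);
            }
        }

    Usage would look like user.SetPreference("preference2", 10) and var n = user.GetPreference<int>("preference2"), trading the object-typed dictionary for plain strings in the database and sidestepping the custom UserType entirely.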

    Read the article

  • Android: ListActivity design - changing the content of the List Adapter

    - by Rob
    Hi, I would like to write a rather simple content application which displays a list of textual items (along with a small pic). I have a standard menu in which each menu item represents a different category of textual items (news, sports, leisure etc.). Pressing a menu item will display a list of textual items of this category. Now, having a separate ListActivity for each category seems like overkill (or does it?..). Naturally, it makes much more sense to use one ListActivity and replace the data of its adapter when each category is loaded. My concern is when "back" is pressed. The adapter is loaded with items of the current category, and now I need to display the list of the previous category (and enable clicking on list items too...). Since I have only one activity, I thought of a backup-and-load mechanism in the onPause() and onResume() functions, as well as making some distinction as to whether these functions are invoked as a result of a "new" event (menu item selected) or by a "back" press. This seems very cumbersome for such trivial usage... Am I missing something here? Thanks, Rob

    Read the article

  • Why do ASP.NET timers/UpdatePanels leak memory, and can it be fixed/worked around?

    - by KallDrexx
    I have built a suite of internal websites for our company to manage some of our processes. I have been noticing that these pages have massive memory leaks that cause a page to use well over 150 MB of memory, which is ridiculous for a webpage that consists of a single form and a GridView that is displaying 7-10 rows of data at a time, sometimes with the data not changing for a whole day. This data does need to be refreshed on a semi-regular basis so that we always see the latest results and can act on them. After some testing it appears that the memory leak is extremely easy to reproduce, and very noticeable. I created a page with the following ASP.NET markup:

        <body>
            <form id="form1" runat="server">
            <div>
                <asp:scriptmanager ID="Scriptmanager1" runat="server"></asp:scriptmanager>
                <asp:Timer ID="timer1" runat="server" Interval="1000" />
                <asp:UpdatePanel ID="UpdatePanel1" runat="server">
                    <ContentTemplate>
                    </ContentTemplate>
                </asp:UpdatePanel>
            </div>
            </form>
        </body>

    There is absolutely no code-behind for this. This is the entirety of the page. Running this site in Chrome shows the memory usage shoot up to 25 MB in the span of 20-30 seconds. Leaving it running for a few minutes makes the memory go up to 70 MB and beyond. Am I using timers and update panels wrong, or is this a pure ASP.NET issue with no workaround?

    Read the article

  • java RMI newbie -- some basic questions about SSL and auth/rate limiting an RMI service

    - by Arvind
    I am trying to secure a Java-based RMI service using SSL. I have some basic questions about the capabilities of SSL. Specifically, from what I understand, the client and server connecting via SSL will need to have appropriate credential certificates on both the client and the server for a client to be granted access to the server. Am I correct in my understanding? Also, what I want to know is: can a person who is already using my RMI service and has access to a client machine make a copy of the certificate on that client machine, put it on other client machines, and then invoke my RMI service from those other machines as well? How do I prevent such a situation from occurring? I mean, in a REST API you can use OAuth authentication; can we have some kind of authentication in an RMI service? Also, can I possibly limit usage of the RMI service? For example, a specific client may be allowed to make only 5000 calls per day to my RMI service, and if he makes more calls, the calls occurring after the 5000-call limit are all denied. How do I do such rate limiting and/or authentication for my RMI service?

    Read the article
