Search Results

Search found 13869 results on 555 pages for 'memory dump'.

Page 493/555 | < Previous Page | 489 490 491 492 493 494 495 496 497 498 499 500  | Next Page >

  • How to get encoding from MAPI message with PR_BODY_A tag (windows mobile)?

    - by SadSido
    Hi, everyone! I am developing a program that handles incoming e-mail and SMS through Windows Mobile MAPI. The code basically looks like this: ulBodyProp = PR_BODY_A; hr = piMessage->OpenProperty(ulBodyProp, NULL, STGM_READ, 0, (LPUNKNOWN*)&piStream); if (hr == S_OK) { // ... get body size in bytes ... STATSTG statstg; piStream->Stat(&statstg, 0); ULONG cbBody = statstg.cbSize.LowPart; // ... allocate memory for the buffer ... BYTE* pszBodyInBytes = NULL; boost::scoped_array<BYTE> szBodyInBytesPtr(pszBodyInBytes = new BYTE[cbBody+2]); // ... read body into pszBodyInBytes ... } That works and I get a message body. The problem is that this body is multibyte-encoded and I need to return a Unicode string. I guess I have to use the ::MultiByteToWideChar() function, but how do I know which codepage to apply? Using CP_UTF8 is naive, because the body may simply not be UTF-8. Using CP_ACP works sometimes, but sometimes it does not. So my question is: how can I retrieve the message codepage? Does MAPI provide any functions for this? Or is there a way to decode a multibyte string other than MultiByteToWideChar()? Thanks!
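
    For illustration, a minimal sketch of the conversion step, assuming the codepage can be obtained separately (desktop MAPI exposes it through a property such as PR_INTERNET_CPID; whether CEMAPI on Windows Mobile offers an equivalent is an assumption to verify):

        // Sketch only: ulCodePage is a placeholder for the codepage read from
        // the message; CP_ACP is just a fallback. Requires <string>.
        UINT ulCodePage = CP_ACP;

        // First call with cchWideChar = 0 asks for the required buffer length.
        int cchWide = ::MultiByteToWideChar(ulCodePage, 0,
                          (LPCSTR)pszBodyInBytes, (int)cbBody, NULL, 0);
        if (cchWide > 0)
        {
            std::wstring wideBody(cchWide, L'\0');
            ::MultiByteToWideChar(ulCodePage, 0,
                (LPCSTR)pszBodyInBytes, (int)cbBody, &wideBody[0], cchWide);
            // wideBody now holds the Unicode text of the message body.
        }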

    Read the article

  • How can I write a unit test to determine whether an object can be garbage collected?

    - by driis
    In relation to my previous question, I need to check whether a component that will be instantiated by Castle Windsor, can be garbage collected after my code has finished using it. I have tried the suggestion in the answers from the previous question, but it does not seem to work as expected, at least for my code. So I would like to write a unit test that tests whether a specific object instance can be garbage collected after some of my code has run. Is that possible to do in a reliable way ? EDIT I currently have the following test based on Paul Stovell's answer, which succeeds: [TestMethod] public void ReleaseTest() { WindsorContainer container = new WindsorContainer(); container.Kernel.ReleasePolicy = new NoTrackingReleasePolicy(); container.AddComponentWithLifestyle<ReleaseTester>(LifestyleType.Transient); Assert.AreEqual(0, ReleaseTester.refCount); var weakRef = new WeakReference(container.Resolve<ReleaseTester>()); Assert.AreEqual(1, ReleaseTester.refCount); GC.Collect(); GC.WaitForPendingFinalizers(); Assert.AreEqual(0, ReleaseTester.refCount, "Component not released"); } private class ReleaseTester { public static int refCount = 0; public ReleaseTester() { refCount++; } ~ReleaseTester() { refCount--; } } Am I right assuming that, based on the test above, I can conclude that Windsor will not leak memory when using the NoTrackingReleasePolicy ?
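
    For comparison, a reduced sketch of the WeakReference pattern on its own (MSTest; the instance is resolved in a helper method so no local variable keeps it reachable):

        [TestMethod]
        public void ComponentCanBeGarbageCollected()
        {
            var container = new WindsorContainer();
            container.Kernel.ReleasePolicy = new NoTrackingReleasePolicy();
            container.AddComponentWithLifestyle<ReleaseTester>(LifestyleType.Transient);

            WeakReference weakRef = ResolveWeak(container);

            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();

            Assert.IsFalse(weakRef.IsAlive, "Component was not garbage collected");
        }

        private static WeakReference ResolveWeak(WindsorContainer container)
        {
            // The resolved instance never escapes this method, so only the
            // WeakReference can still see it once the method returns.
            return new WeakReference(container.Resolve<ReleaseTester>());
        }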

    Read the article

  • NSCollectionView: Can the same object not be in the array more than once or is this a bug?

    - by Sean
    I may be doing this all wrong, but I thought I was on the right track until I hit this little snag. Basically I was putting together a toy using NSCollectionView and trying to understand how to hook that all up using IB. I have a button which will add a couple of strings to the NSArrayController: The first time I press this button, my strings appear in the collection view as expected: The second time I press the button, the views scroll down and room is made - but the items don't appear to get added. I just see blank space: The button is implemented as follows (controller is a pointer to the NSArrayController I added in IB): - (IBAction)addStuff:(id)control { [controller addObjects:[NSArray arrayWithObjects:@"String 1",@"String 2",@"String 3",nil]]; } I'm not sure what I'm doing wrong. Rather than try to explain all the connections/binds/etc, if you need more info, I'd be grateful if you could just take a quick look at the toy project itself. UPDATE: After more experimentation as suggested by James Williams, it seems the problem stems from having multiple objects with the same memory address in the array. This confuses either NSArrayController or NSCollectionView (not sure which). Changing my addStuff: to this resulted in the behavior I originally expected: [controller addObjects:[NSArray arrayWithObjects:[NSMutableString stringWithString:@"String 1"],[NSMutableString stringWithString:@"String 2"],[NSMutableString stringWithString:@"String 3"],nil]]; So the question now, I guess, is if this is a bug I should report to Apple or if this is intended/documented behavior and I just missed it?

    Read the article

  • How does the Dispose() method work in C#.NET?

    - by Shailesh Jaiswal
    I am developing a smart device application in C#. It is a Windows Forms application with 4 to 5 forms. I navigate between the forms using a LinkLabel control; in the linklabel_Click() method I call form1.Show() as needed. I read that form1.Show() automatically calls form1.Dispose(), and also that once a form is disposed it is removed from memory and cannot be shown again. But in my application no form ever gets disposed: I can still see every form after calling form1.Show(), and when I use the link to go back to form1 it has not been disposed. Is anything wrong with my understanding? I am new to C#. Please explain how the Dispose() method works in this context and what it is for; an example would be very helpful.
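
    For reference, a general sketch of how Dispose() is normally used with forms (not specific to the navigation described above): a modal form shown with ShowDialog() is not disposed automatically, so it is typically wrapped in a using block, while a modeless form shown with Show() is disposed by the framework when the user closes it.

        // Modal: Dispose() runs when the using block exits. Form1 is illustrative.
        using (var settingsForm = new Form1())
        {
            settingsForm.ShowDialog();
        }

        // Modeless: the framework disposes the form when it is closed,
        // so keep a reference only while the form is still open. Form2 is illustrative.
        var otherForm = new Form2();
        otherForm.Show();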

    Read the article

  • Old dll.config problem!

    - by user313421
    Judging by what I found on Google, this has been a problem since 2005 for anyone who needs to read an assembly's configuration from its own config file, "*.dll.config", and Microsoft still hasn't addressed it. The story: if you try to read a setting from a class library (plug-in), you fail. Instead, the config of the main application domain (the EXE that is using the plug-in) is read, and because there is probably no such config, your plug-in falls back to the default settings hard-coded when its settings were first created. Any change to the .dll.config is never seen by your plug-in, and you wonder why the file is even there. If you go searching for a replacement you may find something like this: http://stackoverflow.com/questions/594298/c-dll-config-file - but that is just a few ideas and one line of code. A good replacement for the built-in config shouldn't hit the file system every time we need a value, so we cache values in memory; then what if the user changes the config file? We need a FileSystemWatcher, some singleton-like design, and in the end we have rebuilt .NET configuration - except that ours works. It seems Microsoft built everything but forgot why ".dll.config" exists in the first place: no DLL executes by itself, they are referenced from other apps (even on the web), so why is there a "*.dll.config" file at all? I'm not going to argue whether having multiple config files is good or not - it is my design (pluggable components). Finally: after all these years, is there any good practice, such as a custom settings class added to each assembly that reads from its own config file?
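
    One workaround that is often suggested is to open the assembly's own "*.dll.config" explicitly through a mapped configuration; a hedged sketch (requires a reference to System.Configuration; the caching and file-watching discussed above are left out):

        using System.Configuration;
        using System.Reflection;

        public static class DllConfig
        {
            // Reads an appSettings key from "<assembly>.dll.config".
            // The key name passed in is illustrative.
            public static string Read(string key)
            {
                string dllPath = Assembly.GetExecutingAssembly().Location;
                var map = new ExeConfigurationFileMap { ExeConfigFilename = dllPath + ".config" };
                Configuration config = ConfigurationManager.OpenMappedExeConfiguration(
                    map, ConfigurationUserLevel.None);
                KeyValueConfigurationElement element = config.AppSettings.Settings[key];
                return element == null ? null : element.Value;
            }
        }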

    Read the article

  • Pointers, am I doing them correctly? (Objective-C/Cocoa)

    - by Chris
    I have this in my @interface struct track currentTrack; struct track previousTrack; int anInt; Since these are not objects, I do not have to have them like int* anInt right? And if setting non-object values like ints, boolean, etc, I do not have to release the old value right (assuming non-GC environment)? The struct contains objects: typedef struct track { NSString* theId; NSString* title; } *track; Am I doing that correctly? Lastly, I access the struct like this: [currentTrack.title ...]; currentTrack.theId = @"asdf"; //LINE 1 I'm also manually managing the memory (from a setter) for the struct like this: [currentTrack.title autorelease]; currentTrack.title = [newTitle retain]; If I'm understanding the garbage collection correctly, I should be able to ditch that and just set it like LINE 1 (above)? Also with garbage collection, I don't need a dealloc method right? If I use garbage collection does this mean it only runs on OS 10.5+? And any other thing I should know before I switch to garbage collected code? Sorry there are so many questions. Very new to objective-c and desktop programming. Thanks

    Read the article

  • Using JRE 1.5, Maven still says annotations are not supported in -source 1.3

    - by Abhijeet
    Hi, I am using JRE 1.5. Still when I try to compile my code it fails by saying to use JRE 1.5 instead of 1.3 C:\temp\SpringExamplemvn -e clean install + Error stacktraces are turned on. [INFO] Scanning for projects... [INFO] ------------------------------------------------------------------------ [INFO] Building SpringExample [INFO] task-segment: [clean, install] [INFO] ------------------------------------------------------------------------ [INFO] [clean:clean {execution: default-clean}] [INFO] Deleting directory C:\temp\SpringExample\target [INFO] [resources:resources {execution: default-resources}] [WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent! [INFO] Copying 6 resources [INFO] [compiler:compile {execution: default-compile}] [INFO] Compiling 6 source files to C:\temp\SpringExample\target\classes [INFO] ------------------------------------------------------------------------ [ERROR] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Compilation failure C:\temp\SpringExample\src\main\java\com\mkyong\stock\model\Stock.java:[45,9] annotations are not supported in -source 1.3 (try -source 1.5 to enable annotations) @Override [INFO] ------------------------------------------------------------------------ [INFO] Trace org.apache.maven.BuildFailureException: Compilation failure C:\temp\SpringExample\src\main\java\com\mkyong\stock\model\Stock.java:[45,9] annotations are not supported in -source 1.3 (try -source 1.5 to enable annotations) @Override at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:715) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalWithLifecycle(DefaultLifecycleExecutor.java:556) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:535) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) Caused by: org.apache.maven.plugin.CompilationFailureException: Compilation failure C:\temp\SpringExample\src\main\java\com\mkyong\stock\model\Stock.java:[45,9] annotations are not supported in -source 1.3 (try -source 1.5 to enable annotations) @Override at org.apache.maven.plugin.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:516) at org.apache.maven.plugin.CompilerMojo.execute(CompilerMojo.java:114) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490) 
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694) ... 17 more [INFO] ------------------------------------------------------------------------ [INFO] Total time: 2 seconds [INFO] Finished at: Wed Dec 22 10:04:53 IST 2010 [INFO] Final Memory: 9M/16M [INFO] ------------------------------------------------------------------------ C:\temp\SpringExamplejavac -version javac 1.5.0_08 javac: no source files
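
    For reference, the usual fix is to tell the maven-compiler-plugin which language level to use (older versions of the plugin default to -source 1.3 regardless of the installed JDK), for example in the pom.xml:

        <build>
          <plugins>
            <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-compiler-plugin</artifactId>
              <configuration>
                <source>1.5</source>
                <target>1.5</target>
              </configuration>
            </plugin>
          </plugins>
        </build>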

    Read the article

  • Cache data in SQL CE database

    - by user93422
    Background: I have an SQL CE database that is constantly updated (every second) and a web application that lets a user look at the data in real time. At some point the user can click a "take a snapshot" button, which opens the snapshot in a different window. That form has "print" and "download" buttons that either generate a page for printing or stream the data as a CSV file - but the same data snapshot has to be used for both, i.e. I can't go back to the DB for the latest data. Details: the SQL CE database is exposed through a WCF web service. A snapshot consists of up to 500 records of 10 columns each. An expiration time of 2 hours on the snapshot is sufficient. It is a low-traffic application, so I don't expect more than a few (5) connections at the same time. Losing a snapshot is not a big deal; the user can simply generate a new one. The database is accessed by a self-hosted WCF web service using LINQ to SQL. The web site is ASP.NET MVC hosted on UltiDev Cassini. The database and web site will most likely be on the same box when deployed, and the entire app is intranet-bound. Problem: I need to cache the snapshot of the data at the moment the user presses the "take a snapshot" button, so that I can use the same data to generate the print page or the download file. Solution 1: each time a snapshot is needed, create a table in the database; since there are no temp tables in SQL CE, I will need to clean it up myself. Solution 2: cache the snapshot in memory on either the DB server or the web server. Question: is there anything wrong with the proposed solutions? Any different suggestions?
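
    For illustration, a minimal sketch of Solution 2 as an expiring in-memory snapshot store on the web-server side (the names, the Guid token and the simple global lock are illustrative and sized for a handful of concurrent users):

        using System;
        using System.Collections.Generic;

        public class SnapshotStore<T>
        {
            private class Entry { public T Data; public DateTime Taken; }

            private readonly Dictionary<Guid, Entry> _snapshots = new Dictionary<Guid, Entry>();
            private readonly object _sync = new object();
            private readonly TimeSpan _ttl = TimeSpan.FromHours(2);

            public Guid Add(T data)
            {
                var id = Guid.NewGuid();
                lock (_sync)
                {
                    Evict();   // drop stale snapshots while the lock is held anyway
                    _snapshots[id] = new Entry { Data = data, Taken = DateTime.UtcNow };
                }
                return id;     // hand this token to the print/download actions
            }

            public bool TryGet(Guid id, out T data)
            {
                lock (_sync)
                {
                    Entry entry;
                    if (_snapshots.TryGetValue(id, out entry) &&
                        DateTime.UtcNow - entry.Taken < _ttl)
                    {
                        data = entry.Data;
                        return true;
                    }
                }
                data = default(T);
                return false;
            }

            private void Evict()   // caller holds the lock
            {
                var expired = new List<Guid>();
                foreach (var pair in _snapshots)
                    if (DateTime.UtcNow - pair.Value.Taken >= _ttl)
                        expired.Add(pair.Key);
                foreach (var key in expired)
                    _snapshots.Remove(key);
            }
        }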

    Read the article

  • Can I get rid of this read lock?

    - by Pieter
    I have the following helper class (simplified): public static class Cache { private static readonly object _syncRoot = new object(); private static Dictionary<Type, string> _lookup = new Dictionary<Type, string>(); public static void Add(Type type, string value) { lock (_syncRoot) { _lookup.Add(type, value); } } public static string Lookup(Type type) { string result; lock (_syncRoot) { _lookup.TryGetValue(type, out result); } return result; } } Add will be called roughly 10/100 times in the application, and Lookup will be called by many threads, many thousands of times. What I would like is to get rid of the read lock. How do you normally get rid of the read lock in this situation? I have the following ideas: Require that _lookup is stable before the application starts operating. It could be built up from an attribute; this happens automatically through the static constructor of the type the attribute is assigned to. Requiring that would mean going through all types that could have the attribute and calling RuntimeHelpers.RunClassConstructor, which is an expensive operation. Move to COW semantics: public static void Add(Type type, string value) { lock (_syncRoot) { var lookup = new Dictionary<Type, string>(_lookup); lookup.Add(type, value); _lookup = lookup; } } (with the lock (_syncRoot) removed in the Lookup method). The problem with this is that it uses an unnecessary amount of memory (which might not be a problem), and I would probably make _lookup volatile, but I'm not sure how that should be applied (Jon Skeet's comment here gives me pause). Use a ReaderWriterLock - I believe this would make things worse, since the region being locked is small. Suggestions are very welcome.
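
    For illustration, a sketch of idea 2 in full (copy-on-write publication through a volatile field); Lookup can then read without any lock because each published dictionary is never mutated after it is assigned:

        public static class Cache
        {
            private static readonly object _syncRoot = new object();

            // volatile so readers always see the most recently published copy.
            private static volatile Dictionary<Type, string> _lookup =
                new Dictionary<Type, string>();

            public static void Add(Type type, string value)
            {
                lock (_syncRoot)                       // writers still serialize
                {
                    var copy = new Dictionary<Type, string>(_lookup);
                    copy.Add(type, value);
                    _lookup = copy;                    // publish the fully built copy
                }
            }

            public static string Lookup(Type type)
            {
                string result;
                _lookup.TryGetValue(type, out result); // no lock: the copy is immutable
                return result;
            }
        }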

    Read the article

  • Online file storage similar to Amazon S3

    - by Joel G
    I am looking to code a file storage application in Perl similar to Amazon S3. I already have an Amazon S3 clone that I found online called ParkPlace, but it's in Ruby, it's old, and it isn't built for high loads. I am not really sure which modules and programs I should use, so I'd like some help picking them out. My requirements are listed below (yes, I know there are a lot, but I could start simple and then add more once I get it going): an easy API for client-side apps (maybe RESTful, with extras like mkdir and cp?); a centralized database server for the USERDB (maybe PostgreSQL?); logging of all connections, bandwidth used - pretty much everything - to a centralized server (maybe PostgreSQL again?); easy server-side configuration (config files stored on the servers); a web-based control panel for admins and users to show logs (this could work simply by running queries against the databases); fast; high uptime; low memory usage; some sort of load distribution/load balancer (maybe DNS-based, or Pound, or Perlbal, or something else?); maybe a cache of some sort (memcached or Perlbal or something else?). Thanks in advance

    Read the article

  • How do I rewrite a for loop with a shared dependency using actors

    - by Thomas Rynne
    We have some code which needs to run faster. Its already profiled so we would like to make use of multiple threads. Usually I would setup an in memory queue, and have a number of threads taking jobs of the queue and calculating the results. For the shared data I would use a ConcurrentHashMap or similar. I don't really want to go down that route again. From what I have read using actors will result in cleaner code and if I use akka migrating to more than 1 jvm should be easier. Is that true? However, I don't know how to think in actors so I am not sure where to start. To give a better idea of the problem here is some sample code: case class Trade(price:Double, volume:Int, stock:String) { def value(priceCalculator:PriceCalculator) = (priceCalculator.priceFor(stock)-> price)*volume } class PriceCalculator { def priceFor(stock:String) = { Thread.sleep(20)//a slow operation which can be cached 50.0 } } object ValueTrades { def valueAll(trades:List[Trade], priceCalculator:PriceCalculator):List[(Trade,Double)] = { trades.map { trade => (trade,trade.value(priceCalculator)) } } def main(args:Array[String]) { val trades = List( Trade(30.5, 10, "Foo"), Trade(30.5, 20, "Foo") //usually much longer ) val priceCalculator = new PriceCalculator val values = valueAll(trades, priceCalculator) } } I'd appreciate it if someone with experience using actors could suggest how this would map on to actors.

    Read the article

  • Building a control-flow graph from an AST with a visitor pattern using Java

    - by omegatai
    Hi guys, I'm trying to figure out how to implement my LEParserCfgVisitor class as to build a control-flow graph from an Abstract-Syntax-Tree already generated with JavaCC. I know there are tools that already exist, but I'm trying to do it in preparation for my Compilers final. I know I need to have a data structure that keeps the graph in memory, and I want to be able to keep attributes like IN, OUT, GEN, KILL in each node as to be able to do a control-flow analysis later on. My main problem is that I haven't figured out how to connect the different blocks together, as to have the right edge between each blocks depending on their nature: branch, loops, etc. In other words, I haven't found an explicit algorithm that could help me build my visitor. Here is my empty Visitor. You can see it works on basic langage expressions, like if, while and basic operations (+,-,x,^,...) public class LEParserCfgVisitor implements LEParserVisitor { public Object visit(SimpleNode node, Object data) { return data; } public Object visit(ASTProgram node, Object data) { data = node.childrenAccept(this, data); return data; } public Object visit(ASTBlock node, Object data) { } public Object visit(ASTStmt node, Object data) { } public Object visit(ASTAssignStmt node, Object data) { } public Object visit(ASTIOStmt node, Object data) { } public Object visit(ASTIfStmt node, Object data) { } public Object visit(ASTWhileStmt node, Object data) { } public Object visit(ASTExpr node, Object data) { } public Object visit(ASTAddExpr node, Object data) { } public Object visit(ASTFactExpr node, Object data) { } public Object visit(ASTMultExpr node, Object data) { } public Object visit(ASTPowerExpr node, Object data) { } public Object visit(ASTUnaryExpr node, Object data) { } public Object visit(ASTBasicExpr node, Object data) { } public Object visit(ASTFctExpr node, Object data) { } public Object visit(ASTRealValue node, Object data) { } public Object visit(ASTIntValue node, Object data) { } public Object visit(ASTIdentifier node, Object data) { } } Can anyone give me a hand? Thanks!
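
    For illustration, a rough sketch of the usual plumbing (a basic-block node plus wiring the if-statement edges inside the visitor); everything here is hypothetical - the BasicBlock type, the child indexes, and the use of JJTree's jjtAccept/jjtGetChild all depend on the actual grammar:

        import java.util.ArrayList;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        class BasicBlock {
            final List<Object> statements = new ArrayList<Object>();
            final List<BasicBlock> successors = new ArrayList<BasicBlock>();
            // attributes for the later data-flow analysis
            final Set<String> gen = new HashSet<String>(), kill = new HashSet<String>();
            final Set<String> in = new HashSet<String>(), out = new HashSet<String>();

            void addEdge(BasicBlock target) { successors.add(target); }
        }

        // Inside LEParserCfgVisitor, "data" can carry the block currently being filled:
        public Object visit(ASTIfStmt node, Object data) {
            BasicBlock current = (BasicBlock) data;
            BasicBlock thenBlock = new BasicBlock();
            BasicBlock elseBlock = new BasicBlock();
            BasicBlock join = new BasicBlock();

            current.addEdge(thenBlock);
            current.addEdge(elseBlock);

            // Assumed child layout: 0 = condition, 1 = then-branch, 2 = optional else-branch.
            node.jjtGetChild(1).jjtAccept(this, thenBlock);
            if (node.jjtGetNumChildren() > 2) {
                node.jjtGetChild(2).jjtAccept(this, elseBlock);
            }

            thenBlock.addEdge(join);
            elseBlock.addEdge(join);
            return join;   // the caller continues building from the join block
        }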

    Read the article

  • The CLR has been unable to transition from COM context [...] for 60 seconds

    - by BlueRaja The Green Unicorn
    I am getting this error on code that used to work. I have not changed the code. Here is the full error: The CLR has been unable to transition from COM context 0x3322d98 to COM context 0x3322f08 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations. And here is the code that caused it: var openFileDialog1 = new System.Windows.Forms.OpenFileDialog(); openFileDialog1.DefaultExt = "mdb"; openFileDialog1.Filter = "Management Database (manage.mdb)|manage.mdb"; //Stalls indefinitely on the following line, then gives the CLR error //one minute later. The dialog never opens. if(openFileDialog1.ShowDialog() == DialogResult.OK) { .... } Yes, I am sure the dialog is not open in the background, and no, I don't have any explicit COM code or unmanaged marshalling or multithreading. I have no idea why the OpenFileDialog won't open - any ideas?
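
    One frequently reported cause of this message (it is the ContextSwitchDeadlock MDA) is showing the dialog from a thread that is not a single-threaded apartment; a hedged sketch of isolating the dialog on a dedicated STA thread - in a plain WinForms app the simpler check is that Main still carries the [STAThread] attribute:

        // Sketch; needs System.Threading and System.Windows.Forms.
        string chosenFile = null;
        var dialogThread = new Thread(() =>
        {
            using (var dlg = new OpenFileDialog())
            {
                dlg.DefaultExt = "mdb";
                dlg.Filter = "Management Database (manage.mdb)|manage.mdb";
                if (dlg.ShowDialog() == DialogResult.OK)
                    chosenFile = dlg.FileName;
            }
        });
        dialogThread.SetApartmentState(ApartmentState.STA);  // file dialogs require STA
        dialogThread.Start();
        dialogThread.Join();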

    Read the article

  • Passing in a lambda to a Where statement

    - by sonicblis
    I noticed today that if I do this: var items = context.items.Where(i => i.Property < 2); items = items.Where(i => i.Property > 4); then once I access the items variable, only the first line is executed as the data call and the second is applied in memory. However, if I do this: var items = context.items.Where(i => i.Property < 2).Where(i => i.Property > 4); I get a single expression executed against the context that includes both Where statements. I have a host of variables that I want to use to build the expression for the LINQ lambda, but their presence or absence changes the expression such that I'd have to write a ridiculous number of conditionals to satisfy all cases. I thought I could just add the Where() statements as in my first example above, but that doesn't end up as a single expression containing all of the criteria. Therefore, I'm trying to create just the lambda itself, something like this: //bogus syntax if (var1 == "something") var expression = Expression<Func<item, bool>>(i => i.Property == "Something"); if (var2 == "somethingElse") expression = expression.Where(i => i.Property2 == "SomethingElse"); and then pass that into the Where of my context.Items to evaluate. A) Is this right, and B) if so, how do you do it?
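
    For illustration, the usual way to keep everything in one translated query is to type the variable as IQueryable<T> and reassign it conditionally - every Where is then composed into the same expression tree and nothing runs until the query is enumerated (entity and property names below are illustrative):

        IQueryable<Item> items = context.Items;

        if (var1 == "something")
            items = items.Where(i => i.Property == "Something");

        if (var2 == "somethingElse")
            items = items.Where(i => i.Property2 == "SomethingElse");

        // Both predicates are translated in a single query here:
        var results = items.ToList();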

    Read the article

  • How to Perform Continuous Iteration over a Shared Dictionary in a Multi-threaded Environment

    - by Mubashar Ahmad
    Dear Gurus. Note: please do not suggest alternatives to custom sessions; please answer only with respect to the pattern scenario. I have implemented custom session management in my application (a WCF service), and for this I have a Dictionary shared by all threads. When a specific function gets called I add a new session and issue a session id to the client, which uses that session id for the rest of its calls until it calls another specific function that terminates the session and removes it from the Dictionary. The client may fail to call the session-terminating function for any number of reasons, so I have to implement expiration logic so that I can remove such sessions from memory. For this I added a Timer that calls a ClearExpiredSessions function after a specific period of time, and that function iterates over the dictionary. Problem: the dictionary gets modified every time a client comes and goes, so I can't lock the whole dictionary while iterating over it. And if I do not lock the dictionary during the iteration and it gets modified from another thread, the enumerator will throw an exception on MoveNext(). So can anybody tell me what kind of design I should follow in this case? Is there any standard pattern available?
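
    One common shape for the sweep is to copy the entries under the lock, scan the copy, and then remove the expired keys under the lock again, so the enumeration itself never races with other threads (a sketch; Session, LastTouched and the timeout are illustrative - on .NET 4, ConcurrentDictionary is another option, since its enumerator tolerates concurrent updates):

        private readonly Dictionary<Guid, Session> _sessions = new Dictionary<Guid, Session>();
        private readonly object _sync = new object();

        private void ClearExpiredSessions(object state)   // timer callback
        {
            List<KeyValuePair<Guid, Session>> snapshot;
            lock (_sync)
            {
                // Copy so the foreach below cannot be invalidated by Add/Remove.
                snapshot = new List<KeyValuePair<Guid, Session>>(_sessions);
            }

            var expired = new List<Guid>();
            foreach (var pair in snapshot)
                if (DateTime.UtcNow - pair.Value.LastTouched > TimeSpan.FromMinutes(20))
                    expired.Add(pair.Key);

            lock (_sync)
                foreach (var key in expired)
                    _sessions.Remove(key);
        }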

    Read the article

  • Rewriting a for loop in pure NumPy to decrease execution time

    - by Statto
    I recently asked about trying to optimise a Python loop for a scientific application, and received an excellent, smart way of recoding it within NumPy which reduced execution time by a factor of around 100 for me! However, calculation of the B value is actually nested within a few other loops, because it is evaluated at a regular grid of positions. Is there a similarly smart NumPy rewrite to shave time off this procedure? I suspect the performance gain for this part would be less marked, and the disadvantages would presumably be that it would not be possible to report back to the user on the progress of the calculation, that the results could not be written to the output file until the end of the calculation, and possibly that doing this in one enormous step would have memory implications? Is it possible to circumvent any of these? import numpy as np import time def reshape_vector(v): b = np.empty((3,1)) for i in range(3): b[i][0] = v[i] return b def unit_vectors(r): return r / np.sqrt((r*r).sum(0)) def calculate_dipole(mu, r_i, mom_i): relative = mu - r_i r_unit = unit_vectors(relative) A = 1e-7 num = A*(3*np.sum(mom_i*r_unit, 0)*r_unit - mom_i) den = np.sqrt(np.sum(relative*relative, 0))**3 B = np.sum(num/den, 1) return B N = 20000 # number of dipoles r_i = np.random.random((3,N)) # positions of dipoles mom_i = np.random.random((3,N)) # moments of dipoles a = np.random.random((3,3)) # three basis vectors for this crystal n = [10,10,10] # points at which to evaluate sum gamma_mu = 135.5 # a constant t_start = time.clock() for i in range(n[0]): r_frac_x = np.float(i)/np.float(n[0]) r_test_x = r_frac_x * a[0] for j in range(n[1]): r_frac_y = np.float(j)/np.float(n[1]) r_test_y = r_frac_y * a[1] for k in range(n[2]): r_frac_z = np.float(k)/np.float(n[2]) r_test = r_test_x +r_test_y + r_frac_z * a[2] r_test_fast = reshape_vector(r_test) B = calculate_dipole(r_test_fast, r_i, mom_i) omega = gamma_mu*np.sqrt(np.dot(B,B)) # write r_test, B and omega to a file frac_done = np.float(i+1)/(n[0]+1) t_elapsed = (time.clock()-t_start) t_remain = (1-frac_done)*t_elapsed/frac_done print frac_done*100,'% done in',t_elapsed/60.,'minutes...approximately',t_remain/60.,'minutes remaining'
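
    For illustration, a hedged sketch of removing the three grid loops with broadcasting; the point generation is exact, while the fully broadcast dipole sum trades the per-step progress report and a large intermediate array (roughly 3*M*N doubles, so it may need chunking) for speed, and should be validated against the loop version before being trusted:

        # Build every evaluation point at once: fracs[i,j,k] = (i/n0, j/n1, k/n2)
        ii, jj, kk = np.indices(n).astype(float)
        fracs = np.empty(n + [3])
        fracs[..., 0] = ii / n[0]
        fracs[..., 1] = jj / n[1]
        fracs[..., 2] = kk / n[2]
        points = np.dot(fracs, a).reshape(-1, 3).T      # shape (3, M), M = n0*n1*n2

        # Broadcast the dipole sum over all points simultaneously (memory heavy).
        relative = points[:, :, np.newaxis] - r_i[:, np.newaxis, :]     # (3, M, N)
        r_unit = relative / np.sqrt((relative * relative).sum(0))
        A = 1e-7
        num = A * (3 * np.sum(mom_i[:, np.newaxis, :] * r_unit, 0) * r_unit
                   - mom_i[:, np.newaxis, :])
        den = np.sqrt(np.sum(relative * relative, 0)) ** 3              # (M, N)
        B = np.sum(num / den, 2)                                        # (3, M)
        omega = gamma_mu * np.sqrt((B * B).sum(0))                      # (M,)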

    Read the article

  • How to efficiently convert String, integer, double, and datetime to binary and vice versa?

    - by Ben
    Hi, I'm quite new to C# (I'm using .NET 4.0) so please bear with me. I need to save some object properties (their properties are int, double, String, boolean, datetime) to a file. But I want to encrypt the files using my own encryption, so I can't use FileStream to convert to binary. Also I don't want to use object serialization, because of performance issues. The idea is simple, first I need to somehow convert objects (their properties) to binary (array), then encrypt (some sort of xor) the array and append it to the end of the file. When reading first decrypt the array and then somehow convert the binary array back to object properties (from which I'll generate objects). I know (roughly =) ) how to convert these things by hand and I could code it, but it would be useless (too slow). I think the best way would be just to get properties' representation in memory and save that. But I don't know how to do it using C# (maybe using pointers?). Also I though about using MemoryStream but again I think it would be inefficient. I am thinking about class Converter, but it does not support toByte(datetime) (documentation says it always throws exception). For converting back I think the only options is class Converter. Note: I know the structure of objects and they will not change, also the maximum String length is also known. Thank you for all your ideas and time. EDIT: I will be storing only parts of objects, in some cases also parts of different objects (a couple of properties from one object type and a couple from another), thus I think that serialization is not an option for me.
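
    For reference, BinaryWriter/BinaryReader over a MemoryStream already do this kind of field-by-field packing quite efficiently, and DateTime round-trips through ToBinary()/FromBinary(); a sketch (the field order is illustrative and must match on both sides):

        using System;
        using System.IO;

        public static byte[] Pack(int id, double price, string name, bool active, DateTime stamp)
        {
            using (var ms = new MemoryStream())
            using (var writer = new BinaryWriter(ms))
            {
                writer.Write(id);
                writer.Write(price);
                writer.Write(name);              // length-prefixed string
                writer.Write(active);
                writer.Write(stamp.ToBinary());  // DateTime packed into a long
                writer.Flush();
                return ms.ToArray();             // XOR-encrypt this buffer before appending to the file
            }
        }

        public static void Unpack(byte[] decrypted)
        {
            using (var reader = new BinaryReader(new MemoryStream(decrypted)))
            {
                int id = reader.ReadInt32();
                double price = reader.ReadDouble();
                string name = reader.ReadString();
                bool active = reader.ReadBoolean();
                DateTime stamp = DateTime.FromBinary(reader.ReadInt64());
            }
        }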

    Read the article

  • How can I improve the performance of this double-for print?

    - by Florenc
    I have the following static method that prints the data imported from a 40.000 lines .xls spreadsheet. Now, it takes about 27 seconds to print the data in the console and the memory consumption is huge. import org.apache.poi.hssf.usermodel.*; import org.apache.poi.ss.usermodel.*; public static void printSheetData(List<List<HSSFCell>> sheetData) { for (int i = 0; i < sheetData.size(); i++) { List<HSSFCell> list = (List<HSSFCell>) sheetData.get(i); for (int j = 0; j < list.size(); j++) { HSSFCell cell = (HSSFCell) list.get(j); System.out.print(cell.toString()); if (j < list.size() - 1) { System.out.print(", "); } } System.out.println(""); } } Disclaimer: I know, I know large data belong to a database, don't print output in the console, premature optimization is the root of all evils...
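
    For illustration, the usual remedies are to build each row once with a StringBuilder and to write through a buffered writer instead of making many small System.out.print calls; a sketch that keeps the same output:

        import java.io.BufferedWriter;
        import java.io.IOException;
        import java.io.OutputStreamWriter;
        import java.util.List;

        import org.apache.poi.hssf.usermodel.HSSFCell;

        public static void printSheetData(List<List<HSSFCell>> sheetData) throws IOException {
            // Large buffer; one write per row instead of one per cell.
            BufferedWriter out = new BufferedWriter(new OutputStreamWriter(System.out), 1 << 16);
            for (List<HSSFCell> row : sheetData) {
                StringBuilder line = new StringBuilder();
                for (int j = 0; j < row.size(); j++) {
                    if (j > 0) {
                        line.append(", ");
                    }
                    line.append(row.get(j).toString());
                }
                line.append('\n');
                out.write(line.toString());
            }
            out.flush();   // flush, but do not close(): that would close System.out too
        }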

    Read the article

  • Java Concurrency : Volatile vs final in "cascaded" variables?

    - by Tom
    Hello Experts, is final Map<Integer,Map<String,Integer>> status = new ConcurrentHashMap<Integer, Map<String,Integer>>(); Map<String,Integer> statusInner = new ConcurrentHashMap<String, Integer>(); status.put(key, statusInner); the same as volatile Map<Integer,Map<String,Integer>> status = new ConcurrentHashMap<Integer, Map<String,Integer>>(); Map<String,Integer> statusInner = new ConcurrentHashMap<String, Integer>(); status.put(key, statusInner); in case the inner Map is accessed by different threads? Or is something like this even required: volatile Map<Integer,Map<String,Integer>> status = new ConcurrentHashMap<Integer, Map<String,Integer>>(); volatile Map<String,Integer> statusInner = new ConcurrentHashMap<String, Integer>(); status.put(key, statusInner); In case it is NOT a "cascaded" map, final and volatile have in the end the same effect of making sure that all threads always see the correct contents of the Map... But what happens if the Map itself contains a map, as in the example? How do I make sure that the inner Map is correctly "memory barriered"? Thanks! Tom

    Read the article

  • Create swipe controlled simple flipbook style animation in ObjC

    - by eco_bach
    Hi I am a beginner in Obj C development, though quite experienced (over 10 years) with other ECMAscript based languages and OOP development. I want to build a simple flipbook style animation, controlled through swiping motion. I'm sure extremely simple for any advanced ObjC coders. Can anyone with extensive ObjC-CocoaTouch experience give me some higher level recommendations? ie, 1 -general application design, should I start with a simple view based application, or navigation based or? 2 -should I use 3rd party animation frameworks such as Cocos2D, or stick with built in classes and methods? 3 -if using built in methods, classes, what is the recommended way of achieving a animation, that will be controlled via swipe and touch gestures? 4 -I want to eventually have multiple 'flipbooks' that I can 'instantly' swap with one another, ie to give the net effect of an object changing color, etc, but not sure how to approach this from a memory management point of view, related to #1 above Except for point 3 above, I'm not expecting any actual code examples. Just general guidelines to follow and perhaps, what are some next steps I should take in my goal as an ObjC code samurai.

    Read the article

  • Failure remediation strategy for File I/O

    - by Brett
    I'm doing buffered IO into a file, both read and write. I'm using fopen(), fseeko(), standard ANSI C file I/O functions. In all cases, I'm writing to a standard local file on a disk. How often do these file I/O operations fail, and what should the strategy be for failures? I'm not exactly looking for stats, but I'm looking for a general purpose statement on how far I should go to handle error conditions. For instance, I think everyone recognizes that malloc() could and probably will fail someday on some user's machine and the developer should check for a NULL being returned, but there is no great remediation strategy since it probably means the system is out of memory. At least, this seems to be the approach taken with malloc() on desktop systems, embedded systems are different. Likewise, is it worth reattempting a file I/O operation, or should I just consider a failure to be basically unrecoverable, etc. I would appreciate some code samples demonstrating proper usage, or a library guide reference that indicates how this is to be handled. Any other data is, of course, welcome.
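
    For illustration, a sketch of the level of checking that is usually considered sufficient for local files: test every return value, report errno, and treat a failure as fatal for that file rather than retrying (the names are illustrative):

        #include <errno.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/types.h>

        /* Checked write of one record at a given offset; 0 on success, -1 on error. */
        static int write_record(const char *path, const void *buf, size_t len, off_t offset)
        {
            FILE *fp = fopen(path, "r+b");
            if (fp == NULL) {
                fprintf(stderr, "fopen(%s): %s\n", path, strerror(errno));
                return -1;
            }
            if (fseeko(fp, offset, SEEK_SET) != 0) {
                fprintf(stderr, "fseeko: %s\n", strerror(errno));
                fclose(fp);
                return -1;
            }
            if (fwrite(buf, 1, len, fp) != len) {
                fprintf(stderr, "fwrite: short write on %s\n", path);
                fclose(fp);
                return -1;
            }
            /* fclose() flushes the stdio buffer, so its return value matters too. */
            if (fclose(fp) != 0) {
                fprintf(stderr, "fclose: %s\n", strerror(errno));
                return -1;
            }
            return 0;
        }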

    Read the article

  • What does the windbg command "kd" do?

    - by Oskar
    I ran kd by mistake and got some output that inteerested me, a reference to a line of code in my module that I can't see on the call stack of any thread. The lines weren't the beginnning of the method so I don't think the reference is to a function pointer, but possibly the result of an exception being stored in memory??? Of course, that happens to be what I'm looking for... Update: The stack trace of the exception is: 0:000> kb *** Stack trace for last set context - .thread/.cxr resets it ChildEBP RetAddr Args to Child 0174f168 734ea84f 2cb9e950 00000000 2cb9e950 kernel32!LoadTimeZoneInformation+0x2b 0174f1c4 734ead92 00000022 00000001 000685d0 msvbvm60! RUN_INSTMGR::ExecuteInitTerm+0x178 0174f1f8 734ea9ee 00000000 0000002f 2dbc2abc msvbvm60! RUN_INSTMGR::CreateObjInstanceWithParts+0x1e4 0174f278 7350414e 2cb9e96c 00000000 0174f2f0 msvbvm60! RUN_INSTMGR::CreateObjInstance+0x14d 0174f2e4 734fa071 00000000 2cb9e96c 0174f2fc msvbvm60!RcmConstructObjectInstance+0x75 0174f31c 00976ef1 2cb9e950 00591bc0 0174fddc msvbvm60!__vbaNew+0x21 and into our code (create a new Form derived class) the dds output: 0:000> dds esp-0x40 esp+0x100 0174f05c 00000000 0174f060 00000000 0174f064 00000000 0174f068 00000000 0174f06c 00000000 0174f070 00000000 0174f074 00000000 0174f078 00000000 0174f07c 00000000 0174f080 00000000 0174f084 00000000 0174f088 00000000 0174f08c 00000000 0174f090 00000000 0174f094 00000000 0174f098 00000000 0174f09c 007f4f9b ourDll!formDerivedClass::Form_Initialize+0x10b [C:\Buildbox\formDerivedClass.frm @ 1452] etc which seems to indicate that Initialize is being called even though it isn't on the stack trace of either this exception or any of the threads. As suggested, it might all be a mismatch between pdbs and dlls, but it seems a coincidence that we end up in the right classes and methods

    Read the article

  • JBoss Cache as Hibernate 2nd-level cache - cluster node doesn't persist replicated data

    - by Sergey Grashchenko
    I'm trying to build the architecture described in the user guide http://www.jboss.org/file-access/default/members/jbosscache/freezone/docs/3.2.1.GA/userguide_en/html/cache_loaders.html#d0e3090 (replicated caches, each with its own store), but with JBoss Cache configured as the Hibernate second-level cache. I've read the manual for several days and played with the settings, but could not achieve the result: the in-memory data (JBoss Cache) gets replicated across the hosts, but it is not persisted to the datasource/database of the target (non-originating) cluster host. I hoped a node might become persistent on eviction, so I wrote a cache listener and attached it to the @NodeEvicted event, but found that although I could tune the eviction policy to fully control it, no persistence takes place. Then I thought I could modify the CacheLoader to enable passivation, but in my case (Hibernate 2nd-level cache) I don't have a way to access the loader. I wonder if persistence of replicated data is possible at all through configuration tuning? If not, would it work to do some manual persistence in the CacheListener (I could check whether the eviction event is local, and if not, persist it to the Hibernate datasource somehow)? I've used the mvcc-entity configuration with cacheMode changed to REPL_ASYNC, and I've also played with the eviction policy configuration. Last thing to mention: I've tested entity persistence and replication in a project generated with Seam, though I guess that's not important.

    Read the article

  • How does memset initialize an array of integers to -1?

    - by haccks
    The manpage says about memset: #include <string.h> void *memset(void *s, int c, size_t n) The memset() function fills the first n bytes of the memory area pointed to by s with the constant byte c. It is clear that memset can't be used to initialize an int array as shown below: int a[10]; memset(a, 1, sizeof(a)); because an int is represented by 4 bytes (say) and one cannot get the desired value for the integers in array a that way. But I often see programmers use memset to set int array elements to either 0 or -1: int a[10]; int b[10]; memset(a, 0, sizeof(a)); memset(b, -1, sizeof(b)); As per my understanding, initializing with 0 is OK because 0 can be represented in 1 byte (maybe I am wrong in this context). But how is it possible to initialize b with -1 (a 4-byte value)?
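
    A short demonstration of why this works on typical hardware: memset stores the byte 0xFF in every byte of b, and a 4-byte two's-complement int whose bytes are all 0xFF has the value -1 (just as all-zero bytes give 0); it only works for values whose repeated byte pattern is itself meaningful:

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            int b[4];

            memset(b, -1, sizeof b);   /* every byte becomes 0xFF */
            printf("%d %d %d %d\n", b[0], b[1], b[2], b[3]);   /* -1 -1 -1 -1 */

            memset(b, 1, sizeof b);    /* every byte becomes 0x01 */
            printf("%#x\n", (unsigned)b[0]);   /* 0x1010101, not 1 */

            return 0;
        }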

    Read the article

  • Forcing the stack within 32 bits when using -m64 -mcmodel=small

    - by chaosless
    I have C sources that must compile as both 32-bit and 64-bit for multiple platforms. A structure takes the address of a buffer, and that address needs to fit in a 32-bit value. Obviously, where possible these structures will use naturally sized void * or char * pointers; however, for some parts an API specifies the size of these pointers as 32 bits. On x86_64 Linux with -m64 -mcmodel=small, both static data and malloc()'d data fit within the 2 GB range. Data on the stack, however, still starts in high memory. So given a small utility _to_32() such as: int _to_32( long l ) { int i = l & 0xffffffff; assert( i == l ); return i; } then: char *cp = malloc( 100 ); int a = _to_32( (long)cp ); will work reliably, as would: static char buff[ 100 ]; int a = _to_32( (long)buff ); but: char buff[ 100 ]; int a = _to_32( (long)buff ); will fail the assert(). Does anyone have a solution for this without writing custom linker scripts? Or any ideas on how to arrange the linker section for stack data? It appears to be placed by this section in the linker script: .lbss : { *(.dynlbss) *(.lbss .lbss.* .gnu.linkonce.lb.*) *(LARGE_COMMON) } Thanks!

    Read the article
