Search Results

Search found 9132 results on 366 pages for 'convert'.


  • Facing Memory Leaks in AES Encryption Method.

    - by Mubashar Ahmad
    Can anyone please identify whether there are any possible memory leaks in the following code? I have tried .NET Memory Profiler and it reports that "CreateEncryptor" and some other functions are leaving unmanaged memory behind, which I have confirmed using Performance Monitors, yet Dispose, Clear and Close calls are already placed wherever possible. Please advise me accordingly; it's quite urgent.

        public static string Encrypt(string plainText, string key)
        {
            //Set up the encryption objects
            byte[] encryptedBytes = null;
            using (AesCryptoServiceProvider acsp = GetProvider(Encoding.UTF8.GetBytes(key)))
            {
                byte[] sourceBytes = Encoding.UTF8.GetBytes(plainText);
                using (ICryptoTransform ictE = acsp.CreateEncryptor())
                {
                    //Set up stream to contain the encryption
                    using (MemoryStream msS = new MemoryStream())
                    {
                        //Perform the encryption, storing output into the stream
                        using (CryptoStream csS = new CryptoStream(msS, ictE, CryptoStreamMode.Write))
                        {
                            csS.Write(sourceBytes, 0, sourceBytes.Length);
                            csS.FlushFinalBlock();
                            //sourceBytes are now encrypted as an array of secure bytes
                            encryptedBytes = msS.ToArray(); //.ToArray() is important, don't mess with the buffer
                            csS.Close();
                        }
                        msS.Close();
                    }
                }
                acsp.Clear();
            }
            //return the encrypted bytes as a BASE64 encoded string
            return Convert.ToBase64String(encryptedBytes);
        }

        private static AesCryptoServiceProvider GetProvider(byte[] key)
        {
            AesCryptoServiceProvider result = new AesCryptoServiceProvider();
            result.BlockSize = 128;
            result.KeySize = 256;
            result.Mode = CipherMode.CBC;
            result.Padding = PaddingMode.PKCS7;
            result.GenerateIV();
            result.IV = new byte[] { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
            byte[] RealKey = GetKey(key, result);
            result.Key = RealKey;
            // result.IV = RealKey;
            return result;
        }

        private static byte[] GetKey(byte[] suggestedKey, SymmetricAlgorithm p)
        {
            byte[] kRaw = suggestedKey;
            List<byte> kList = new List<byte>();
            for (int i = 0; i < p.LegalKeySizes[0].MaxSize; i += 8)
            {
                kList.Add(kRaw[(i / 8) % kRaw.Length]);
            }
            byte[] k = kList.ToArray();
            return k;
        }
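
    If it helps, here is a minimal sketch of the same flow using AesManaged, which keeps the key schedule in managed memory and so avoids the unmanaged CSP handles that AesCryptoServiceProvider.CreateEncryptor allocates. Treat the provider swap (and the keyBytes/ivBytes names) as an assumption to test, not a confirmed fix:

        byte[] encryptedBytes;
        using (AesManaged aes = new AesManaged())                // fully managed, no CSP handles to leak
        using (ICryptoTransform enc = aes.CreateEncryptor(keyBytes, ivBytes))
        using (MemoryStream ms = new MemoryStream())
        using (CryptoStream cs = new CryptoStream(ms, enc, CryptoStreamMode.Write))
        {
            cs.Write(sourceBytes, 0, sourceBytes.Length);
            cs.FlushFinalBlock();
            encryptedBytes = ms.ToArray();                       // take a copy before the streams are disposed
        }
        return Convert.ToBase64String(encryptedBytes);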

    Read the article

  • How to handle float values in a plist

    - by Banjer
    I'm reading in a plist from a web server, generated with some php. When I read that into an NSArray in my iphone app, and then spit the NSArray out with NSLog to check it out, I see that the float values are treated as strings. I would like the "distance" values to be treated as numeric and not strings. This plist is displayed in a table view where it can be sorted by distance, but the problem is is distance is sorted as a string, so I get some funny sorting results. Can I convert the distance values to float from string in the NSArray? Or maybe theres a simpler solution like tweaking the plist definition, or maybe something in the NSMutableURLRequest code? My plist looks like this: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <array> <dict> <key>name</key> <string>Pizza Joint</string> <key>distance</key> <string>2.1</string> </dict> <dict> <key>name</key> <string>Burger Kang</string> <key>distance</key> <string>5</string> </dict> </array> </plist> After reading it into an NSArray, it looks like this per NSLog: result: ( { distance = "2.1"; name = "Pizza Joint"; }, { distance = 5; name = "Burger Kang"; } ) Here is the Objective-C code that retrieves the plist: // Set up url request // postData and postLength are left out, but I can post in this question if needed. NSMutableURLRequest *request = [[[NSMutableURLRequest alloc] init] autorelease]; [request setURL:[NSURL URLWithString:@"http://mysite.com/get_plist.php"]]; [request setHTTPMethod:@"POST"]; [request setValue:postLength forHTTPHeaderField:@"Content-Length"]; [request setValue:@"application/x-www-form-urlencoded charset=utf-8" forHTTPHeaderField:@"Content-Type"]; [request setHTTPBody:postData]; NSError *error; NSURLResponse *response; NSData *result = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error]; NSString *string = [[NSString alloc] initWithData:result encoding:NSUTF8StringEncoding]; // libraryContent is an NSArray self.libraryContent = [string propertyList]; NSLog(@"result: %@", self.libraryContent);

    Read the article

  • Unable to verify body hash for DKIM

    - by Joshua
    I'm writing a C# DKIM validator and have come across a problem that I cannot solve. Right now I am working on calculating the body hash, as described in Section 3.7 Computing the Message Hashes. I am working with emails that I have dumped using a modified version of EdgeTransportAsyncLogging sample in the Exchange 2010 Transport Agent SDK. Instead of converting the emails when saving, it just opens a file based on the MessageID and dumps the raw data to disk. I am able to successfully compute the body hash of the sample email provided in Section A.2 using the following code: SHA256Managed hasher = new SHA256Managed(); ASCIIEncoding asciiEncoding = new ASCIIEncoding(); string rawFullMessage = File.ReadAllText(@"C:\Repositories\Sample-A.2.txt"); string headerDelimiter = "\r\n\r\n"; int headerEnd = rawFullMessage.IndexOf(headerDelimiter); string header = rawFullMessage.Substring(0, headerEnd); string body = rawFullMessage.Substring(headerEnd + headerDelimiter.Length); byte[] bodyBytes = asciiEncoding.GetBytes(body); byte[] bodyHash = hasher.ComputeHash(bodyBytes); string bodyBase64 = Convert.ToBase64String(bodyHash); string expectedBase64 = "2jUSOH9NhtVGCQWNr9BrIAPreKQjO6Sn7XIkfJVOzv8="; Console.WriteLine("Expected hash: {1}{0}Computed hash: {2}{0}Are equal: {3}", Environment.NewLine, expectedBase64, bodyBase64, expectedBase64 == bodyBase64); The output from the above code is: Expected hash: 2jUSOH9NhtVGCQWNr9BrIAPreKQjO6Sn7XIkfJVOzv8= Computed hash: 2jUSOH9NhtVGCQWNr9BrIAPreKQjO6Sn7XIkfJVOzv8= Are equal: True Now, most emails come across with the c=relaxed/relaxed setting, which requires you to do some work on the body and header before hashing and verifying. And while I was working on it (failing to get it to work) I finally came across a message with c=simple/simple which means that you process the whole body as is minus any empty CRLF at the end of the body. (Really, the rules for Body Canonicalization are quite ... simple.) Here is the real DKIM email with a signature using the simple algorithm (with only unneeded headers cleaned up). Now, using the above code and updating the expectedBase64 hash I get the following results: Expected hash: VnGg12/s7xH3BraeN5LiiN+I2Ul/db5/jZYYgt4wEIw= Computed hash: ISNNtgnFZxmW6iuey/3Qql5u6nflKPTke4sMXWMxNUw= Are equal: False The expected hash is the value from the bh= field of the DKIM-Signature header. Now, the file used in the second test is a direct raw output from the Exchange 2010 Transport Agent. If so inclined, you can view the modified EdgeTransportLogging.txt. At this point, no matter how I modify the second email, changing the start position or number of CRLF at the end of the file I cannot get the files to match. What worries me is that I have been unable to validate any body hash so far (simple or relaxed) and that it may not be feasible to process DKIM through Exchange 2010.
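
    For reference, a small sketch of the "simple" body canonicalization from RFC 4871 section 3.4.3, which strips all trailing empty lines and then terminates the body with exactly one CRLF before hashing; it reuses the body, hasher and asciiEncoding variables from the snippet above and needs System.Text.RegularExpressions:

        // drop every trailing CRLF, then add back the single CRLF the spec requires
        string canonicalBody = Regex.Replace(body, "(\r\n)+$", string.Empty) + "\r\n";
        byte[] canonicalHash = hasher.ComputeHash(asciiEncoding.GetBytes(canonicalBody));
        string canonicalBase64 = Convert.ToBase64String(canonicalHash);

    If the Exchange dump stores bare LF line endings, or prepends a BOM, the hash will still differ, so it may also be worth inspecting the raw bytes around the header/body split.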

    Read the article

  • Converting code from non-CPS to CPS (Continuation Passing Style, aka continuations)

    - by Delirium tremens
    before: function sc_startSiteCompare(){ var visitinguri; var validateduri; var downloaduris; var compareuris; var tryinguri; sc_setstatus('started'); visitinguri = sc_getvisitinguri(); validateduri = sc_getvalidateduri(visitinguri); downloaduris = new Array(); downloaduris = sc_generatedownloaduris(validateduri); compareuris = new Array(); compareuris = sc_generatecompareuris(validateduri); tryinguri = 0; sc_finishSiteCompare(downloaduris, compareuris, tryinguri); } function sc_getvisitinguri() { var visitinguri; visitinguri = content.location.href; return visitinguri; } after (I'm trying): function sc_startSiteCompare(){ var visitinguri; sc_setstatus('started'); visitinguri = sc_getvisitinguri(sc_startSiteComparec1); } function sc_startSiteComparec1 (visitinguri) { var validateduri; validateduri = sc_getvalidateduri(visitinguri, sc_startSiteComparec2); } function sc_startSiteComparec2 (visitinguri, c) { var downloaduris; downloaduris = sc_generatedownloaduris(validateduri, sc_startSiteComparec3); } function sc_startSiteComparec3 (validateduri, c) { var compareuris; compareuris = sc_generatecompareuris(downloaduris, validateduri, sc_startSiteComparec4); } function sc_startSiteComparec4 (downloaduris, compareuris, validateduri, c) { var tryinguri; tryinguri = 0; sc_finishSiteCompare(downloaduris, compareuris, tryinguri); } function sc_getvisitinguri(c) { var visitinguri; visitinguri = content.location.href; c(visitinguri); } What should the code above become? I need CPS, because I have XMLHttpRequests when validating uris, then downloading pages, but I can't use return statements, because I use asynchronous calls. Is there an alternative to CPS? Also, I'm having to pass lots of arguments to functions now. global in procedural code look like this / self in modular code. Any difference? Will I really have to convert from procedural to modular too? It's looking like a lot of work ahead.

    Read the article

  • How can I get image data from QTKit without color or gamma correction in Snow Leopard?

    - by Nick Haddad
    Since Snow Leopard, QTKit is now returning color corrected image data from functions like QTMovies frameImageAtTime:withAttributes:error:. Given an uncompressed AVI file, the same image data is displayed with larger pixel values in Snow Leopard vs. Leopard. Currently I'm using frameImageAtTime to get an NSImage, then ask for the tiffRepresentation of that image. After doing this, pixel values are slightly higher in Snow Leopard. For example, a file with the following pixel value in Leopard: [0 180 0] Now has a pixel value like: [0 192 0] Is there any way to ask a QTMovie for video frames that are not color corrected? Should I be asking for a CGImageRef, CIImage, or CVPixelBufferRef instead? Is there a way to disable color correction altogether prior to reading in the video files? I've attempted to work around this issue by drawing into a NSBitmapImageRep with the NSCalibratedColroSpace, but that only gets my part of the way there: // Create a movie NSDictionary *dict = [NSDictionary dictionaryWithObjectsAndKeys : nsFileName, QTMovieFileNameAttribute, [NSNumber numberWithBool:NO], QTMovieOpenAsyncOKAttribute, [NSNumber numberWithBool:NO], QTMovieLoopsAttribute, [NSNumber numberWithBool:NO], QTMovieLoopsBackAndForthAttribute, (id)nil]; _theMovie = [[QTMovie alloc] initWithAttributes:dict error:&error]; // .... NSMutableDictionary *imageAttributes = [NSMutableDictionary dictionary]; [imageAttributes setObject:QTMovieFrameImageTypeNSImage forKey:QTMovieFrameImageType]; [imageAttributes setObject:[NSArray arrayWithObject:@"NSBitmapImageRep"] forKey: QTMovieFrameImageRepresentationsType]; [imageAttributes setObject:[NSNumber numberWithBool:YES] forKey:QTMovieFrameImageHighQuality]; NSError* err = nil; NSImage* image = (NSImage*)[_theMovie frameImageAtTime:frameTime withAttributes:imageAttributes error:&err]; // copy NSImage into an NSBitmapImageRep (Objective-C) NSBitmapImageRep* bitmap = [[image representations] objectAtIndex:0]; // Draw into a colorspace we know about NSBitmapImageRep *bitmapWhoseFormatIKnow = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL pixelsWide:getWidth() pixelsHigh:getHeight() bitsPerSample:8 samplesPerPixel:4 hasAlpha:YES isPlanar:NO colorSpaceName:NSCalibratedRGBColorSpace bitmapFormat:0 bytesPerRow:(getWidth() * 4) bitsPerPixel:32]; [NSGraphicsContext saveGraphicsState]; [NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:bitmapWhoseFormatIKnow]]; [bitmap draw]; [NSGraphicsContext restoreGraphicsState]; This does convert back to a 'Non color corrected' colorspace, but the color values NOT are exactly the same as what is stored in the Uncompressed AVI files we are testing with. Also this is much less efficient because it is converting from RGB - "Device RGB" - RGB. Also, I am working in a 64-bit application, so dropping down to the Quicktime-C API is not an option. Thanks for your help.

    Read the article

  • Using XStream to deserialize an XML response with separate "success" and "failure" forms?

    - by Chris Markle
    I am planning on using XStream with Java to convert between objects and XML requests and XML responses and objects, where the XML is flowing over HTTP/HTTPS. On the response side, I can get a "successful" response, which seems like it would map to one Java class, or a "failure" response, which seems like it would map to another Java class. For example, for a "file list" request, I could get an affirmative response e.g., <?xml version="1.0" encoding="UTF-8"?> <response> <success>true</success> <files> <file>[...]</file> <file>[...]</file> <file>[...]</file> </files> </response> or I could get a negative response e.g., <?xml version="1.0" encoding="UTF-8"?> <response> <success>false</success> <error> <errorCode>-502</errorCode> <systemMessage>[...]AuthenticationException</systemMessage> <userMessage>Not authenticated</userMessage> </error> </response> To handle this, should I include fields in one class for both cases or should I somehow use XStream to "conditionally" create one of the two potential classes? The case with fields from both response cases in the same object would look something like this: Class Response { boolean success; ArrayList<File> files; ResponseError error; [...] } Class File { String name; long size; [...] } Class ResponseError { int errorCode; String systemMessage; String userMessage; [...] } I don't know what the "use XStream and create different objects in case of success or error" looks like. Is it possible to do that somehow? Is it better or worse way to go? Anyway, any advice on how to handle using XStream to deal with this success vs. failure response case would be appreciated. Thanks in advance!

    Read the article

  • H.264 over RTP - Identify SPS and PPS Frames

    - by Toby
    I have a raw H.264 Stream from an IP Camera packed in RTP frames. I want to get raw H.264 data into a file so I can convert it with ffmpeg. So when I want to write the data into my raw H.264 file I found out it has to look like this: 00 00 01 [SPS] 00 00 01 [PPS] 00 00 01 [NALByte] [PAYLOAD RTP Frame 1] // Payload always without the first 2 Bytes -> NAL [PAYLOAD RTP Frame 2] [... until PAYLOAD Frame with Mark Bit received] // From here its a new Video Frame 00 00 01 [NAL BYTE] [PAYLOAD RTP Frame 1] .... So I get the SPS and the PPS from the Session Description Protocol out of my preceding RTSP communication. Additionally the camera sends the SPS and the PPSin two single messages before starting with the video stream itself. So I capture the messages in this order: 1. Preceding RTSP Communication here ( including SDP with SPS and PPS ) 2. RTP Frame with Payload: 67 42 80 28 DA 01 40 16 C4 // This is the SPS 3. RTP Frame with Payload: 68 CE 3C 80 // This is the PPS 4. RTP Frame with Payload: ... // Video Data Then there come some Frames with Payload and at some point a RTP Frame with the Marker Bit = 1. This means ( if I got it right) that I have a complete video frame. Afer this I write the Prefix Sequence ( 00 00 01 ) and the NALfrom the payload again and go on with the same procedure. Now my camera sends me after every 8 complete Video Frames the SPS and the PPS again. ( Again in two RTP Frames, as seen in the example above ). I know that especially the PPS can change in between streaming but that's not the problem. My questions are now: 1. Do I need to write the SPS/PPS every 8th Video Frame? If my SPS and my PPS don't change it should be enough to have them written at the very beginning of my file and nothing more? 2. How to distinguish between SPS/PPS and normal RTP Frames? In my C++ Code which parses the transmitted data I need make a difference between the RTP Frames with normal Payload an the ones carrying the SPS/PPS. How can I distinguish them? Okay the SPS/PPS frames are usually way smaller, but that's not a save call to rely on. Because if I ignore them I need to know which data I can throw away, or if I need to write them I need to put the 00 00 01 Prefix in front of them. ? Or is it a fixed rule that they occur every 8th Video Frame?
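
    On question 2, a short sketch (in C#, though the idea is the same in C++) of telling the packets apart by the NAL header rather than by size: the low five bits of the first payload byte are the nal_unit_type, and SPS/PPS have fixed type values; payload here is assumed to hold the raw RTP payload bytes:

        int nalType = payload[0] & 0x1F;   // nal_unit_type is the low 5 bits of the first NAL byte
        bool isSps = nalType == 7;         // 0x67 & 0x1F == 7  -> sequence parameter set
        bool isPps = nalType == 8;         // 0x68 & 0x1F == 8  -> picture parameter set
        bool isIdr = nalType == 5;         // coded slice of an IDR picture

    On question 1, writing the SPS and PPS once at the start of the file is generally enough for a decoder such as ffmpeg, as long as the parameter sets do not change mid-stream.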

    Read the article

  • Efficiency of data structures in C99 (possibly affected by endianness)

    - by Ninefingers
    Hi All, I have a couple of questions that are all inter-related. Basically, in the algorithm I am implementing a word w is defined as four bytes, so it can be contained whole in a uint32_t. However, during the operation of the algorithm I often need to access the various parts of the word. Now, I can do this in two ways: uint32_t w = 0x11223344; uint8_t a = (w & 0xff000000) >> 24; uint8_t b = (w & 0x00ff0000) >> 16; uint8_t b = (w & 0x0000ff00) >> 8; uint8_t d = (w & 0x000000ff); However, part of me thinks that isn't particularly efficient. I thought a better way would be to use union representation like so: typedef union { struct { uint8_t d; uint8_t c; uint8_t b; uint8_t a; }; uint32_t n; } word32; Using this method I can assign word32 w = 0x11223344; then I can access the various parts as I require (w.a=11 in little endian). However, at this stage I come up against endianness issues, namely, in big endian systems my struct is defined incorrectly so I need to re-order the word prior to it being passed in. This I can do without too much difficulty. My question is, then, is the first part (various bitwise ands and shifts) efficient compared to the implementation using a union? Is there any difference between the two generally? Which way should I go on a modern, x86_64 processor? Is endianness just a red herring here? I could inspect the assembly output of course, but my knowledge of compilers is not brilliant. I would have thought a union would be more efficient as it would essentially convert to memory offsets, like so: mov eax, [r9+8] Would a compiler realise that is what happening in the bit-shift case above? If it matters, I'm using C99, specifically my compiler is clang (llvm). Thanks in advance.

    Read the article

  • Cannot populate form with ajax and populate jquery plugin

    - by Azriel_
    I'm trying to populate a form with jquery's populate plugin, but using $.ajax The idea is to retrieve data from my database according to the id in the links (ex of link: get_result_edit.php?id=34), reformulate it to json, return it to my page and fill up the form up with the populate plugin. But somehow i cannot get it to work. Any ideas: here's the code: $('a').click(function(){ $('#updatediv').hide('slow'); $.ajax({ type: "GET", url: "get_result_edit.php", success: function(data) { var $response=$(data); $('#form1').populate($response); } }); $('#updatediv').fadeIn('slow'); return false; whilst the php file states as follow: <?php $conn = new mysqli('localhost', 'XXXX', 'XXXXX', 'XXXXX'); @$query = 'Select * FROM news WHERE id ="'.$_GET['id'].'"'; $stmt = $conn->query($query) or die ($mysql->error()); if ($stmt) { $results = $stmt->fetch_object(); // get database data $json = json_encode($results); // convert to JSON format echo $json; } ?> Now first thing is that the mysql returns a null in this way: is there something wrong with he declaration of the sql statement in the $_GET part? Second is that even if i put a specific record to bring up, populate doesn't populate. Update: I changed the populate library with the one called "PHP jQuery helper functions" and the difference is that finally it says something. finally i get an error saying NO SUCH ELEMENT AS i wen into the library to have a look and up comes the following function function populateFormElement(form, name, value) { // check that the named element exists in the form var name = name; // handle non-php naming var element = form[name]; if(element == undefined) { debug('No such element as ' + name); return false; } // debug options if(options.debug) { _populate.elements.push(element); } } Now looking at it one can see that it should print out also the name, but its not printing it out. so i'm guessing that retrieving the name form the json is not working correctly. Link is at http://www.ocdmonline.org/michael/edit%5Fnews.php with username: Testing and pass:test123 Any ideas?

    Read the article

  • Linq to SQL Repository ~theory~ - Generic but now uses Linq to Objects?

    - by Matt Tolliday
    The project I am currently working on used Linq to SQL as an ORM data access technology. Its an MVC3 Web app. The problem I faced was primarily due to the inability to mock (for testing) the DataContext which gets autogenerated by the DBML designer. So to solve this issue (after much reading) I refactored the repository system which was in place - single repository with seperate and duplicated access methods for each table which ended up with something like 300 methods only 10 of which were unique - into a single repository with generic methods taking the table and returning more generic types to the upper reaches of the application. My question revolves more around the design I've used to get thus far and the differences I'm noticing in the structure of the app. 1) Having refactored the code from the dark ages which used classic Linq to SQL queries: public Billing GetBilling(int id) { var result = ( from bil in _bicDc.Billings where bil.BillingId == id select bil).SingleOrDefault(); return (result); } it now looks like: public T GetRecordWhere<T>(Expression<Func<T, bool>> predicate) where T : class { T result; try { result = _dataContext.GetTable<T>().Where(predicate).SingleOrDefault(); } catch (Exception ex) { throw ex; } return result; } and is used by the controller with a query along the lines of: _repository.GetRecordWhere<Billing>(x => x.BillingId == 1); which is fine, and precisely what I wanted to achieve. ...however.... I'm also having to do the following to get precisely the result set i require in the controller class (the highest point of the app in essence)... viewModel.RecentRequests = _model.GetAllRecordsWhere<Billing>(x => x.BillingId == 1) .Where(x => x.BillingId == Convert.ToInt32(BillingType.Submitted)) .OrderByDescending(x => x.DateCreated). Take(5).ToList(); This - as far as my understanding is correct - is now using Linq to Objects rather than the Linq to SQL queries I was previously? Is this okay practise? It feels wrong to me but I dont know why. Probably because the logic of the queries is in the very highest tier of the app, rather than the lowest, but... I defer to you good people for advice. One of the issues I considered was bringing the entire table into memory but I understand that using the Iqeryable return type the where clause is taken to the database and evaluated there. Thus returning only the resultset i require... i may be wrong. And if you've made it this far, well done. Thank you, and if you have any advice it is very much appreciated!!
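
    One possible middle ground, sketched here with a hypothetical QueryRecordsWhere method: if the repository exposes IQueryable<T> instead of a materialised list, the later Where/OrderBy/Take stay composable and Linq to SQL can still translate the whole chain into a single SQL query instead of filtering in memory:

        public IQueryable<T> QueryRecordsWhere<T>(Expression<Func<T, bool>> predicate) where T : class
        {
            // no ToList/SingleOrDefault here, so evaluation is deferred to the database
            return _dataContext.GetTable<T>().Where(predicate);
        }

        // usage in the controller: still one round trip, evaluated server-side
        viewModel.RecentRequests = _repository.QueryRecordsWhere<Billing>(x => x.BillingId == 1)
                                              .OrderByDescending(x => x.DateCreated)
                                              .Take(5)
                                              .ToList();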

    Read the article

  • How to decide which section has to go inside a critical section when threading

    - by Lalit Dhake
    Hi , I am using the console application i used multi threading in the same. I just want to know which section have to put inside critical section my code is : .------------------------------------------------------------------------------. public class SendBusReachSMS { public void SchedularEntryPoint() { try { List<ActiveBusAndItsPathInfo> ActiveBusAndItsPathInfoList = BusinessLayer.GetActiveBusAndItsPathInfoList(); if (ActiveBusAndItsPathInfoList != null) { //SMSThreadEntryPoint smsentrypoint = new SMSThreadEntryPoint(); while (true) { foreach (ActiveBusAndItsPathInfo ActiveBusAndItsPathInfoObj in ActiveBusAndItsPathInfoList) { if (ActiveBusAndItsPathInfoObj.isSMSThreadActive == false) { DateTime CurrentTime = System.DateTime.Now; DateTime Bustime = Convert.ToDateTime(ActiveBusAndItsPathInfoObj.busObj.Timing); TimeSpan tsa = Bustime - CurrentTime; if (tsa.TotalMinutes > 0 && tsa.TotalMinutes < 5) { ThreadStart starter = delegate { SMSThreadEntryPointFunction(ActiveBusAndItsPathInfoObj); }; Thread t = new Thread(starter); t.Start(); t.Join(); } } } } } } catch (Exception ex) { Console.WriteLine("==========================================="); Console.WriteLine(ex.Message); Console.WriteLine(ex.InnerException); Console.WriteLine("==========================================="); } } public void SMSThreadEntryPointFunction(ActiveBusAndItsPathInfo objActiveBusAndItsPathInfo) { try { //mutThrd.WaitOne(); String consoleString = "Thread for " + objActiveBusAndItsPathInfo.busObj.Number + "\t" + " on path " + "\t" + objActiveBusAndItsPathInfo.pathObj.PathId; Console.WriteLine(consoleString); TrackingInfo trackingObj = new TrackingInfo(); string strTempBusTime = objActiveBusAndItsPathInfo.busObj.Timing; while (true) { trackingObj = BusinessLayer.get_TrackingInfoForSendingSMS(objActiveBusAndItsPathInfo.busObj.Number); if (trackingObj.latitude != 0.0 && trackingObj.longitude != 0.0) { //calculate distance double distanceOfCurrentToDestination = 4.45; TimeSpan CurrentTime = System.DateTime.Now.TimeOfDay; TimeSpan timeLimit = objActiveBusAndItsPathInfo.sessionInTime - CurrentTime; if ((distanceOfCurrentToDestination <= 5) && (timeLimit.TotalMinutes <= 5)) { Console.WriteLine("Message sent to bus number's parents: " + objActiveBusAndItsPathInfo.busObj.Number); break; } } } // mutThrd.ReleaseMutex(); } catch (Exception ex) { //throw; Console.WriteLine("==========================================="); Console.WriteLine(ex.Message); Console.WriteLine(ex.InnerException); Console.WriteLine("==========================================="); } } } Please help me in multithreading. new topic for me in .net
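
    As a rule of thumb, the critical section is whatever reads and writes state shared between the scheduler loop and the worker threads; here that is the check-then-act on isSMSThreadActive. A hedged sketch of guarding it with a lock (the _sync field is an added assumption):

        private static readonly object _sync = new object();

        // inside the foreach over ActiveBusAndItsPathInfoList
        lock (_sync)
        {
            if (!ActiveBusAndItsPathInfoObj.isSMSThreadActive)
            {
                ActiveBusAndItsPathInfoObj.isSMSThreadActive = true;   // claim the bus while still holding the lock
                Thread worker = new Thread(() => SMSThreadEntryPointFunction(ActiveBusAndItsPathInfoObj));
                worker.Start();
            }
        }

    Note also that calling t.Join() right after t.Start() in the posted code makes the work effectively single-threaded, since the scheduler waits for each SMS thread to finish before moving on.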

    Read the article

  • How to set up a WCF service over the internet instead of on localhost

    - by djerry
    Hey guys, I want to convert the wcf-structure i have from localhost to a service which runs over the internet. My server starts when replacing the localhost with my ip-address. But then my clients cannot connect to the server anymore. This is my server setup : static void Main(string[] args) { NetTcpBinding binding = new NetTcpBinding(SecurityMode.Message); Uri address = new Uri("net.tcp://192.168.10.26"); //_svc = new ServiceHost(typeof(MonitoringSystemService), address); _monSysService = new MonitoringSystemService(); _svc = new ServiceHost(_monSysService, address); publishMetaData(_svc, "http://192.168.10.26"); _svc.AddServiceEndpoint(typeof(IMonitoringSystemService), binding, "Monitoring Server"); _svc.Open(); } My app.config for the client looks like this : <configuration> <system.diagnostics> <sources> <source name="System.ServiceModel" switchValue="Information, ActivityTracing" propagateActivity="true"> <listeners> <add name="traceListener" type="System.Diagnostics.XmlWriterTraceListener" initializeData= "c:\log\Traces.svclog" /> </listeners> </source> </sources> </system.diagnostics> <system.serviceModel> <bindings> <netTcpBinding> <binding name="NetTcpBinding_IMonitoringSystemService" closeTimeout="00:00:10" openTimeout="00:00:10" receiveTimeout="00:10:00" sendTimeout="00:00:10" transactionFlow="false" transferMode="Buffered" transactionProtocol="OleTransactions" hostNameComparisonMode="StrongWildcard" listenBacklog="10" maxBufferPoolSize="2147483647" maxBufferSize="2147483647" maxConnections="500" maxReceivedMessageSize="2147483647"> <readerQuotas maxDepth="32" maxStringContentLength="100000" maxArrayLength="100000" maxBytesPerRead="100000" maxNameTableCharCount="100000" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="Message"> <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign"> <extendedProtectionPolicy policyEnforcement="Never" /> </transport> <message clientCredentialType="Windows" /> </security> </binding> </netTcpBinding> </bindings> <client> <endpoint address="net.tcp://192.168.10.26/Monitoring%20Server" binding="netTcpBinding" bindingConfiguration="NetTcpBinding_IMonitoringSystemService" contract="IMonitoringSystemService" > <!--name="NetTcpBinding_IMonitoringSystemService"--> <identity> <userPrincipalName value="DJERRYY\djerry" /> </identity> </endpoint> </client> </system.serviceModel> </configuration>
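
    One thing worth trying, sketched below as an assumption rather than a confirmed fix: give the net.tcp base address an explicit port and a path without spaces, open that port in the firewall, and point the client endpoint at exactly the same address:

        NetTcpBinding binding = new NetTcpBinding(SecurityMode.Message);
        Uri address = new Uri("net.tcp://192.168.10.26:8523/MonitoringService");   // 8523 is just an example port

        _monSysService = new MonitoringSystemService();
        _svc = new ServiceHost(_monSysService, address);
        publishMetaData(_svc, "http://192.168.10.26:8001");                        // metadata on its own http port
        _svc.AddServiceEndpoint(typeof(IMonitoringSystemService), binding, string.Empty);
        _svc.Open();

    The client endpoint address in app.config would then become net.tcp://192.168.10.26:8523/MonitoringService; for clients outside the LAN, the 192.168.x.x address also has to be replaced by a public IP or hostname with the port forwarded to the server.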

    Read the article

  • Tracking upstream svn changes with git-svn and github?

    - by Joseph Turian
    How do I track upstream SVN changes using git-svn and github? I used git-svn to convert an SVN repo to git on github:

        $ git svn clone -s http://svn.osqa.net/svnroot/osqa/ osqa
        $ cd osqa
        $ git remote add origin git@github.com:turian/osqa.git
        $ git push origin master

    I then made a few changes in my git repo, committed, and pushed to github. Now I am on a new machine. I want to take upstream SVN changes, merge them with my github repo, and push them to my github repo. This documentation says: "If you ever lose your local copy, just run the import again with the same settings, and you'll get another working directory with all the necessary SVN metainfo." So I did the following, but none of the commands work as desired. How do I track upstream SVN changes using git-svn and github? What am I doing wrong?

        $ git svn clone -s http://svn.osqa.net/svnroot/osqa/ osqa
        $ cd osqa
        $ git remote add origin git@github.com:turian/osqa.git
        $ git push origin master
        To git@github.com:turian/osqa.git
        ! [rejected] master -> master (non-fast forward)
        error: failed to push some refs to 'git@github.com:turian/osqa.git'
        $ git pull
        remote: Counting objects: 21, done.
        remote: Compressing objects: 100% (17/17), done.
        remote: Total 17 (delta 7), reused 9 (delta 0)
        Unpacking objects: 100% (17/17), done.
        From git@github.com:turian/osqa
        * [new branch] master -> origin/master
        From git@github.com:turian/osqa
        * [new tag] master -> master
        You asked me to pull without telling me which branch you want to merge with,
        and 'branch.master.merge' in your configuration file does not tell me either.
        Please name which branch you want to merge on the command line and try again
        (e.g. 'git pull <repository> <refspec>').
        See git-pull(1) for details on the refspec.
        ...
        $ /usr//lib/git-core/git-svn rebase
        warning: refname 'master' is ambiguous.
        First, rewinding head to replay your work on top of it...
        Applying: Added forum/management/commands/dumpsettings.py
        error: Ref refs/heads/master is at 6acd747f95aef6d9bce37f86798a32c14e04b82e but expected a7109d94d813b20c230a029ecd67801e6067a452
        fatal: Cannot lock the ref 'refs/heads/master'.
        Could not move back to refs/heads/master
        rebase refs/remotes/trunk: command returned error: 1

    Read the article

  • ASP.NET - working with GridView Programmatically

    - by JMSA
    I am continuing from this post. After much Googling, I have come up with this code to edit cells programmatically: using System; using System.Data; using System.Configuration; using System.Collections; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Web.UI.HtmlControls; using Ice_Web_Portal.BO; namespace GridView___Test { public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { GridView1.DataSource = Course.GetCourses(); GridView1.DataBind(); } protected void GridView1_RowEditing(object sender, GridViewEditEventArgs e) { GridViewRow row = GridView1.Rows[e.NewEditIndex]; GridView1.EditIndex = e.NewEditIndex; GridView1.DataSource = Course.GetCourses(); GridView1.DataBind(); } protected void GridView1_RowUpdating(object sender, GridViewUpdateEventArgs e) { TextBox txtID = (TextBox)GridView1.Rows[e.RowIndex].Cells[1].Controls[0]; TextBox txtCourseCode = (TextBox)GridView1.Rows[e.RowIndex].Cells[2].Controls[0]; TextBox txtCourseName = (TextBox)GridView1.Rows[e.RowIndex].Cells[3].Controls[0]; TextBox txtCourseTextBookCode = (TextBox)GridView1.Rows[e.RowIndex].Cells[4].Controls[0]; Course item = new Course(); item.ID = Convert.ToInt32(txtID.Text); item.CourseCode = txtCourseCode.Text; item.CourseName = txtCourseName.Text; item.TextBookCode = txtCourseTextBookCode.Text; bool success = Course.Update(item); labMessage.Text = success.ToString(); GridView1.EditIndex = -1; GridView1.DataSource = Course.GetCourses(); GridView1.DataBind(); } } } But 2 problems are happening. (1) I need to press command buttons twice to Edit/Update. (2) Changes in the cell values are not updated in the database. I.e. edited cell values are not committing. Can anyone give me a solution?
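
    On problem (1), a likely cause is that Page_Load rebinds the grid on every postback, so the command events fire against freshly bound rows. A small sketch of binding only on the first request; this usually also lets the edited values reach RowUpdating instead of being overwritten, which may explain problem (2) as well:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)   // do not rebind on postbacks, or edits and command events get clobbered
            {
                GridView1.DataSource = Course.GetCourses();
                GridView1.DataBind();
            }
        }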

    Read the article

  • Java - Highest, Lowest and Average

    - by Emily
    Right, so why does Java come up with this error: Exception in thread "main" java.lang.Error: Unresolved compilation problem: Type mismatch: cannot convert from double to int at rainfall.main(rainfall.java:38) From this: public class rainfall { /** * @param args */ public static void main(String[] args) { int[] numgroup; numgroup = new int [12]; ConsoleReader console = new ConsoleReader(); int highest; int lowest; int index; int tempVal; int minMonth; int minIndex; int maxMonth; int maxIndex; System.out.println("Welcome to Rainfall"); // Input (index now 0-based) for(index = 0; index < 12; index = index + 1) { System.out.println("Please enter the rainfall for month " + index + 1); tempVal = console.readInt(); while (tempVal>100 || tempVal<0) { System.out.println("The rating must be within 0...100. Try again"); tempVal = console.readInt(); } numgroup[index] = tempVal; } lowest = numgroup[0]; highest = numgroup[0]; int total = 0.0; // Loop over data (using 1 loop) for(index = 0; index < 12; index = index + 1) { int curr = numgroup[index]; if (curr < lowest) { lowest = curr; minIndex = index; } if (curr > highest) { highest = curr; maxIndex = index; } total += curr; } float avg = (float)total / numgroup.length; System.out.println("The average monthly rainfall was " + avg); // +1 to go from 0-based index to 1-based month System.out.println("The lowest monthly rainfall was month " + minIndex + 1); System.out.println("The highest monthly rainfall was month " + maxIndex + 1); System.out.println("Thank you for using Rainfall"); } private static ConsoleReader ConsoleReader() { return null; } }

    Read the article

  • Problem adding additional content to my PDF using ASP.NET C#

    - by Ayyappan.Anbalagan
    I am converting my data set into a pdf document.My data set contains the product bill details.So,at the top of the pdf i need to added some more content like "my company name & address customer name, date of bill,bill no" Below code i am using to convert into pdf. public static void Exportdata(DataTable dataTable, HttpResponse Response, int val) { //String filename = String.Concat(name, "-", DateTime.Today.Day.ToString(), "/", DateTime.Today.Month.ToString(), "/", DateTime.Today.Year.ToString(), ".pdf"); Document pdfDoc = new Document(PageSize.A4, 30, 30, 40, 25); System.IO.MemoryStream mStream = new System.IO.MemoryStream(); PdfWriter writer = PdfWriter.GetInstance(pdfDoc, mStream); //int cols = 0; //int rows = 0; int cols = dataTable.Columns.Count; int rows = dataTable.Rows.Count; pdfDoc.Open(); iTextSharp.text.Table pdfTable = new iTextSharp.text.Table(cols, rows); pdfTable.BorderWidth = 1; pdfTable.Width = 100; pdfTable.Padding = 1; pdfTable.Spacing = 1; //creating table headers for (int i = 0; i < cols; i++) { Cell cellCols = new Cell(); Font ColFont = FontFactory.GetFont(FontFactory.HELVETICA, 8, Font.BOLD); Chunk chunkCols = new Chunk(dataTable.Columns[i].ColumnName, ColFont); cellCols.Add(chunkCols); pdfTable.AddCell(cellCols); } //creating table data (actual result) for (int k = 0; k < rows; k++) { for (int j = 0; j < cols; j++) { Cell cellRows = new Cell(); Font RowFont = FontFactory.GetFont(FontFactory.HELVETICA, 6); Chunk chunkRows = new Chunk(dataTable.Rows[k][j].ToString(), RowFont); cellRows.Add(chunkRows); pdfTable.AddCell(cellRows); } } pdfDoc.Add(pdfTable); pdfDoc.Close(); Response.ContentType = "application/octet-stream"; if (val == 1) { Response.AddHeader("Content-Disposition", "attachment; filename=Users.pdf"); } else if (val == 2) { Response.AddHeader("Content-Disposition", "attachment; filename=Customers.pdf"); } else if (val == 3) { Response.AddHeader("Content-Disposition", "attachment; filename=Materials.pdf"); } else { Response.AddHeader("Content-Disposition", "attachment; filename=Reports.pdf"); } Response.Clear(); Response.BinaryWrite(mStream.ToArray()); //Response.Write(mStream.ToString()); HttpContext.Current.ApplicationInstance.CompleteRequest(); Response.End(); }
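
    A small sketch of one way to add that header content: iTextSharp renders elements in the order they are added, so Paragraph objects added before pdfDoc.Add(pdfTable) come out above the table (companyName, companyAddress, customerName, billNo and billDate are assumed to be passed in or looked up elsewhere):

        Font titleFont = FontFactory.GetFont(FontFactory.HELVETICA, 12, Font.BOLD);
        pdfDoc.Add(new Paragraph(companyName + Environment.NewLine + companyAddress, titleFont));
        pdfDoc.Add(new Paragraph("Customer: " + customerName));
        pdfDoc.Add(new Paragraph("Bill No: " + billNo + "    Date: " + billDate.ToShortDateString()));
        pdfDoc.Add(pdfTable);   // the existing table then follows the header block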

    Read the article

  • Sending multiline message via sockets without closing the connection

    - by Yasir Arsanukaev
    Hello folks. Currently I have this code of my client-side Haskell application: import Network.Socket import Network.BSD import System.IO hiding (hPutStr, hPutStrLn, hGetLine, hGetContents) import System.IO.UTF8 connectserver :: HostName -- ^ Remote hostname, or localhost -> String -- ^ Port number or name -> IO Handle connectserver hostname port = withSocketsDo $ do -- withSocketsDo is required on Windows -- Look up the hostname and port. Either raises an exception -- or returns a nonempty list. First element in that list -- is supposed to be the best option. addrinfos <- getAddrInfo Nothing (Just hostname) (Just port) let serveraddr = head addrinfos -- Establish a socket for communication sock <- socket (addrFamily serveraddr) Stream defaultProtocol -- Mark the socket for keep-alive handling since it may be idle -- for long periods of time setSocketOption sock KeepAlive 1 -- Connect to server connect sock (addrAddress serveraddr) -- Make a Handle out of it for convenience h <- socketToHandle sock ReadWriteMode -- Were going to set buffering to LineBuffering and then -- explicitly call hFlush after each message, below, so that -- messages get logged immediately hSetBuffering h LineBuffering return h sendid :: Handle -> String -> IO String sendid h id = do hPutStr h id -- Make sure that we send data immediately hFlush h -- Retrieve results hGetLine h The code portions in connectserver are from this chapter of Real World Haskell book where they say: When dealing with TCP data, it's often convenient to convert a socket into a Haskell Handle. We do so here, and explicitly set the buffering – an important point for TCP communication. Next, we set up lazy reading from the socket's Handle. For each incoming line, we pass it to handle. After there is no more data – because the remote end has closed the socket – we output a message about that. Since hGetContents blocks until the server closes the socket on the other side, I used hGetLine instead. It satisfied me before I decided to implement multiline output to client. I wouldn't like the server to close a socket every time it finishes sending multiline text. The only simple idea I have at the moment is to count the number of linefeeds and stop reading lines after two subsequent linefeeds. Do you have any better suggestions? Thanks.

    Read the article

  • Referencing both an old version and new version of the same DLL (VB.Net)

    - by ckittel
    Consider the following situation: WidgetCompany produced a .NET DLL in 2006 called Widget.dll, version 1.0. I consumed this Widget.dll file throughout my VB.Net application. Over time, WidgetCompany has been updating Widget.dll, I never bothered to keep up, continuing to ship version 1.0 of Widget.dll with my software. It's now 2010, my project is now a VB.Net 3.5 application and WidgetCompany has come out with Widget.dll version 3.0. It looks and functions almost identical to Widget.dll version 1.0, using all the same namespaces and type names from before. However, Widget.dll version 3.0 has many run-time breaking changes since 1.0 and I cannot simply cut over to the new version; however, I don't want to continue developing against the 1.0 version and therefore keep digging myself deeper in the hole. What I want to do is do all new development in my project with Widget.dll version 3.0, whilst keeping Widget.dll version 1.0 around until I find time to convert all of my 1.0 consumption to the newer 3.0 code. Now, for starters, I obviously cannot simply reference both Widget.dll (Ver 1.0) and Widget.dll (Ver 3.0) in Visual Studio. Doing so gives me the following message: "A reference to 'Widget.dll' could not be added. A reference to the component 'Widget' already exists in the project." To work around that, I can simply rename version 3.0 Widget.dll to Widget.3.dll. But this is where I'm stuck. Any attempts to reference types found in "the dll" leads to ambiguity and the compiler obviously doesn't have any clue as to what I really want in this or that case. Is there something I can do that gives a DLL a new "root" Namespace or something? For example, if I could say "Widget.dll has a new root namespace of Legacy" then I could update existing code to reference the types found in Legacy.<RootNamespace> namespace while all new code could simply reference types from the <RootNamespace> namespace. Pipe dream or reality? Are there other solutions to situations this (besides "don't get in this situation in the first place")?

    Read the article

  • e.Row.Tag .ToString

    - by prince23
    hi, Child data grid is not showing the values in the page for the child datagrid I am binding with an list <sdk:DataGrid MinHeight="100" x:Name="contacts" Margin="51,21,88,98" RowDetailsVisibilityChanged="contacts_RowDetailsVisibilityChanged" LoadingRowDetails="contacts_LoadingRowDetails" RowDetailsVisibilityMode="VisibleWhenSelected" MouseLeftButtonUp="contacts_MouseLeftButtonUp" MouseLeftButtonDown="contacts_MouseLeftButtonDown"> <sdk:DataGrid.Columns> <sdk:DataGridTextColumn Binding="{Binding EmployeeID}" Header="ID" /> <sdk:DataGridTextColumn Binding="{Binding EmployeeFName}" Header="Fname" /> <sdk:DataGridTextColumn Binding="{Binding EmployeeLName}" Header="LName" /> <sdk:DataGridTextColumn Binding="{Binding EmployeeMailID}" Header="MailID" /> </sdk:DataGrid.Columns> <sdk:DataGrid.RowDetailsTemplate> <DataTemplate> <sdk:DataGrid x:Name="dgrdRowDetail" Width="200" AutoGenerateColumns="False" HorizontalAlignment="Center" IsReadOnly="True"> <sdk:DataGrid.Columns> <sdk:DataGridTextColumn Header="CompanyName" Binding="{Binding Company name}"/> <sdk:DataGridTextColumn Header="CompanyName" Binding="{Binding EmpID}"/> </sdk:DataGrid.Columns> </sdk:DataGrid> </DataTemplate> </sdk:DataGrid.RowDetailsTemplate> </sdk:DataGrid> I am having 2 grids "contacts" and "dgrdRowDetail" globally i have defined an variable like this:- DataGrid dgrdRowDetail; in the contacts_RowDetailsVisibilityChanged event I have this code if (e.Row.DataContext != null) { string strEmpID = ((SilverlightApplication1.DBServiceEMP.Employee)((e.DetailsElement).DataContext)).EmployeeID; dgrdRowDetail = (DataGrid)e.DetailsElement.FindName("dgrdRowDetail"); // here i am finding the child datgrid control in contacts datagrid // then in dgrdRowDetail i will be binding this grid with new values if (strEmpID != null) { int EmpID = Convert.ToInt32(strEmpID.ToString()); DBServiceEmp.GetEmployeeIDCompleted += new EventHandler<GetEmployeeIDCompletedEventArgs>(DBServiceEmp_GetEmployeeIDCompleted); DBServiceEmp.GetEmployeeIDAsync(EmpID); } } this is my method void DBServiceEmp_GetEmployeeIDCompleted(object sender, GetEmployeeIDCompletedEventArgs e) { // List<Employee> Employes = new List<Employee>(); List<Employee> rows = new List<Employee>(); for (int i = 0; i < e.Result.Count; i++) { rows.Add(e.Result[i]); } dgrdRowDetail.ItemsSource = rows; // here i am binding the child datagrid with new data source } dgrdRowDetail.ItemsSource = rows// what ever rows i am binding to dgrdRowDetail are not shown in the page if i check the rows i am able to see the value ther. but in the child grid it is not reflecting plz plz help me out i am struck thanks in advance prince
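
    One possible cause, offered as an assumption: the completed handler runs later and is re-attached on every row change, and it binds whatever the shared dgrdRowDetail field happens to point at by then. Capturing the row's own grid in a local variable and binding it inside a lambda ties the result to the grid that asked for it (assuming e.Result exposes the employee collection):

        var detailGrid = (DataGrid)e.DetailsElement.FindName("dgrdRowDetail");
        int empId = Convert.ToInt32(strEmpID);

        DBServiceEmp.GetEmployeeIDCompleted += (s, args) =>
        {
            detailGrid.ItemsSource = args.Result;   // bind the captured grid, not the shared field
        };
        DBServiceEmp.GetEmployeeIDAsync(empId);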

    Read the article

  • What's the best way to do base36 arithmetic in perl?

    - by DVK
    What's the best way to do base36 arithmetic in Perl? To be more specific, I need to be able to do the following: Operate on positive N-digit numbers in base 36 (e.g. digits are 0-9 A-Z) N is finite, say 9 Provide basic arithmetic, at the very least the following 3: Addition (A+B) Subtraction (A-B) Whole division, e.g. floor(A/B). Strictly speaking, I don't really need a base10 conversion ability - the numbers will 100% of time be in base36. So I'm quite OK if the solution does NOT implement conversion from base36 back to base10 and vice versa. I don't much care whether the solution is brute-force "convert to base 10 and back" or converting to binary, or some more elegant approach "natively" performing baseN operations (as stated above, to/from base10 conversion is not a requirement). My only 3 considerations are: It fits the minimum specifications above It's "standard". Currently we're using and old homegrown module based on base10 conversion done by hand that is buggy and sucks. I'd much rather replace that with some commonly used CPAN solution instead of re-writing my own bicycle from scratch, but I'm perfectly capable of building it if no better standard possibility exists. It must be fast-ish (though not lightning fast). Something that takes 1 second to sum up 2 9-digit base36 numbers is worse than anything I can roll on my own :) P.S. Just to provide some context in case people decide to solve my XY problem for me in addition to answering the technical question above :) We have a fairly large tree (stored in DB as a bunch of edges), and we need to superimpose order on a subset of that tree. The tree dimentions are big both depth- and breadth- wise. The tree is VERY actively updated (inserts and deletes and branch moves). This is currently done by having a second table with 3 columns: parent_vertex, child_vertex, local_order, where local_order is an 9-character string built of A-Z0-9 (e.g. base 36 number). Additional considerations: It is required that the local order is unique per child (and obviously unique per parent), Any complete re-ordering of a parent is somewhat expensive, and thus the implementation is to try and assign - for a parent with X children - the orders which are somewhat evenly distributed between 0 and 36**10-1, so that almost no tree inserts result in a full re-ordering.

    Read the article

  • Segfault (possibly due to casting)

    - by BSchlinker
    I don't normally go to stackoverflow for sigsegv errors, but I have done all I can with my debugger at the moment. The segmentation fault error is thrown following the completion of the function. Any ideas what I'm overlooking? I suspect that it is due to the casting of the sockaddr to the sockaddr_in, but I am unable to find any mistakes there. (Removing that line gets rid of the seg fault -- but I know that may not be the root cause here). // basic setup int sockfd; char str[INET_ADDRSTRLEN]; sockaddr* sa; socklen_t* sl; struct addrinfo hints, *servinfo, *p; int rv; memset(&hints, 0, sizeof hints); hints.ai_family = AF_UNSPEC; hints.ai_socktype = SOCK_DGRAM; // return string string foundIP; // setup the struct for a connection with selected IP if ((rv = getaddrinfo("4.2.2.1", NULL, &hints, &servinfo)) != 0) { fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rv)); return "1"; } // loop through all the results and make a socket for(p = servinfo; p != NULL; p = p->ai_next) { if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) { perror("talker: socket"); continue; } break; } if (p == NULL) { fprintf(stderr, "talker: failed to bind socket\n"); return "2"; } // connect the UDP socket to something connect(sockfd, p->ai_addr, p->ai_addrlen); // we need to connect to get the systems local IP // get information on the local IP from the socket we created getsockname(sockfd, sa, sl); // convert the sockaddr to a sockaddr_in via casting struct sockaddr_in *sa_ipv4 = (struct sockaddr_in *)sa; // get the IP from the sockaddr_in and print it inet_ntop(AF_INET, &(sa_ipv4->sin_addr), str, INET_ADDRSTRLEN); printf("%s\n", str); // return the IP return foundIP; }

    Read the article

  • Problem with paging an ObjectDataSource

    - by funky
    asp page code: <asp:ObjectDataSource runat="server" ID="odsResults" OnSelecting="odsResults_Selecting" /> <tr><td> <wssawc:SPGridViewPager ID="sgvpPagerTop" runat="server" GridViewId="sgvConversionResults" /> </td></tr> <tr> <td colspan="2" class="ms-vb"> <wssawc:SPGridView runat="server" ID="sgvConversionResults" AutoGenerateColumns="false" RowStyle-CssClass="" AlternatingRowStyle-CssClass="ms-alternating" /> </td> </tr> Class code: public partial class Convert : System.Web.UI.Page { ... private DataTable resultDataSource = new DataTable(); ... protected void Page_Init(object sender, EventArgs e) { ... resultDataSource.Columns.Add("Column1"); resultDataSource.Columns.Add("Column2"); resultDataSource.Columns.Add("Column3"); resultDataSource.Columns.Add("Column4"); ... odsResults.TypeName = GetType().AssemblyQualifiedName; odsResults.SelectMethod = "SelectData"; odsResults.SelectCountMethod = "GetRecordCount"; odsResults.EnablePaging = true; sgvConversionResults.DataSourceID = odsResults.ID; ConversionResultsCreateColumns(); sgvConversionResults.AllowPaging = true; ... } protected void btnBTN_Click(object sender, EventArgs e) { // add rows into resultDataSource } public DataTable SelectData(DataTable ds,int startRowIndex,int maximumRows) { DataTable dt = new DataTable(); dt.Columns.Add("Column1"); dt.Columns.Add("Column2"); dt.Columns.Add("Column3"); dt.Columns.Add("Column4"); for (int i =startRowIndex; i<startRowIndex+10 ;i++) { if (i<ds.Rows.Count) { dt.Rows.Add(ds.Rows[i][0].ToString(), ds.Rows[i][1].ToString(), ds.Rows[i][2].ToString(), ds.Rows[i][3].ToString()); } } return dt; } public int GetRecordCount(DataTable ds) { return ds.Rows.Count; } protected void odsResults_Selecting(object sender, ObjectDataSourceSelectingEventArgs e) { e.InputParameters["ds"] = resultDataSource; } } On click BTN button resultDataSource receive some rows. Page reload and we can see result in sgvConversionResults. First 10 rows. But after click next page in pager we have message "There are no items to show in this view". When I try debug I find that after postBack page (on click next page) input params "ds" is blank, ds.Rows.Count = 0 and etc... As though resultDataSource became empty(( What do I do not correctly?. Sorry my English.
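
    A likely culprit, offered as an assumption: resultDataSource is a page field, and page fields are recreated empty on every request, so when the pager posts back the ObjectDataSource is handed an empty table. Keeping the results somewhere that survives postbacks (Session in this sketch) usually brings the paging back:

        protected void btnBTN_Click(object sender, EventArgs e)
        {
            // ... fill resultDataSource as before, then stash it for later postbacks
            Session["ConversionResults"] = resultDataSource;
        }

        protected void odsResults_Selecting(object sender, ObjectDataSourceSelectingEventArgs e)
        {
            DataTable stored = Session["ConversionResults"] as DataTable;
            e.InputParameters["ds"] = stored ?? resultDataSource;   // fall back to the (empty) field
        }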

    Read the article

  • Purpose of Explicit Default Constructors

    - by Dennis Zickefoose
    I recently noticed a class in C++0x that calls for an explicit default constructor. However, I'm failing to come up with a scenario in which a default constructor can be called implicitly. It seems like a rather pointless specifier. I thought maybe it would disallow Class c; in favor of Class c = Class(); but that does not appear to be the case. Some relevant quotes from the C++0x FCD, since it is easier for me to navigate [similar text exists in C++03, if not in the same places] 12.3.1.3 [class.conv.ctor] A default constructor may be an explicit constructor; such a constructor will be used to perform default-initialization or value initialization (8.5). It goes on to provide an example of an explicit default constructor, but it simply mimics the example I provided above. 8.5.6 [decl.init] To default-initialize an object of type T means: — if T is a (possibly cv-qualified) class type (Clause 9), the default constructor for T is called (and the initialization is ill-formed if T has no accessible default constructor); 8.5.7 [decl.init] To value-initialize an object of type T means: — if T is a (possibly cv-qualified) class type (Clause 9) with a user-provided constructor (12.1), then the default constructor for T is called (and the initialization is ill-formed if T has no accessible default constructor); In both cases, the standard calls for the default constructor to be called. But that is what would happen if the default constructor were non-explicit. For completeness sake: 8.5.11 [decl.init] If no initializer is specified for an object, the object is default-initialized; From what I can tell, this just leaves conversion from no data. Which doesn't make sense. The best I can come up with would be the following: void function(Class c); int main() { function(); //implicitly convert from no parameter to a single parameter } But obviously that isn't the way C++ handles default arguments. What else is there that would make explicit Class(); behave differently from Class();? The specific example that generated this question was std::function [20.8.14.2 func.wrap.func]. It requires several converting constructors, none of which are marked explicit, but the default constructor is.

    Read the article

  • Write STDOUT & STDERR to a logfile, also write STDERR to screen

    - by Stefan Lasiewski
    I would like to run several commands and capture all output to a logfile. I also want to print any errors to the screen (or optionally mail the output to someone). Here's an example. The following command will run three commands, and will write all output (STDOUT and STDERR) into a single logfile. { command1 && command2 && command3 ; } > logfile.log 2>&1 Here is what I want to do with the output of these commands: STDERR and STDOUT for all commands go to a logfile, in case I need it later--- I usually won't look in here unless there are problems. Print STDERR to the screen (or optionally, pipe to /bin/mail), so that any error stands out and doesn't get ignored. It would be nice if the return codes were still usable, so that I could do some error handling. Maybe I want to send email if there was an error, like this: { command1 && command2 && command3 ; } > logfile.log 2>&1 || mailx -s "There was an error" [email protected] The problem I run into is that STDERR loses context during I/O redirection. A '2>&1' will convert STDERR into STDOUT, and therefore I cannot view errors if I do 2> error.log Here are a couple of juicier examples. Let's pretend that I am running some familiar build commands, but I don't want the entire build to stop just because of one error, so I use the '--keep-going' flag. { ./configure && make --keep-going && make install ; } > build.log 2>&1 Or, here's a simple (and perhaps sloppy) build and deploy script, which will keep going in the event of an error. { ./configure && make --keep-going && make install && rsync -av --keep-going /foo devhost:/foo ; } > build-and-deploy.log 2>&1 I think what I want involves some sort of Bash I/O redirection, but I can't figure this out.

    Read the article

  • Android: Memory leak due to AsyncTask

    - by Manu
    Hello, I'm stuck with a memory leak that I cannot fix. I identified where it occurs, using the MemoryAnalizer but I vainly struggle to get rid of it. Here is the code: public class MyActivity extends Activity implements SurfaceHolder.Callback { ... Camera.PictureCallback mPictureCallbackJpeg = new Camera.PictureCallback() { public void onPictureTaken(byte[] data, Camera c) { try { // log the action Log.e(getClass().getSimpleName(), "PICTURE CALLBACK JPEG: data.length = " + data); // Show the ProgressDialog on this thread pd = ProgressDialog.show(MyActivity.this, "", "Préparation", true, false); // Start a new thread that will manage the capture new ManageCaptureTask().execute(data, c); } catch(Exception e){ AlertDialog.Builder dialog = new AlertDialog.Builder(MyActivity.this); ... dialog.create().show(); } } class ManageCaptureTask extends AsyncTask<Object, Void, Boolean> { protected Boolean doInBackground(Object... args) { Boolean isSuccess = false; // initialize the bitmap before the capture ((myApp) getApplication()).setBitmapX(null); try{ // Check if it is a real device or an emulator TelephonyManager telmgr = (TelephonyManager) getSystemService(Context.TELEPHONY_SERVICE); String deviceID = telmgr.getDeviceId(); boolean isEmulator = "000000000000000".equalsIgnoreCase(deviceID); // get the bitmap if (isEmulator) { ((myApp) getApplication()).setBitmapX(BitmapFactory.decodeFile(imageFileName)); } else { ((myApp) getApplication()).setBitmapX(BitmapFactory.decodeByteArray((byte[]) args[0], 0, ((byte[])args[0]).length)); } ((myApp) getApplication()).setImageForDB(ImageTools.resizeBmp(((myApp) getApplication()).getBmp())); // convert the bitmap into a grayscale image and display it in the preview ((myApp) getApplication()).setImage(makeGrayScale()); isSuccess = true; } catch (Exception connEx){ errorMessageFromBkgndThread = getString(R.string.errcapture); } return isSuccess; } protected void onPostExecute(Boolean result) { // Pass the result data back to the main activity if (MyActivity.this.pd != null) { MyActivity.this.pd.dismiss(); } if (result){ ((ImageView) findViewById(R.id.apercu)).setImageBitmap(((myApp) getApplication()).getBmp()); ((myApp) getApplication()).setBitmapX(null); } else{ // there was an error ErrAlert(); } } } }; private void ErrAlert(){ // notify the user about the error AlertDialog.Builder dialog = new AlertDialog.Builder(this); ... dialog.create().show(); } } MemoryAnalyzer indicated the memory leak at: ((myApp) getApplication()).setBitmapX(BitmapFactory.decodeByteArray((byte[]) args[0], 0, ((byte[])args[0]).length)); I am grateful for any suggestion, thank you in advance.

    Read the article
