Search Results

Search found 14548 results on 582 pages for 'const reference'.


  • WCF Service with callbacks coming from background thread?

    - by Mark Struzinski
    Here is my situation. I have written a WCF service which calls into one of our vendor's code bases to perform operations, such as Login, Logout, etc. A requirement of this operation is that we have a background thread to receive events as a result of that action. For example, the Login action is sent on the main thread. Then, several events are received back from the vendor service as a result of the login. There can be 1, 2, or several events received. The background thread, which runs on a timer, receives these events and fires an event in the wcf service to notify that a new event has arrived. I have implemented the WCF service in Duplex mode, and planned to use callbacks to notify the UI that events have arrived. Here is my question: How do I send new events from the background thread to the thread which is executing the service? Right now, when I call OperationContext.Current.GetCallbackChannel<IMyCallback>(), the OperationContext is null. Is there a standard pattern to get around this? I am using PerSession as my SessionMode on the ServiceContract. UPDATE: I thought I'd make my exact scenario clearer by demonstrating how I'm receiving events from the vendor code. My library receives each event, determines what the event is, and fires off an event for that particular occurrence. I have another project which is a class library specifically for connecting to the vendor service. I'll post the entire implementation of the service to give a clearer picture: [ServiceBehavior( InstanceContextMode = InstanceContextMode.PerSession )] public class VendorServer:IVendorServer { private IVendorService _vendorService; // This is the reference to my class library public VendorServer() { _vendorServer = new VendorServer(); _vendorServer.AgentManager.AgentLoggedIn += AgentManager_AgentLoggedIn; // This is the eventhandler for the event which arrives from a background thread } public void Login(string userName, string password, string stationId) { _vendorService.Login(userName, password, stationId); // This is a direct call from the main thread to the vendor service to log in } private void AgentManager_AgentLoggedIn(object sender, EventArgs e) { var agentEvent = new AgentEvent { AgentEventType = AgentEventType.Login, EventArgs = e }; } } The AgentEvent object contains the callback as one of its properties, and I was thinking I'd perform the callback like this: agentEvent.Callback = OperationContext.Current.GetCallbackChannel<ICallback>(); How would I pass the OperationContext.Current instance from the main thread into the background thread?
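
    OperationContext.Current is only populated on the thread that is currently executing a service operation, which is why it is null inside the vendor's background callback. A minimal sketch of one common pattern, capturing the callback channel while Login runs and keeping it in a field for later (the ICallback contract is a placeholder, and this assumes one callback per session):

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
        public class VendorServer : IVendorServer
        {
            private IVendorService _vendorService;   // vendor class library, as above
            private ICallback _callback;             // captured on the WCF operation thread

            public void Login(string userName, string password, string stationId)
            {
                // OperationContext.Current is valid here, on the operation's own thread.
                _callback = OperationContext.Current.GetCallbackChannel<ICallback>();
                _vendorService.Login(userName, password, stationId);
            }

            private void AgentManager_AgentLoggedIn(object sender, EventArgs e)
            {
                // Raised on the vendor's background thread: use the stored proxy
                // instead of OperationContext.Current, which is null here.
                var agentEvent = new AgentEvent { AgentEventType = AgentEventType.Login, EventArgs = e };
                agentEvent.Callback = _callback;
            }
        }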

    Read the article

  • Picking good first estimates for Goldschmidt division

    - by Mads Elvheim
    I'm calculating fixedpoint reciprocals in Q22.10 with Goldschmidt division for use in my software rasterizer on ARM. This is done by just setting the nominator to 1, i.e the nominator becomes the scalar on the first iteration. To be honest, I'm kind of following the wikipedia algorithm blindly here. The article says that if the denominator is scaled in the half-open range (0.5, 1.0], a good first estimate can be based on the denominator alone: Let F be the estimated scalar and D be the denominator, then F = 2 - D. But when doing this, I lose a lot of precision. Say if I want to find the reciprocal of 512.00002f. In order to scale the number down, I lose 10 bits of precision in the fraction part, which is shifted out. So, my questions are: Is there a way to pick a better estimate which does not require normalization? Also, is it possible to pre-calculate the first estimates so the series converges faster? Right now, it converges after the 4th iteration on average. On ARM this is about ~50 cycles worst case, and that's not taking emulation of clz/bsr into account, nor memory lookups. Here is my testcase. Note: The software implementation of clz on line 13 is from my post here. You can replace it with an intrinsic if you want. #include <stdio.h> #include <stdint.h> const unsigned int BASE = 22ULL; static unsigned int divfp(unsigned int val, int* iter) { /* Nominator, denominator, estimate scalar and previous denominator */ unsigned long long N,D,F, DPREV; int bitpos; *iter = 1; D = val; /* Get the shift amount + is right-shift, - is left-shift. */ bitpos = 31 - clz(val) - BASE; /* Normalize into the half-range (0.5, 1.0] */ if(0 < bitpos) D >>= bitpos; else D <<= (-bitpos); /* (FNi / FDi) == (FN(i+1) / FD(i+1)) */ /* F = 2 - D */ F = (2ULL<<BASE) - D; /* N = F for the first iteration, because the nominator is simply 1. So don't waste a 64-bit UMULL on a multiply with 1 */ N = F; D = ((unsigned long long)D*F)>>BASE; while(1){ DPREV = D; F = (2<<(BASE)) - D; D = ((unsigned long long)D*F)>>BASE; /* Bail when we get the same value for two denominators in a row. This means that the error is too small to make any further progress. */ if(D == DPREV) break; N = ((unsigned long long)N*F)>>BASE; *iter = *iter + 1; } if(0 < bitpos) N >>= bitpos; else N <<= (-bitpos); return N; } int main(int argc, char* argv[]) { double fv, fa; int iter; unsigned int D, result; sscanf(argv[1], "%lf", &fv); D = fv*(double)(1<<BASE); result = divfp(D, &iter); fa = (double)result / (double)(1UL << BASE); printf("Value: %8.8lf 1/value: %8.8lf FP value: 0x%.8X\n", fv, fa, result); printf("iteration: %d\n",iter); return 0; }
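
    On the first question, one widely used refinement (borrowed from Newton-Raphson reciprocal approximations, so treat it as a sketch rather than part of the Goldschmidt write-up) is the linear estimate F = 48/17 - (32/17)*D for D normalized into (0.5, 1.0]; it has a smaller worst-case error than 2 - D and usually saves an iteration or two. In the same fixed-point scaling as divfp:

        /* Better first estimate: F = 48/17 - (32/17)*D, for D in (0.5, 1.0].
           The constants are converted to the BASE scaling once. */
        const unsigned long long C48_17 = (48ULL << BASE) / 17;   /* ~2.8235 */
        const unsigned long long C32_17 = (32ULL << BASE) / 17;   /* ~1.8824 */

        F = C48_17 - ((C32_17 * D) >> BASE);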

    Read the article

  • Why is processing a sorted array faster than an unsorted array?

    - by GManNickG
    Here is a piece of code that shows some very peculiar performance. For some strange reason, sorting the data miraculously speeds up the code by almost 6x: #include <algorithm> #include <ctime> #include <iostream> int main() { // generate data const unsigned arraySize = 32768; int data[arraySize]; for (unsigned c = 0; c < arraySize; ++c) data[c] = std::rand() % 256; // !!! with this, the next loop runs faster std::sort(data, data + arraySize); // test clock_t start = clock(); long long sum = 0; for (unsigned i = 0; i < 100000; ++i) { // primary loop for (unsigned c = 0; c < arraySize; ++c) { if (data[c] >= 128) sum += data[c]; } } double elapsedTime = static_cast<double>(clock() - start) / CLOCKS_PER_SEC; std::cout << elapsedTime << std::endl; std::cout << "sum = " << sum << std::endl; } Without std::sort(data, data + arraySize);, the code runs in 11.54 seconds. With the sorted data, the code runs in 1.93 seconds. Initially I thought this might be just a language or compiler anomaly. So I tried it Java... import java.util.Arrays; import java.util.Random; public class Main { public static void main(String[] args) { // generate data int arraySize = 32768; int data[] = new int[arraySize]; Random rnd = new Random(0); for (int c = 0; c < arraySize; ++c) data[c] = rnd.nextInt() % 256; // !!! with this, the next loop runs faster Arrays.sort(data); // test long start = System.nanoTime(); long sum = 0; for (int i = 0; i < 100000; ++i) { // primary loop for (int c = 0; c < arraySize; ++c) { if (data[c] >= 128) sum += data[c]; } } System.out.println((System.nanoTime() - start) / 1000000000.0); System.out.println("sum = " + sum); } } with a similar but less extreme result. My first thought was that sorting brings the data into cache, but my next thought was how silly that is because the array was just generated. What is going on? Why is a sorted array faster than an unsorted array? The code is summing up some independent terms, the order should not matter.
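
    The usual explanation is branch prediction: on sorted input the branch on data[c] >= 128 becomes almost perfectly predictable. One way to test that theory is to rewrite the inner loop without the branch, which should make the sorted and unsorted timings converge. A sketch (it relies on arithmetic right shift of a negative int, which mainstream compilers provide but the standard leaves implementation-defined):

        for (unsigned c = 0; c < arraySize; ++c)
        {
            // t is all ones when data[c] < 128, and zero otherwise.
            int t = (data[c] - 128) >> 31;
            sum += ~t & data[c];
        }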

    Read the article

  • Gradual memory leak in loop over contents of QTMovie

    - by Benji XVI
    I have a simple foundation tool that exports every frame of a movie as a .tiff file. Here is the relevant code: NSString* movieLoc = [NSString stringWithCString:argv[1]]; QTMovie *sourceMovie = [QTMovie movieWithFile:movieLoc error:nil]; int i=0; while (QTTimeCompare([sourceMovie currentTime], [sourceMovie duration]) != NSOrderedSame) { // save image of movie to disk NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init]; NSString *filePath = [NSString stringWithFormat:@"/somelocation_%d.tiff", i++]; NSData *currentImageData = [[sourceMovie currentFrameImage] TIFFRepresentation]; [currentImageData writeToFile:filePath atomically:NO]; NSLog(@"%@", filePath); [sourceMovie stepForward]; [arp release]; } [pool drain]; return 0; As you can see, in order to prevent very large memory buildups with the various transparently-autoreleased variables in the loop, we create, and flush, an autoreleasepool with every run through the loop. However, over the course of stepping through a movie, the amount of memory used by the program still gradually increases. Instruments is not detecting any memory leaks per se, but the object trace shows certain General Data blocks to be increasing in size. [Edited out reference to slowdown as it doesn't seem to be as much of a problem as I thought.] Edit: let's knock out some parts of the code inside the loop & see what we find out... Test 1 while (banana) { NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init]; NSString *filePath = [NSString stringWithFormat:@"/somelocation_%d.tiff", i++]; NSLog(@"%@", filePath); [sourceMovie stepForward]; [arp release]; } Here we simply loop over the whole movie, creating the filename and logging it. Memory characteristics: remains at 15MB usage for the duration. Test 2 while (banana) { NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init]; NSImage *image = [sourceMovie currentFrameImage]; [sourceMovie stepForward]; [arp release]; } Here we add back in the creation of the NSImage from the current frame. Memory characteristics: gradually increasing memory usage. RSIZE is at 60MB by frame 200; 75MB by f300. Test 3 while (banana) { NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init]; NSImage *image = [sourceMovie currentFrameImage]; NSData *imageData = [image TIFFRepresentation]; [sourceMovie stepForward]; [arp release]; } We've added back in the creation of an NSData object from the NSImage. Memory characteristics: Memory usage is again increasing: 62MB at f200; 75MB at f300. In other words, largely identical. It looks like a memory leak in the underlying system QTMovie uses to do currentFrameImage, to me.

    Read the article

  • How to handle ordering of @Rules when they are dependent on each other

    - by Lennart Schedin
    I use embedded servers that run inside Junit test cases. Sometimes these servers require a working directory (for example the Apache Directory server). The new @Rule in Junit 4.7 can handle these cases. The TemporaryFolder-Rule can create a temporary directory. A custom ExternalResource-Rule can be created for server. But how do I handle if I want to pass the result from one rule into another: import static org.junit.Assert.assertEquals; import java.io.*; import org.junit.*; import org.junit.rules.*; public class FolderRuleOrderingTest { @Rule public TemporaryFolder folder = new TemporaryFolder(); @Rule public MyNumberServer server = new MyNumberServer(folder); @Test public void testMyNumberServer() throws IOException { server.storeNumber(10); assertEquals(10, server.getNumber()); } /** Simple server that can store one number */ private static class MyNumberServer extends ExternalResource { private TemporaryFolder folder; /** The actual datafile where the number are stored */ private File dataFile; public MyNumberServer(TemporaryFolder folder) { this.folder = folder; } @Override protected void before() throws Throwable { if (folder.getRoot() == null) { throw new RuntimeException("TemporaryFolder not properly initialized"); } //All server data are stored to a working folder File workingFolder = folder.newFolder("my-work-folder"); dataFile = new File(workingFolder, "datafile"); } public void storeNumber(int number) throws IOException { dataFile.createNewFile(); DataOutputStream out = new DataOutputStream(new FileOutputStream(dataFile)); out.writeInt(number); } public int getNumber() throws IOException { DataInputStream in = new DataInputStream(new FileInputStream(dataFile)); return in.readInt(); } } } In this code the folder is sent as a parameter into the server so that the server can create a working directory to store data. However this does not work because Junit processes the rules in reverse order as they are defined in the file. The TemporaryFolder Rule will not be executed before the server Rule. Thus the root-folder in TempraryFolder will be null, resulting that any files are created relative to the current working directory. If I reverse the order of the attributes in my class I get a compile error because I cannot reference a variable before it is defined. I'm using Junit 4.8.1 (because the ordering of rules was fixed a bit from the 4.7 release)
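
    For reference, later JUnit releases (4.10 onwards, if an upgrade is possible) added RuleChain specifically for ordering dependent rules; a sketch of the two rules above chained so the folder is set up before the server starts:

        import org.junit.Rule;
        import org.junit.rules.RuleChain;
        import org.junit.rules.TemporaryFolder;
        import org.junit.rules.TestRule;

        public class FolderRuleOrderingTest {
            private TemporaryFolder folder = new TemporaryFolder();
            private MyNumberServer server = new MyNumberServer(folder);

            // The outer rule runs first, so the folder exists before the server's before().
            @Rule
            public TestRule chain = RuleChain.outerRule(folder).around(server);

            // ... test methods and MyNumberServer unchanged ...
        }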

    Read the article

  • Problems with blocking reads using libudev on Linux

    - by Steve Hawkins
    We are using the following routine (on Linux, with libudev) to read data from a PIC microcontroller configured as a USB HID device. The data is sent only when a button connected to the PIC microcontroller is pressed or released. The routine is missing messages from the PIC controller, and I suspect that this is because the call to poll below is not behaving as it should. The call to poll will reliably block for 1 second util the first message is read. As soon as the first message is read, the call to poll returns immediately instead of blocking for 1 second (1000 milliseconds) like it should. I have worked around this problem by closing and re-opening the device after each read. This makes poll behave correctly, but I think that closing and re-opening the device may be the reason for the lost messages. bool PicIo::Receive (unsigned char* picData, const size_t picDataSize) { static hiddev_report_info hidReportInfo; static hiddev_usage_ref_multi hidUsageRef; if (-1 == PicDeviceDescriptor()) { return false; } // Determine whether or not there is data available to be read pollfd pollFd; pollFd.fd = PicDeviceDescriptor(); pollFd.events = POLLIN; int dataPending = poll (&pollFd, 1, 1000); if (dataPending <= 0) { return false; } // Initialize the HID Report structure for an input report hidReportInfo.report_type = HID_REPORT_TYPE_INPUT; hidReportInfo.report_id = 0; hidReportInfo.num_fields = 64; if (-1 == ioctl(PicDeviceDescriptor(), HIDIOCGREPORT, &hidReportInfo)) { return false; } // Initizlize the HID Usage Reference for an Input report hidUsageRef.uref.report_type = HID_REPORT_TYPE_INPUT; hidUsageRef.uref.report_id = 0; hidUsageRef.uref.field_index = 0; hidUsageRef.uref.usage_index = 0; hidUsageRef.num_values = 64; if (-1 == ioctl(PicDeviceDescriptor(), HIDIOCGUSAGES, &hidUsageRef)) { return false; } // Transfer bytes from the usage report into the return value. for (size_t idx=0; (idx < 64) && (idx < picDataSize); ++idx) { picData[idx] = hidUsageRef.values[idx]; } return true; } The function PicDeviceDescriptor() does checking on the device to make sure that it is present. Here are the pertinent details of the PicDeviceDescriptor function, showing how the device is begin opened. int PicIo::PicDeviceDescriptor(int command) { struct stat statInfo; static int picDeviceDescriptor = -1; string picDevicePath = "/dev/usb/hiddev0"; if ((-1 != picDeviceDescriptor) && (CLOSE == command)) { close (picDeviceDescriptor); picDeviceDescriptor = -1; } else if ((-1 != picDeviceDescriptor) && (-1 == fstat(picDeviceDescriptor, &statInfo))) { // Handle the case where the PIC device had previously been detected, and // is now disconnected. close (picDeviceDescriptor); picDeviceDescriptor = -1; } else if ((-1 == picDeviceDescriptor) && (m_picDevice.IsConnected())) { // Create the PIC device descriptor if the PIC device is present (i.e. its // device node is present) and if the descriptor does not already exist picDeviceDescriptor = open (picDevicePath.c_str(), O_RDONLY); } return picDeviceDescriptor; } I'm sure that I'm doing something wrong, but I've Googled the issue and cannot seem to find any relevant answers. Any help would be very much appreciated -- Thx.
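
    One possibility worth checking (an assumption, not a confirmed diagnosis) is that nothing ever read()s the hiddev event queue; the HIDIOCGREPORT/HIDIOCGUSAGES ioctls query report state without consuming queued events, so the descriptor can stay readable and poll() returns immediately. A sketch of draining the queue after a successful poll, using struct hiddev_event from <linux/hiddev.h>:

        #include <linux/hiddev.h>
        #include <unistd.h>

        /* Consume the pending events so poll() blocks again until the
           PIC sends its next report. Call only after poll() reports POLLIN. */
        static void DrainPicEvents(int fd)
        {
            struct hiddev_event events[64];
            ssize_t bytesRead = read(fd, events, sizeof(events));
            (void)bytesRead;   /* field values remain available via HIDIOCGUSAGES */
        }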

    Read the article

  • Why does my C++ LinkedList method print out the last word more than once?

    - by Anthony Glyadchenko
    When I call the cmremoveNode method in my LinkedList from outside code, I get an EXC_BAD_ACCESS. FIXED: But now the last word using the following test code gets repeated twice: #include <iostream> #include "LinkedList.h" using namespace std; int main (int argc, char * const argv[]) { ctlinkList linkMe; linkMe.cminsertNode("The"); linkMe.cminsertNode("Cat"); linkMe.cminsertNode("Dog"); linkMe.cminsertNode("Cow"); linkMe.cminsertNode("Ran"); linkMe.cminsertNode("Pig"); linkMe.cminsertNode("Away"); linkMe.cmlistList(); cout << endl; linkMe.cmremoveNode("The"); linkMe.cmremoveNode("Cow"); linkMe.cmremoveNode("Away"); linkMe.cmlistList(); return 0; } LinkedList code: /* * LinkedList.h * Lab 6 * * Created by Anthony Glyadchenko on 3/22/10. * Copyright 2010 __MyCompanyName__. All rights reserved. * */ #include <stdio.h> #include <iostream> #include <fstream> #include <iomanip> using namespace std; class ctNode { friend class ctlinkList ; // friend class allowed to access private data private: string sfileWord ; // used to allocate and store input word int iwordCnt ; // number of word occurrances ctNode* ctpnext ; // point of Type Node, points to next link list element }; class ctlinkList { private: ctNode* ctphead ; // initialized by constructor public: ctlinkList () { ctphead = NULL ; } ctNode* gethead () { return ctphead ; } string cminsertNode (string svalue) { ctNode* ctptmpHead = ctphead ; if ( ctphead == NULL ) { // allocate new and set head ctptmpHead = ctphead = new ctNode ; ctphead -> ctpnext = NULL ; ctphead -> sfileWord = svalue ; } else { //find last ctnode do { if ( ctptmpHead -> ctpnext != NULL ) ctptmpHead = ctptmpHead -> ctpnext ; } while ( ctptmpHead -> ctpnext != NULL ) ; // fall thru found last node ctptmpHead -> ctpnext = new ctNode ; ctptmpHead = ctptmpHead -> ctpnext ; ctptmpHead -> ctpnext = NULL; ctptmpHead -> sfileWord = svalue ; } return ctptmpHead -> sfileWord ; } string cmreturnNode (string svalue) { return NULL; } string cmremoveNode (string svalue) { int counter = 0; ctNode *tmpHead = ctphead; if (ctphead == NULL) return NULL; while (tmpHead->sfileWord != svalue && tmpHead->ctpnext != NULL){ tmpHead = tmpHead->ctpnext; counter++; } do{ tmpHead->sfileWord = tmpHead->ctpnext->sfileWord; tmpHead = tmpHead->ctpnext; } while (tmpHead->ctpnext != NULL); return tmpHead->sfileWord; } string cmlistList () { string tempList; ctNode *tmpHead = ctphead; if (ctphead == NULL){ return NULL; } else{ while (tmpHead != NULL){ cout << tmpHead->sfileWord << " "; tempList += tmpHead->sfileWord; tmpHead = tmpHead -> ctpnext; } } return tempList; } }; Why is this happening?
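
    The repeated last word comes from cmremoveNode copying each sfileWord forward instead of unlinking a node, which leaves the old tail in place. A sketch of a removal that relinks the list and frees the node, using the same member names as the class above:

        string cmremoveNode (string svalue) {
            ctNode *cur = ctphead;
            ctNode *prev = NULL;

            // Walk the list looking for the first node holding svalue.
            while (cur != NULL && cur->sfileWord != svalue) {
                prev = cur;
                cur = cur->ctpnext;
            }
            if (cur == NULL)
                return "";                     // not found, list unchanged

            if (prev == NULL)
                ctphead = cur->ctpnext;        // removing the head
            else
                prev->ctpnext = cur->ctpnext;  // unlink from the middle or end

            string removed = cur->sfileWord;
            delete cur;                        // nodes were allocated with new
            return removed;
        }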

    Read the article

  • My mental block - struggling to learn Objective C

    - by iqessar
    Hello people, this would be my first question after signing up! Anyway heres my question, I did Java at university and I was always told I am a good programmer. However I never pursued it as a career - I went into support and management instead. Im pretty much bored with my job, I have therefore started to learn Objective C so that I can develop apps for the iphone. I am currently watching several different Videos / Books. My problem is that when I go through the Apple documentation, although I understand most of it, sometimes I stumble. I believe that because you/we have the Apple documentation (i.e. Framework references) , everything should be clear, and therefore you should have no need to refer to a book or video (in order to learn how to use a particular class). But I alway do refer to a book and video and subsequently feel guilty as I believe the framework reference should be enough. (I therefore feel I am not up to being a programmer) I also believe that you shouldn't need example code in order to learn how to use a particular class because Apple provides documentation for each class, but AGAIN I find my self googling example code and I find my answer like that - again I feel guilty for doing this. Am I right in saying that Apple documentation is simply not clear? and that its ok to refer to a video/book or google? or forums for that matter? I have proffesional programmers who tell me that I am worrying too much and that I should get on with it and use all the resources that I have. I just cant seem to get round this mental block that I have in my head. When I start a programming project I am able to use the excellent search skills that I have to find the code I need, copy and paste it (yes I do understand it) BUT then I feel guilty telling myself that why didn't you think up the code yourself???? Therefore your not a real programmer, your just good at googling. Currently I am going through 20+ books so that I can learn most of the frameworks, syntax etc to develop iphone apps. I believe if I do this, then when I think of a project I can make it quickly. Should I read a few books, like 2-3 and then just start a project /app , and if I get stuck just google it and get the code I need? Can anybody please answer my questions?

    Read the article

  • NullPointerException when accessing a RelativeLayout

    - by theblixguy
    I am trying to remove objects from my relative layout and replace the background with another image but I get a java.lang.NullPointerException on this line: RelativeLayout ths = (RelativeLayout)findViewById(R.layout.activity_main); Below is my code: package com.ssrij.qrmag; import android.app.Activity; import android.content.Intent; import android.net.Uri; import android.os.Bundle; import android.view.View; import android.view.animation.Animation; import android.view.animation.TranslateAnimation; import android.view.animation.Animation.AnimationListener; import android.widget.Button; import android.widget.RelativeLayout; import android.widget.TextView; public class MainActivity extends Activity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); // Initialize animations Animation a = new TranslateAnimation(1000,0,0,0); Animation a1 = new TranslateAnimation(1000,0,0,0); Animation a2 = new TranslateAnimation(1000,0,0,0); Animation a3 = new TranslateAnimation(1000,0,0,0); // Set animation durations (ms) a.setDuration(1200); a1.setDuration(1400); a2.setDuration(1600); a3.setDuration(1800); // Get a reference to the objects we want to apply the animation to final TextView v = (TextView)findViewById(R.id.textView1); final TextView v1 = (TextView)findViewById(R.id.textView2); final TextView v2 = (TextView)findViewById(R.id.TextView3); final Button v3 = (Button)findViewById(R.id.tap_scan); // Clear existing animations, just in case... v.clearAnimation(); v1.clearAnimation(); v2.clearAnimation(); v3.clearAnimation(); // Start animating v.startAnimation(a); v1.startAnimation(a1); v2.startAnimation(a2); v3.startAnimation(a3); a1.setAnimationListener(new AnimationListener() { @Override public void onAnimationStart(Animation animation) { // TODO Auto-generated method stub } @Override public void onAnimationRepeat(Animation animation) { // TODO Auto-generated method stub } @Override public void onAnimationEnd(Animation animation) { v.setVisibility(View.INVISIBLE); v1.setVisibility(View.INVISIBLE); v2.setVisibility(View.INVISIBLE); v3.setVisibility(View.INVISIBLE); RelativeLayout ths = (RelativeLayout)findViewById(R.layout.activity_main); ths.setBackgroundResource(R.drawable.blurbg); } }); } public void ScanQr(View v) { // Open the QR Scan page Intent a = new Intent(MainActivity.this, ScanActivity.class); startActivity(a); } } Is there anything that I am doing wrong?
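
    findViewById expects an R.id value, and R.layout.activity_main is a layout resource rather than a view id, so the lookup comes back null. A sketch of the usual fix, assuming the root RelativeLayout in activity_main.xml is given android:id="@+id/rootLayout":

        // res/layout/activity_main.xml, on the root element:
        //     android:id="@+id/rootLayout"

        RelativeLayout ths = (RelativeLayout) findViewById(R.id.rootLayout);
        ths.setBackgroundResource(R.drawable.blurbg);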

    Read the article

  • Cannot decode complete cipher list in .NET SslStream handshake.

    - by karmasponge
    While attempting to move from a 'C' based SSL implementation to C# using the .NET SslStream and we have run into what look like cipher compatibility issues with the .NET SslStream and a AS400 machine we are trying to connect to (which worked previously). When we call SslStream.AuthenticateAsClient it is sending the following: 16 03 00 00 37 01 00 00 33 03 00 4d 2c 00 ee 99 4e 0c 5d 83 14 77 78 5c 0f d3 8f 8b d5 e6 b8 cd 61 0f 29 08 ab 75 03 f7 fa 7d 70 00 00 0c 00 05 00 0a 00 13 00 04 00 02 00 ff 01 00 Which decodes as (based on http://www.mozilla.org/projects/security/pki/nss/ssl/draft302.txt) [16] Record Type [03 00] SSL Version [00 37] Body length [01] SSL3_MT_CLIENT_HELLO [00 00 33] Length (51 bytes) [03 00] Version number = 768 [4d 2c 00 ee] 4 Bytes unix time [… ] 28 Bytes random number [00] Session number [00 0c] 12 bytes (2 * 6 Cyphers)? [00 05, 00 0a, 00 13, 00 04, 00 02, 00 ff] - [RC4, PBE-MD5-DES, RSA, MD5, PKCS, ???] [01 00] Null compression method The as400 server responds back with: 15 03 00 00 02 02 28 [15] SSL3_RT_ALERT [03 00] SSL Version [00 02] Body Length (2 Bytes) [02 28] 2 = SSL3_RT_FATAL, 40 = SSL3_AD_HANDSHAKE_FAILURE I'm specifically looking to decode the '00 FF' at the end of the cyphers. Have I decoded it correctly? What does, if anything, '00 FF' decode too? I am using the following code to test/reproduce: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Net.Sockets; using System.Net.Security; using System.Security.Authentication; using System.IO; using System.Diagnostics; using System.Security.Cryptography.X509Certificates; namespace TestSslStreamApp { class DebugStream : Stream { private Stream AggregatedStream { get; set; } public DebugStream(Stream stream) { AggregatedStream = stream; } public override bool CanRead { get { return AggregatedStream.CanRead; } } public override bool CanSeek { get { return AggregatedStream.CanSeek; } } public override bool CanWrite { get { return AggregatedStream.CanWrite; } } public override void Flush() { AggregatedStream.Flush(); } public override long Length { get { return AggregatedStream.Length; } } public override long Position { get { return AggregatedStream.Position; } set { AggregatedStream.Position = value; } } public override int Read(byte[] buffer, int offset, int count) { int bytesRead = AggregatedStream.Read(buffer, offset, count); return bytesRead; } public override long Seek(long offset, SeekOrigin origin) { return AggregatedStream.Seek(offset, origin); } public override void SetLength(long value) { AggregatedStream.SetLength(value); } public override void Write(byte[] buffer, int offset, int count) { AggregatedStream.Write(buffer, offset, count); } } class Program { static void Main(string[] args) { const string HostName = "as400"; TcpClient tcpClient = new TcpClient(HostName, 992); SslStream sslStream = new SslStream(new DebugStream(tcpClient.GetStream()), false, null, null, EncryptionPolicy.AllowNoEncryption); sslStream.AuthenticateAsClient(HostName, null, SslProtocols.Ssl3, false); } } }

    Read the article

  • Can't access font resource in Silverlight class library

    - by Matt
    I have a reasonably large Silveright 3.0 project on the go, and I'm having issues accessing a couple of custom font resources from within one of the assemblies. I've got a working test solution where I have added a custom font as a resource, and can access it fine from XAML using: <TextBlock Text="Test" FontFamily="FontName.ttf#Font Name" /> The test solution consists of the TestProject.Application and the TestProject.Application.Web projects, with all the fun and games obviously in the TestProject.Application project However, when I try this in my main solution, the fonts refuse to show in the correct type face (instead showing in the default font). There's no difference in the way the font has been added to project between the test solution and the main solution, and the XAML is identical. However, there is a solution layout difference. In the main solution, as well as having a MainApp.Application and MainApp.Application.Web project, I also have a MainApp.Application.ViewModel project and a MainApp.Application.Views project, and the problem piece of XAML is the in the MainApp.Application.Views project (not the .Application project like the test solution). I've tried putting the font into either the .Application or .Application.Views project, tried changing the Build Action to Content, Embedded Resource etc, all to no avail. So, is there an issue accessing font resources from a child assembly that I don't know about, or has anyone successfully done this? My long term need will be to have the valid custom fonts being stored as resources in a separate .Application.FontLibrary assembly that will be on-demand downloaded and cached, and the XAML controls in the .Application.Views project will need to reference this FontLibrary assembly to get the valid fonts. I've also tried xcreating this separate font library assembly, and I can't seem to get the fonts from the second assembly. As some additional information, I've also tried the following font referencing approaches: <TextBlock Text="Test" FontFamily="/FontName.ttf#Font Name" /> <TextBlock Text="Test" FontFamily="pack:application,,,/FontName.ttf#Font Name" /> <TextBlock Text="Test" FontFamily="pack:application,,,/MainApp.Application.Views;/FontName.ttf#Font Name" /> <TextBlock Text="Test" FontFamily="pack:application,,,/MainApp.Application.Views;component/FontName.ttf#Font Name" /> And a few similar variants with different assembly references/sub directories/random semi colons. And so far nothing works... anyone struck this (and preferably solved it)?

    Read the article

  • Different cursor formats in IOFrameBufferShared

    - by Thomi
    Hi, I'm reading the moust cursor pixmap data from the StdFBShmem_t structure, as defined in the IOFrameBufferShared API. Everything works fine, 90% of the time. However, I have noticed that some applications on the mac set a cursor in a different format. According to the documentation for the data structures, the cursor pixmap format should always be in the same format as the frame buffer. My frame buffer is 32BPP. I expect the pixmap data to be in the format 0xAARRGGBB, which is it. However, in some cases, I'm reading data that looks like a mask. Specifically, the pixel will either be 0x00FFFFFF or `0x00000000. This looks to me to be a mask for separate pixel data stored somewhere else. As far as I can tell, the only application that uses this cursor pixel format is Qt Creator, but I need to work with all applications, so I'd like to sort this out. The code I'm using to read the cursor pixmap data is: NSAutoreleasePool *autoReleasePool = [[NSAutoreleasePool alloc] init]; NSPoint mouseLocation = [NSEvent mouseLocation]; NSArray *allScreens = [NSScreen screens]; NSEnumerator *screensEnum = [allScreens objectEnumerator]; NSScreen *screen; NSDictionary *screenDesc = nil; while ((screen = [screensEnum nextObject])) { NSRect screenFrame = [screen frame]; screenDesc = [screen deviceDescription]; if (NSMouseInRect(mouseLocation, screenFrame, NO)) break; } if (screen) { kern_return_t err; CGDirectDisplayID displayID = (CGDirectDisplayID) [[screenDesc objectForKey:@"NSScreenNumber"] pointerValue]; task_port_t taskPort = mach_task_self(); io_service_t displayServicePort = CGDisplayIOServicePort(displayID); io_connect_t displayConnection =0; err = IOFramebufferOpen(displayServicePort, taskPort, kIOFBSharedConnectType, &displayConnection); if (KERN_SUCCESS == err) { union { vm_address_t vm_ptr; StdFBShmem_t *fbshmem; } cursorInfo; vm_size_t size; err = IOConnectMapMemory(displayConnection, kIOFBCursorMemory, taskPort, &cursorInfo.vm_ptr, &size, kIOMapAnywhere | kIOMapDefaultCache | kIOMapReadOnly); if (KERN_SUCCESS == err) { // for some reason, cursor data is not always in the same format as the frame buffer. For this reason, we need // some way to detect which structure we should be reading. QByteArray pixData((const char*)cursorInfo.fbshmem->cursor.rgb24.image[currentFrame], m_mouseInfo.currentSize.width() * m_mouseInfo.currentSize.height() * 4); IOConnectUnmapMemory(displayConnection, kIOFBCursorMemory, taskPort, cursorInfo.vm_ptr); } // IOConnectMapMemory else qDebug() << "IOConnectMapMemory Failed:" << err; IOServiceClose(displayConnection); } // IOServiceOpen else qDebug() << "IOFramebufferOpen Failed:" << err; }// if screen [autoReleasePool release]; My question is: How can I detect if the cursor is a different format from the framebuffer? Where can I read the actual pixel data? the bm18Cursor structure contains a mask section, but it's not in the right place for me to be reading it using the code above. Cheers,

    Read the article

  • Regarding SQLite database connectivity on the iPhone

    - by Prash.......
    hi.. I am developing an application in iphone in which i have to do the database connectivity using sqlite. I have made a "clsDBManage" class which contains 4 functions open_database, close_database, is_database_open, getdatabase_connection, which are globally defined ,i have a screen called "user details" in which i am filling the user details such as name , mobileno , email-id , password, i want that when user feeds the info it should get connected with database and store the details entered and should authenticate the user the next time he logs in. (Just as we have yahoo login,gmail login screen) clsDBManage.h +(int) openDBConnection; +(int) closeDBConnection; +(BOOL) IsDatabaseOpen; +(sqlite3 *)getDBConnection; clsDBManage.m import "clsDBManage.h" import "Types.h" sqlite3 *DBConnection=nil; @implementation clsDBManage +(int) openDBConnection { NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *documentsDirectory = [paths objectAtIndex:0]; NSString *path = [documentsDirectory stringByAppendingPathComponent:@"iphone.sqlite"]; //Before if ([self IsDatabaseOpen] == YES) [self closeDBConnection]; // Open the database. The database was prepared outside the application. if (sqlite3_open([path UTF8String], &DBConnection ) == SQLITE_OK) { ifdef _DEBUG NSLog(@"Database Successfully Opened :)"); endif return CON_RET_SUCCESSFUL; } else { ifdef _DEBUG NSLog(@"Error in opening database :("); endif return CON_RET_ERROR; } } +(BOOL) IsDatabaseOpen { if(DBConnection != nil) { //add if condition to check database is in open state or close state return YES; } else { return NO; } } +(int) closeDBConnection { @try { if(DBConnection != nil) { sqlite3_close(DBConnection); DBConnection = nil; return CON_RET_SUCCESSFUL; } else { return CON_RET_ERROR; } } @catch (NSException * e) { NSLog([e reason]); } } +(sqlite3 *)getDBConnection { if ([self openDBConnection] == 1) return DBConnection; else return nil; } @end //classDBManage file is my handler class where i have written code to open database //my addprofile functions -(NSInteger)addProfile:(NSString *)mobileno:(NSString *)country:(NSString *)username:(NSString *)screenname:(NSString *)emailid:(NSString *)password:(NSString *)retypepassword { @try { sqlite3 *db =[clsDBManage getDBConnection]; if (db != nil) { sqlite3_stmt *statement =nil; const char *sql = "insert into SPCaccountdetails(MobileNo , Country , UserName , ScreenName, EmailId, Password, RetypePawssword) Values(?,?,?,?,?,?,?)"; if(sqlite3_prepare_v2(db, sql, -1, &statement, NULL) != SQLITE_OK) { //NSAssert1(0, @"Error while creating add statement. '%s'", sqlite3_errmsg(db)); } sqlite3_bind_text(statement, 1, [mobileno UTF8String], -1, SQLITE_TRANSIENT); sqlite3_bind_text(statement, 2, [country UTF8String], -1, SQLITE_TRANSIENT); sqlite3_bind_text(statement, 3, [username UTF8String], -1, SQLITE_TRANSIENT); sqlite3_bind_text(statement, 4, [screenname UTF8String], -1, SQLITE_TRANSIENT); sqlite3_bind_text(statement, 5, [emailid UTF8String], -1, SQLITE_TRANSIENT); sqlite3_bind_text(statement, 6, [password UTF8String], -1, SQLITE_TRANSIENT); sqlite3_bind_text(statement, 7, [retypepassword UTF8String], -1, SQLITE_TRANSIENT); if(SQLITE_DONE != sqlite3_step(statement)) { NSAssert1(0, @"Error while inserting data. 
'%s'", sqlite3_errmsg(db)); } sqlite3_finalize(statement); } } @catch(NSException *e) { //[clsMessageBox ShowMessageOK:@CON_MESSAGE_TITLE_MESSAGE_USER_PROFILE :[e reason]]; } return CON_RET_SUCCESSFUL; } //button click event where i am calling addprofile function;_ [self addProfile:txtMobile :txtCountry :txtName :txtScreenname :txtemailid :txtpassword :txtretypepassword];

    Read the article

  • Virtual member call in a constructor when assigning value to property

    - by comecme
    I have an Abstract class and a Derived class. The abstract class defines an abstract property named Message. In the derived class, the property is implemented by overriding the abstract property. The constructor of the derived class takes a string argument and assigns it to its Message property. In Resharper, this assignment leads to a warning "Virtual member call in constructor". The AbstractClass has this definition: public abstract class AbstractClass { public abstract string Message { get; set; } protected AbstractClass() { } public abstract void PrintMessage(); } And the DerivedClass is as follows: using System; public class DerivedClass : AbstractClass { private string _message; public override string Message { get { return _message; } set { _message = value; } } public DerivedClass(string message) { Message = message; // Warning: Virtual member call in a constructor } public DerivedClass() : this("Default DerivedClass message") {} public override void PrintMessage() { Console.WriteLine("DerivedClass PrintMessage(): " + Message); } } I did find some other questions about this warning, but in those situations there is an actual call to a method. For instance, in this question, the answer by Matt Howels contains some sample code. I'll repeat it here for easy reference. class Parent { public Parent() { DoSomething(); } protected virtual void DoSomething() {}; } class Child : Parent { private string foo; public Child() { foo = "HELLO"; } protected override void DoSomething() { Console.WriteLine(foo.ToLower()); } } Matt doesn't describe on what error the warning would appear, but I'm assuming it will be on the call to DoSomething in the Parent constructor. In this example, I understand what is meant by a virtual member being called. The member call occurs in the base class, in which only a virtual method exists. In my situation however, I don't see why assigning a value to Message would be calling a virtual member. Both the call to and the implementation of the Message property are defined in the derived class. Although I can get rid of the error by making my Derived Class sealed, I would like to understand why this situation is resulting in the warning.
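
    The warning appears because Message is virtual and DerivedClass is not sealed: a class deriving further from DerivedClass could override the setter, and that override would run before the subclass's own constructor body had executed. Besides sealing the class, the other common way out is to assign the backing field directly in the constructor, a sketch of which follows:

        public DerivedClass(string message)
        {
            // Writing the field avoids dispatching through the overridable setter
            // while a more-derived object might not be fully constructed yet.
            _message = message;
        }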

    Read the article

  • Continuous Form, how to add/update records with external connection

    - by Mohgeroth
    EDIT After some more research I found that I cannot use a continuous form with an unbound form since it can only reference a single record at a time. Given that I've altered my question... I have a sample form that pulls out data to enter into a table as an intermediary. Initially the form is unbound and I open connections to two main recordsets. I set the listbox's recordset equal to one of them and the forms recordset equal to the other. The problem is that I cannot add records or update existing ones. Attempting to key into the fields does nothing almost as if the field was locked (Which it is not). Settings of the recordsets are OpenKeyset and LockPessimistic. Tables are not linked, they come from an outside access database seperate from this project and must remain that way. I am using an adodb connection to get the data. Could the separation of the data from the project be causing this? Sample Code from the Form Option Compare Database Option Explicit Private conn As CRobbers_Connections Private exception As CError_Trapping Private mClient_Translations As ADODB.Recordset Private mUnmatched_Clients As ADODB.Recordset Private mExcluded_Clients As ADODB.Recordset //Construction Private Sub Form_Open(Cancel As Integer) Set conn = New CRobbers_Connections Set exception = New CError_Trapping Set mClient_Translations = New ADODB.Recordset Set mUnmatched_Clients = New ADODB.Recordset Set mExcluded_Clients = New ADODB.Recordset mClient_Translations.Open "SELECT * FROM Client_Translation" _ , conn.RBRS_Conn, adOpenKeyset, adLockPessimistic mUnmatched_Clients.Open "SELECT DISTINCT(a.Client) as Client" _ & " FROM Master_Projections a " _ & " WHERE Client NOT IN ( " _ & " SELECT DISTINCT ClientID " _ & " FROM Client_Translation);" _ , conn.RBRS_Conn, adOpenKeyset, adLockPessimistic mExcluded_Clients.Open "SELECT * FROM Clients_Excluded" _ , conn.RBRS_Conn, adOpenKeyset, adLockPessimistic End Sub //Add new record to the client translations Private Sub cmdAddNew_Click() If lstUnconfirmed <> "" Then AddRecord End If End Sub Private Function AddRecord() With mClient_Translations .AddNew .Fields("ClientID") = Me.lstUnconfirmed .Fields("ClientAbbr") = Me.txtTmpShort .Fields("ClientName") = Me.txtTmpLong .Update End With UpdateRecords End Function Private Function UpdateRecords() Me.lstUnconfirmed.Requery End Function //Load events (After construction) Private Sub Form_Load() Set lstUnconfirmed.Recordset = mUnmatched_Clients //Link recordset into listbox Set Me.Recordset = mClient_Translations End Sub //Destruction method Private Sub Form_Close() Set conn = Nothing Set exception = Nothing Set lstUnconfirmed.Recordset = Nothing Set Me.Recordset = Nothing Set mUnmatched_Clients = Nothing Set mExcluded_Clients = Nothing Set mClient_Translations = Nothing End Sub

    Read the article

  • Why is my namespace not recognized in Visual Studio / XAML?

    - by msfanboy
    Hello, these are my 2 classes a Attachable Property SelectedItems: code is from here: http://stackoverflow.com/questions/1297643/sync-selecteditems-in-a-muliselect-listbox-with-a-collection-in-viewmodel The namespace TBM.Helper is for sure proper as it works for other classes too. The namespace reference is also in the xaml file: xmlns:Helper="clr_namespace:TBM.Helper" But <ListBox Helper:SelectedItems.Items="{Binding SelectedItems}" ... does not work because = The property 'SelectedItems.Items' does not exist in XML namespace 'clr_namespace:TBM.Helper'. The attachable property 'Items' was not found in type 'SelectedItems What do I have to change ? using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Windows.Controls; using System.Collections; using System.Windows; namespace TBM.Helper { public static class SelectedItems : DependencyObject { private static readonly DependencyProperty SelectedItemsBehaviorProperty = DependencyProperty.RegisterAttached( "SelectedItemsBehavior", typeof(SelectedItemsBehavior), typeof(ListBox), null); public static readonly DependencyProperty ItemsProperty = DependencyProperty.RegisterAttached( "Items", typeof(IList), typeof(SelectedItems), new PropertyMetadata(null, ItemsPropertyChanged)); public static void SetItems(ListBox listBox, IList list) { listBox.SetValue(ItemsProperty, list); } public static IList GetItems(ListBox listBox) { return listBox.GetValue(ItemsProperty) as IList; } private static void ItemsPropertyChanged(DependencyObject d, DependencyPropertyChangedEventArgs e) { var target = d as ListBox; if (target != null) { GetOrCreateBehavior(target, e.NewValue as IList); } } private static SelectedItemsBehavior GetOrCreateBehavior(ListBox target, IList list) { var behavior = target.GetValue(SelectedItemsBehaviorProperty) as SelectedItemsBehavior; if (behavior == null) { behavior = new SelectedItemsBehavior(target, list); target.SetValue(SelectedItemsBehaviorProperty, behavior); } return behavior; } } } using System.Windows; using System.Windows.Controls; using System.Collections; namespace TBM.Helper { public class SelectedItemsBehavior { private readonly ListBox _listBox; private readonly IList _boundList; public SelectedItemsBehavior(ListBox listBox, IList boundList) { _boundList = boundList; _listBox = listBox; SetSelectedItems(); _listBox.SelectionChanged += OnSelectionChanged; _listBox.DataContextChanged += OnDataContextChanged; } private void SetSelectedItems() { _listBox.SelectedItems.Clear(); foreach (object item in _boundList) { // References in _boundList might not be the same as in _listBox.Items int i = _listBox.Items.IndexOf(item); if (i >= 0) _listBox.SelectedItems.Add(_listBox.Items[i]); } } private void OnDataContextChanged(object sender, DependencyPropertyChangedEventArgs e) { SetSelectedItems(); } private void OnSelectionChanged(object sender, SelectionChangedEventArgs e) { _boundList.Clear(); foreach (var item in _listBox.SelectedItems) _boundList.Add(item); } } }
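
    One detail worth checking before anything else: the mapping shown above uses an underscore (clr_namespace), but the XAML namespace syntax is clr-namespace with a hyphen, which would explain the "does not exist in XML namespace" error. A sketch of the corrected mapping, assuming the helper lives in the same assembly as the view:

        xmlns:Helper="clr-namespace:TBM.Helper"

        <ListBox Helper:SelectedItems.Items="{Binding SelectedItems}" ... />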

    Read the article

  • Best way to run remote VBScript in ASP.NET? WMI or PsExec?

    - by envinyater
    I am doing some research to find out the best and most efficient method for this. I will need to execute remote scripts on a number of Window Servers/Computers (while we are building them). I have a web application that is going to automate this task, I currently have my prototype working to use PsExec to execute remote scripts. This requires PsExec to be installed on the system. A colleague suggested I should use WMI for this. I did some research in WMI and I couldn't find what I'm looking for. I want to either upload the script to the server and execute it and read the results, or already have the script on the server and execute it and read the results. I would prefer the first option though! Which is more ideal, PsExec or WMI? For reference, this is my prototype PsExec code. This script is only executing a small script to get the Windows OS and Service Pack Info. Protected Sub windowsScript(ByVal COMPUTERNAME As String) ' Create an array to store VBScript results Dim winVariables(2) As String nameLabel.Text = Name.Text ' Use PsExec to execute remote scripts Dim Proc As New System.Diagnostics.Process ' Run PsExec locally Proc.StartInfo = New ProcessStartInfo("C:\Windows\psexec.exe") ' Pass in following arguments to PsExec Proc.StartInfo.Arguments = COMPUTERNAME & " -s cmd /C c:\systemInfo.vbs" Proc.StartInfo.RedirectStandardInput = True Proc.StartInfo.RedirectStandardOutput = True Proc.StartInfo.UseShellExecute = False Proc.Start() ' Pause for script to run System.Threading.Thread.Sleep(1500) Proc.Close() System.Threading.Thread.Sleep(2500) 'Allows the system a chance to finish with the process. Dim filePath As String = COMPUTERNAME & "\TTO\somefile.txt" 'Download file created by script on Remote system to local system My.Computer.Network.DownloadFile(filePath, "C:\somefile.txt") System.Threading.Thread.Sleep(1000) ' Pause so file gets downloaded ''Import data from text file into variables textRead("C:\somefile.txt", winVariables) WindowsOSLbl.Text = winVariables(0).ToString() SvcPckLbl.Text = winVariables(1).ToString() System.Threading.Thread.Sleep(1000) ' ''Delete the file on server - we don't need it anymore Dim Proc2 As New System.Diagnostics.Process Proc2.StartInfo = New ProcessStartInfo("C:\Windows\psexec.exe") Proc2.StartInfo.Arguments = COMPUTERNAME & " -s cmd /C del c:\somefile.txt" Proc2.StartInfo.RedirectStandardInput = True Proc2.StartInfo.RedirectStandardOutput = True Proc2.StartInfo.UseShellExecute = False Proc2.Start() System.Threading.Thread.Sleep(500) Proc2.Close() ' Delete file locally File.Delete("C:\somefile.txt") End Sub
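
    If WMI is chosen, the rough equivalent of the PsExec launch is Win32_Process.Create via System.Management; note that WMI starts the process remotely but cannot stream its output back, so the write-to-file-and-download step above would still be needed. A sketch (it assumes the script already exists on the target machine):

        Imports System.Management

        Protected Sub RunRemoteScript(ByVal COMPUTERNAME As String)
            Dim scope As New ManagementScope("\\" & COMPUTERNAME & "\root\cimv2")
            scope.Connect()

            Dim processClass As New ManagementClass(scope, New ManagementPath("Win32_Process"), New ObjectGetOptions())
            Dim inParams As ManagementBaseObject = processClass.GetMethodParameters("Create")
            inParams("CommandLine") = "cscript.exe //B c:\systemInfo.vbs"

            ' ReturnValue 0 means the process was created on the remote machine.
            Dim outParams As ManagementBaseObject = processClass.InvokeMethod("Create", inParams, Nothing)
            Dim returnCode As UInteger = Convert.ToUInt32(outParams("ReturnValue"))
        End Sub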

    Read the article

  • Using empty row as default in a ComboBox with Style "DropDownList"?

    - by Pesche Helfer
    Hi board I am trying to write a method, that takes a ComboBox, a DataTable and a TextBox as arguments. The purpose of it is to filter the members displayed in the ComboBox according to the TextBox.Text. The DataTable contains the entire list of possible entries that will then be filtered. For filtering, I create a DataView of the DataTable, add a RowFilter and then bind this View to the ComboBox as DataSource. To prevent the user from typing into the ComboBox, I choose the DropDownStyle DropDownList. That’s working fine so far, except that the user should also be able to choose nothing / empty line. In fact, this should be the default member to be displayed (to prevent choosing a wrong member by accident, if the user clicks through the dialog too fast). I tried to solve this problem by adding a new Row to the view. While this works for some cases, the main issue here is that any DataTable can be passed to the method. If the DataTable contains columns that cannot be null and don’t contain a default value, I suppose I will raise an error by adding an empty row. A possibility would be to create a view that contains only the column that is defined as DisplayMember, and the one that is defined as ValueMember. Alas, this can’t be done with a view in C#. I would like to avoid creating a true copy of the DataTable at all cost, since who knows how big it will get with time. Do you have any suggestions how to get around this problem? Instead of a view, could I create an object containing two members and assign the DisplayMember and the ValueMember to these members? Would the members be passed as reference (what I hope) or would true copied be created (in which case it would not be a solution)? Thank you very much for your help! Best regards public static void ComboFilter(ComboBox cb, DataTable dtSource, TextBox filterTextBox) { cb.DropDownStyle = ComboBoxStyle.DropDownList; string displayMember = cb.DisplayMember; DataView filterView = new DataView(dtSource); filterView.AddNew(); filterView.RowFilter = displayMember + " LIKE '%" + filterTextBox.Text + "%'"; cb.DataSource = filterView; }
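
    If the worry is copying the whole table, DataView.ToTable can project just the two columns the ComboBox needs, so the copy stays narrow no matter how wide dtSource grows. A sketch along the lines of the method above (it assumes DisplayMember and ValueMember are both set and that those columns tolerate the empty default row):

        public static void ComboFilter(ComboBox cb, DataTable dtSource, TextBox filterTextBox)
        {
            cb.DropDownStyle = ComboBoxStyle.DropDownList;
            string displayMember = cb.DisplayMember;
            string valueMember = cb.ValueMember;

            // Copy only the display and value columns, not every column in dtSource.
            DataTable narrow = dtSource.DefaultView.ToTable(false, displayMember, valueMember);

            DataView filterView = narrow.DefaultView;
            filterView.RowFilter = displayMember + " LIKE '%" + filterTextBox.Text + "%'";
            filterView.AddNew();           // the empty "nothing selected" entry

            cb.DataSource = filterView;
        }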

    Read the article

  • Android HelloGoogleMaps to OSMdroid (Open Street Maps)

    - by birgit
    I am trying to reproduce a working HelloGoogleMaps app in Open Street Maps - but I have trouble including the itemized overlay in OSMdroid. I have looked at several resources but I cannot figure out how to fix the error on OsmItemizedOverlay - I guess I am constructing OsmItemizedOverlay wrongly or have a mixup with OsmItemizedOverlay and ItemizedOverlay? But everything I tried to change just raised more errors... "Implicit super constructor ItemizedOverlay() is undefined. Must explicitly invoke another constructor" "Cannot make a static reference to the non-static method setMarker(Drawable) from the type OverlayItem" - I hope someone can help me getting the class definition straight? Thanks so much! package com.example.osmdroiddemomap; import java.util.ArrayList; import android.app.AlertDialog; import android.content.Context; import android.graphics.Point; import android.graphics.drawable.Drawable; import org.osmdroid.api.IMapView; import org.osmdroid.views.*; import org.osmdroid.views.overlay.*; import org.osmdroid.views.overlay.OverlayItem.HotspotPlace; public class OsmItemizedOverlay extends ItemizedOverlay<OverlayItem> { Context mContext; private ArrayList<OverlayItem> mOverlays = new ArrayList<OverlayItem>(); //ERRORS are raised by the following 3 lines: public OsmItemizedOverlay(Drawable defaultMarker, Context context) { OverlayItem.setMarker(defaultMarker); OverlayItem.setMarkerHotspot(HotspotPlace.CENTER); mContext = context; } public void addOverlay(OverlayItem overlay) { mOverlays.add(overlay); populate(); } @Override protected OverlayItem createItem(int i) { return mOverlays.get(i); } @Override public int size() { return mOverlays.size(); } protected boolean onTap(int index) { OverlayItem item = mOverlays.get(index); AlertDialog.Builder dialog = new AlertDialog.Builder(mContext); dialog.setTitle(item.getTitle()); dialog.setMessage(item.getSnippet()); dialog.show(); return true; } @Override public boolean onSnapToItem(int arg0, int arg1, Point arg2, IMapView arg3) { // TODO Auto-generated method stub return false; } }

    Read the article

  • Memory Leak with Swing Drag and Drop

    - by tom
    I have a JFrame that accepts top-level drops of files. However after a drop has occurred, references to the frame are held indefinitely inside some Swing internal classes. I believe that disposing of the frame should release all of its resources, so what am I doing wrong? Example import java.awt.datatransfer.DataFlavor; import java.io.File; import java.util.List; import javax.swing.JFrame; import javax.swing.JLabel; import javax.swing.TransferHandler; public class DnDLeakTester extends JFrame { public static void main(String[] args) { new DnDLeakTester(); //Prevent main from returning or the jvm will exit while (true) { try { Thread.sleep(10000); } catch (InterruptedException e) { } } } public DnDLeakTester() { super("I'm leaky"); add(new JLabel("Drop stuff here")); setTransferHandler(new TransferHandler() { @Override public boolean canImport(final TransferSupport support) { return (support.isDrop() && support .isDataFlavorSupported(DataFlavor.javaFileListFlavor)); } @Override public boolean importData(final TransferSupport support) { if (!canImport(support)) { return false; } try { final List<File> files = (List<File>) support.getTransferable().getTransferData(DataFlavor.javaFileListFlavor); for (final File f : files) { System.out.println(f.getName()); } } catch (Exception e) { e.printStackTrace(); } return true; } }); setDefaultCloseOperation(DISPOSE_ON_CLOSE); pack(); setVisible(true); } } To reproduce, run the code and drop some files on the frame. Close the frame so it's disposed of. To verify the leak I take a heap dump using JConsole and analyse it with the Eclipse Memory Analysis tool. It shows that sun.awt.AppContext is holding a reference to the frame through its hashmap. It looks like TransferSupport is at fault. What am I doing wrong? Should I be asking the DnD support code to clean itself up somehow? I'm running JDK 1.6 update 19.

    Read the article

  • Ajax problem after deleting a div, then creating a new one

    - by Matt Nathanson
    I'm building a custom CMS where you can add and delete clients using ajax and jquery 1.4.2. My problem lies after I delete a div. The ajax is used to complete this and refresh automatically.. But when I go to create a new div (without a hard refresh) it puts it back in the slot of the div I just deleted. How can I get this to completely forget about the div i just deleted and place the new div in the next database table? link for reference: http://staging.sneakattackmedia.com/cms/ //Add New client // function AddNewClient() { dataToLoad = 'addClient=yes'; $.ajax({ type: 'post', url: '/clients/controller.php', datatype: 'html', data: dataToLoad, target: ('#clientssidebar'), async: false, success: function(html){ $(this).click(function() {reInitialize()}); //$('#clientssidebar').html(html); $('div#' + clientID).slideDown(800); $(this).click(function() { ExpandSidebar()});}, error: function() { alert('An error occured! 222');} });}; //Delete Client // function DeleteClient(){ var yes = confirm("Whoa there chief! Do you really want to DELETE this client?"); if (yes == 1) { dataToLoad = 'clientID=' + clientID + '&deleteClient=yes', $.ajax({ type: 'post', url: '/clients/controller.php', datatype: 'html', data: dataToLoad, success: function(html) { alert('Client' + clientID + ' should have been deleted from the database.'); $(this).click(function() {reInitialize()}); $('div#' +clientID).slideUp(800); }, error: function() { alert('error'); }});};}; //Re Initialize // function reInitialize() { $('#addnew').click(function() {AddNewClient()}); $('.deletebutton').click(function() {clientID = $(this).parent().attr('id'); DeleteClient()}) $('.clientblock').click(function() {clientID = $(this).attr('id'); ExpandSidebar()});}; //Document Ready // $(document).ready(function(){ if ($('isCMS')){ editCMS = 1; $('.deletebutton').click(function() {clientID = $(this).parent().attr('id'); DeleteClient()}); $('#addnew').click(function() {AddNewClient()}); $('.clientblock').click(function() {clientID = $(this).attr('id'); ExpandSidebar()}); $('.clientblock').click(function() {if (clickClient ==true) { $(this).css('background-image', 'url(/images/highlightclient.png)'); $(this).css('margin-left' , '30px'); }; $(this).click(function(){ $(this).css('background-image', ''); }); $('.uploadbutton').click(function(){UploadThings()}); }); } else ($('#clientscontainer')) { $('#editbutton').css('display', 'none'); }; }); Please help!!!
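
    The stale slot suggests the sidebar markup is never refreshed after the delete, so the next add renders into DOM that no longer matches the database. A sketch of the AddNewClient success handler re-rendering the sidebar from the response (it assumes controller.php returns the updated sidebar HTML, which the commented-out line already hints at):

        success: function(html) {
            // Replace the sidebar with the server's current view of the clients,
            // then re-bind the click handlers on the fresh elements.
            $('#clientssidebar').html(html);
            reInitialize();
            $('div#' + clientID).slideDown(800);
        },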

    Read the article

  • Are there supposed to be more restrictions on operator->* overloads?

    - by Potatoswatter
    I was perusing section 13.5 after refuting the notion that built-in operators do not participate in overload resolution, and noticed that there is no section on operator->*. It is just a generic binary operator. Its brethren, operator->, operator*, and operator[], are all required to be non-static member functions. This precludes definition of a free function overload to an operator commonly used to obtain a reference from an object. But the uncommon operator->* is left out. In particular, operator[] has many similarities. It is binary (they missed a golden opportunity to make it n-ary), and it accepts some kind of container on the left and some kind of locator on the right. Its special-rules section, 13.5.5, doesn't seem to have any actual effect except to outlaw free functions. (And that restriction even precludes support for commutativity!) So, for example, this is perfectly legal (in C++0x, remove obvious stuff to translate to C++03): #include <utility> #include <iostream> #include <type_traits> using namespace std; template< class F, class S > typename common_type< F,S >::type operator->*( pair<F,S> const &l, bool r ) { return r? l.second : l.first; } template< class T > T & operator->*( pair<T,T> &l, bool r ) { return r? l.second : l.first; } template< class T > T & operator->*( bool l, pair<T,T> &r ) { return l? r.second : r.first; } int main() { auto x = make_pair( 1, 2.3 ); cerr << x->*false << " " << x->*4 << endl; auto y = make_pair( 5, 6 ); y->*(0) = 7; y->*0->*y = 8; // evaluates to 7->*y = y.second cerr << y.first << " " << y.second << endl; } I can certainly imagine myself giving into temp[la]tation. For example, scaled indexes for vector: v->*matrix_width[5] = x; Did the standards committee forget to prevent this, was it considered too ugly to bother, or are there real-world use cases?

    Read the article

  • Could not load file or assembly 'Base' or one of its dependencies. Access is denied

    - by starcorn
I have deployed a web project to the Azure emulator and I get this error saying that it could not load file or assembly 'Base'. The thing is that this web project depends on another project in the same solution, and I have added that project to my web project's reference list. If I run the web application without the Azure emulator it runs fine, but I get this error when I try to run it on the Azure emulator. At first glance I thought I might need to add the other project as a role as well, but that can't be it. Does anyone know what the problem might be? I hope I have included enough data for you to look into.

My solution structure looks like the following:

Solution
    Base
    WebAPI
    WebAPI.Azure

It is the WebAPI project that has a dependency on the Base project.

Here's the assembly load trace:

WRN: Assembly binding logging is turned OFF.
To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1.
Note: There is some performance penalty associated with assembly bind failure logging.
To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog].

And the stack trace:

[FileLoadException: Could not load file or assembly 'Base' or one of its dependencies. Access is denied.]
   System.Reflection.RuntimeAssembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks) +0
   System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection, Boolean suppressSecurityChecks) +567
   System.Reflection.RuntimeAssembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) +192
   System.Reflection.Assembly.Load(String assemblyString) +35
   System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) +123

[ConfigurationErrorsException: Could not load file or assembly 'Base' or one of its dependencies. Access is denied.]
   System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) +11568160
   System.Web.Configuration.CompilationSection.LoadAllAssembliesFromAppDomainBinDirectory() +485
   System.Web.Configuration.AssemblyInfo.get_AssemblyInternal() +79
   System.Web.Compilation.BuildManager.GetReferencedAssemblies(CompilationSection compConfig) +337
   System.Web.Compilation.BuildManager.CallPreStartInitMethods() +280
   System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters, PolicyLevel policyLevel, Exception appDomainCreationException) +1167

[HttpException (0x80004005): Could not load file or assembly 'Base' or one of its dependencies. Access is denied.]
   System.Web.HttpRuntime.FirstRequestInit(HttpContext context) +11700896
   System.Web.HttpRuntime.EnsureFirstRequestInit(HttpContext context) +141
   System.Web.HttpRuntime.ProcessRequestNotificationPrivate(IIS7WorkerRequest wr, HttpContext context) +4869125

    Read the article

  • Flex/Air/AS3 Selecting and Populating an Unfocused Tab

    - by Deyon
I'm having a problem displaying data from a function in a text box within a tab. If you run the code and click "Select Tab 2 and Fill..." I get an error: "TypeError: Error #1009: Cannot access a property or method of a null object reference." I'm guessing this is because "Tab 2" has not been rendered yet. Now, if I run the code, select "Tab 2", then select "Tab 1" and click "Select Tab 2 and Fill...", it works the way I would like. Does anyone know a way around this problem?

---- Full Flex 4/Flash Builder code, just copy and paste ----

<?xml version="1.0" encoding="utf-8"?>
<s:WindowedApplication xmlns:fx="http://ns.adobe.com/mxml/2009"
                       xmlns:s="library://ns.adobe.com/flex/spark"
                       xmlns:mx="library://ns.adobe.com/flex/halo"
                       creationComplete=" ">
    <fx:Script>
        <![CDATA[
            public function showtab2():void
            {
                mytextbox.text = "I made it!";
                tn.selectedIndex = 1;
            }
        ]]>
    </fx:Script>
    <fx:Declarations>
        <!-- Place non-visual elements (e.g., services, value objects) here -->
    </fx:Declarations>
    <mx:Panel title="TabNavigator Container Example" height="90%" width="90%"
              paddingTop="10" paddingLeft="10" paddingRight="10" paddingBottom="10">
        <mx:Label width="100%" color="blue" text="Select the tabs to change the panel."/>
        <mx:TabNavigator id="tn" width="100%" height="100%">
            <!-- Define each panel using a VBox container. -->
            <mx:VBox label="Panel 1">
                <mx:Label text="TabNavigator container panel 1"/>
                <mx:Button label="Select Tab 2 and Fill with Text" click="showtab2()"/>
            </mx:VBox>
            <mx:VBox label="Panel 2">
                <mx:Label text="TabNavigator container panel 2"/>
                <s:TextInput id="mytextbox" />
            </mx:VBox>
        </mx:TabNavigator>
        <mx:HBox>
        </mx:HBox>
    </mx:Panel>
</s:WindowedApplication>
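For comparison, one commonly suggested way around this kind of deferred instantiation (a hedged sketch, not verified against this exact project) is to tell the TabNavigator to create all of its children up front via the container's creationPolicy property, so that mytextbox exists before its tab has ever been shown:

<mx:TabNavigator id="tn" width="100%" height="100%" creationPolicy="all">

An alternative along the same lines would be to defer the assignment to mytextbox.text until the second tab's creationComplete event has fired.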

    Read the article

  • Abstract mapping with custom JiBX marshaller

    - by aweigold
I have created a custom JiBX marshaller and verified that it works. It works by doing something like the following:

<binding xmlns:tns="http://foobar.com/foo" direction="output">
    <namespace uri="http://foobar.com/foo" default="elements"/>
    <mapping class="java.util.HashMap" marshaller="com.foobar.Marshaller1"/>
    <mapping name="context" class="com.foobar.Context">
        <structure field="configuration"/>
    </mapping>
</binding>

However, I need to create multiple marshallers for different HashMaps, so I tried to reference them with abstract mappings like this:

<binding xmlns:tns="http://foobar.com/foo" direction="output">
    <namespace uri="http://foobar.com/foo" default="elements"/>
    <mapping abstract="true" type-name="configuration" class="java.util.HashMap" marshaller="com.foobar.Marshaller1"/>
    <mapping abstract="true" type-name="overrides" class="java.util.HashMap" marshaller="com.foobar.Marshaller2"/>
    <mapping name="context" class="com.foobar.Context">
        <structure map-as="configuration" field="configuration"/>
        <structure map-as="overrides" field="overrides"/>
    </mapping>
</binding>

When I attempt to build this binding, I receive the following:

Error during code generation for file "E:\project\src\main\jibx\foo.jibx" - this may be due to an error in your binding or classpath, or to an error in the JiBX code

My guess is that either I'm missing something I need to implement to enable my custom marshaller for abstract mappings, or custom marshallers do not support abstract mappings. I have found the IAbstractMarshaller interface in the JiBX API (http://jibx.sourceforge.net/api/org/jibx/runtime/IAbstractMarshaller.html), but the documentation seems unclear on whether this is what I need to implement, and on how it works if so. I have not been able to find an implementation of this interface to work off of as an example. My question is: how do you do abstract mapping with custom marshallers (if it's possible at all)? If it is done via the IAbstractMarshaller interface, how does it work and/or how should I implement it?

    Read the article
