Search Results

Search found 2287 results on 92 pages for 'reads'.

Page 72/92 | < Previous Page | 68 69 70 71 72 73 74 75 76 77 78 79  | Next Page >

  • Packet fragmentation when sending data via SSLStream

    - by Ive
    When using an SslStream to send a 'large' chunk of data (1 MB) to an already-authenticated client, the packet fragmentation / disassembly I'm seeing is FAR greater than when using a plain NetworkStream. Using an async read on the client (i.e. BeginRead()), the ReadCallback is repeatedly invoked with exactly the same size chunk of data, up until the final packet (the remainder of the data). With the data I'm sending (it's a zip file), the segments happen to be 16363 bytes long.

    Note: my receive buffer is much bigger than this, and changing its size has no effect.

    I understand that SSL encrypts data in chunks no bigger than 18 KB, but since SSL sits on top of TCP, I wouldn't think the number of SSL chunks would have any relevance to the TCP packet fragmentation. Essentially, the data is taking about 20 times longer to be fully read by the client than with a standard NetworkStream (both on localhost!). What am I missing?

    EDIT: I'm beginning to suspect that the receive (or send) buffer size of an SslStream is limited. Even if I use synchronous reads (i.e. SslStream.Read()), no more data ever becomes available, regardless of how long I wait before attempting to read. This is the same behavior I'd see if the receive buffer were limited to 16363 bytes. Setting the underlying NetworkStream's SendBufferSize (on the server) and ReceiveBufferSize (on the client) has no effect.
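    For what it's worth, Stream.Read() (SslStream included) may return fewer bytes than requested; with TLS you typically get at most one decrypted record (roughly 16 KB) per call, so the receiving side has to loop until it has accumulated the byte count it expects. A minimal sketch of that accumulate loop, assuming the stream and expected length come from elsewhere:

        using System.IO;

        // Sketch: drain a stream into a buffer, looping until all expected
        // bytes have arrived. Read() returning 0 means the remote side
        // closed the connection.
        static byte[] ReadExactly(Stream stream, int expectedLength)
        {
            byte[] buffer = new byte[expectedLength];
            int total = 0;
            while (total < expectedLength)
            {
                int read = stream.Read(buffer, total, expectedLength - total);
                if (read == 0)
                    throw new EndOfStreamException("Connection closed early.");
                total += read;
            }
            return buffer;
        }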

    Read the article

  • Socket isn't listed by netstat unless using certain ports

    - by illuzive
    I'm a computer science student with a few years of programming experience. Yesterday, while working on a project (Mac OS X, BSD sockets) at school, I encountered a strange problem. I was adding several modules to a very basic "server" (mostly a bunch of functions to set up and manage a UDP socket on a certain port). While doing this, I started the server from time to time to check that everything worked as it should. I've been using port 32000 during development of the server. When I start the server and run netstat, the socket is listed as expected:

        > netstat -p UDP | grep 32000
        udp46      0      0      *.32000      *.*

    However, when I run the server on other ports (random, 10000 - 50000), it's not listed by netstat. My first thought was that I had somehow hard-coded the port somewhere in the code, but that's not the case. The thing is, I can connect to the socket on any of the tested ports, and it reads data sent to it without any problem at all. It just doesn't get listed by netstat. What I wonder is whether any of you have an idea why this happens.

    Note: although this is a project at school, it's not homework. This is just something I want to understand for my own benefit.

    Read the article

  • I need to sort a PHP jQuery gallery script alphabetically

    - by David Cahill
    I know nothing about PHP, but I have this script that reads a folder and displays a thumbnail gallery; the problem is it doesn't display alphabetically. I have searched the net and seen that sort does this, but I have no idea where to start. Any help would be much appreciated. Here's the script:

        $sitename = $row_wigsites['id'];
        $directory = 'sites/'.$sitename.'/pans';
        $allowed_types = array('jpg','jpeg','gif','png');
        $file_parts = array();
        $ext = '';
        $title = '';
        $i = 0;

        $dir_handle = @opendir($directory) or die("There is an error with your image directory!");
        while ($file = readdir($dir_handle)) {
            if ($file == '.' || $file == '..') continue;
            $file_parts = explode('.', $file);
            $ext = strtolower(array_pop($file_parts));
            $title = implode('.', $file_parts);
            $title = htmlspecialchars($title);
            $nomargin = '';
            if (in_array($ext, $allowed_types)) {
                if (($i + 1) % 4 == 0) $nomargin = 'nomargin';
                echo '
                <div class="pic '.$nomargin.'" style="background:url('.$directory.'/'.$file.') no-repeat 50% 50%;">
                    <a href="'.$directory.'/'.$file.'" title="Panoramic Stills taken at '.$title.'°" rel="pan1" target="_blank">'.$title.'</a>
                </div>';
                $i++;
            }
        }
        closedir($dir_handle);
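    For reference, readdir() returns entries in whatever order the filesystem stores them, so the usual fix is to collect the matching filenames into an array first, sort it, and only then loop to emit the markup (PHP's scandir() even returns names sorted alphabetically by default). A minimal sketch of that collect-then-sort pattern, shown in C# just to illustrate the shape, with a hypothetical path:

        using System;
        using System.IO;
        using System.Linq;

        // Sketch: gather matching files, sort alphabetically, then render.
        string directory = "sites/example/pans"; // hypothetical path
        string[] allowed = { ".jpg", ".jpeg", ".gif", ".png" };

        var files = Directory.GetFiles(directory)
            .Where(f => allowed.Contains(Path.GetExtension(f).ToLower()))
            .OrderBy(f => Path.GetFileName(f), StringComparer.OrdinalIgnoreCase)
            .ToList();

        foreach (string file in files)
            Console.WriteLine(Path.GetFileNameWithoutExtension(file)); // emit markup here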

    Read the article

  • SerialPort DataReceived event stops being raised after a few seconds

    - by Mario
    Hi, I was hoping someone could help me out with this problem. I have a system (VB.NET) where I must read a person's weight (RS232 sluice) and ID (fingerprint; 2 biometric readers, RS232) and compare them against a database. I have 3 serial ports in my app: one for the sluice, and the other 2 to receive the ID from the fingerprint readers, both of which call the same sub to get the ID from the reader. I've been testing with just one reader and it seemed to work fine: I got data from DataReceived and joined it together to get the ID. The problem comes at this moment: I put a finger on the reader and it sends the ID; if it's OK it sends a message, otherwise it writes the ID to a textbox. But in between reads, if I let 5 or 10 seconds pass without putting a finger on the reader, it seems like I get no data at all anymore; the DataReceived event never gets raised. Yet if I keep putting a finger on constantly, it works pretty well. This is really weird to me. I was thinking of some possibilities:

    - Maybe the port gets closed somehow after some time? I never call the Close() method.
    - The fact that both DataReceived event handlers call the same method and delegate.
    - Maybe the connection settings are missing something? I tested with HyperTerminal and the port keeps receiving data even after time without activity, and I use the same configuration in my application. Maybe I need to change more settings, like DtrEnable and RtsEnable?

    Please, I need some help with this issue; it's for access control, so it needs to be running 24/7. Thanks in advance!
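    On the DtrEnable/RtsEnable hunch: some RS232 devices stop transmitting when the PC side drops DTR/RTS, and .NET's SerialPort leaves both disabled by default (terminal programs like HyperTerminal typically assert them), so enabling them explicitly is worth a try. A minimal C# sketch; the port name and settings are placeholders:

        using System.IO.Ports;

        // Sketch: open a serial port with DTR/RTS asserted so the device
        // keeps transmitting during idle periods.
        var port = new SerialPort("COM1", 9600, Parity.None, 8, StopBits.One);
        port.DtrEnable = true;   // SerialPort defaults this to false
        port.RtsEnable = true;   // likewise false by default
        port.DataReceived += (sender, e) =>
        {
            string data = port.ReadExisting(); // grab whatever has arrived
            // accumulate 'data' until a full ID has been received...
        };
        port.Open();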

    Read the article

  • Why Does My Vector<PEVENTLOGRECORD> Mysteriously Get Cleared?

    - by Eric
    Hello everyone, I am making a program that reads and stores data from Windows EventLog files (.evt) in C++. I am using the calls OpenBackupEventLog(ServerName, FileName) and ReadEventLog(...), along with the PEVENTLOGRECORD structure. Anyway, without supplying all of the code, here is the basic idea:

    1. I get a handle to the .evt file using OpenBackupEventLog() and passing in a file name.
    2. I then use ReadEventLog() to fill up a buffer with an unknown number of EventLog messages.
    3. I traverse the buffer and add each message to a vector.
    4. I keep filling up buffers (repeating steps 2 and 3) until I reach the end of the file.

    Here is my code for filling the vector:

        vector<PEVENTLOGRECORD> allRecords;
        while (_status == ERROR_SUCCESS)
        {
            if (!ReadEventLog(...))
                CheckStatus();
            else
                FillVectorFromBuffer(allRecords);
        }

        void FillVectorFromBuffer(vector<PEVENTLOGRECORD> &allRecords)
        {
            int bytesExamined = 0;
            PBYTE pRecord = (PBYTE)_lpBuffer;      // _lpBuffer is one of the params in ReadEventLog()
            while (bytesExamined < _pnBytesRead)   // another param from ReadEventLog()
            {
                // Note: this pushes a pointer into _lpBuffer, not a copy of the record
                PEVENTLOGRECORD currentRecord = (PEVENTLOGRECORD)(pRecord);
                allRecords.push_back(currentRecord);
                pRecord += currentRecord->Length;
                bytesExamined += currentRecord->Length;
            }
        }

    Anyway, whenever I run this, it gets all the EventLogs in the file, and the vector contains everything I want it to. But as soon as the line if (!ReadEventLog()) gets called and returns true (i.e. ReadEventLog() returns false), every field in my vector gets set to zero. The vector still contains the correct number of elements; it's just that all of the fields in the PEVENTLOGRECORD structs are now zero. Anyone with better debugging experience have any ideas? Thanks.

    Read the article

  • Why does the JSF action tag handler in JSP invoke rendering immediately after creation?

    - by Pentius
    Dear fellows, I read the article "Improving JSF by Dumping JSP" by Hans Bergsten. There I read the following:

        The JSP container processes the page and invokes the JSF action tag handlers as they are encountered. A JSF tag handler looks for the JSF component it represents in the component tree. If it can't find the component, it creates it and adds it to the component tree. It then asks the component to render itself.

    and furthermore:

        On the first request, the action creates its component and asks it to render itself.

    I understand that the immediate rendering after the creation of the component is the problem here (the reference to the input component can't be resolved in the example). That's one reason why JSF doesn't fit well with JSP. But it reads as if the action tag handler itself asks the component to render. Or is it JSP that triggers the rendering directly after the action tag handler has created the component? If it is the action tag handler, I don't understand why this is the fault of JSP. What is different here from what JSF intended? Thanks for your help; I need this for my thesis.

    Read the article

  • How do I keep my DataService up to date with ObservableCollection?

    - by joebeazelman
    I have a class called CustomerService which simply reads a collection of customers from a file (or creates one) and passes it back to the main view model, where it is turned into an ObservableCollection. What is the best practice for making sure the items in the CustomerService and the ObservableCollection stay in sync? I'm guessing I could hook up the CustomerService object to respond to RaisePropertyChanged, but isn't that only for use with WPF controls? Is there a better way?

        using System;
        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        public class MainModelView
        {
            public MainModelView()
            {
                _customers = new ObservableCollection<CustomerViewModel>(new CustomerService().GetCustomers());
            }

            public const string CustomersPropertyName = "Customers";

            private ObservableCollection<CustomerViewModel> _customers;

            public ObservableCollection<CustomerViewModel> Customers
            {
                get { return _customers; }
                set
                {
                    if (_customers == value)
                    {
                        return;
                    }

                    var oldValue = _customers;
                    _customers = value;

                    // Update bindings and broadcast change using GalaSoft.MvvmLight.Messaging
                    RaisePropertyChanged(CustomersPropertyName, oldValue, value, true);
                }
            }
        }

        public class CustomerService
        {
            /// <summary>
            /// Load all persons from file on disk.
            /// </summary>
            private List<CustomerViewModel> _customers = new List<CustomerViewModel>
            {
                new CustomerViewModel(new Customer("Bob", "")),
                new CustomerViewModel(new Customer("Bob 2", "")),
                new CustomerViewModel(new Customer("Bob 3", "")),
            };

            public IEnumerable<CustomerViewModel> GetCustomers()
            {
                return _customers;
            }
        }
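    Since ObservableCollection<T> implements INotifyCollectionChanged, one option (a sketch, not necessarily the best practice being asked about) is to subscribe to CollectionChanged and push adds and removes back into the service; AddCustomer/RemoveCustomer here are hypothetical service methods:

        using System.Collections.Specialized;

        // Sketch: keep the service's backing list in sync with the
        // ObservableCollection by listening to CollectionChanged.
        Customers.CollectionChanged += (sender, e) =>
        {
            if (e.NewItems != null)
                foreach (CustomerViewModel added in e.NewItems)
                    _customerService.AddCustomer(added);      // hypothetical method

            if (e.OldItems != null)
                foreach (CustomerViewModel removed in e.OldItems)
                    _customerService.RemoveCustomer(removed); // hypothetical method
        };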

    Read the article

  • How do I insert an HTML file that has Hebrew text and read it back in SQL Server 2008 using the filestream option?

    - by gadym
    Hello all, I am new to the filestream option in SQL Server 2008, but I have already figured out how to enable the option and how to create a table that allows you to save files. Let's say my table contains: id, name, filecontent. I tried to insert an HTML file (that has Hebrew chars/text in it) into this table. I'm writing in ASP.NET (C#), using Visual Studio 2008, but when I tried to read back the content, every Hebrew char became '?'. The steps I took were:

    1. I read the file like this:

        // open the stream reader
        System.IO.StreamReader aFile = new StreamReader(FileName, System.Text.UTF8Encoding.UTF8);
        // read the file to the end
        stream = aFile.ReadToEnd();
        // close the file
        aFile.Close();
        // return the contents
        return stream;

    2. I inserted the 'stream' into the filecontent column as binary data.

    3. I did a 'select' on this column and the data did come back (after I converted it back to a string), but the Hebrew chars became '?'.

    How do I solve this problem? What should I pay attention to? Thanks, gadym
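    '?' characters usually mean the text was decoded (or first encoded) with a default/ANSI code page somewhere along the way, so the encoding has to match on both sides of the round trip. A minimal sketch, assuming the filecontent column holds raw bytes and the variable names are placeholders:

        using System.IO;
        using System.Text;

        // Sketch: store the raw UTF-8 bytes, and decode with the same
        // encoding when reading back, so Hebrew characters survive.

        // Write side: read the file as bytes (no string conversion at all).
        byte[] fileBytes = File.ReadAllBytes(fileName);
        // ... insert fileBytes into the filecontent column ...

        // Read side: decode the selected bytes with UTF-8 explicitly.
        byte[] stored = /* bytes selected from filecontent */ fileBytes;
        string html = Encoding.UTF8.GetString(stored);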

    Read the article

  • Passing a parameter/object to a Ruby unit test before running it using TestRunner

    - by Nahir Khan
    I'm building a tool that automates a process, then runs some tests on its own results, then goes on to do some other stuff. In trying to clean up my code, I have created a separate file that just contains the test case class. Before I can run these tests, I have to pass the class a couple of parameters/objects. The problem is that I can't seem to find a way to pass a parameter/object to the test class. Right now I am thinking of generating a YAML file and reading it in the test class, but it feels "wrong" to use a temporary file for this. If anyone has a nicer solution, that would be great!

    EDIT: Example code of what I am doing right now:

        #!/usr/bin/ruby
        require 'test/unit/ui/console/testrunner'
        require 'yaml'
        require 'TS_SampleTestSuite'

        automatingSomething()
        importantInfo = getImportantInfo()

        File.open('filename.yml', 'w') do |f|
          f.puts importantInfo.to_yaml
        end

        Test::Unit::UI::Console::TestRunner.run(TS_SampleTestSuite)

    In the example above, TS_SampleTestSuite needs importantInfo, so the first "test case" is a method that just reads the information back in from the YAML file filename.yml. I hope that clears up some confusion.

    Read the article

  • Is BinaryFormatter in C# a good way to read files?

    - by mr-pac
    I want to read a binary file which was created outside of my program. One obvious way in C# to read a binary file is to define a class representing the file, then use a BinaryReader, read from the file via the Read* methods, and assign the return values to the class properties. What I don't like about this approach is that I manually have to write code that reads the file, even though the defined structure already describes how the file is stored; I also have to keep the read order correct.

    After looking around a bit, I came across the BinaryFormatter, which can automatically serialize and deserialize objects in binary format. One great advantage would be that I could read and also write the file without creating additional code. However, I wonder if this approach is suitable for files created by other programs, as opposed to serialized .NET objects. Take, for example, a graphics file format like BMP. Would it be a good idea to read the file with a BinaryFormatter, or is it better to read and write manually via BinaryReader and BinaryWriter? Or are there other approaches that suit this better? I'm not looking for concrete examples, just for advice on the best way to implement this.
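    For what it's worth, BinaryFormatter reads and writes its own .NET serialization framing, so it can't be pointed at an externally defined layout like BMP; such formats are normally read field by field, in declaration order. A minimal BinaryReader sketch over the fixed 14-byte BMP file header (layout per the standard BITMAPFILEHEADER structure; the file name is a placeholder):

        using System.IO;

        // Sketch: read the 14-byte BMP file header with BinaryReader.
        // BMP is little-endian, which matches BinaryReader's behavior.
        using (var reader = new BinaryReader(File.OpenRead("image.bmp")))
        {
            ushort magic      = reader.ReadUInt16(); // 0x4D42, i.e. "BM"
            uint   fileSize   = reader.ReadUInt32(); // total file size in bytes
            ushort reserved1  = reader.ReadUInt16();
            ushort reserved2  = reader.ReadUInt16();
            uint   dataOffset = reader.ReadUInt32(); // where the pixel data starts
        }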

    Read the article

  • binary_search not working for a vector<string>

    - by VaioIsBorn
        #include <iostream>
        #include <string>
        #include <vector>
        #include <algorithm>
        using namespace std;

        int main(void)
        {
            string temp;
            vector<string> encrypt, decrypt;
            int i, n, co = 0;
            cin >> n;
            for (i = 0; i < n; i++) {
                cin >> temp;
                encrypt.push_back(temp);
            }
            for (i = 0; i < n; i++) {
                cin >> temp;
                decrypt.push_back(temp);
            }
            for (i = 0; i < n; i++) {
                temp = encrypt[i];
                // Note: binary_search() requires the range it is given to be sorted
                if (binary_search(decrypt.begin(), decrypt.end(), temp) == true)
                    ++co;
            }
            cout << co << endl;
            return 0;
        }

    It reads two lists of strings of equal length and should print out how many of the words in the first list are also found in the second list; simple. It's not giving me the expected results, and I think the problem is in binary_search. Can you tell me why?

    Read the article

  • Within an array of objects, can one create a new instance of an object at an index?

    - by David
    Here's the sample code:

        class TestAO {
            int[] x;

            public TestAO() {
                this.x = new int[5];
                for (int i = 0; i < x.length; i++)
                    x[i] = i;
            }

            public static void main(String[] arg) {
                TestAO a = new TestAO();
                System.out.println(a);
                TestAO c = new TestAO();
                c.x[3] = 35;
                TestAO[] Z = new TestAO[3];
                Z[0] = a;
                Z[1] = (TestAO b = new TestAO());
                Z[2] = c;
            }
        }

    When I try to compile this I get an error message at the Z[1] line, which reads as follows:

        TestAO.java:22: ')' expected
        Z[1] = (TestAO b = new TestAO()) ;
                        ^

    What I'm trying to do here is create the instance of TestAO that I want at that index within the assignment to the array element, instead of creating the instance outside of the array like I did with a. Is this legal and I'm just making some syntax error that I can't see (thus causing the error message), or can what I'm trying to do just not be done?

    Read the article

  • Read/Write/Find/Replace huge csv file

    - by notapipe
    I have a huge (4.5 GB) csv file. I need to perform basic cut, paste, and replace operations on some columns. The data is pretty well organized; the only problem is I cannot work with it in Excel because of its size (2,000 rows, 550,000 columns). Here is some part of the data:

        ID,Affection,Sex,DRB1_1,DRB1_2,SENum,SEStatus,AntiCCP,RFUW,rs3094315,rs12562034,rs3934834,rs9442372,rs3737728
        D0024949,0,F,0101,0401,SS,yes,?,?,A_A,A_A,G_G,G_G
        D0024302,0,F,0101,7,SN,yes,?,?,A_A,G_G,A_G,?_?
        D0023151,0,F,0101,11,SN,yes,?,?,A_A,G_G,G_G,G_G

    I need to:

    1. remove the 4th, 5th, 6th, 7th, 8th and 9th columns;
    2. find every _ character from column 10 onwards and replace it with a space character;
    3. replace every ? with zero (0);
    4. replace every comma with a tab;
    5. remove the first row (the one with the column names);
    6. replace every 0 with 1, every 1 with 2 and every ? with 0 in the 2nd column;
    7. replace F with 2, M with 1 and ? with 0 in the 3rd column;

    so that in the resulting file the output reads:

        D0024949 1 2 A A A A G G G G
        D0024302 1 2 A A G G A G 0 0
        D0023151 1 2 A A G G G G G G

    (both input and output should read one line per row, no extra blank rows). Is there a memory-efficient way of doing this with Java (and I need code to do it), or a usable tool for playing with this large data, so that I can easily apply Excel-like functionality?
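    Since every transformation in the list above is per-line, a streaming approach (read a line, transform it, write it out) keeps memory use constant regardless of file size. A rough sketch of that shape, shown here in C# although the question mentions Java (the structure carries over directly); file names are placeholders and the column indices follow the list above:

        using System;
        using System.IO;
        using System.Linq;

        // Sketch: stream the CSV line by line; memory use stays constant
        // no matter how large the file is.
        using (var reader = new StreamReader("input.csv"))
        using (var writer = new StreamWriter("output.txt"))
        {
            string line;
            bool header = true;
            while ((line = reader.ReadLine()) != null)
            {
                if (header) { header = false; continue; }    // drop the name row

                string[] cols = line.Split(',');

                // column 2: 0 -> 1, 1 -> 2, ? -> 0
                cols[1] = cols[1] == "0" ? "1" : cols[1] == "1" ? "2" : "0";
                // column 3: F -> 2, M -> 1, ? -> 0
                cols[2] = cols[2] == "F" ? "2" : cols[2] == "M" ? "1" : "0";

                var kept = cols.Take(3)                      // drop columns 4-9
                    .Concat(cols.Skip(9)
                        .Select(c => c.Replace('_', ' ')     // _ -> space
                                      .Replace('?', '0')));  // ? -> 0

                writer.WriteLine(string.Join("\t", kept));   // comma -> tab
            }
        }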

    Read the article

  • Drag Drop copy file

    - by Graham Warrender
    I've perhaps done something marginally stupid, but I can't see what it is!

        string pegasusKey = @"HKEY_LOCAL_MACHINE\SOFTWARE\Pegasus\";
        string opera2ServerPath = @"Server VFP\";
        string opera3ServerPath = @"O3 Client VFP\";
        string opera2InstallationPath = null;
        string opera3InstallationPath = null;

        // Get the Opera installation paths and read them into the strings
        opera2InstallationPath = (string)Registry.GetValue(pegasusKey + opera2ServerPath + "System", "PathToServerDynamic", null);
        opera3InstallationPath = (string)Registry.GetValue(pegasusKey + opera3ServerPath + "System", "PathToServerDynamic", null);

        string Filesource = null;
        string[] FileList = (string[])e.Data.GetData(DataFormats.FileDrop, false);
        foreach (string File in FileList)
            Filesource = File;
        label.Text = Filesource;

        if (System.IO.Directory.Exists(opera3InstallationPath))
        {
            System.IO.File.Copy(Filesource, opera3InstallationPath);
            MessageBox.Show("File Copied from" + Filesource + "\n to" + opera3InstallationPath);
        }
        else
        {
            MessageBox.Show("Directory Doesn't Exist");
        }

    The user drags a file onto the window; I then get the installation path of an application, which is used as the destination for the source file. When the application runs, it throws a "directory not found" error. But surely if the directory doesn't exist it should step into the else statement? A simple application that is becoming a headache!
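    One thing worth checking: File.Copy() expects the destination to be a full file path, not a folder, so passing the installation directory on its own will fail even when that directory exists. A sketch of the usual fix, combining the folder with the dragged file's name:

        // Sketch: build a destination *file* path before copying.
        string destination = System.IO.Path.Combine(
            opera3InstallationPath,
            System.IO.Path.GetFileName(Filesource));

        System.IO.File.Copy(Filesource, destination, true); // overwrite if present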

    Read the article

  • MSMQ - Message Queue Abstraction and Pattern

    - by Maxim Gershkovich
    Hi all, let me define the problem first, and why a message queue has been chosen. I have a data layer that will be transactional and EXTREMELY insert-heavy, and rather than attempt to deal with the resulting issues when they occur, I am hoping to design my application from the ground up with this in mind. I have decided to tackle this problem by using Microsoft Message Queuing and performing the inserts asynchronously, as time permits.

    However, I quickly ran into a problem. Certain inserts that I perform may need to be recalled (i.e. retrieved) immediately (imagine this is for a POS system, and consider what happens if you need to recall the last transaction, one that still hasn't been inserted). The way I decided to tackle this is by abstracting the MessageQueue and combining it with my data access layer, thereby creating the illusion of a single set of data being returned to the user of the data layer. (I have considered the other issues that occur in such a scenario, essentially dirty reads and the like, and have concluded that for my purposes I can control them.)

    However, this is where things get a little nasty. I've worked out how to get the messages back (a trivial enough problem), but where I am stuck is: how do I create a generic (or at least somewhat generic) way of querying my message queue, one where I can minimize the duplication between the SQL queries and the MessageQueue queries? I have considered using LINQ (but have a very limited understanding of the technology) and have also attempted an implementation with predicates, which so far is pretty smelly.

    Are there any patterns for such a problem that I can utilize? Am I going about this the wrong way? Does anyone have ideas of their own about how I can tackle this problem? Does anyone even understand what I am talking about? :-) Any and ALL input would be highly appreciated and seriously considered. Thanks again.
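    One way to minimize the duplication is to express each filter once as a Func<T, bool> (or an Expression<Func<T, bool>> where it must also translate to SQL) and apply it to both the committed rows and the still-queued inserts via LINQ to Objects. A rough sketch of the idea; the Transaction type and the two load helpers are assumptions:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Sketch: run the same predicate over committed rows and pending
        // queue messages, so callers see one logical data set.
        public IEnumerable<Transaction> Find(Func<Transaction, bool> predicate)
        {
            // Rows already flushed to the database (hypothetical call).
            IEnumerable<Transaction> committed = LoadFromDatabase();

            // Inserts still sitting in the queue, deserialized back into
            // entities (hypothetical call wrapping MessageQueue peeks).
            IEnumerable<Transaction> pending = PeekPendingInserts();

            return committed.Concat(pending).Where(predicate);
        }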

    Read the article

  • MS SQL - Multi-Column substring matching

    - by hamlin11
    One of my clients is hooked on multi-column substring matching. I understand that CONTAINS and FREETEXT search for words (and, at least in the case of CONTAINS, word prefixes). However, based upon my understanding of the MSDN documentation, neither of these nor their variants are capable of searching substrings. I have used LIKE rather extensively (SELECT * FROM A WHERE A.B LIKE '%substr%').

    Sample table A:

        ID | Col1     | Col2     | Col3     |
        -------------------------------------
        1  | oklahoma | colorado | Utah     |
        2  | arkansas | colorado | oklahoma |
        3  | florida  | michigan | florida  |
        -------------------------------------

    The following query will give us rows 1 and 2:

        select * from A
        where Col1 like '%klah%'
           or Col2 like '%klah%'
           or Col3 like '%klah%'

    This is rather ugly, probably slow, and I just don't like it very much, probably because the implementations I'm dealing with have 10+ columns that need to be searched. The following may be a slight improvement as far as code readability goes, but performance-wise we're still in the same ballpark:

        select * from A
        where (Col1 + ' ' + Col2 + ' ' + Col3) like '%klah%'

    I have thought about simply adding insert, update, and delete triggers that write the concatenated version of the above columns into a separate table that shadows this one.

    Sample Shadow_Table:

        ID | searchtext                 |
        ---------------------------------
        1  | oklahoma colorado Utah     |
        2  | arkansas colorado oklahoma |
        3  | florida michigan florida   |
        ---------------------------------

    This would allow us to perform the following query to search for '%klah%':

        select * from Shadow_Table
        where searchtext like '%klah%'

    I really don't like having to remember that this shadow table exists and that I'm supposed to use it when performing multi-column substring matching, but it probably yields pretty quick reads at the expense of writes and storage space. My gut feeling tells me there is an existing solution built into SQL Server 2008, but I don't seem to be able to find anything other than research papers on the subject. Any help would be appreciated.

    Read the article

  • Handling close-to-impossible collisions on should-be-unique values

    - by balpha
    There are many systems that depend on the uniqueness of some particular value. Anything that uses GUIDs comes to mind (e.g. the Windows registry or other databases), but also things that create a hash from an object to identify it, and thus need this hash to be unique. A hash table usually doesn't mind if two objects have the same hash, because the hashing is just used to break the objects down into categories, so that on lookup, not all objects in the table, but only those objects in the same category (bucket), have to be compared for identity with the searched object.

    Other implementations, however, (seem to) depend on the uniqueness. My example (the one that led me to ask this) is Mercurial's revision IDs. An entry on the Mercurial mailing list correctly states:

        The odds of the changeset hash colliding by accident in your first billion commits is basically zero. But we will notice if it happens. And you'll get to be famous as the guy who broke SHA1 by accident.

    But even the tiniest probability doesn't mean impossible. Now, I don't want an explanation of why it's totally okay to rely on the uniqueness (this has been discussed here, for example). That is very clear to me. Rather, I'd like to know (maybe by means of examples from your own work):

    - Are there any best practices for covering these improbable cases anyway?
    - Should they be ignored, because it's more likely that particularly strong solar winds lead to faulty hard disk reads?
    - Should they at least be tested for, if only to fail with an "I give up, you have done the impossible" message to the user?
    - Or should even these cases be handled gracefully?

    For me, especially the following are interesting, although they are somewhat touchy-feely:

    - If you don't handle these cases, what do you do against gut feelings that don't listen to probabilities?
    - If you do handle them, how do you justify this work (to yourself and others), considering there are more probable cases you don't handle, like a supernova?

    Read the article

  • Decompress a GZipped response from the server (Socket)

    - by Lith
    Umm, OK, after sending some data to the server, noting this particular header in the request:

        Accept-Encoding: gzip,deflate\r\n

    I am getting the following response:

        HTTP/1.1 200 OK
        Server: nginx
        Date: Fri, 09 Apr 2010 23:25:27 GMT
        Content-Type: text/html; charset=UTF-8
        Transfer-Encoding: chunked
        Connection: keep-alive
        X-Powered-By: PHP/5.2.8
        Expires: Mon, 26 Jul 1997 05:00:00 GMT
        Last-Modified: Fri, 09 Apr 2010 23:25:27 GMT
        Cache-Control: no-store, no-cache, must-revalidate
        Cache-Control: post-check=0, pre-check=0
        Pragma: no-cache
        Content-Encoding: gzip
        Vary: Accept-Encoding

        7aa
        ??U-?Rh?%?2?w??PM]??7?qZ?K?)???2?&??m???"q??/p9w?????x?[`tA!G???G?5z??????a>k????????Q ???N?? ('??f?,(??Y:5B???-?)?3x^0e:j?`,???**???F>G)?2????@???b??????A?k???Ar?n?

    But how do I decompress it? Note that I am using the Socket class to do all the work. I know how to decompress it; the problem lies in the fact that I cannot separate the headers from the gzipped data. Pseudocode of how I do it:

        Socket sends packet;
        Socket reads response from server, stores into a byte array;
        Create MemoryStream, use byte array;
        Create GZipStream, use MemoryStream;
        now the problem occurs;

    I am getting the following error:

        System.IO.InvalidDataException
        The magic number in GZip header is not correct. Make sure you are passing in a GZip stream.

    I hope the explanation is clear enough.
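    Two layers sit in front of the gzip data here: the headers end at the first blank line (\r\n\r\n), and because the response uses Transfer-Encoding: chunked, the body is a sequence of chunks, each prefixed with its size in hex (that is what the 7aa line is), which must be reassembled before decompressing. A minimal sketch, assuming responseBytes holds the raw response and DecodeChunks is a hypothetical helper:

        using System.IO;
        using System.IO.Compression;
        using System.Text;

        // Sketch: split the HTTP headers from the body, de-chunk, then gunzip.
        string ascii = Encoding.ASCII.GetString(responseBytes);  // raw response bytes
        int bodyStart = ascii.IndexOf("\r\n\r\n") + 4;           // headers end at the blank line

        // DecodeChunks (hypothetical) strips the hex chunk-size lines
        // (like "7aa") and concatenates the chunk payloads.
        byte[] bodyBytes = DecodeChunks(responseBytes, bodyStart);

        using (var compressed = new MemoryStream(bodyBytes))
        using (var gzip = new GZipStream(compressed, CompressionMode.Decompress))
        using (var reader = new StreamReader(gzip, Encoding.UTF8))
        {
            string html = reader.ReadToEnd();
        }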

    Read the article

  • InvalidOperationException: The Undo operation encountered a context that is different from what was applied in the corresponding Set operation

    - by McN
    I got the following exception:

        Exception Type: System.InvalidOperationException
        Exception Message: The Undo operation encountered a context that is different from what was applied in the corresponding Set operation. The possible cause is that a context was Set on the thread and not reverted (undone).
        Exception Stack:
           at System.Threading.SynchronizationContextSwitcher.Undo()
           at System.Threading.ExecutionContextSwitcher.Undo()
           at System.Threading.ExecutionContext.runFinallyCode(Object userData, Boolean exceptionThrown)
           at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteBackoutCodeHelper(Object backoutCode, Object userData, Boolean exceptionThrown)
           at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
           at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
           at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
           at System.Net.ContextAwareResult.Complete(IntPtr userToken)
           at System.Net.LazyAsyncResult.ProtectedInvokeCallback(Object result, IntPtr userToken)
           at System.Net.Sockets.BaseOverlappedAsyncResult.CompletionPortCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped)
           at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP)
        Exception Source: mscorlib
        Exception TargetSite.Name: Undo
        Exception HelpLink:

    The application is a Visual Studio 2005 (.NET 2.0) console application. It is a server for multiple TCP/IP connections, doing asynchronous socket reads and synchronous socket writes. In searching for an answer I came across one post which talks about a call to Application.DoEvents(), which I don't use in my code. I also found another post whose resolution involved Component, which I also don't use in my code. The application does reference a library I created that contains custom user controls and components, but they are not being used by the application.

    Question: What caused this to happen, and how do I prevent it from happening again? Or a more realistic question: What does this exception actually mean? How is "context" defined in this situation? Anything that can help me understand what is going on would be very much appreciated.

    Read the article

  • Entity Framework autoincrement key

    - by Tommy Ong
    I'm facing an issue of a duplicated incremental field in a concurrency scenario. I'm using EF as the ORM tool, attempting to insert an entity with a field that acts as an incremental INT field. This field is called "SequenceNumber": before each insert, the code reads the database using MAX to get the last SequenceNumber, adds 1 to it, and saves the changes. Between getting the last SequenceNumber and saving, that's where the concurrency problem happens. I'm not using the ID for the SequenceNumber, as it is not a unique constraint and may reset under certain conditions, such as monthly, yearly, etc.

        InvoiceNumber  | SequenceNumber | DateCreated
        INV00001_08_14 | 1              | 25/08/2014
        INV00001_08_14 | 1              | 25/08/2014  <= (concurrency is creating two SeqNo 1)
        INV00002_08_14 | 2              | 25/08/2014
        INV00003_08_14 | 3              | 26/08/2014
        INV00004_08_14 | 4              | 27/08/2014
        INV00005_08_14 | 5              | 29/08/2014
        INV00001_09_14 | 1              | 01/09/2014  <= (sequence number reset)

    The invoice number is formatted based on the SequenceNumber. After some research I've ended up with these possible solutions, but I want to know the best practice:

    1. Locking (pessimistic concurrency): keep the table locked against reads until the current transaction is completed (I'm not fond of this idea, as I suspect the performance impact would be great).

    2. Create a stored procedure solely for this purpose, doing the select and the insert in a single statement so that the concurrency window is minimal (though I would prefer an EF-based approach if possible).
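    As a sketch of how option 1 can look in EF terms: wrapping the MAX read and the insert in a serializable transaction makes the pair atomic, at the cost of blocking concurrent writers (and it needs retry logic for deadlocks, omitted here). Entity and context names below are placeholders:

        using System;
        using System.Linq;
        using System.Transactions;

        // Sketch: make "read MAX, then insert MAX+1" atomic under a
        // serializable transaction. Invoice/MyContext are placeholder names.
        var options = new TransactionOptions { IsolationLevel = IsolationLevel.Serializable };

        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        using (var db = new MyContext())
        {
            DateTime now = DateTime.Today;
            int last = db.Invoices
                .Where(i => i.DateCreated.Year == now.Year
                         && i.DateCreated.Month == now.Month)   // per-period reset
                .Max(i => (int?)i.SequenceNumber) ?? 0;

            db.Invoices.Add(new Invoice { SequenceNumber = last + 1, DateCreated = now });
            db.SaveChanges();
            scope.Complete();
        }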

    Read the article

  • ASP.NET MVC - How to Unit Test boundaries in the Repository pattern?

    - by JK
    Given a basic repository interface:

        public interface IPersonRepository
        {
            void AddPerson(Person person);
            List<Person> GetAllPeople();
        }

    With a basic implementation:

        public class PersonRepository : IPersonRepository
        {
            public void AddPerson(Person person)
            {
                ObjectContext.AddObject(person);
            }

            public List<Person> GetAllPeople()
            {
                return ObjectSet.AsQueryable().ToList();
            }
        }

    How can you unit test this in a meaningful way? Since it crosses the boundary and physically updates and reads from the database, that's not a unit test, it's an integration test. Or is it wrong to want to unit test this in the first place? Should I only have integration tests on the repository? I've been googling the subject, and blogs often say to make a stub that implements the IRepository:

        public class PersonRepositoryTestStub : IPersonRepository
        {
            private List<Person> people = new List<Person>();

            public void AddPerson(Person person)
            {
                people.Add(person);
            }

            public List<Person> GetAllPeople()
            {
                return people;
            }
        }

    But that doesn't unit test PersonRepository; it tests the implementation of PersonRepositoryTestStub (not very helpful).

    Read the article

  • How to systematically generate images from data?

    - by adamvickers
    I work for a performing arts nonprofit. We have seating charts for each of the theaters we work with; each seating chart shows the number of sections, the shape of each section, and the number of rows in each section. We'd like to create dynamic seating charts based on this info, and we'd like them to look and feel something like this: http://www.fansnap.com/tickets/177754-on. The tricky part is that we'd like to store all the info about each theater (the section names, the shape/size of each section, and the number of rows in each section) as data, and then build a system that reads this data and uses it to draw a dynamic map.

    I'm a life-long web developer, but I don't have any experience with a graphics problem like this. I realize it's a complex problem and I don't expect anyone to give me a complete answer here, but I would love direction on where I should be looking for more info. Is what I'm describing possible? Does this sort of technique have a name? Where can I learn more about how to accomplish this? What software should I use? Any info would be helpful.
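    To make "store all the info about each theater as data" concrete, here is one possible shape for that data, sketched in C#; the field set follows the description above, and the point-list outline representation is an assumption:

        using System.Collections.Generic;

        // Sketch: a minimal data model a renderer could read to draw the chart.
        public class Theater
        {
            public string Name { get; set; }
            public List<Section> Sections { get; set; }
        }

        public class Section
        {
            public string Name { get; set; }
            public List<Point> Outline { get; set; }  // shape/size of the section
            public int RowCount { get; set; }
        }

        public struct Point
        {
            public double X;
            public double Y;
        }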

    Read the article

  • Mapping relationships from multiple databases in NHibernate

    - by mannish
    I have a multi-database application configured with NHibernate. The entities that correspond to tables from each database are in their own separate assemblies (an assembly per database, if you will). I need (and want) to relate an entity from one database to an entity of another database. Everything up to this point works as I want it to (the application handles multiple session factories, etc.). The relationship I want is many-to-one, but in reality my application only cares about one side of the relationship (for reasons that aren't relevant). The relevant entities are Project and PMProject, where a Project HAS-A PMProject. When I map the many-to-one, I get the following error:

        NHibernate.MappingException: An association from the table PROJECTS refers to an unmapped class: SDMS.PPRM.PMProject

    The Project mapping itself reads (ignore the funky column naming; it's an Oracle db):

        <many-to-one name="PMProject" class="SDMS.PPRM.PMProject" column="PM_PROJECT_ID" cascade="none" />

    In the class attribute I'm referencing the appropriate assembly, but I get that error, which seems to tell me it simply can't find the mapping file for PMProject. But that file exists (it's set as an embedded resource), and the session factory instantiation works without fail; so I'm at a loss as to how to tell the Project mapping how/where to look for the appropriate mapping. Is there something I'm missing? A better way to go about this? Thanks in advance.
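    One detail worth noting: NHibernate's class attribute accepts an assembly-qualified type name, so the mapping can state explicitly which assembly PMProject lives in; something like the following, where the assembly name SDMS.PPRM is an assumption based on the namespace:

        <many-to-one name="PMProject"
                     class="SDMS.PPRM.PMProject, SDMS.PPRM"
                     column="PM_PROJECT_ID" cascade="none" />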

    Read the article

  • Modify this code to read bytes in the reverse endianness?

    - by ibiza
    Hi, I have this bit of code which reads an 8-byte array and converts it to an int64. I would like to know how to tweak this code so it would work when receiving data represented in the reverse endianness...

        // reads big-endian: b[off] holds the most significant byte
        protected static long getLong(byte[] b, int off) {
            return ((b[off + 7] & 0xFFL) >> 0)  +
                   ((b[off + 6] & 0xFFL) << 8)  +
                   ((b[off + 5] & 0xFFL) << 16) +
                   ((b[off + 4] & 0xFFL) << 24) +
                   ((b[off + 3] & 0xFFL) << 32) +
                   ((b[off + 2] & 0xFFL) << 40) +
                   ((b[off + 1] & 0xFFL) << 48) +
                   (((long) b[off + 0]) << 56);
        }

    Thanks for the help!

    Read the article

  • What is the Pythonic way to implement a simple FSM?

    - by Vicky
    Yesterday I had to parse a very simple binary data file. The rule is: look for two bytes in a row that are both 0xAA; the next byte is then a length byte; skip 9 bytes and output the given amount of data from there; repeat to the end of the file. My solution works and was very quick to put together (even though I am a C programmer at heart, I still think it was quicker for me to write this in Python than it would have been in C). BUT, it is clearly not at all Pythonic, and it reads like a C program (and not a very good one at that!). What would be a better / more Pythonic approach to this? Is a simple FSM like this even still the right choice in Python?

    My solution:

        #! /usr/bin/python
        import sys

        f = open(sys.argv[1], "rb")

        state = 0

        if f:
            for byte in f.read():
                a = ord(byte)
                if state == 0:
                    if a == 0xAA:
                        state = 1
                elif state == 1:
                    if a == 0xAA:
                        state = 2
                    else:
                        state = 0
                elif state == 2:
                    count = a
                    skip = 9
                    state = 3
                elif state == 3:
                    skip = skip - 1
                    if skip == 0:
                        state = 4
                elif state == 4:
                    print "%02x" % a
                    count = count - 1
                    if count == 0:
                        state = 0
            print "\r\n"

    Read the article
