Search Results

Search found 2287 results on 92 pages for 'reads'.

Page 80/92 | < Previous Page | 76 77 78 79 80 81 82 83 84 85 86 87  | Next Page >

  • problem with two .NET threads and hardware access

    - by mack369
    I'm creating an application which communicates with the device via FT2232H USB/RS232 converter. For communication I'm using FTD2XX_NET.dll library from FTDI website. I'm using two threads: first thread continuously reads data from the device the second thread is the main thread of the Windows Form Application I've got a problem when I'm trying to write any data to the device while the receiver's thread is running. The main thread simply hangs up on ftdiDevice.Write function. I tried to synchronize both threads so that only one thread can use Read/Write function at the same time, but it didn't help. Below code responsible for the communication. Note that following functions are methods of FtdiPort class. Receiver's thread private void receiverLoop() { if (this.DataReceivedHandler == null) { throw new BackendException("dataReceived delegate is not set"); } FTDI.FT_STATUS ftStatus = FTDI.FT_STATUS.FT_OK; byte[] readBytes = new byte[this.ReadBufferSize]; while (true) { lock (FtdiPort.threadLocker) { UInt32 numBytesRead = 0; ftStatus = ftdiDevice.Read(readBytes, this.ReadBufferSize, ref numBytesRead); if (ftStatus == FTDI.FT_STATUS.FT_OK) { this.DataReceivedHandler(readBytes, numBytesRead); } else { Trace.WriteLine(String.Format("Couldn't read data from ftdi: status {0}", ftStatus)); Thread.Sleep(10); } } Thread.Sleep(this.RXThreadDelay); } } Write function called from main thread public void Write(byte[] data, int length) { if (this.IsOpened) { uint i = 0; lock (FtdiPort.threadLocker) { this.ftdiDevice.Write(data, length, ref i); } Thread.Sleep(1); if (i != (int)length) { throw new BackendException("Couldnt send all data"); } } else { throw new BackendException("Port is closed"); } } Object used to synchronize two threads static Object threadLocker = new Object(); Method that starts the receiver's thread private void startReceiver() { if (this.DataReceivedHandler == null) { return; } if (this.IsOpened == false) { throw new BackendException("Trying to start listening for raw data while disconnected"); } this.receiverThread = new Thread(this.receiverLoop); //this.receiverThread.Name = "protocolListener"; this.receiverThread.IsBackground = true; this.receiverThread.Start(); } The ftdiDevice.Write function doesn't hang up if I comment following line: ftStatus = ftdiDevice.Read(readBytes, this.ReadBufferSize, ref numBytesRead);
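    One likely culprit is that the receiver thread takes the shared lock and then blocks inside ftdiDevice.Read, so the writer can never acquire it. A minimal sketch of one workaround, assuming the FTD2XX_NET wrapper in use exposes GetRxBytesAvailable (worth checking against the installed version), is to poll for pending bytes and hold the lock only for the read itself:

        // Sketch only: never hold the shared lock while waiting for data to arrive.
        private void receiverLoop()
        {
            byte[] readBytes = new byte[this.ReadBufferSize];
            while (true)
            {
                uint available = 0;
                if (ftdiDevice.GetRxBytesAvailable(ref available) == FTDI.FT_STATUS.FT_OK && available > 0)
                {
                    uint numBytesRead = 0;
                    lock (FtdiPort.threadLocker)   // held only for a read that cannot block on missing data
                    {
                        ftdiDevice.Read(readBytes, Math.Min(available, (uint)this.ReadBufferSize), ref numBytesRead);
                    }
                    if (numBytesRead > 0)
                        this.DataReceivedHandler(readBytes, numBytesRead);
                }
                Thread.Sleep(this.RXThreadDelay);
            }
        }

    With the lock scoped this way, Write only ever waits for a read of data that is already in the driver's buffer, not for data that may never come.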

    Read the article

  • boost::asio buffer impossible to convert parameter from char to const mutable_buffer&

    - by Ekyo777
    visual studio tells me "error C2664: 'boost::asio::mutable_buffer::mutable_buffer(const boost::asio::mutable_buffer&)': impossible to convert parameter 1 from 'char' to 'const boost::asio::mutable_buffer&' at line 163 of consuming_buffers.hpp" I am unsure of why this happen nor how to solve it(otherwise I wouldn't ask this ^^') but I think it could be related to those functions.. even tough I tried them in another project and everything worked fine... but I can hardly find what's different so... here comes code that could be relevant, if anything useful seems to be missing I'll be glad to send it. packets are all instances of this class. class CPacketBase { protected: const unsigned short _packet_type; const size_t _size; char* _data; public: CPacketBase(unsigned short packet_type, size_t size); ~CPacketBase(); size_t get_size(); const unsigned short& get_type(); virtual char* get(); virtual void set(char*); }; this sends a given packet template <typename Handler> void async_write(CPacketBase* packet, Handler handler) { std::string outbuf; outbuf.resize(packet->get_size()); outbuf = packet->get(); boost::asio::async_write( _socket , boost::asio::buffer(outbuf, packet->get_size()) , handler); } this enable reading packets and calls a function that decodes the packet's header(unsigned short) and resize the buffer to send it to another function that reads the real data from the packet template <typename Handler> void async_read(CPacketBase* packet, Handler handler) { void (CTCPConnection::*f)( const boost::system::error_code& , CPacketBase*, boost::tuple<Handler>) = &CTCPConnection::handle_read_header<Handler>; boost::asio::async_read(_socket, _buffer_data , boost::bind( f , this , boost::asio::placeholders::error , packet , boost::make_tuple(handler))); } and this is called by async_read once a packet is received template <typename Handler> void handle_read_header(const boost::system::error_code& error, CPacketBase* packet, boost::tuple<Handler> handler) { if (error) { boost::get<0>(handler)(error); } else { // Figures packet type unsigned short packet_type = *((unsigned short*) _buffer_data.c_str()); // create new packet according to type delete packet; ... // read packet's data _buffer_data.resize(packet->get_size()-2); // minus header size void (CTCPConnection::*f)( const boost::system::error_code& , CPacketBase*, boost::tuple<Handler>) = &CTCPConnection::handle_read_data<Handler>; boost::asio::async_read(_socket, _buffer_data , boost::bind( f , this , boost::asio::placeholders::error , packet , handler)); } }
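    The error in consuming_buffers.hpp usually means the second argument of async_read is not a valid mutable-buffer sequence; here _buffer_data looks like a std::string (it is later resize()'d and c_str()'d), which asio will not accept as a read target. A hedged sketch of one fix, assuming _buffer_data can become a std::vector<char> member:

        // Sketch only: read into a std::vector<char>, which asio accepts as a mutable buffer.
        std::vector<char> _buffer_data;                        // assumed member, replacing the std::string

        _buffer_data.resize(sizeof(unsigned short));           // expect the 2-byte header first
        boost::asio::async_read(_socket, boost::asio::buffer(_buffer_data),
            boost::bind(f, this, boost::asio::placeholders::error,
                        packet, boost::make_tuple(handler)));

        // In handle_read_header, decode the header from the raw bytes:
        unsigned short packet_type;
        std::memcpy(&packet_type, &_buffer_data[0], sizeof(packet_type));

    Separately, the local outbuf in async_write goes out of scope before the asynchronous write completes; keeping the outgoing bytes in a member, or in a buffer owned by a shared_ptr bound into the handler, avoids sending freed memory.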

    Read the article

  • Missing taglibrary in Netbeans Hibernate tutorial xhtml file?

    - by Christopher W. Allen-Poole
    I just finished the Netbeans introduction to Hibernate tutorial ( http://netbeans.org/kb/docs/web/hibernate-webapp.html#01 ) and I am getting the following error: "This page calls for XML namespace declared with prefix br but no taglibrary exists" Now, I have seen a similar question somewhere else: http://forums.sun.com/thread.jspa?threadID=5430327 but the answer is not listed there. Or, if it is, then I am clearly missing it -- line one of my index.xhtml file reads "http://www.w3.org/1999/xhtml". It also does not explain why, when I reload localhost:8080, the message disappears. Here is my index.xhtml file: <html xmlns="http://www.w3.org/1999/xhtml" xmlns:h="http://java.sun.com/jsf/html" xmlns:ui="http://java.sun.com/jsf/facelets" xmlns:f="http://java.sun.com/jsf/core"> <ui:composition template="./template.xhtml"> <ui:define name="body"> <h:form> <h:commandLink action="#{filmController.previous}" value="Previous #{filmController.pageSize}" rendered="#{filmController.hasPreviousPage}"/> <h:commandLink action="#{filmController.next}" value="Next #{filmController.pageSize}" rendered="#{filmController.hasNextPage}"/> <h:dataTable value="#{filmController.filmTitles}" var="item" border="0" cellpadding="2" cellspacing="0" rowClasses="jsfcrud_odd_row,jsfcrud_even_row" rules="all" style="border:solid 1px"> <h:column> <f:facet name="header"> <h:outputText value="Title"/> </f:facet> <h:outputText value="#{item.title}"/> </h:column> <h:column> <f:facet name="header"> <h:outputText value="Description"/> </f:facet> <h:outputText value="#{item.description}"/> </h:column> <h:column> <f:facet name="header"> <h:outputText value=" "/> </f:facet> <h:commandLink action="#{filmController.prepareView}" value="View"/> </h:column> </h:dataTable> <br/> </h:form> </ui:define> </ui:composition> </html>

    Read the article

  • help with making a password checker in Java

    - by Cheesegraterr
    Hello, I am trying to make a program in Java that checks for three specific inputs. It has to be 1. At least 7 characters. 2. Contain both upper and lower case alphabetic characters. 3. Contain at least 1 digit. So far I have been able to make it check if there is 7 characters, but I am having trouble with the last two. What should I put in my loop as an if statement to check for digits and make it upper and lower case. Any help would be greatly appreciated. Here is what I have so far. import java.awt.*; import java.io.*; import java.util.StringTokenizer; public class passCheck { private static String getStrSys () { String myInput = null; //Store the String that is read in from the command line BufferedReader mySystem; //Buffer to store the input mySystem = new BufferedReader (new InputStreamReader (System.in)); //creates a connection to system input try { myInput = mySystem.readLine (); //reads in data from the console myInput = myInput.trim (); } catch (IOException e) //check { System.out.println ("IOException: " + e); return ""; } return myInput; //return the integer to the main program } //**************************************** //main instructions go here //**************************************** static public void main (String[] args) { String pass; //the words the user inputs String temp = ""; //holds temp info int stringLength; //length of string boolean goodPass = false; System.out.print ("Please enter a password: "); //ask for words pass = getStrSys (); //get words from system temp = pass.toLowerCase (); stringLength = pass.length (); //find length of eveyrthing while (goodPass == false) { if (stringLength < 7) { System.out.println ("Your password must consist of at least 7 characters"); System.out.print ("Please enter a password: "); //ask for words pass = getStrSys (); stringLength = pass.length (); goodPass = false; } else if (something to check for digits) { } }
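    A minimal sketch of the missing checks, using Character.isDigit / isUpperCase / isLowerCase to classify each character in one pass (the surrounding loop and console I/O from the question are assumed):

        // Sketch: classify each character once, then combine the three rules.
        boolean hasUpper = false, hasLower = false, hasDigit = false;
        for (int i = 0; i < pass.length(); i++)
        {
            char c = pass.charAt(i);
            if (Character.isUpperCase(c)) hasUpper = true;
            else if (Character.isLowerCase(c)) hasLower = true;
            else if (Character.isDigit(c)) hasDigit = true;
        }
        goodPass = (pass.length() >= 7) && hasUpper && hasLower && hasDigit;
        if (!goodPass)
        {
            System.out.println("Password needs at least 7 characters, upper and lower case letters, and a digit.");
            System.out.print("Please enter a password: ");
            pass = getStrSys();
        }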

    Read the article

  • Can you find a pattern to sync files knowing only dates and filenames?

    - by Robert MacLean
    Imagine, if you will, an operating system that has the following methods for files. Create File: creates (writes) a new file to disk; calling this if the file already exists causes a fault. Update File: updates an existing file; calling this if the file doesn't exist causes a fault. Read File: reads data from a file. Enumerate Files: gets all files in a folder. Files themselves in this operating system have only the following metadata. Created Time: the original date and time the file was created by the Create File method. Modified Time: the date and time the file was last modified by the Update File method; if the file has never been modified, this equals the Created Time. You have been given the task of writing an application which will sync the files between two directories (let's call them bill and ted) on a machine. However, it is not that simple; the client requires that: The application never faults (see the methods above). While the application is running, users can add and update files, and those will be sync'd the next time the application runs. Files can be added to either the ted or bill directory. File names cannot be altered. The application performs one sync each time it is run. The application must work almost entirely in memory; in other words, you cannot create a log of filenames, write it to disk, and check it the next time. The exception to point 6 is that you can store dates and times between runs. Each date/time is associated with a key labeled A through J (so you have 10 to use), so you can compare keys between runs. There is no way to catch exceptions in the application. An answer will be accepted based on the following conditions: the first answer to meet all requirements will be accepted. If there is no way to meet all requirements, the answer which ensures the smallest number of missed changes per sync will be accepted. A bounty (100 points) will be created as soon as possible for the prize. The winner will be selected one day before the bounty ends. Please ask questions in the comments and I will gladly update and refine the question based on them.

    Read the article

  • .NET C# FileStream writing to a file and reading the file

    - by pythonrg7
    I have a web service that checks a dictionary to see if a file exists and then if it does exist it reads the file, otherwise it saves to the file. This is from a web app. I wonder what is the best way to do this because I occasionally get a FileNotFoundException exception if the same file is accessed at the same time. Here's the relevant parts of the code: String signature; signature = "FILE," + value1 + "," + value2 + "," + value3 + "," + value4; // this is going to be the filename string result; MultipleRecordset mrSummary = new MultipleRecordset(); // MultipleRecordset is an object that retrieves data from a sql server database if (mrSummary.existsFile(signature)) { result = mrSummary.retrieveFile(signature); } else { result = mrSummary.getMultipleRecordsets(System.Configuration.ConfigurationManager.ConnectionStrings["MyConnectionString"].ConnectionString.ToString(), value1, value2, value3, value4); mrSummary.saveFile(signature, result); } Here's the code to see if the file already exists: private static Dictionary dict = new Dictionary(); public bool existsFile(string signature) { if (dict.ContainsKey(signature)) { return true; } else { return false; } } Here's what I use to retrieve if it already exists: try { byte[] buffer; FileStream fileStream = new FileStream(@System.Configuration.ConfigurationManager.AppSettings["CACHEPATH"] + filename, FileMode.Open, FileAccess.Read, FileShare.Read); try { int length = 0x8000; // get file length buffer = new byte[length]; // create buffer int count; // actual number of bytes read JSONstring = ""; while ((count = fileStream.Read(buffer, 0, length)) > 0) { JSONstring += System.Text.ASCIIEncoding.ASCII.GetString(buffer, 0, count); } } finally { fileStream.Close(); } } catch (Exception e) { JSONstring = "{\"error\":\"" + e.ToString() + "\"}"; } If the file doesn't previously exist it saves the JSON to the file: try { if (dict.ContainsKey(filename) == false) { dict.Add(filename, true); } else { this.retrieveFile(filename, ipaddress); } } catch { } try { TextWriter tw = new StreamWriter(@System.Configuration.ConfigurationManager.AppSettings["CACHEPATH"] + filename); tw.WriteLine(JSONstring); tw.Close(); } catch { } Here are the details to the exception I sometimes get from running the above code: System.IO.FileNotFoundException: Could not find file 'E:\inetpub\wwwroot\cache\FILE,36,36.25,14.5,14.75'. File name: 'E:\inetpub\wwwroot\cache\FILE,36,36.25,14.5,14.75' at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath) at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy) at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share) at com.myname.business.MultipleRecordset.retrieveFile(String filename, String ipaddress)
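    The FileNotFoundException fits the gap between dict.Add (which makes existsFile return true) and the StreamWriter actually putting the file on disk: a second request arriving in that window goes straight to retrieveFile and opens a file that is not there yet. A hedged sketch of one way to close the gap, registering the signature only after the file has been written:

        // Sketch only: the cache entry becomes visible only once the content exists on disk.
        private static readonly object cacheLock = new object();

        public void saveFile(string filename, string JSONstring)
        {
            string path = System.Configuration.ConfigurationManager.AppSettings["CACHEPATH"] + filename;
            lock (cacheLock)
            {
                if (!dict.ContainsKey(filename))
                {
                    using (TextWriter tw = new StreamWriter(path))
                    {
                        tw.WriteLine(JSONstring);
                    }
                    dict.Add(filename, true);
                }
            }
        }

    retrieveFile can then treat a missing file as a cache miss (regenerate and save) rather than an error.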

    Read the article

  • how to read values from the remote OPC Server

    - by Shailesh Jaiswal
    I have created one asp.net web service. In this web service I am using the web method as follows. The web service is related to the OPC ( OLE for process control) public string ReadServerItems(string ServerName) { string txt = ""; ArrayList obj = new ArrayList(); XmlServer Srv = new XmlServer("XDAGW.CS.WCF.Eval.1"); RequestOptions opt = new RequestOptions(); ReadRequestItemList iList = new ReadRequestItemList(); iList.Items = new ReadRequestItem[3]; iList.Items[0] = new ReadRequestItem(); iList.Items[0].ItemName = "ServerInfo.ConnectedClients"; iList.Items[1] = new ReadRequestItem(); iList.Items[1].ItemName = "ServerInfo.TotalGroups"; iList.Items[2] = new ReadRequestItem(); iList.Items[2].ItemName = "EventSources.Area1.Tracking"; ReplyItemList rslt; OPCError[] err; ReplyBase reply = Srv.Read(opt, iList, out rslt, out err); if ((rslt == null)) txt += err[0].Text; else { foreach (xmldanet.xmlda.ItemValue iv in rslt.Items) { txt += iv.ItemName; if (iv.ResultID == null) // success { txt += " = " + iv.Value.ToString() + "\r\n"; obj.Add(txt); } else txt += " : Error: " + iv.ResultID.Name + "\r\n"; } } return txt; } I am using the namespaces as follows using xmldanet; using xmldanet.xmlda; I have installed XMLDA.NET client component evaluation. In this there is an in built Test client which successfully reads the values of these data items from the remote OPC server. I also provides the template through which we can build the OPC based applications. In the above code I am trying to read the values of the data items but i am not able to read the values. I have applied the breakpoint. In that I can see that the condition if (iv.ResultID == null) becomes false & also there is null values in the variable rslt. Please tell me where I am doing mistake ? how should I correct my mistake ? can provide me the correct code ?

    Read the article

  • Java: Reading a pdf file from URL into Byte array/ByteBuffer in an applet.

    - by Pol
    I'm trying to figure out why this particular snippet of code isn't working for me. I've got an applet which is supposed to read a .pdf and display it with a pdf-renderer library, but for some reason when I read in the .pdf files which sit on my server, they end up as being corrupt. I've tested it by writing the files back out again. I've tried viewing the applet in both IE and Firefox and the corrupt files occur. Funny thing is, when I trying viewing the applet in Safari (for Windows), the file is actually fine! I understand the JVM might be different, but I am still lost. I've compiled in Java 1.5. JVMs are 1.6. The snippet which reads the file is below. public static ByteBuffer getAsByteArray(URL url) throws IOException { ByteArrayOutputStream tmpOut = new ByteArrayOutputStream(); URLConnection connection = url.openConnection(); int contentLength = connection.getContentLength(); InputStream in = url.openStream(); byte[] buf = new byte[512]; int len; while (true) { len = in.read(buf); if (len == -1) { break; } tmpOut.write(buf, 0, len); } tmpOut.close(); ByteBuffer bb = ByteBuffer.wrap(tmpOut.toByteArray(), 0, tmpOut.size()); //Lines below used to test if file is corrupt //FileOutputStream fos = new FileOutputStream("C:\\abc.pdf"); //fos.write(tmpOut.toByteArray()); return bb; } I must be missing something, and I've been banging my head trying to figure it out. Any help is greatly appreciated. Thanks. Edit: To further clarify my situation, the difference in the file before I read then with the snippet and after, is that the ones I output after reading are significantly smaller than they originally are. When opening them, they are not recognized as .pdf files. There are no exceptions being thrown that I ignore, and I have tried flushing to no avail. This snippet works in Safari, meaning the files are read in it's entirety, with no difference in size, and can be opened with any .pdf reader. In IE and Firefox, the files always end up being corrupted, consistently the same smaller size. I monitored the len variable (when reading a 59kb file), hoping to see how many bytes get read in at each loop. In IE and Firefox, at 18kb, the in.read(buf) returns a -1 as if the file has ended. Safari does not do this. I'll keep at it, and I appreciate all the suggestions so far.
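    One detail worth ruling out: the code configures a URLConnection but then reads from url.openStream(), which opens a second, unconfigured connection. A hedged sketch that reads from the connection it configured and cross-checks the declared length:

        // Sketch: read from the configured connection and verify against Content-Length.
        URLConnection connection = url.openConnection();
        int contentLength = connection.getContentLength();
        InputStream in = connection.getInputStream();
        ByteArrayOutputStream tmpOut = new ByteArrayOutputStream(contentLength > 0 ? contentLength : 32768);
        byte[] buf = new byte[8192];
        int len;
        while ((len = in.read(buf)) != -1) {
            tmpOut.write(buf, 0, len);
        }
        in.close();
        if (contentLength > 0 && tmpOut.size() != contentLength) {
            throw new IOException("Expected " + contentLength + " bytes but read " + tmpOut.size());
        }
        ByteBuffer bb = ByteBuffer.wrap(tmpOut.toByteArray());

    If the truncation still happens in IE and Firefox, logging the response headers (Content-Encoding in particular) for those browsers would show whether something between the applet and the server is altering the stream.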

    Read the article

  • Writing reports with Perl

    - by georgemp
    Hi, I am trying to write out multiple report files using perl. Each file has the same structure, but with different data. So, my basic code looks something like #begin code our $log_fh; open %log_fh, ">" . $logfile our $rep; if (multipleReports) { while (@reports) { printReport($report[0]); } } sub printReports { open $rep, ">" . $[0]; printHeaders(); printBody(); close $rep; } sub printHeader() { format HDR = @>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> $generatedLine . format HDR_TOP = . $rep->format_name("HDR"); $rep->format_top_name("HDR_TOP"); $generatedLine = "test"; write($rep); $generatedLine = "next item"; write($rep); $generatedLine = "last header item"; write($rep); } sub printBody #There are multiple such sections in my code. For simplicity, I have just shown 1 here { #declare own header and header top. Set report to use these and print items to $rep } #end code The above is just a high level of the code I am using and I hope I have captured all the salient points. However, for some reason, I get the first report file output correctly. The second file instead of having in the first section test next item last item reads last item last item last item I have tried a whole lot of options primarily around autoflush, but, for the life of me can't figure out why it is doing this. I am using Perl 5.8.2. Any help/pointers much appreciated. Thanks George
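    A few typos in the posted code would bite before any format behaviour does: open %log_fh uses a hash sigil instead of $log_fh, printReport is called but printReports is defined, $[0] should be $_[0], and while (@reports) never removes anything from @reports, so it cannot terminate. A hedged sketch of the same report loop with those fixed, using sprintf-style output instead of formats (a deliberate swap, so all per-file state lives on the lexical filehandle); $logfile and @reports stand in for the real values:

        # Sketch only; file names are placeholders.
        use strict;
        use warnings;

        my $logfile = 'run.log';                               # hypothetical
        my @reports = ('report1.txt', 'report2.txt');          # hypothetical

        open my $log_fh, '>', $logfile or die "cannot open $logfile: $!";

        print_report($_) for @reports;

        sub print_report {
            my ($file) = @_;
            open my $rep, '>', $file or die "cannot open $file: $!";
            printf {$rep} "%89s\n", $_ for ('test', 'next item', 'last header item');
            # ...each body section writes its own lines to $rep here...
            close $rep;
        }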

    Read the article

  • file.createNewFile() creates files with last-modified time before actual creation time

    - by Kaleb Pederson
    I'm using JPoller to detect changes to files in a specific directory, but it's missing files because they end up with a timestamp earlier than their actual creation time. Here's how I test: public static void main(String [] files) { for (String file : files) { File f = new File(file); if (f.exists()) { System.err.println(file + " exists"); continue; } try { // find out the current time, I would hope to assume that the last-modified // time on the file will definitely be later than this System.out.println("-----------------------------------------"); long time = System.currentTimeMillis(); // create the file System.out.println("Creating " + file + " at " + time); f.createNewFile(); // let's see what the timestamp actually is (I've only seen it <time) System.out.println(file + " was last modified at: " + f.lastModified()); // well, ok, what if I explicitly set it to time? f.setLastModified(time); System.out.println("Updated modified time on " + file + " to " + time + " with actual " + f.lastModified()); } catch (IOException e) { System.err.println("Unable to create file"); } } } And here's what I get for output: ----------------------------------------- Creating test.7 at 1272324597956 test.7 was last modified at: 1272324597000 Updated modified time on test.7 to 1272324597956 with actual 1272324597000 ----------------------------------------- Creating test.8 at 1272324597957 test.8 was last modified at: 1272324597000 Updated modified time on test.8 to 1272324597957 with actual 1272324597000 ----------------------------------------- Creating test.9 at 1272324597957 test.9 was last modified at: 1272324597000 Updated modified time on test.9 to 1272324597957 with actual 1272324597000 The result is a race condition: JPoller records time of last check as xyz...123 File created at xyz...456 File last-modified timestamp actually reads xyz...000 JPoller looks for new/updated files with timestamp greater than xyz...123 JPoller ignores newly added file because xyz...000 is less than xyz...123 I pull my hair out for a while I tried digging into the code but both lastModified() and createNewFile() eventually resolve to native calls so I'm left with little information. For test.9, I lose 957 milliseconds. What kind of accuracy can I expect? Are my results going to vary by operating system or file system? Suggested workarounds? NOTE: I'm currently running Linux with an XFS filesystem. I wrote a quick program in C and the stat system call shows st_mtime as truncate(xyz...000/1000).
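    Since the filesystem here (XFS) stores mtime at whole-second resolution, one option is to compare at that resolution rather than fight it: truncate both sides to seconds and use >=, so a write in the same second as the previous scan is still picked up, at the cost of occasionally re-processing a file. A small sketch:

        // Sketch: compare at one-second resolution, the coarsest the filesystem reports.
        static boolean changedSince(File f, long lastScanMillis) {
            long fileSeconds = f.lastModified() / 1000;
            long scanSeconds = lastScanMillis / 1000;
            return fileSeconds >= scanSeconds;   // >= rather than >, so same-second writes are not dropped
        }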

    Read the article

  • How do I read Unicode characters from an MS Access 2007 database through Java?

    - by Peter
    In Java, I have written a program that reads a UTF8 text file. The text file contains a SQL query of the SELECT kind. The program then executes the query on the Microsoft Access 2007 database and writes all fields of the first row to a UTF8 text file. The problem I have is when a row is returned that contains unicode characters, such as "?". These characters show up as "?" in the text file. I know that the text files are read and written correctly, because a dummy UTF8 character ("?") is read from the text file containing the SQL query and written to the text file containing the resulting row. The UTF8 character looks correct when the written text file is opened in Notepad, so the reading and writing of the text files are not part of the problem. This is how I connect to the database and how I execute the SQL query: ---- START CODE Connection c = DriverManager.getConnection("jdbc:odbc:Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:/database.accdb;Pwd=temp"); ResultSet r = c.createStatement().executeQuery(sql); ---- END CODE I have tried making a charSet property to the Connection but it makes no difference: ---- START CODE Properties p = new Properties(); p.put("charSet", "utf-8"); p.put("lc_ctype", "utf-8"); p.put("encoding", "utf-8"); Connection c = DriverManager.getConnection("...", p); ---- END CODE Tried with "utf8"/"UTF8"/"UTF-8", no difference. If I enter "UTF-16" I get the following exception: "java.lang.IllegalArgumentException: Illegal replacement". Been searching around for hours with no results and now turn my hope to you. Please help! I also accept workaround suggestions. =) What I want to be able to do is to make a Unicode query (for example one that searches for posts that contain the "?" character) and to have results with Unicode characters receieved and saved correctly. Thank you!

    Read the article

  • Remove a tag type from the view (involves alphabetical pagination)

    - by user284194
    I have an index view that lists all of the tags for my Entry and Message models. I would like to only show the tags for Entries in this view. I'm using acts-as-taggable-on. Tags Controller: def index @letter = params[:letter].blank? ? 'a' : params[:letter] @tagged_entries = Tagging.find_all_by_taggable_type('Entry').map(&:taggable) @title = "Tags" if params[:letter] == '#' @data = Tag.find(@tagged_entries, :conditions => ["name REGEXP ?", "^[^a-z]"], :order => 'name', :select => "id, name") else @data = Tag.find(@tagged_entries, :conditions => ["name LIKE ?", "#{params[:letter]}%"], :order => 'name', :select => "id, name") end respond_to do |format| flash[:notice] = 'We are currently in Beta. You may experience errors.' format.html end end tags#index: <% @data.each do |t| %> <div class="tag"><%= link_to t.name.titleize, tag_path(t) %></div> <% end %> I want to show only the taggable type 'Entry' in the view. Any ideas? Thank you for reading my question. SECOND EDIT: Tags Controller: def index @title = "Tags" @letter = params[:letter].blank? ? 'a' : params[:letter] @taggings = Tagging.find_all_by_taggable_type('Entry', :include => [:tag, :taggable]) @tags = @taggings.map(&:tag).sort_by(&:name).uniq @tagged_entries = @taggings.map(&:taggable)#.sort_by(&:id)#or whatever if params[:letter] == '#' @data = Tag.find(@tags, :conditions => ["name REGEXP ?", "^[^a-z]"], :order => 'name', :select => "id, name") else @data = Tag.find(@tags, :conditions => ["name LIKE ?", "#{params[:letter]}%"], :order => 'name', :select => "id, name") end respond_to do |format| format.html end end tags#index: <% @data.each do |t| %> <div class="tag"><%= link_to t.name.titleize, tag_path(t) %></div> <% end %> Max Williams' code works except when I click on my alphabetical pagination links. The error I'm getting [after I clicked on the G link of the alphabetical pagination] reads: Couldn't find all Tags with IDs (77,130,115,...) AND (name LIKE 'G%') (found 9 results, but was looking for 129) Can anyone point me in the right direction?
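    The pagination error comes from Tag.find(@tags, :conditions => ...): when an explicit list of IDs is passed, find insists that every ID be returned, so any tag filtered out by the letter condition raises. A hedged sketch (Rails 2 style, and assuming the acts-as-taggable-on Tag model exposes a taggings association) that pushes both the type filter and the letter filter into one query instead:

        # Sketch only: no explicit ID list, so a letter that matches fewer tags is fine.
        letter_cond = params[:letter] == '#' ? ["tags.name REGEXP ?", "^[^a-z]"] :
                                               ["tags.name LIKE ?", "#{@letter}%"]

        @data = Tag.all(
          :select     => "DISTINCT tags.id, tags.name",
          :joins      => :taggings,
          :conditions => ["taggings.taggable_type = ? AND #{letter_cond[0]}", 'Entry', letter_cond[1]],
          :order      => "tags.name"
        )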

    Read the article

  • .NET WinForms INotifyPropertyChanged updates all bindings when one is changed. Better way?

    - by Dave Welling
    In a windows forms application, a property change that triggers INotifyPropertyChanged, will result in the form reading EVERY property from my bound object, not just the property changed. (See example code below) This seems absurdly wasteful since the interface requires the name of the changing property. It is causing a lot of clocking in my app because some of the property getters require calculations to be performed. I'll likely need to implement some sort of logic in my getters to discard the unnecessary reads if there is no better way to do this. Am I missing something? Is there a better way? Don't say to use a different presentation technology please -- I am doing this on Windows Mobile (although the behavior happens on the full framework as well). Here's some toy code to demonstrate the problem. Clicking the button will result in BOTH textboxes being populated even though one property has changed. using System; using System.ComponentModel; using System.Drawing; using System.Windows.Forms; namespace Example { public class ExView : Form { private Presenter _presenter = new Presenter(); public ExView() { this.MinimizeBox = false; TextBox txt1 = new TextBox(); txt1.Parent = this; txt1.Location = new Point(1, 1); txt1.Width = this.ClientSize.Width - 10; txt1.DataBindings.Add("Text", _presenter, "SomeText1"); TextBox txt2 = new TextBox(); txt2.Parent = this; txt2.Location = new Point(1, 40); txt2.Width = this.ClientSize.Width - 10; txt2.DataBindings.Add("Text", _presenter, "SomeText2"); Button but = new Button(); but.Parent = this; but.Location = new Point(1, 80); but.Click +=new EventHandler(but_Click); } void but_Click(object sender, EventArgs e) { _presenter.SomeText1 = "some text 1"; } } public class Presenter : INotifyPropertyChanged { public event PropertyChangedEventHandler PropertyChanged; private string _SomeText1 = string.Empty; public string SomeText1 { get { return _SomeText1; } set { _SomeText1 = value; _SomeText2 = value; // <-- To demonstrate that both properties are read OnPropertyChanged("SomeText1"); } } private string _SomeText2 = string.Empty; public string SomeText2 { get { return _SomeText2; } set { _SomeText2 = value; OnPropertyChanged("SomeText2"); } } private void OnPropertyChanged(string PropertyName) { PropertyChangedEventHandler temp = PropertyChanged; if (temp != null) { temp(this, new PropertyChangedEventArgs(PropertyName)); } } } }
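    WinForms refreshes every binding attached to the same data source when any PropertyChanged arrives, so the practical mitigation is usually to make the re-read cheap rather than to stop it. A hedged sketch of caching an expensive getter; ComputeExpensiveValue and the property name are illustrative, not from the original code:

        // Sketch: heavy work runs once; redundant re-reads return the cached value.
        private string _expensiveResult;                 // null means "needs recomputation"

        public string ExpensiveProperty
        {
            get
            {
                if (_expensiveResult == null)
                    _expensiveResult = ComputeExpensiveValue();   // hypothetical heavy calculation
                return _expensiveResult;
            }
        }

        public string SomeText1
        {
            get { return _SomeText1; }
            set
            {
                _SomeText1 = value;
                _expensiveResult = null;                 // invalidate only when an input changes
                OnPropertyChanged("SomeText1");
            }
        }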

    Read the article

  • unformatted input to a std::string instead of c-string from binary file.

    - by posop
    ok i have this program working using c-strings. I am wondering if it is possible to read in blocks of unformatted text to a std::string? I toyed arround with if >> but this reads in line by line. I've been breaking my code and banging my head against the wall trying to use std::string, so I thought it was time to enlist the experts. Here's a working program you need to supply a file "a.txt" with some content to make it run. i tried to fool around with: in.read (const_cast<char *>(memblock.c_str()), read_size); but it was acting odd. I had to do std::cout << memblock.c_str() to get it to print. and memblock.clear() did not clear out the string. anyway, if you can think of a way to use STL I would greatly appreciate it. Here's my program using c-strings // What this program does now: copies a file to a new location byte by byte // What this program is going to do: get small blocks of a file and encrypt them #include <fstream> #include <iostream> #include <string> int main (int argc, char * argv[]) { int read_size = 16; int infile_size; std::ifstream in; std::ofstream out; char * memblock; int completed = 0; memblock = new char [read_size]; in.open ("a.txt", std::ios::in | std::ios::binary | std::ios::ate); if (in.is_open()) infile_size = in.tellg(); out.open("b.txt", std::ios::out | std::ios::trunc | std::ios::binary); in.seekg (0, std::ios::beg);// get to beginning of file while(!in.eof()) { completed = completed + read_size; if(completed < infile_size) { in.read (memblock, read_size); out.write (memblock, read_size); } // end if else // last run { delete[] memblock; memblock = new char [infile_size % read_size]; in.read (memblock, infile_size % read_size + 1); out.write (memblock, infile_size % read_size ); } // end else } // end while } // main if you see anything that would make this code better please feel free to let me know.
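    A std::string can serve as the read target if its storage is sized before the read and then trimmed to what was actually read; that also explains the odd printing, since streaming a std::string prints its full (possibly stale) contents unless it is resized. A hedged sketch of the copy loop on that basis:

        // Sketch: block copy using std::string as the buffer, no raw char* management.
        #include <fstream>
        #include <string>

        int main()
        {
            const std::size_t read_size = 16;
            std::ifstream in("a.txt", std::ios::in | std::ios::binary);
            std::ofstream out("b.txt", std::ios::out | std::ios::trunc | std::ios::binary);

            std::string memblock;
            while (in)
            {
                memblock.resize(read_size);
                in.read(&memblock[0], read_size);                        // fill the string's own storage
                memblock.resize(static_cast<std::size_t>(in.gcount()));  // keep only the bytes actually read
                if (!memblock.empty())
                    out.write(memblock.data(), memblock.size());
            }
            return 0;
        }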

    Read the article

  • AD - DirectoryServices: VBNET2.0 - Speaking architecture...

    - by Will Marcouiller
    I've been mandated to write an application to migrate the Active Directory access models to another environment. Here's the context: I'm stuck with VB.NET 2005 and .NET Framework 2.0; The application must use the Windows authenticated user to manage AD; The objects I have to handle are Groups, Users and OrganizationalUnits; I intend to use the Façade design pattern to provider ease of use and a fully reusable code; I plan to write a factory for each of the objects managed (group, ou, user); The use of Attributes should be useful here, I guess; As everything is about the DirectoryEntry class when accessing the AD, it seems a good candidate for generic types. Obligatory features: User creates new OUs manually; User creates new group manually; User creates new user (these users are services accounts) manually; Application reads an XML file which contains the OUs, groups and users to create; Application informs the user about the OUs, groups and users that shall be created; User specifies the domain environment where to migrate the XML input file designated objects; User makes changes if needed, and launches the task operations; Application performs required by the XML input file operations against the underlying AD as specified by the user; Application informs the user upon completion. Linear features: User fetches OUs, groups, users; User changes OUs, groups, users; User deletes OUs, groups, users; The application logs AD entries and operations performed, plus errors and exceptions; Nice-to-have features: Application rollbacks operations on error or exception. I've been working for weeks now to get acquainted with the AD and the System.DirectoryServices assembly. But I don't seem to find a way to be fully satisfied with what I'm doing and always looking for better. I have studied Bret de Smet's Linq to AD on CodePlex, but then again, I can't use it as I'm stuck with .NET 2.0, so no Linq! But I've learned about Attributes, and seen that he's working with generic types as he codes a DirectorySource class to perform the operations for OUs, groups and users. Any suggestions? Thanks for any help, code sample, ideas, architural solution, everything!

    Read the article

  • Logging in with WebFinger and OpenID

    - by Ryan
    I would like to apologize in advance for the ugly formatting. In order to talk about the problem, I need to be posting a bunch of URLs, but the excessive URLs and my lack of reputation makes StackOverflow think I could be a spammer. Any instance of 'ht~tp' is supposed to be 'http'. '{dot}' is supposed to be '.' and '{colon}' is supposed to be ':'. Also, my lack of reputation has prevented me from tagging my question with 'webfinger' and 'google-profiles'. Onto my question: I am messing around with WebFinger and trying to create a small rails app that enables a user to log in using nothing but their WebFinger account. I can succesfully finger myself, and I get back an XRD file with the following snippet: Link rel="ht~tp://specs{dot}openid{dot}net/auth/2.0/provider" href="ht~tp://www{dot}google{dot}com/profiles/{redacted}"/ Which, to me, reads, "I have an OpenID 2.0 login at the url: ht~tp://www{dot}google{dot}com/profiles/{redacted}". But when I try to use that URL to log in, I get the following error OpenID::DiscoveryFailure (Failed to fetch identity URL ht~tp://www{dot}google{dot}com/profiles/{redacted} : Error encountered in redirect from ht~tp://www{dot}google{dot}com/profiles/{redacted}: Error fetching /profiles/{Redacted}: Connection refused - connect(2)): When I replace the profile URL with 'ht~tps://www{dot}google{dot}com/accounts/o8/id', the login works perfectly. here is the code that I am using (I'm using RedFinger as a plugin, and JanRain's ruby-openid, installed without the gem) require "openid" require 'openid/store/filesystem.rb' class SessionsController < ApplicationController def new @session = Session.new #render a textbox requesting a webfinger address, and a submit button end def create ####################### # # Pay Attention to this section right here # ####################### #use given webfinger address to retrieve openid login finger = Redfinger.finger(params[:session][:webfinger_address]) openid_url = finger.open_id.first.to_s #openid_url is now: ht~tp://www{dot}google{dot}com/profiles/{redacted} #Get needed info about the acquired OpenID login file_store = OpenID::Store::Filesystem.new("./noncedir/") consumer = OpenID::Consumer.new(session,file_store) response = consumer.begin(openid_url) #ERROR HAPPENS HERE #send user to OpenID login for verification redirect_to response.redirect_url('ht~tp://localhost{colon}3000/','ht~tp://localhost{colon}3000/sessions/complete') end def complete #interpret return parameters file_store = OpenID::Store::Filesystem.new("./noncedir/") consumer = OpenID::Consumer.new(session,file_store) response = consumer.complete params case response.status when OpenID::SUCCESS session[:openid] = response.identity_url #redirect somehwere here end end end Is it possible for me to use the URL I received from my WebFinger to log in with OpenID?

    Read the article

  • SIGSEGV problem

    - by sickmate
    I'm designing a protocol (in C) to implement the layered OSI network structure, using cnet (http://www.csse.uwa.edu.au/cnet/). I'm getting a SIGSEGV error at runtime, however cnet compiles my source code files itself (I can't compile it through gcc) so I can't easily use any debugging tools such as gdb to find the error. Here's the structures used, and the code in question: typedef struct { char *data; } DATA; typedef struct { CnetAddr src_addr; CnetAddr dest_addr; PACKET_TYPE type; DATA data; } Packet; typedef struct { int length; int checksum; Packet datagram; } Frame; static void keyboard(CnetEvent ev, CnetTimerID timer, CnetData data) { char line[80]; int length; length = sizeof(line); CHECK(CNET_read_keyboard((void *)line, (unsigned int *)&length)); // Reads input from keyboard if(length > 1) { /* not just a blank line */ printf("\tsending %d bytes - \"%s\"\n", length, line); application_downto_transport(1, line, &length); } } void application_downto_transport(int link, char *msg, int *length) { transport_downto_network(link, msg, length); } void transport_downto_network(int link, char *msg, int *length) { Packet *p; DATA *d; p = (Packet *)malloc(sizeof(Packet)); d = (DATA *)malloc(sizeof(DATA)); d->data = msg; p->data = *d; network_downto_datalink(link, (void *)p, length); } void network_downto_datalink(int link, Packet *p, int *length) { Frame *f; // Encapsulate datagram and checksum into a Frame. f = (Frame *)malloc(sizeof(Frame)); f->checksum = CNET_crc32((unsigned char *)(p->data).data, *length); // Generate 32-bit CRC for the data. f->datagram = *p; f->length = sizeof(f); //Pass Frame to the CNET physical layer to send Frame to the require link. CHECK(CNET_write_physical(link, (void *)f, (size_t *)f->length)); free(p->data); free(p); free(f); } I managed to find that the line: CHECK(CNET_write_physical(link, (void *)f, (size_t *)f-length)); is causing the segfault but I can't work out why. Any help is greatly appreciated.
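    The segfaulting line casts the value of f->length into a pointer: CNET_write_physical expects the address of a size_t holding the length, and sizeof(f) measures the pointer rather than the Frame. A hedged sketch of the corrected call (the malloc'd DATA d in transport_downto_network is also never freed, which is worth revisiting while in there):

        /* Sketch: pass a real size_t by address, and size the Frame, not the pointer to it. */
        size_t frame_len = sizeof(Frame);
        f->length = (int)frame_len;
        CHECK(CNET_write_physical(link, f, &frame_len));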

    Read the article

  • Android problem: BufferedReader won't read whole stream into a string

    - by Levara
    Hi all! I'm making an android program that retrieves content of a webpage using HttpURLConnection. I'm new to both Java and Android. Problem is: Reader reads whole page source, but in the last while iteration it doesn't append to stringBuffer that last part. Using debbuger I have determined that, in the last loop iteration, string buff is created, but stringBuffer just doesnt append it. I need to parse retrieved content. Is there any better way to handle the content for parsing than using strings. I've read on numerous other sites that string size in Java is limited only by available heap size. Anyone know what could be the problem. Btw feel free to suggest any improvements to the code. Thanks! URL u; try { u = new URL("http://feeds.timesonline.co.uk/c/32313/f/440134/index.rss"); HttpURLConnection c = (HttpURLConnection) u.openConnection(); c.setRequestProperty("User-agent","Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; InfoPath.1; .NET CLR 2.0.50727)"); c.setRequestMethod("GET"); c.setDoOutput(true); c.setReadTimeout(3000); c.connect(); StringBuffer stringBuffer = new StringBuffer(""); InputStream in = c.getInputStream(); InputStreamReader inp = new InputStreamReader(in); BufferedReader reader = new BufferedReader(inp); char[] buffer = new char[3072]; int len1 = 0; while ( (len1 = reader.read(buffer)) != -1 ) { String buff = new String(buffer,0,len1); stringBuffer.append(buff); } String stranica = new String(stringBuffer); c.disconnect(); reader.close(); inp.close(); in.close();
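    Two things worth trying, sketched below with the names from the question: drop setDoOutput(true) (it is only needed when a request body is written, and on HttpURLConnection it can quietly turn the GET into a POST), and let BufferedReader.readLine() accumulate into a StringBuilder so there is no manual char[] bookkeeping to suspect:

        // Sketch: plain GET, line-by-line accumulation.
        HttpURLConnection c = (HttpURLConnection) u.openConnection();
        c.setRequestProperty("User-agent", "Mozilla/4.0 (compatible)");
        c.setRequestMethod("GET");               // no setDoOutput(true): nothing is written to the request
        c.setReadTimeout(3000);
        c.connect();

        StringBuilder page = new StringBuilder();
        BufferedReader reader = new BufferedReader(new InputStreamReader(c.getInputStream(), "UTF-8"));
        String line;
        while ((line = reader.readLine()) != null) {
            page.append(line).append('\n');
        }
        reader.close();
        c.disconnect();
        String stranica = page.toString();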

    Read the article

  • Understanding VS2010 C# parallel profiling results

    - by Haggai
    I have a program with many independent computations so I decided to parallelize it. I use Parallel.For/Each. The results were okay for a dual-core machine - CPU utilization of about 80%-90% most of the time. However, with a dual Xeon machine (i.e. 8 cores) I get only about 30%-40% CPU utilization, although the program spends quite a lot of time (sometimes more than 10 seconds) on the parallel sections, and I see it employs about 20-30 more threads in those sections compared to serial sections. Each thread takes more than 1 second to complete, so I see no reason for them to work in parallel - unless there is a synchronization problem. I used the built-in profiler of VS2010, and the results are strange. Even though I use locks only in one place, the profiler reports that about 85% of the program's time is spent on synchronization (also 5-7% sleep, 5-7% execution, under 1% IO). The locked code is only a cache (a dictionary) get/add: bool esn_found; lock (lock_load_esn) esn_found = cache.TryGetValue(st, out esn); if(!esn_found) { esn = pData.esa_inv_idx.esa[term_idx]; esn.populate(pData.esa_inv_idx.datafile); lock (lock_load_esn) { if (!cache.ContainsKey(st)) cache.Add(st, esn); } } lock_load_esn is a static member of the class of type Object. esn.populate reads from a file using a separate StreamReader for each thread. However, when I press the Synchronization button to see what causes the most delay, I see that the profiler reports lines which are function entrance lines, and doesn't report the locked sections themselves. It doesn't even report the function that contains the above code (reminder - the only lock in the program) as part of the blocking profile with noise level 2%. With noise level at 0% it reports all the functions of the program, which I don't understand why they count as blocking synchronizations. So my question is - what is going on here? How can it be that 85% of the time is spent on synchronization? How do I find out what really is the problem with the parallel sections of my program? Thanks.
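    Since this is VS2010/.NET 4, one way to take the explicit lock (and much of the reported synchronization time) out of the hot path is ConcurrentDictionary.GetOrAdd. A hedged sketch using the names from the question, assuming cache can be declared as a ConcurrentDictionary over the same key and value types; note the value factory may run more than once under contention, which is usually acceptable for a cache:

        // Sketch only: cache is assumed to be a ConcurrentDictionary keyed by st.
        esn = cache.GetOrAdd(st, key =>
        {
            var fresh = pData.esa_inv_idx.esa[term_idx];
            fresh.populate(pData.esa_inv_idx.datafile);   // may execute more than once under contention
            return fresh;
        });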

    Read the article

  • BST insert operation. don't insert a node if a duplicate exists already

    - by jeev
    the following code reads an input array, and constructs a BST from it. if the current arr[i] is a duplicate, of a node in the tree, then arr[i] is discarded. count in the struct node refers to the number of times a number appears in the array. fi refers to the first index of the element found in the array. after the insertion, i am doing a post-order traversal of the tree and printing the data, count and index (in this order). the output i am getting when i run this code is: 0 0 7 0 0 6 thank you for your help. Jeev struct node{ int data; struct node *left; struct node *right; int fi; int count; }; struct node* binSearchTree(int arr[], int size); int setdata(struct node**node, int data, int index); void insert(int data, struct node **root, int index); void sortOnCount(struct node* root); void main(){ int arr[] = {2,5,2,8,5,6,8,8}; int size = sizeof(arr)/sizeof(arr[0]); struct node* temp = binSearchTree(arr, size); sortOnCount(temp); } struct node* binSearchTree(int arr[], int size){ struct node* root = (struct node*)malloc(sizeof(struct node)); if(!setdata(&root, arr[0], 0)) fprintf(stderr, "root couldn't be initialized"); int i = 1; for(;i<size;i++){ insert(arr[i], &root, i); } return root; } int setdata(struct node** nod, int data, int index){ if(*nod!=NULL){ (*nod)->fi = index; (*nod)->left = NULL; (*nod)->right = NULL; return 1; } return 0; } void insert(int data, struct node **root, int index){ struct node* new = (struct node*)malloc(sizeof(struct node)); setdata(&new, data, index); struct node** temp = root; while(1){ if(data<=(*temp)->data){ if((*temp)->left!=NULL) *temp=(*temp)->left; else{ (*temp)->left = new; break; } } else if(data>(*temp)->data){ if((*temp)->right!=NULL) *temp=(*temp)->right; else{ (*temp)->right = new; break; } } else{ (*temp)->count++; free(new); break; } } } void sortOnCount(struct node* root){ if(root!=NULL){ sortOnCount(root->left); sortOnCount(root->right); printf("%d %d %d\n", (root)->data, (root)->count, (root)->fi); } }
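    Three details in the posted code explain the output: setdata() never stores data and never initializes count, the walk uses struct node** temp = root and then assigns *temp = (*temp)->left, which overwrites the caller's root pointer as it descends, and because the first test is data <= (*temp)->data, the final else (the duplicate branch) can never run. A hedged sketch of insert with a plain local cursor and a helper that fills in the node:

        /* Sketch: store the payload at creation time and walk with a local pointer. */
        static struct node *make_node(int data, int index)
        {
            struct node *n = malloc(sizeof *n);
            n->data = data;
            n->fi = index;
            n->count = 1;
            n->left = n->right = NULL;
            return n;
        }

        void insert(int data, struct node **root, int index)
        {
            struct node *cur = *root;
            while (1) {
                if (data < cur->data) {
                    if (cur->left) { cur = cur->left; }
                    else { cur->left = make_node(data, index); break; }
                } else if (data > cur->data) {
                    if (cur->right) { cur = cur->right; }
                    else { cur->right = make_node(data, index); break; }
                } else {
                    cur->count++;        /* duplicate: count it, insert nothing */
                    break;
                }
            }
        }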

    Read the article

  • What could cause a LabWindows/CVI C program to hate the number 2573?

    - by Adam Bard
    Using Windows So I'm reading from a binary file a list of unsigned int data values. The file contains a number of datasets listed sequentially. Here's the function to read a single dataset from a char* pointing to the start of it: function read_dataset(char* stream, t_dataset *dataset){ //...some init, including setting dataset->size; for(i=0;i<dataset->size;i++){ dataset->samples[i] = *((unsigned int *) stream); stream += sizeof(unsigned int); } //... } Where read_dataset in such a context as this: //... char buff[10000]; t_dataset* dataset = malloc( sizeof( *dataset) ); unsigned long offset = 0; for(i=0;i<number_of_datasets; i++){ fseek(fd_in, offset, SEEK_SET); if( (n = fread(buff, sizeof(char), sizeof(*dataset), fd_in)) != sizeof(*dataset) ){ break; } read_dataset(buff, *dataset); // Do something with dataset here. It's screwed up before this, I checked. offset += profileSize; } //... Everything goes swimmingly until my loop reads the number 2573. All of a sudden it starts spitting out random and huge numbers. For example, what should be ... 1831 2229 2406 2637 2609 2573 2523 2247 ... becomes ... 1831 2229 2406 2637 2609 0xDB00000A 0xC7000009 0xB2000008 ... If you think those hex numbers look suspicious, you're right. Turns out the hex values for the values that were changed are really familiar: 2573 -> 0xA0D 2523 -> 0x9DB 2247 -> 0x8C7 So apparently this number 2573 causes my stream pointer to gain a byte. This remains until the next dataset is loaded and parsed, and god forbid it contain a number 2573. I have checked a number of spots where this happens, and each one I've checked began on 2573. I admit I'm not so talented in the world of C. What could cause this is completely and entirely opaque to me.
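    2573 is 0x0A0D, so on a little-endian machine that unsigned int sits in the file as the bytes 0x0D 0x0A, i.e. CR LF; with the file opened by fopen(..., "r"), the Windows C runtime translates that pair into a single LF on read, and every later byte shifts by one, which matches the corrupted values shown. Opening every input stream in binary mode avoids the translation; a one-line sketch:

        /* Sketch: "rb" disables the text-mode CRLF -> LF translation (apply to fd_in as well). */
        fp = fopen("postings0.txt", "rb");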

    Read the article

  • Is it possible to gzip and upload this string to Amazon S3 without ever being written to disk?

    - by BigJoe714
    I know this is probably possible using Streams, but I wasn't sure the correct syntax. I would like to pass a string to the Save method and have it gzip the string and upload it to Amazon S3 without ever being written to disk. The current method inefficiently reads/writes to disk in between. The S3 PutObjectRequest has a constructor with InputStream input as an option. import java.io.*; import java.util.zip.GZIPOutputStream; import com.amazonaws.auth.PropertiesCredentials; import com.amazonaws.services.s3.AmazonS3; import com.amazonaws.services.s3.AmazonS3Client; import com.amazonaws.services.s3.model.PutObjectRequest; public class FileStore { public static void Save(String data) throws IOException { File file = File.createTempFile("filemaster-", ".htm"); file.deleteOnExit(); Writer writer = new OutputStreamWriter(new FileOutputStream(file)); writer.write(data); writer.flush(); writer.close(); String zippedFilename = gzipFile(file.getAbsolutePath()); File zippedFile = new File(zippedFilename); zippedFile.deleteOnExit(); AmazonS3 s3 = new AmazonS3Client(new PropertiesCredentials( new FileInputStream("AwsCredentials.properties"))); String bucketName = "mybucket"; String key = "test/" + zippedFile.getName(); s3.putObject(new PutObjectRequest(bucketName, key, zippedFile)); } public static String gzipFile(String filename) throws IOException { try { // Create the GZIP output stream String outFilename = filename + ".gz"; GZIPOutputStream out = new GZIPOutputStream(new FileOutputStream(outFilename)); // Open the input file FileInputStream in = new FileInputStream(filename); // Transfer bytes from the input file to the GZIP output stream byte[] buf = new byte[1024]; int len; while ((len = in.read(buf)) > 0) { out.write(buf, 0, len); } in.close(); // Complete the GZIP file out.finish(); out.close(); return outFilename; } catch (IOException e) { throw e; } } }
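    The PutObjectRequest overload that takes an InputStream plus ObjectMetadata makes both temp files unnecessary. A hedged sketch that gzips in memory and streams the bytes straight to S3, with bucket and key handling as in the question:

        // Sketch: gzip to a byte array, then upload from a ByteArrayInputStream.
        public static void save(AmazonS3 s3, String bucketName, String key, String data) throws IOException {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            GZIPOutputStream gzip = new GZIPOutputStream(bos);
            gzip.write(data.getBytes("UTF-8"));
            gzip.close();                                   // finishes and flushes the gzip stream
            byte[] zipped = bos.toByteArray();

            ObjectMetadata meta = new ObjectMetadata();
            meta.setContentLength(zipped.length);           // known length, so the SDK does not have to buffer
            meta.setContentEncoding("gzip");

            s3.putObject(new PutObjectRequest(bucketName, key, new ByteArrayInputStream(zipped), meta));
        }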

    Read the article

  • Read variable-length records from a buffer - weird memory issues

    - by bsg
    Hi, I'm trying to implement an i/o intensive quicksort (C++ qsort) on a very large dataset. In the interests of speed, I'd like to read in a chunk of data at a time into a buffer and then use qsort to sort it inside the buffer. (I am currently working with text files but would like to move to binary soon.) However, my data is composed of variable-length records, and qsort needs to be told the length of the record in order to sort. Is there any way to standardize this? The only thing I could think of was rather convoluted: my program currently reads from the buffer until it hits a linefeed character ('10' in ascii), transferring each character over to another array. When it finds a linefeed (the delimiter in the input file), it fills the number of spaces remaining in the buffer for that record (record size is set to 30) with null characters. This way, I should end up with a buffer full of fixed-size records to give qsort. I know there are several problems with my approach, one being that it's just clumsy, another that the record size might conceivably be larger than 30, but is generally much less. Is there a better way of doing this? As well, my current code doesn't even work. When I debug it, it seems to be transferring characters from one buffer to the other, but when I try to print out the buffer, it contains only the first record. Here is my code: FILE *fp; unsigned char *buff; unsigned char *realbuff; FILE *inputFiles[NUM_INPUT_FILES]; buff = (unsigned char *) malloc(2048); realbuff = (unsigned char *) malloc(NUM_RECORDS * RECORD_SIZE); fp = fopen("postings0.txt", "r"); if(fp) { fread(buff, 1, 2048, fp); /*for(int i=0; i <30; i++) cout << buff[i] <<endl;*/ int y=0; int recordcounter = 0; //cout << buff; for(int i=0;i <100; i++) { if(buff[i] != char(10)) { realbuff[y] = buff[i]; y++; recordcounter++; } else { if(recordcounter < RECORD_SIZE) for(int j=recordcounter; j < RECORD_SIZE;j++) { realbuff[y] = char(0); y++; } recordcounter = 0; } } cout << realbuff <<endl; cout << buff; } else cout << "sorry"; Thank you very much, bsg
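    Two observations, then a sketch. The printing symptom is expected: realbuff is padded with '\0', and streaming a char* stops at the first NUL, so only the first record appears. On the approach itself, a deliberate alternative to padding records to a fixed size for qsort is to read each record into a std::string and sort those with std::sort, which removes the fixed record size entirely; a hedged sketch:

        // Sketch (alternative to fixed-size records + qsort): variable-length records via std::string.
        #include <algorithm>
        #include <fstream>
        #include <iostream>
        #include <string>
        #include <vector>

        int main()
        {
            std::ifstream in("postings0.txt", std::ios::binary);
            std::vector<std::string> records;
            std::string line;
            while (records.size() < 1000 && std::getline(in, line))   // one in-memory chunk at a time
                records.push_back(line);

            std::sort(records.begin(), records.end());                // lexicographic sort of the chunk
            for (std::size_t i = 0; i < records.size(); ++i)
                std::cout << records[i] << '\n';
            return 0;
        }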

    Read the article

  • casting, converting, and input from textbox controls

    - by Matt
    Working on some .aspx.cs code and decided I would forget how to turn a textbox value into a useable integer or decimal. Be warned I'm pretty new to .asp. Wish I could say the same for c sharp. So the value going into my textbox (strawberryp_textbox) is "1" which I presume I can access with the .text property. Which I then parse into a int. The Error reads Format Exception was unhandled by user code. My other question is can I do operations on a session variable? protected void submit_order_button_Click(object sender, EventArgs e) { int strawberryp; int strawberrys; decimal money1 = decimal.Parse(moneybox1.Text); decimal money2 = decimal.Parse(moneybox2.Text); decimal money3 = decimal.Parse(moneybox3.Text); decimal money4 = decimal.Parse(moneybox4.Text); decimal money5 = decimal.Parse(moneybox5.Text); strawberryp = int.Parse(strawberryp_Textbox.Text); //THE PROBLEM RIGHT HERE! strawberrys = int.Parse(strawberrys_Textbox.Text); // Needs fixed int strawberryc = int.Parse(strawberryc_Textbox.Text); //fix int berryp = int.Parse(berryp_Textbox.Text); //fix int raspberryp = int.Parse(raspberryp_Textbox.Text); /fix decimal subtotal = (money1 * strawberryp) + (money2 * strawberrys) + (money3 * strawberryc) + (money4 * berryp) + (money5 * raspberryp); //check to see if you can multiply decimal and int to get a deciaml!! Session["passmysubtotal"] = subtotal; //TextBox2.Text; (strawberryp_Textbox.Text);//TextBox4.Text; add_my_order_button.Enabled = true; add_my_order_button.Visible = true; submit_order_button.Enabled = false; submit_order_button.Visible = false; strawberryp_Textbox.ReadOnly = false; strawberrys_Textbox.ReadOnly = false; strawberryc_Textbox.ReadOnly = false; berryp_Textbox.ReadOnly = false; raspberryp_Textbox.ReadOnly = false; Response.Redirect("reciept.aspx"); } Thanks for the help
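    The FormatException means one of the textbox values is not parseable (an empty box is enough); TryParse reports failure without throwing, so each box can be validated before any arithmetic. A hedged sketch for two of the boxes (errorLabel is a hypothetical control for feedback). On the second question: arithmetic on a value pulled from Session works once it is cast back, e.g. decimal total = (decimal)Session["passmysubtotal"];

        // Sketch: validate instead of letting Parse throw; repeat for the remaining boxes.
        int strawberryp;
        decimal money1;

        if (!int.TryParse(strawberryp_Textbox.Text.Trim(), out strawberryp))
        {
            errorLabel.Text = "Please enter a whole number of strawberry plants.";   // hypothetical label
            return;
        }
        if (!decimal.TryParse(moneybox1.Text.Trim(), out money1))
        {
            errorLabel.Text = "Please enter a valid price.";
            return;
        }

        decimal subtotal = money1 * strawberryp;   // decimal * int yields a decimal, so this multiplication is fine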

    Read the article

  • AD-DirectoryServices: .NET2.0 - Speaking architecture, approach and best practices... Suggestions?

    - by Will Marcouiller
    I've been mandated to write an application to migrate the Active Directory access models to another environment. Here's the context: I'm stuck with VB.NET 2005 and .NET Framework 2.0; The application must use the Windows authenticated user to manage AD; The objects I have to handle are Groups, Users and OrganizationalUnits; I intend to use the Façade design pattern to provider ease of use and a fully reusable code; I plan to write a factory for each of the objects managed (group, ou, user); The use of Attributes should be useful here, I guess; As everything is about the DirectoryEntry class when accessing the AD, it seems a good candidate for generic types. Obligatory features: User creates new OUs manually; User creates new group manually; User creates new user (these users are services accounts) manually; Application reads an XML file which contains the OUs, groups and users to create; Application informs the user about the OUs, groups and users that shall be created; User specifies the domain environment where to migrate the XML input file designated objects; User makes changes if needed, and launches the task operations; Application performs required by the XML input file operations against the underlying AD as specified by the user; Application informs the user upon completion. Linear features: User fetches OUs, groups, users; User changes OUs, groups, users; User deletes OUs, groups, users; The application logs AD entries and operations performed, plus errors and exceptions; Nice-to-have features: Application rollbacks operations on error or exception. I've been working for weeks now to get acquainted with the AD and the System.DirectoryServices assembly. But I don't seem to find a way to be fully satisfied with what I'm doing and always looking for better. I have studied Bret de Smet's Linq to AD on CodePlex, but then again, I can't use it as I'm stuck with .NET 2.0, so no Linq! But I've learned about Attributes, and seen that he's working with generic types as he codes a DirectorySource class to perform the operations for OUs, groups and users. I have been able to add groups to the AD; I have been able to add users to the AD; The created user is automatically disabled? I seem to get confused with the use of a LDAP path to add objects. For instance, one needs two instances of a System.DirectoryServices.DirectoryEntry class to add a group, for instance. Why this? Any suggestions? Thanks for any help, code sample, ideas, architural solution, everything!
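    On the two concrete snags listed: a child object is always created through the parent container's Children collection, which is why a parent DirectoryEntry plus the returned child entry are both involved; and a user created this way stays disabled until a password is set and userAccountControl is switched to a normal account. A hedged sketch (Imports System.DirectoryServices; the LDAP path, names and password are placeholders):

        ' Sketch only - paths, names and the password are placeholders.
        Dim parentOu As New DirectoryEntry("LDAP://OU=Services,DC=example,DC=local")

        ' New OU: the parent entry creates it; the returned entry represents it.
        Dim newOu As DirectoryEntry = parentOu.Children.Add("OU=Provisioning", "organizationalUnit")
        newOu.CommitChanges()

        ' New service account inside that OU.
        Dim newUser As DirectoryEntry = newOu.Children.Add("CN=svc-sample", "user")
        newUser.Properties("sAMAccountName").Value = "svc-sample"
        newUser.CommitChanges()

        ' Accounts start disabled: set a password, then mark the account as a normal, enabled one.
        newUser.Invoke("SetPassword", New Object() {"Pl@ceholder1"})
        newUser.Properties("userAccountControl").Value = &H200   ' ADS_UF_NORMAL_ACCOUNT
        newUser.CommitChanges()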

    Read the article
