Search Results

Search found 40479 results on 1620 pages for 'binary files'.


  • Java library to encode / decode AMF

    - by Ceilingfish
    Hi chaps, I currently have a Java server that talks to a Flash client by passing JSON-encoded data over a binary socket connection. Is there a way on either side to encode / decode packets as AMF instead of JSON? It seems to me that there should be some native support in Flash Player for doing this. All the implementations of AMF serialization I have found seem to be embedded inside an application framework. Similarly, does anyone know if it's possible to decode AMF packets independently of a connection implementation in Flash?

    Read the article

  • how to omit extra bits in a file?

    - by thinthinyu
    I want to omit an extra character in a txt file, e.g. ÿ 0111111110111101100011011010010001 — in this string I want to omit the extra character "ÿ", which appears when I save a binary string. The save function is as follows. Please help me.

    void LFSR_ECDlg::Onsave()
    {
        this->UpdateData();
        CFile bitstream;
        char strFilter[] = { "Stream Records (*.mpl)|*.mpl| (*.pis)|*.pis|All Files (*.*)|*.*||" };
        CFileDialog FileDlg(FALSE, ".mpl", NULL, 0, strFilter);
        if( FileDlg.DoModal() == IDOK )
        {
            if( bitstream.Open(FileDlg.GetFileName(), CFile::modeCreate | CFile::modeWrite) == FALSE )
                return;
            CArchive ar(&bitstream, CArchive::store);
            CString txt;
            txt = "";
            txt.Format("%s", m_B); //by ANO
            AfxMessageBox(txt);    //by ANO
            txt = m_B;             //by ANO
            ar << txt;             //by ANO
            ar.Close();
        }
        else
            return;
        bitstream.Close();
    }
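
    For what it's worth, a minimal Python sketch of the idea of writing the bit string as raw packed bytes, so that nothing but your own data ends up in the file; the padding convention and file name here are my own assumptions, not part of the question:

    # write_bits.py -- a sketch, not MFC: pack a '0'/'1' string into raw bytes
    def save_bitstring(bits: str, path: str) -> None:
        # pad on the right to a whole number of bytes, then convert to an integer and to bytes
        padded = bits.ljust((len(bits) + 7) // 8 * 8, "0")
        raw = int(padded, 2).to_bytes(len(padded) // 8, "big")
        with open(path, "wb") as f:     # binary mode: nothing extra is written
            f.write(raw)

    save_bitstring("0111111110111101100011011010010001", "stream.mpl")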

    Read the article

  • Symbolicating iPhone App Crash Reports

    - by Jasarien
    I'm looking to try and symbolicate my iPhone app's crash reports. I retrieved the crash reports from iTunes Connect. I have the application binary that I submitted to the App Store and I have the dSYM file that was generated as part of the build. I have all of these files together inside a single directory that is indexed by spotlight. What now? I have tried invoking: symbolicatecrash crashreport.crash myApp.app.dSYM and it just outputs the same text that is in the crash report to start with, not symbolicated. Am I doing something wrong? Any help would be greatly appreciated, thanks.

    Read the article

  • Problem running gems in OS X

    - by akarnid
    I'm running Snow Leopard, and installed a custom-built Ruby according to the guide here: http://hivelogic.com/articles/compiling-ruby-rubygems-and-rails-on-snow-leopard . My ruby binary lives in /usr/local/bin/ruby and my gem executable lives in /usr/local/bin/gem . My gem env looks like so:

    RUBY VERSION: 1.8.7 (2008-08-11 patchlevel 72) [universal-darwin10.0]
    - INSTALLATION DIRECTORY: /Library/Ruby/Gems/1.8
    - RUBY EXECUTABLE: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
    - EXECUTABLE DIRECTORY: /usr/bin

    I think I may have borked the install, since all actions taken on gems give the error: ERROR: While executing gem ... (Errno::EEXIST) File exists - /usr/local/bin/ruby How do you edit the environment variables for the gem environment? And for those of you on OS X using Ruby AND gems, what did you use to get yourself up and running? I'm thinking of just nuking everything and starting anew.

    Read the article

  • social networking website database management

    - by Anup Prakash
    This could be a very basic question for you, but for me it is very important. How do these websites (Orkut, Facebook, and others) store images on the server? Options: a) Keep all the images in the database, converted to bytecode/binary. b) Make a new folder for each user and save photographs according to their library name. c) Or something else I didn't guess yet. Please reply. Thanks for looking at my question, and many thanks for answering it.
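
    As a sketch of option (b) — files on disk, only a path in the database, which is roughly how large photo-serving sites avoid bloating their databases — here is a minimal Python illustration; the directory layout, function and variable names are my own invention:

    # sketch of option (b): keep files on disk, keep only the path in the database
    import os, uuid

    MEDIA_ROOT = "/var/www/media"            # hypothetical upload root

    def store_photo(user_id: int, image_bytes: bytes, ext: str = "jpg") -> str:
        user_dir = os.path.join(MEDIA_ROOT, "photos", str(user_id))
        os.makedirs(user_dir, exist_ok=True)          # one folder per user
        name = f"{uuid.uuid4().hex}.{ext}"            # avoid filename collisions
        path = os.path.join(user_dir, name)
        with open(path, "wb") as f:
            f.write(image_bytes)
        return os.path.relpath(path, MEDIA_ROOT)      # store this string in the DB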

    Read the article

  • Haskell optimization of a function looking for a bytestring terminator

    - by me2
    Profiling of some code showed that about 65% of the time I was inside the following code. What it does is use the Data.Binary.Get monad to walk through a bytestring looking for the terminator. If it detects 0xff, it checks if the next byte is 0x00. If it is, it drops the 0x00 and continues. If it is not 0x00, then it drops both bytes and the resulting list of bytes is converted to a bytestring and returned. Any obvious ways to optimize this? I can't see it.

    parseECS = f [] False
      where
        f acc ff = do
          b <- getWord8
          if ff
            then if b == 0x00
                   then f (0xff:acc) False
                   else return $ L.pack (reverse acc)
            else if b == 0xff
                   then f acc True
                   else f (b:acc) False
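
    This doesn't answer the Data.Binary.Get question directly, but one common direction is to stop reading byte-by-byte and instead jump from one 0xFF to the next. A rough Python sketch of that scanning approach, under the same 0xFF / 0xFF 0x00 rules described above (function and variable names are my own):

    def parse_ecs(data: bytes):
        """Collect bytes up to a 0xFF terminator, un-stuffing 0xFF 0x00 pairs.
        Returns (payload, bytes_consumed)."""
        out = bytearray()
        pos = 0
        while True:
            nxt = data.find(b"\xff", pos)        # jump straight to the next marker byte
            if nxt == -1:                        # no terminator found; caller decides what to do
                out += data[pos:]
                return bytes(out), len(data)
            out += data[pos:nxt]                 # copy the plain run in one slice
            if nxt + 1 < len(data) and data[nxt + 1] == 0x00:
                out.append(0xFF)                 # stuffed byte: keep 0xFF, drop the 0x00
                pos = nxt + 2
            else:
                return bytes(out), nxt + 2       # real terminator: drop both bytes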

    Read the article

  • Converting kernel image from ELF to PE

    - by Frank Miller
    I am using MSYS to build a home-brew kernel that I wrote under Linux. Linux uses ELF as its binary format and MSYS uses PE. I have the source set up to allow it to be booted by GRUB using the Multiboot spec. At the end of the build, I get some undefined symbols:

    init.o:init.S:(.text+0x14): undefined reference to `edata'
    main.o:main.c:(.text+0x121): undefined reference to `_alloca'
    main.o:main.c:(.text+0x126): undefined reference to `__main'
    ../../lib\libkern.a(mem.o):mem.c:(.text+0x242): undefined reference to `_end'
    ../../lib\libkern.a(mem.o):mem.c:(.text+0x323): undefined reference to `_end'

    These appear to be ELF-oriented symbols. If anyone can advise me on how these should be dealt with in the PE world, e.g. if there are equivalents, it would help me out a lot!

    Read the article

  • Gtk+ vs Qt language bindings

    - by Adam Smith
    Put shortly: for those familiar with language bindings in Qt and Gtk+ (e.g. Python and Ruby): are there any quality or capability differences? More background: I know C++ and Qt very well, and have minimal experience with Gtk+. I know C++ is not ideal for language bindings due to the lack of a well-defined ABI (application binary interface). I also read that Gtk+ was designed to be bound to other languages, so I wonder how this manifests itself in practice. Are the Gtk+ bindings better maintained, or do they work better in some way than their Qt counterparts? I am presently quite interested in the Go language, and they have started developing Gtk+ bindings, whereas C++ bindings are a long way off. It makes me wonder whether learning Gtk+ is worth it.

    Read the article

  • Algorithm for Source Control System?

    - by Michael Stum
    I need to write a simple source control system and wonder what algorithm I should use for file differences. I don't want to look into existing source code due to license concerns: I need to have it licensed under the MPL, so I can't look at any of the existing systems like CVS or Mercurial, as they are all GPL-licensed. Just to give some background, I only need some really simple functionality: binary files in a folder, no subfolders, and every file behaves like its own repository. No metadata except for some permissions. Overall really simple stuff; my single concern really is how to store only the differences of a file from revision to revision without wasting too much space, but also without being too inefficient (maybe store a full version every X changes, a bit like keyframes in video?).
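
    As a rough illustration of the delta idea (a sketch using only the Python standard library, not code taken from any GPL tool): store most revisions as the list of byte ranges that changed relative to the previous revision, plus a periodic full snapshot — the keyframe analogy above.

    import difflib

    def make_delta(old: bytes, new: bytes):
        """Return a compact delta: copy ranges from `old`, literal bytes otherwise."""
        ops = []
        for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=old, b=new).get_opcodes():
            if tag == "equal":
                ops.append(("copy", i1, i2))          # re-use bytes from the old revision
            else:                                     # 'replace', 'insert', 'delete'
                ops.append(("data", new[j1:j2]))      # store only the new bytes
        return ops

    def apply_delta(old: bytes, ops) -> bytes:
        out = bytearray()
        for op in ops:
            if op[0] == "copy":
                out += old[op[1]:op[2]]
            else:
                out += op[1]
        return bytes(out)

    # every Nth revision could be stored in full (a "keyframe"), the rest as deltas
    old, new = b"hello binary world", b"hello brave new binary world"
    assert apply_delta(old, make_delta(old, new)) == new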

    Read the article

  • Why is insertion into my tree faster on sorted input than random input?

    - by Juliet
    Now I've always heard binary search trees are faster to build from randomly selected data than ordered data, simply because ordered data requires explicit rebalancing to keep the tree height at a minimum. Recently I implemented an immutable treap, a special kind of binary search tree which uses randomization to keep itself relatively balanced. In contrast to what I expected, I found I can consistently build a treap about 2x faster and generally better balanced from ordered data than unordered data -- and I have no idea why. Here's my treap implementation: http://pastebin.com/VAfSJRwZ And here's a test program:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Diagnostics;

    namespace ConsoleApplication1
    {
        class Program
        {
            static Random rnd = new Random();
            const int ITERATION_COUNT = 20;

            static void Main(string[] args)
            {
                List<double> rndTimes = new List<double>();
                List<double> orderedTimes = new List<double>();

                rndTimes.Add(TimeIt(50, RandomInsert));
                rndTimes.Add(TimeIt(100, RandomInsert));
                rndTimes.Add(TimeIt(200, RandomInsert));
                rndTimes.Add(TimeIt(400, RandomInsert));
                rndTimes.Add(TimeIt(800, RandomInsert));
                rndTimes.Add(TimeIt(1000, RandomInsert));
                rndTimes.Add(TimeIt(2000, RandomInsert));
                rndTimes.Add(TimeIt(4000, RandomInsert));
                rndTimes.Add(TimeIt(8000, RandomInsert));
                rndTimes.Add(TimeIt(16000, RandomInsert));
                rndTimes.Add(TimeIt(32000, RandomInsert));
                rndTimes.Add(TimeIt(64000, RandomInsert));
                rndTimes.Add(TimeIt(128000, RandomInsert));
                string rndTimesAsString = string.Join("\n", rndTimes.Select(x => x.ToString()).ToArray());

                orderedTimes.Add(TimeIt(50, OrderedInsert));
                orderedTimes.Add(TimeIt(100, OrderedInsert));
                orderedTimes.Add(TimeIt(200, OrderedInsert));
                orderedTimes.Add(TimeIt(400, OrderedInsert));
                orderedTimes.Add(TimeIt(800, OrderedInsert));
                orderedTimes.Add(TimeIt(1000, OrderedInsert));
                orderedTimes.Add(TimeIt(2000, OrderedInsert));
                orderedTimes.Add(TimeIt(4000, OrderedInsert));
                orderedTimes.Add(TimeIt(8000, OrderedInsert));
                orderedTimes.Add(TimeIt(16000, OrderedInsert));
                orderedTimes.Add(TimeIt(32000, OrderedInsert));
                orderedTimes.Add(TimeIt(64000, OrderedInsert));
                orderedTimes.Add(TimeIt(128000, OrderedInsert));
                string orderedTimesAsString = string.Join("\n", orderedTimes.Select(x => x.ToString()).ToArray());

                Console.WriteLine("Done");
            }

            static double TimeIt(int insertCount, Action<int> f)
            {
                Console.WriteLine("TimeIt({0}, {1})", insertCount, f.Method.Name);
                List<double> times = new List<double>();
                for (int i = 0; i < ITERATION_COUNT; i++)
                {
                    Stopwatch sw = Stopwatch.StartNew();
                    f(insertCount);
                    sw.Stop();
                    times.Add(sw.Elapsed.TotalMilliseconds);
                }
                return times.Average();
            }

            static void RandomInsert(int insertCount)
            {
                Treap<double> tree = new Treap<double>((x, y) => x.CompareTo(y));
                for (int i = 0; i < insertCount; i++)
                {
                    tree = tree.Insert(rnd.NextDouble());
                }
            }

            static void OrderedInsert(int insertCount)
            {
                Treap<double> tree = new Treap<double>((x, y) => x.CompareTo(y));
                for (int i = 0; i < insertCount; i++)
                {
                    tree = tree.Insert(i + rnd.NextDouble());
                }
            }
        }
    }

    And here's a chart comparing random and ordered insertion times in milliseconds:

    Insertions    Random         Ordered        RandomTime / OrderedTime
    50            1.031665       0.261585       3.94
    100           0.544345       1.377155       0.4
    200           1.268320       0.734570       1.73
    400           2.765555       1.639150       1.69
    800           6.089700       3.558350       1.71
    1000          7.855150       4.704190       1.67
    2000          17.852000      12.554065      1.42
    4000          40.157340      22.474445      1.79
    8000          88.375430      48.364265      1.83
    16000         197.524000     109.082200     1.81
    32000         459.277050     238.154405     1.93
    64000         1055.508875    512.020310     2.06
    128000        2481.694230    1107.980425    2.24

    I don't see anything in the code which makes ordered input asymptotically faster than unordered input, so I'm at a loss to explain the difference. Why is it so much faster to build a treap from ordered input than random input?
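
    For anyone who wants to poke at this outside the .NET harness, here is a small self-contained Python sketch (my own treap, not the pastebin one) that counts key comparisons while building a treap from ordered and from random keys. It is only a diagnostic for checking whether the ordered case really does less work per insert, not an explanation:

    import random

    class Node:
        __slots__ = ("key", "prio", "left", "right")
        def __init__(self, key):
            self.key, self.prio = key, random.random()
            self.left = self.right = None

    comparisons = 0

    def insert(node, key):
        """Standard treap insert: BST order on key, min-heap on random priority."""
        global comparisons
        if node is None:
            return Node(key)
        comparisons += 1
        if key < node.key:
            node.left = insert(node.left, key)
            if node.left.prio < node.prio:       # heap property violated -> rotate right
                child, node.left = node.left, node.left.right
                child.right = node
                node = child
        else:
            node.right = insert(node.right, key)
            if node.right.prio < node.prio:      # heap property violated -> rotate left
                child, node.right = node.right, node.right.left
                child.left = node
                node = child
        return node

    def build(keys):
        global comparisons
        comparisons = 0
        root = None
        for k in keys:
            root = insert(root, k)
        return comparisons

    n = 32000
    print("ordered:", build(i + random.random() for i in range(n)))
    print("random :", build(random.random() for _ in range(n)))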

    Read the article

  • Rewrite probabilities as boolean algebra

    - by Magsol
    I'm given three binary random variables: X, Y, and Z. I'm also given the following: P(Z | X) P(Z | Y) P(X) P(Y) I'm then supposed to determine whether or not it is possible to find P(Z | Y, X). I've tried rewriting the solution in the form of Bayes' Theorem and have gotten nowhere. Given that these are boolean random variables, is it possible to rewrite the system in terms of boolean algebra? I understand that the conditionals can be mapped to boolean implications (x -> y, or !x + y), but I'm unsure how this would translate in terms of the overall problem I'm trying to solve. (yes, this is a homework problem, but here I'm much more interested in how to formally solve this problem than what the solution is...I also figured this question would be entirely too simple for MathOverflow)
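
    One way to see what information is missing (my own working, not a full solution): expanding the given conditionals with the law of total probability over the other variable gives

    P(Z \mid X) = P(Z \mid X, Y)\, P(Y \mid X) + P(Z \mid X, \neg Y)\, P(\neg Y \mid X)
    P(Z \mid Y) = P(Z \mid X, Y)\, P(X \mid Y) + P(Z \mid \neg X, Y)\, P(\neg X \mid Y)

    Both right-hand sides involve conditionals that are not among the four given quantities (and P(Y | X), P(X | Y) are only fixed by the givens if X and Y are additionally assumed independent), so the system is underdetermined: without extra structural assumptions, P(Z | X, Y) cannot be pinned down, and translating the conditionals into boolean implications runs into the same missing joint information.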

    Read the article

  • read first 1kb of a blob from oracle

    - by Angus
    Hi, I wish to extract just the first 1024 bytes of a stored blob and not the whole file. The reason for this is I want to just extract the metadata from a file as quickly as possible without having to select the whole blob. I understand the following: select dbms_lob.substr(file_blob, 16,1) from file_upload where file_upload_id=504; which returns it as hex. How may I do this so it returns it in binary data without selecting the whole blob? Thanks in advance.
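
    For what it's worth, a sketch of doing this from client code rather than SQL*Plus, assuming the cx_Oracle driver (table and column names taken from the question, credentials made up); fetched through a driver, dbms_lob.substr comes back as raw bytes rather than the hex rendering the console shows:

    # sketch: fetch only the first 1 KB of the BLOB as raw bytes (cx_Oracle assumed)
    import cx_Oracle

    conn = cx_Oracle.connect("scott", "tiger", "localhost/XE")   # hypothetical credentials
    cur = conn.cursor()
    cur.execute(
        """SELECT dbms_lob.substr(file_blob, 1024, 1)
             FROM file_upload
            WHERE file_upload_id = :id""",
        id=504,
    )
    header = cur.fetchone()[0]    # bytes, not a hex string
    print(len(header), header[:16])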

    Read the article

  • How many layers are between my program and the hardware?

    - by sub
    I somehow have the feeling that modern systems, including runtime libraries, this exception handler and that built-in debugger, build up more and more layers between my (C++) programs and the CPU/rest of the hardware. I'm thinking of something like this:

    1 + 2
    OS top layer
    Runtime library/helper/error handler
    a hell lot of DLL modules
    OS kernel layer
    Do you really want to run 1 + 2? - Windows popup (don't take this seriously)
    OS kernel layer
    Hardware abstraction
    Hardware
    Go through at least 100 miles of circuits
    Eventually arrive at the CPU
    ADD 1, 2
    Go all the way back to my program

    Nearly all technical things are simply wrong and in some random order, but you get my point, right? How much longer/shorter is this chain when I run a C++ program that calculates 1 + 2 at runtime on Windows? How about when I do this in an interpreter? (Python|Ruby|PHP) Is this chain really as dramatic in reality? Does Windows really try "not to stand in the way"? e.g.: a direct connection, my binary <-> hardware?

    Read the article

  • Convert Decimal to ASCII

    - by Dan Snyder
    I'm having difficulty using reinterpret_cast. Before I show you my code, I'll let you know what I'm trying to do. I'm trying to get a filename from a vector full of data being used by a MIPS I processor I designed. Basically what I do is compile a binary from a test program for my processor, dump all the hex values from the binary into a vector in my C++ program, convert all of those hex values to decimal integers, and store them in a DataMemory vector, which is the data memory unit for my processor. I also have instruction memory. So when my processor runs a SYSCALL instruction such as "Open File", my C++ operating system emulator receives a pointer to the beginning of the filename in my data memory. Keep in mind that data memory is full of ints, strings, globals, locals, all sorts of stuff. When I'm told where the filename starts I do the following: convert the whole decimal integer element that is being pointed to into its ASCII character representation, then search from left to right to see if the string terminates; if not, load each character consecutively into a "filename" string. Do this until termination of the string in memory and then store filename in a table. My difficulty is generating filename from my memory. Here is an example of what I'm trying to do:

    Index  Vector    NewVector  ASCII   filename
    0      240faef0  128123792  'abc7'  'a'
    0      240faef0  128123792  'abc7'  'ab'
    0      240faef0  128123792  'abc7'  'abc'
    0      240faef0  128123792  'abc7'  'abc7'
    1      1234567a  243225     'k2s0'  'abc7k'
    1      1234567a  243225     'k2s0'  'abc7k2'
    1      1234567a  243225     'k2s0'  'abc7k2s'
    //EXIT LOOP//
    1      1234567a  243225     'k2s0'  'abc7k2s'

    Here is the code that I've written so far to get filename (I'm just applying this to element 1000 of my DataMemory vector to test functionality; 1000 is arbitrary):

    int i = 0;
    int step = 1000; //top->a0;
    string filename;
    char *temp = reinterpret_cast<char*>( DataMemory[1000] ); //convert to char
    cout << "a0:" << top->a0 << endl;                  //pointer supplied
    cout << "Data:" << DataMemory[top->a0] << endl;    //my vector at pointed-to location
    cout << "Data(1000):" << DataMemory[1000] << endl; //the element I'm testing
    cout << "Characters:" << &temp << endl;            //my temporary char array

    while(&temp[i]!=0)
    {
        filename+=temp[i]; //add most recent non-terminated character to string
        i++;
        if(i==4) //when 4 characters have been added..
        {
            i=0;
            step+=1; //restart loop at the next element in DataMemory
            temp = reinterpret_cast<char*>( DataMemory[step] );
        }
    }
    cout << "Filename:" << filename << endl;

    So the issue is that when I do the conversion of my decimal element to a char array, I assume that 8 hex digits will give me 4 characters. Why isn't this the case? Here is my output:

    a0:0
    Data:0
    Data(1000):4428576
    Characters:0x7fff5fbff128
    Segmentation fault
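
    Not a drop-in fix, but a note plus an illustration: the crash most likely comes from reinterpret_cast<char*>(DataMemory[1000]) turning the stored integer value (4428576) into a pointer and then dereferencing it; reinterpreting the element's bytes would go through its address instead (something like &DataMemory[1000]). Separately, here is a small Python sketch of the decoding the question describes — 8 hex digits per word, exactly 4 ASCII characters per word — with made-up word values:

    # sketch of the intended decoding: each 32-bit word holds 4 ASCII bytes
    import struct

    words = [0x68656c6c, 0x6f2e7478]              # hypothetical words spelling "hello.tx"

    def read_string(words, start):
        name = bytearray()
        for w in words[start:]:
            chunk = struct.pack(">I", w)          # 8 hex digits -> exactly 4 bytes
            for b in chunk:
                if b == 0:                        # NUL terminates the stored string
                    return name.decode("ascii")
                name.append(b)
        return name.decode("ascii")

    print(read_string(words, 0))                  # -> "hello.tx"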

    Read the article

  • Why does my PDF ask for a password after being retrieved from Visual SourceSafe?

    - by Schnapple
    PREFACE: Yes we're moving away from VSS in the next few months. One of my web projects contains, as one of its files, a PDF. The PDF on our QA site is being pulled from VSS. A QA tester recently told me he's being prompted for a password when he tries to open it. VSS says the file I have on disk is different than the one it has, so I updated it, but afterwards it's still being shown as different. So basically VSS is mangling my PDF and the results are so wobbly that Adobe Acrobat Reader is confused and thinks it has a password. I've tried adding it as Auto-Detect and as Binary. Same results. Why does my PDF ask for a password after being retrieved from Visual SourceSafe and how can I prevent it?

    Read the article

  • What is a simple way to combine two Emacs major modes, or to change an existing mode?

    - by Winston C. Yang
    In Emacs, I'm working with a file that is a hybrid of two languages. Question 1: Is there a simple way to write a major mode file that combines two major modes? Details: The language is called "brew" (not the "BREW" of "Binary Runtime Environment for Wireless"). brew is made up of the languages R and LaTeX, whose modes are R-mode and latex-mode. The R code appears between the tags <% and %>. Everything else is LaTeX. How can I write a brew-mode.el file? (Or is one already available?) One idea, which I got from this posting, is to use LaTeX mode and treat code of the form <% ... %> as a comment. Question 2: How do you change the .emacs file or the latex.el file to have LaTeX mode treat code of the form <% ... %> as a comment?

    Read the article

  • Clipboard Debugging

    - by Jake Pearson
    In the olden times of .NET 1.1, I could use the SoapFormatter to find out exactly what was getting serialized when I copied an object into the clipboard. Fast forward to 2010, and I tried to do the same trick. It turns out the SoapFormatter does not support generics. Is there an alternative way to find out exactly what binary objects are serialized into the clipboard? For example, let's say I have this class: public class Foo { public List<Goo> Children; } If I send an instance of it to the clipboard, I would like to take a look at what is in the clipboard to see whether its Children list was included or not. Update: I was finally able to find the over-copied field with the debugger. Visual Studio did its job.

    Read the article

  • Array.BinarySearch does not find item using IComparable

    - by Sir Psycho
    If a binary search requires an array to be sorted beforehand, why does the following code work?

    string[] strings = new[] { "z", "a", "y", "e", "v", "u" };
    int pos = Array.BinarySearch(strings, "Y", StringComparer.OrdinalIgnoreCase);
    Console.WriteLine(pos);

    And why does this code return -1?

    public class Person : IComparable<Person>
    {
        public string Name { get; set; }
        public int Age { get; set; }
        public int CompareTo(Person other)
        {
            return this.Age.CompareTo(other.Age) + this.Name.CompareTo(other.Name);
        }
    }

    var people = new[]
    {
        new Person { Age=5,Name="Tom"},
        new Person { Age=1,Name="Tom"},
        new Person { Age=2,Name="Tom"},
        new Person { Age=1,Name="John"},
        new Person { Age=1,Name="Bob"},
    };

    var s = new Person { Age = 1, Name = "Tom" };

    // returns -1
    Console.WriteLine( Array.BinarySearch(people, s) );
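
    A language-neutral illustration of the second snippet's problem (a Python sketch using the same ages and names): a comparator that adds the two component comparisons is not a consistent total order, so items that differ can compare as equal and a binary search can legitimately miss; comparing (age, name) lexicographically does not have this problem. Note too that the people array above is not sorted with that comparator before Array.BinarySearch is called, and binary search requires sorted input.

    from bisect import bisect_left

    people = [(5, "Tom"), (1, "Tom"), (2, "Tom"), (1, "John"), (1, "Bob")]

    def summed_cmp(a, b):
        # mirrors Age.CompareTo(...) + Name.CompareTo(...): the parts can cancel out,
        # e.g. (1, "Tom") vs (2, "John") gives -1 + 1 == 0 although the items differ
        return (a[0] > b[0]) - (a[0] < b[0]) + (a[1] > b[1]) - (a[1] < b[1])

    print(summed_cmp((1, "Tom"), (2, "John")))     # 0 -> "equal", which breaks the ordering

    # the lexicographic order is consistent, and searching it works as expected
    people.sort()                                  # tuples compare (age, name) lexicographically
    print(bisect_left(people, (1, "Tom")))         # index of the matching entry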

    Read the article

  • NHibernate - is property lazy loading possible?

    - by Ben
    I've got some binary data that I store, and was going to separate this out into a separate table so it could be lazy loaded. However, I then came across this post by Ayende (http://ayende.com/Blog/archive/2010/01/27/nhibernate-new-feature-lazy-properties.aspx) which suggests that property lazy loading is now possible. I have added the lazy="true" attribute to my property mapping, but the field is still loaded from the database (I am using a simple text field to test). My query: return _session.CreateQuery("from Product") .SetMaxResults(1) .UniqueResult<Product>(); Mapping: <property name="Description" type="string" column="FullDescription" lazy="true"/> Has anyone been able to get this working? Personally I prefer this approach to having to add another table to my database.

    Read the article

  • Querying and ordering results of a database in grails using transient fields

    - by Azder
    I'm trying to display paged data out of a grails domain object. For example: I have a domain object Employee with the properties firstName and lastName which are transient, and when invoking their setter/getter methods they encrypt/decrypt the data. The data is saved in the database in encrypted binary format, thus not sortable by those fields. And yet again, not sortable by transient ones either, as noted in: http://www.grails.org/GSP+Tag+-+sortableColumn . So now I'm trying to find a way to use the transients in a way similar to: Employee.withCriteria( max: 10, offset: 30 ){ order 'lastName', 'asc' order 'firstName', 'asc' } The class is: class Employee { byte[] encryptedFirstName byte[] encryptedLastName static transients = [ 'firstName', 'lastName' ] String getFirstName(){ decrypt("encryptedFirstName") } void setFirstName(String item){ encrypt("encryptedFirstName",item) } String getLastName(){ decrypt("encryptedLastName") } void setLastName(String item){ encrypt("encryptedLastName",item) } }

    Read the article

  • How can I have a serializable struct that wraps itself as an Int32 implicitly in C#?

    - by firoso
    Long story short, I have a struct (see below) that contains exactly one field: private int value; I've also implemented implicit conversion operators: public static implicit operator int(Outlet val) { return val.value; } public static implicit operator Outlet(int val) { return new Outlet(val); } I've implemented all of the following : IComparable, IComparable<Cart>, IComparable<int>, IConvertible, IEquatable<Cart>, IEquatable<int>, IFormattable I'm at a point where I really have no clue why, but whenever I serialize this object, I get no value. For instance, with XmlSerialization: <Outlet /> Also, I'm not solely concerned about XmlSerialization, I'm concerned about ALL serialization (binary for instance) How can I ensure that this serializes properly? NOTE: I did this because mapping an int,int dictionary seemed rather poorly typed to me when explicit objects with validation behavior were desired.

    Read the article

  • SQL: ALTER COLUMN to shorter CHAR(n) type

    - by Rising Star
    I'm working with MS SQL SERVER 2003. I want to change a column in one of my tables to have fewer characters in the entries. This is identical to this question: http://stackoverflow.com/questions/2281336/altering-a-table-column-to-accept-more-characters except for the fact that I want fewer characters instead of more. I have a column in one of my tables that holds nine-digit entries. A developer previously working on the table mistakenly set the column to hold ten-digit entries. I need to change the type from CHAR(10) to CHAR(9). Following the instructions from the discussion linked above, I wrote the statement ALTER TABLE [MY_TABLE] ALTER COLUMN [MY_COLUMN] CHAR(9); This returns the error message "String or binary data would be truncated". I see that my nine-digit strings have a space appended to make them ten digits. How do I tell SQL Server to discard the extra space and convert my column to a CHAR(9) type?

    Read the article

  • How to package up a Python 3 script with modules

    - by Brian
    I created a small script that uses a few 3rd-party modules. I'm not sure how to distribute it. I tried PyInstaller, but that doesn't seem to work: it can't find the modules. When I give the binary to a co-worker, it says it is looking for files in my home directory (not his) and dies. I have found that PyInstaller is not able to find most modules. I am running Python 3 and installed PyInstaller with pip from Python 2; it did not work trying to use pip from Python 3. When I give it a path to my modules, it complains that they are Python 3 modules. Just looking for some clarification. Ultimately, I'd like to run this on a Linux or OS X box where Python and my modules probably won't be installed. I just started Python yesterday and have a ton to learn.
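
    If bundling a standalone binary keeps fighting back, one conventional alternative is to package the script with setuptools and let pip pull in the 3rd-party modules on the target machine (Python has to exist there, unlike the PyInstaller route). A minimal sketch, with the project and module names invented:

    # setup.py -- hypothetical layout: a single module mytool.py exposing main()
    from setuptools import setup

    setup(
        name="mytool",
        version="0.1",
        py_modules=["mytool"],
        python_requires=">=3.0",
        install_requires=[            # the 3rd-party modules the script imports
            "requests",               # placeholder name -- substitute the real ones
        ],
        entry_points={
            "console_scripts": ["mytool = mytool:main"],   # installs a `mytool` command
        },
    )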

    Read the article

  • Why does this crash?

    - by Adam Driscoll
    I've been banging my head... I can't pretend to be a C++ guy...

    TCHAR * pszUserName = userName.GetBuffer();
    SID sid;
    SecureZeroMemory(&sid, sizeof(sid));
    SID_NAME_USE sidNameUse;
    DWORD cbSid = sizeof(sid);
    pLog->Log(_T("Getting the SID for user [%s]"), 1, userName);
    if (!LookupAccountName(NULL, (LPSTR)pszUserName, &sid, &cbSid, NULL, 0, &sidNameUse))
    {
        pLog->Log(_T("Failed to look up user SID. Error code: %d"), 1, GetLastError());
        return _T("");
    }
    pLog->Log(_T("Converting binary SID to string SID"));

    The message 'Getting the SID for user [x]' is written, but then the app crashes. I'm assuming it was the LookupAccountName call. EDIT: Whoops, userName is an MFC CString.

    Read the article

  • How to add external library properly in Eclipse?

    - by Peter
    So today I downloaded the Apache Commons Lang library (binary, zip format). I extracted it to the C:\eclipse\commons-lang-2.5 folder. There are a commons-lang-2.5.jar, a commons-lang-2.5-javadoc.jar, and a commons-lang-2.5-sources.jar inside, and a folder of HTML Javadoc. I started Eclipse, added commons-lang-2.5.jar, and set its source and Javadoc as shown in the screenshot below. My question is: is there a convenient or standard way to add external libraries? Or am I actually doing the right thing?

    Read the article
