Search Results

Search found 14506 results on 581 pages for 'document scanner'.


  • Exemplars of large document-centric applications with COM/XPCOM/.NET interfaces.

    - by Warren P
    I am looking for exemplars (design examples) showing the use of interfaces (aka 'protocols' for you Smalltalkers) to design a document management architecture in a large word processor, spreadsheet, vector graphics or publishing package, or other office-productivity (non-database) application. Any open source project would be ideal, and the implementation language is unimportant since I am looking for design examples; however, an object-oriented language with support for "interfaces" is a must. I know at least a dozen languages, and I'm willing to study any application's source. "Interface" could loosely mean XPCOM or COM interfaces, .NET interfaces, or even pure-virtual (virtual + abstract) base classes in OOP languages that lack the ability to declare an interface distinct from a class.

    I am mostly looking for a robust, thorough and flexible implementation of a document (IDocument), various document views (IDocumentView), and whatever operations make sense in that case. I am particularly interested in cases where the product in question is a real-world product. For example, if anybody familiar with OpenOffice can tell me whether the code contains a good sample design, that might be the best case, because it is a widely used real-world design with millions of users, rather than a textbook example, which is minimal and contrived. I am looking for design documentation that outlines the design of the interfaces for such an application.

    I know that the Mozilla platform uses XPCOM, and its design is heavily "interface" oriented, but I am looking more for a "word processor" or "spreadsheet" type of document design rather than a web browser. I am particularly interested in the interfaces used to access data and metadata, such as markup (attributes like bold, italics and font size), and the ability to search and look up named entities within a document.
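    To illustrate the shape of the design I am after, here is a minimal, hypothetical sketch in Java (the IDocument/IDocumentView names and methods below are my own invention for illustration, not taken from OpenOffice, Mozilla or any other real product):

        import java.util.List;

        // A document exposes its content, markup attributes and named entities.
        interface IDocument {
            String getText(int start, int end);                    // raw text access
            void applyAttribute(int start, int end, String attr);  // e.g. "bold", "italic", "font-size:12"
            List<Integer> findEntity(String name);                 // offsets of a named entity
        }

        // A view renders a document and tracks a selection; a document may have many views.
        interface IDocumentView {
            IDocument getDocument();
            void setSelection(int start, int end);
            void refresh();                                        // re-render after the model changes
        }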

    Read the article

  • Scanner class is skipping lines

    - by user2403304
    I'm new to programming and I'm having a problem with the Scanner class. This code is in a loop, and when the loop comes around the second, third, or however many times I have it set to, it skips the first title input. I need help please - why is it skipping my title input at the beginning?

        System.out.println("Title:");
        list[i].title = keyboard.nextLine();
        System.out.println("Author:");
        list[i].author = keyboard.nextLine();
        System.out.println("Album:");
        list[i].album = keyboard.nextLine();
        System.out.println("Filename:");
        list[i].filename = keyboard.nextLine();
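    (The likely cause, assuming something like keyboard.nextInt() runs earlier in the loop - it isn't shown in the question - is that nextInt() leaves the trailing newline in the input buffer, so the first nextLine() consumes that leftover newline and returns an empty string. A sketch of the common fix:)

        int count = keyboard.nextInt();       // hypothetical earlier read, not shown above
        keyboard.nextLine();                  // consume the newline left behind by nextInt()
        System.out.println("Title:");
        list[i].title = keyboard.nextLine();  // now reads the actual title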

    Read the article

  • TCP Scanner Python MultiThreaded

    - by user1473508
    I'm trying to build a small TCP scanner for a netmask. The code is as follows:

        import socket, sys, re, struct
        from socket import *

        host = sys.argv[1]

        def RunScanner(host):
            s = socket(AF_INET, SOCK_STREAM)
            s.settimeout(0.1)   # set the timeout before connect(), or it never applies to the connect call
            s.connect((host, 80))
            String = "GET / HTTP/1.0"
            s.send(String)
            data = s.recv(1024)
            if data:
                print "host: %s have port 80 open" % (host)

        Slash = re.search("/", str(host))
        if Slash:
            netR, _, Wholemask = host.partition('/')
            Wholemask = int(Wholemask)
            netR = struct.unpack("!L", inet_aton(netR))[0]
            for host in (inet_ntoa(struct.pack("!L", netR + n)) for n in range(0, 1 << 32 - Wholemask)):
                try:
                    print "Doing host", host
                    RunScanner(host)
                except:
                    pass
        else:
            RunScanner(host)

    To launch: python script.py 10.50.23.0/24. The problem I'm having is that even with a ridiculously low settimeout value, it takes ages to cover the 255 IP addresses, since most of them are not assigned to a machine. How can I make a much faster scanner that won't get stuck when a port is closed? Multithreading? Thanks!
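    One way to stop each dead host from blocking the whole scan is to give the connect call its own short timeout and run the attempts concurrently. A minimal sketch of that idea - shown here in Java with a fixed thread pool, since the same pattern applies in Python via threads or concurrent.futures; the 10.50.23.x range is the example from the question:

        import java.net.InetSocketAddress;
        import java.net.Socket;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class TcpScan {
            public static void main(String[] args) {
                ExecutorService pool = Executors.newFixedThreadPool(64); // probe many hosts at once
                for (int n = 1; n < 255; n++) {
                    final String host = "10.50.23." + n;
                    pool.submit(() -> {
                        try (Socket s = new Socket()) {
                            // connect() takes its own timeout, so unreachable hosts fail fast
                            s.connect(new InetSocketAddress(host, 80), 200);
                            System.out.println("host: " + host + " has port 80 open");
                        } catch (Exception ignored) {
                            // closed/filtered port or no machine at this address
                        }
                    });
                }
                pool.shutdown();
            }
        }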

    Read the article

  • Oracle IRM video demonstration of separating duties of document security

    - by Simon Thorpe
    One thing an Information Rights Management technology should do well is separate out three main areas of responsibility:

    1. The business process of defining and controlling the classifications to which content is secured, and the definition of the roles that employees, customers, partners and contractors have when accessing secured content.
    2. Allowing IT to manage the server and authorize the creation of new classifications to meet business needs, yet once a classification has been created and handed off to the business, IT no longer plays a role in its ongoing management.
    3. Empowering the business to take ownership of the classifications to which its own content is secured. For example, an employee who is leading an acquisition project should be responsible for defining who has access to confidential project documents. This person should be able to manage the rights users have in the classification and also be the point of contact for those wishing to gain rights.

    Oracle IRM has had this core model at the heart of its design since its creation in the late 1990s. Due in part to this important separation of rights from the documents themselves, Oracle IRM places the right functionality within the right parts of the business. For example, some IRM technologies allow the end user to make decisions about what users can print, edit or save in a secured document. In practice this results in a wide variety of content secured with a plethora of options that don't conform to any policy. With Oracle IRM, users choose from a list of classifications against which they have been given the ability to secure information. Their role in the classification was given to them by the business owner of the classification, yet the definition of the role resides within the realm of corporate security, who own the overall business classification policies. It is this type of design and philosophy in Oracle IRM that makes it an enterprise solution that scales beyond a few users and a few secured documents to hundreds of thousands of users and millions of documents.

    The following video shows how Oracle IRM 11g, the market-leading document security solution, lets the security organization create and manage classifications whilst the business owns and manages them. If you want to experience using Oracle IRM secured content and the effects of the different roles users have, why not sign up for our free demonstration.

    Read the article

  • Mustek 1200 CP driver SFC4.SYS bluescreens with BAD_POOL_HEADER

    - by Slink84
    I have Windows XP SP2. Recently it started bluescreening right after startup with a 'BAD_POOL_HEADER' (0x00000019) error caused by the SFC4.SYS driver. After googling for a while, I found out that this is the driver for my Mustek 1200 CP scanner. Booting in safe mode and uninstalling it solved the problem... and created another one: now I can't use my scanner. The weird thing is that it had been working for a while on this PC without any problems. It all started suddenly, and I can't remember installing anything that might have affected it. Reverting to several earlier system restore points didn't help. I've tried re-installing the driver from the Mustek website, in case my copy got corrupted or infected by a virus, but it did not help - it still bluescreens. Also, I've installed Avast and scanned my PC - no viruses were found. If anyone has had this problem before or has an idea what might have caused it, please help.

    EDIT: @Michael Todd: ...try installing on another PC... I've installed it on my friend's PC. He has the same OS version, with the latest updates, just like mine (he wasn't too happy, even after I assured him that it is easy to fix by uninstalling that driver :] ). It worked fine - no bluescreens whatsoever. So I think I've narrowed it down to either BIOS settings or some wicked driver conflict. The next thing I'm going to try is to re-install XP, or install Windows 7. I'm not too happy about the prospect of mucking about with BIOS settings...

    Read the article

  • Making document storage in SharePoint a breeze (leave the Web UI behind)

    - by deadlydog
    Hey everyone, I know many of us regularly use SharePoint for document storage in order to make documents available to several people, have them version controlled, etc. Doing this through the Web UI can be a real headache, especially when you have multiple documents to modify or upload, or when IE isn't your default browser. Luckily, we can access a SharePoint library like a regular network drive. Open SharePoint in Internet Explorer (other browsers don't support the Open with Explorer functionality), navigate to wherever your documents are stored, choose the Library tab, and then click Open with Explorer. This opens the document library in Explorer, and you can interact with the documents just as if they were on any other network drive. This makes uploading large numbers of documents or directory structures super easy (a simple copy-paste), and makes modifying your files nice and easy. As an added bonus, you can drag that location from the address bar in Explorer to the Favorites menu so that it's always easily accessible, and you can leave the SharePoint Web UI behind completely for modifying your documents. Just click on the new favorite to go straight to your documents. You can even map this folder location as a network drive if you want it to show up as another drive (e.g. an N: drive). I hope you found this as useful as I did.

    Read the article

  • Storing revisions of a document

    - by dev.e.loper
    This is a follow-up question to my original question. I'm thinking of going with generating diffs and storing those diffs in a 'History' table in the database. I'm using the diff-match-patch library to generate what is called a 'patch'. On every save, I compare the previous and new versions and generate the patch. The patches can then be used to reconstruct the document at a specific point in time. My dilemma is how to store this data. Should I:

    a. Insert a new database record for every patch?
    b. Store the patches in a JavaScript array and store that array in the History table, so there is only one History record per document, holding an array of all its patches?

    Concerns with each approach:

    a. Too many database records generated; it will be slow and CPU-intensive to query.
    b. Only one record; if that record is somehow corrupted/deleted, the entire revision history is gone.

    I'm looking for suggestions and concerns with either approach.
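    For context, reconstructing a revision under option (a) would just replay the stored patches in order. A sketch using the Java port of diff-match-patch (hedged: the patch_fromText/patch_apply names reflect that library's API as I understand it; error handling omitted):

        import java.util.LinkedList;
        import java.util.List;
        import name.fraser.neil.plaintext.diff_match_patch;

        // Rebuild the document text at a given revision by replaying stored patches
        // in save order (assumes one History row per patch - option (a) above).
        String reconstruct(String original, List<String> storedPatchTexts, int revision) {
            diff_match_patch dmp = new diff_match_patch();
            String text = original;
            for (int i = 0; i < revision; i++) {
                LinkedList<diff_match_patch.Patch> patches =
                    new LinkedList<>(dmp.patch_fromText(storedPatchTexts.get(i)));
                Object[] result = dmp.patch_apply(patches, text);
                text = (String) result[0];  // result[1] holds per-hunk success flags
            }
            return text;
        }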

    Read the article

  • How to remove check digit from a barcode

    - by Teddy
    Hi. I have barcodes generated with Code 39 symbology. Some of the barcodes contain a check digit, and some do not. So, how can I prevent the barcode scanner from passing the check digit to the PC? Or is there another way I can check whether a barcode has a check digit, in .NET? For example, the encoded data is 1642 but the scanner reads 16425, and I might also have a barcode whose encoded data really is 16425. I need to distinguish the two cases, either in code or in the barcode scanner. Thanks.
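    For the in-code route: Code 39's optional check character is computed modulo 43 over the standard 43-character set, so you can test whether the last scanned character matches the checksum of the preceding ones. A sketch in Java (the same logic ports directly to .NET); note this can't fully disambiguate, since a genuine data value can coincide with a valid checksum:

        // Code 39 check-character test: sum of each character's position in this
        // 43-character table, modulo 43, must equal the final character.
        static final String CODE39 = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ-. $/+%";

        static boolean lastCharIsValidCheckDigit(String scanned) {
            if (scanned.length() < 2) return false;
            String data = scanned.substring(0, scanned.length() - 1);
            char last = scanned.charAt(scanned.length() - 1);
            int sum = 0;
            for (char c : data.toCharArray()) {
                sum += CODE39.indexOf(c);   // -1 for a char outside the Code 39 set
            }
            return sum >= 0 && CODE39.charAt(sum % 43) == last;
        }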

    Read the article

  • filling array gradually with data from user

    - by neville
    I'm trying to fill an array with words inputted by the user. Each word must be one letter longer than the previous one and one letter shorter than the next one. Their length is equal to the table row index, counting from 2. The words will finally form a one-sided pyramid, like:

        A
        AB
        ABC
        ABCD

        Scanner sc = new Scanner(System.in);
        System.out.println("Give the height of array: ");
        int height = sc.nextInt();  // assumption: this read was omitted from the paste
        String[] words = new String[height];
        for (int i = 2; i < height + 2; i++) {
            System.out.println("Give word with " + i + " letters.");
            words[i - 2] = sc.next();
            // Note: when i == 2, words[i - 3] is words[-1], which throws
            // ArrayIndexOutOfBoundsException, and length() > i - 2 doesn't match
            // the prompt above - these conditions don't express the pyramid rule.
            while (words[i - 2].length() > i - 2
                    || words[i - 2].length() < words[i - 3].length()) {
                words[i - 2] = sc.next();
            }
        }

    Currently the while loop doesn't influence the scanner at all :/

    Read the article

  • Optimizing a lot of Scanner.findWithinHorizon(pattern, 0) calls

    - by darvids0n
    I'm building a process which extracts data from 6 CSV-style files and two poorly laid out .txt reports and builds output CSVs. I'm fully aware that there's going to be some overhead searching through all that whitespace thousands of times, but I never anticipated that converting about 50,000 records would take 12 hours. An excerpt of my manual matching code (I know it's horrible that I use lists of tokens like that, but it was the best thing I could think of):

        public static String lookup(List<String> tokensBefore, List<String> tokensAfter) {
            String result = null;
            while (_match(tokensBefore)) { // block until all input is read
                if (id.hasNext()) {
                    result = id.next(); // capture the next token that matches
                    if (_matchImmediate(tokensAfter)) // try to match tokensAfter to this result
                        return result;
                } else
                    return null; // end of file; no match
            }
            return null; // no matches
        }

        private static boolean _match(List<String> tokens) {
            return _match(tokens, true);
        }

        private static boolean _match(List<String> tokens, boolean block) {
            if (tokens != null && !tokens.isEmpty()) {
                if (id.findWithinHorizon(tokens.get(0), 0) == null)
                    return false;
                for (int i = 1; i <= tokens.size(); i++) {
                    if (i == tokens.size()) { // matches all tokens
                        return true;
                    } else if (id.hasNext() && !id.next().matches(tokens.get(i))) {
                        break; // break to blocking behaviour
                    }
                }
            } else {
                return true; // empty list always matches
            }
            if (block)
                return _match(tokens); // loop until we find something or nothing
            else
                return false; // return after just one attempted match
        }

        private static boolean _matchImmediate(List<String> tokens) {
            if (tokens != null) {
                for (int i = 0; i <= tokens.size(); i++) {
                    if (i == tokens.size()) { // matches all tokens
                        return true;
                    } else if (!id.hasNext() || !id.next().matches(tokens.get(i))) {
                        return false; // doesn't match, or end of file
                    }
                }
                return false; // we have some serious problems if this ever gets called
            } else {
                return true; // empty list always matches
            }
        }

    Basically I'm wondering how to work in an efficient string search (Boyer-Moore or similar). My Scanner id is scanning a java.util.String; I figured buffering it to memory would reduce I/O, since the search here is performed thousands of times on a relatively small file. The performance increase compared to scanning a BufferedReader(FileReader(File)) was probably less than 1%, and the process still looks to be taking a LONG time. I've also traced execution, and the slowness of my overall conversion process is definitely between the first and last line of the lookup method. In fact, so much so that I ran a shortcut process to count the number of occurrences of various identifiers in the CSV-style files (I use 2 lookup methods; this is just one of them), and it finished indexing approximately 4 different identifiers for 50,000 records in less than a minute. Compared to 12 hours, that's instant.

    Some notes (updated): I don't necessarily need the pattern-matching behaviour; I only get the first field of a line of text, so I need to match line breaks or use Scanner.nextLine(). All ID numbers I need start at position 0 of a line and run through until the first block of whitespace, after which is the name of the corresponding object. I would ideally want to return a String, not an int locating the line number or start position of the result, but if it's faster then it will still work just fine. If an int is returned, however, then I would have to seek to that line again just to get the ID; storing the ID of every line that is searched sounds like a way around that. Anything to help me out, even if it saves 1 ms per search, will help, so all input is appreciated. Thank you!

    Usage scenario 1: I have a list of objects in file A which, in the old-style system, have an ID number that is not in file A. It is, however, POSSIBLY in another CSV-style file (file B) or possibly still in a .txt report (file C), each of which also contains a bunch of other information that is not useful here. File B needs to be searched for the object's full name (1 token, since it resides within the second column of any given line), and then the first column should be the ID number. If that doesn't work, we then have to split the search token by whitespace into separate tokens before searching file C for those tokens as well. Generalised code:

        String field;
        for (/* each record in file A */) {
            /* construct the rest of this object from file A info */
            // now to find the ID, if we can
            List<String> objectName = new ArrayList<String>(1);
            objectName.add(Pattern.quote(thisObject.fullName));
            field = lookup(objectSearchToken, objectName); // search file B
            if (field == null) { // not found in file B
                lookupReset(false); // initialise scanner to check file C
                objectName.clear(); // not using the full name
                String[] tokens = thisObject.fullName.split(id.delimiter().pattern());
                for (String s : tokens)
                    objectName.add(Pattern.quote(s));
                field = lookup(objectSearchToken, objectName); // search file C
                lookupReset(true); // back to file B
            } else {
                /* found it, file B specific processing here */
            }
            if (field != null) // found it in B or C
                thisObject.ID = field;
        }

    The objectName tokens are all uppercase words with possible hyphens or apostrophes in them, separated by spaces, much like a person's name. As per a comment, I will pre-compile the regex for my objectSearchToken, which is just [\r\n]+. What ends up happening in file C is that every single line is checked, even the 95% of lines which don't contain an ID number and object name at the start. Would it be quicker to use ^[\r\n]+.*(objectname) instead of two separate regexes? It might reduce the number of _match executions. The more general case of that would be: concatenate all tokensBefore with all tokensAfter and put a .* in the middle. It would need to match backwards through the file, though; otherwise it would match the correct line but with a huge .* block in the middle spanning many lines. The above situation could be resolved if I could get java.util.Scanner to return the token previous to the current one after a call to findWithinHorizon. I have another usage scenario and will put it up ASAP.
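    Since every ID starts at position 0 of a line and runs to the first block of whitespace, the repeated scanning can be avoided entirely by reading each file once and building a hash index, making each of the 50,000 lookups a constant-time probe. A sketch of that approach (the method and variable names are illustrative, not from the question's code; it assumes file B's lines are laid out as "ID whitespace name ..."):

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.util.HashMap;
        import java.util.Map;

        // One pass over the file: map object name (rest of line) -> ID (first token).
        static Map<String, String> indexByName(String path) throws IOException {
            Map<String, String> index = new HashMap<>();
            try (BufferedReader in = new BufferedReader(new FileReader(path))) {
                String line;
                while ((line = in.readLine()) != null) {
                    String[] cols = line.split("\\s+", 2); // [0] = ID, [1] = object name
                    if (cols.length == 2)
                        index.put(cols[1].trim(), cols[0]);
                }
            }
            return index;
        }

        // Each lookup is now O(1) instead of a scan:
        // String id = fileBIndex.get(thisObject.fullName);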

    Read the article

  • Save Word document to clipboard

    - by uwe
    I often find myself in the following situation:

    1. Get an email with an attachment in MS Outlook
    2. Open that attachment in MS Word
    3. Start editing the document in MS Word
    4. Start replying to the email in MS Outlook
    5. Need to get the edited document into my reply

    I have to save the file to disk and then drag it into the reply as an attachment. I would like a shorter way to get the document attached to the newly created email. Is there a way to use the clipboard as a save location, or just to copy the document itself (not its content) to the clipboard?

    Read the article

  • Using 3rd Party JavaScript Plugins Hardwired With 'document.write'

    - by ToStringTheory
    Introduction

    Have you ever had the need to implement a 3rd party JavaScript plugin, but your needs didn't fit the model and usage defined by the API or documentation of the plugin? Recently I ran into this issue when I was trying to implement a web snapshot plugin into our site. To use the plugin, you had to include a script tag pointing to the plugin on their server, with an API key. The second part of the usage was to include a <script> tag around a function call wherever you wanted a snapshot to appear.

    The Problem

    When trying to use the service, the images did not display. I checked a couple of things and didn't find anything wrong at first. It wasn't until I looked at the function that was called by the inline script that I found the issue - a call to the web service, followed by a call to 'document.write' in its callback. The solution in which I was trying to implement the plugin happened to be in response to an AJAX call, after the document had completely loaded. After the page has loaded, document.write does nothing. My first thought for a solution was to just cache the script from the service, and edit it to do something like a return value or callback that I could use to edit the document from. However, I quickly discovered that there is no way to cache the script from the service, as it had a hash in the function where it would call the server. The hash was updated every few seconds/minutes, expiring old hashes. This meant that I wouldn't be able to edit the script and upload a new version to my server, as the script would stop working a few minutes after I originally fetched it from the service.

    Solution

    The solution eluded me until I realized that this was JavaScript I was dealing with - a language designed so that you can do just about anything to any library, function, or object. At this point, the solution was simple: take control of the document.write function. Using a buffer variable and a simple function call, it is eerily simple to perform:

        //what would have been output to the document
        var buffer = "";
        //store a reference to the real document.write
        var dw = document.write;
        //redefine document.write to store to our buffer
        document.write = function (str) { buffer += str; }
        //execute the function containing calls to document.write
        eval('{function encapsulated in <script></script> tags}');
        //restore the original document.write function (just in case)
        document.write = dw;

    That's it. Instead of using the script tags where I wanted to include a snapshot, I called a function passing in the URL of the page I wanted a snapshot of. After that last line of code, what would have been output to the document (or not, in the case of the AJAX call) was instead stored in buffer.

    Conclusion

    While the solution itself is simple, coming from a background much more footed in the .NET platform, I believe this is a prime example of always keeping in mind the language you are working in. While this may seem obvious at first, since I KNEW I was in JavaScript, I never thought of taking control of the document.write function, because I am more accustomed to the .NET world. I can't simply replace the functionality of Console.WriteLine.

    Read the article

  • How to create an item in a SharePoint 2010 document library using the SharePoint web services

    - by ybbest
    Today, I'd like to show you how to create an item in a SharePoint 2010 document library using the SharePoint web services. Originally, I thought I could use WebSvcLists (lists.asmx), which provides methods for working with lists and list data. However, after a bit of Googling, I realized that I need to use WebSvcCopy (copy.asmx). Here is the code used:

        private const string siteUrl = "http://ybbest";

        private static void Main(string[] args)
        {
            using (CopyWSProxyWrapper copyWSProxyWrapper = new CopyWSProxyWrapper(siteUrl))
            {
                copyWSProxyWrapper.UploadFile("TestDoc2.pdf",
                    new[] { string.Format("{0}/Shared Documents/TestDoc2.pdf", siteUrl) },
                    Resource.TestDoc, GetFieldInfos().ToArray());
            }
        }

        private static List<FieldInformation> GetFieldInfos()
        {
            var fieldInfos = new List<FieldInformation>();
            // The InternalName, DisplayName and FieldType are all required to make it work
            fieldInfos.Add(new FieldInformation
            {
                InternalName = "Title",
                Value = "TestDoc2.pdf",
                DisplayName = "Title",
                Type = FieldType.Text
            });
            return fieldInfos;
        }

    Here is the code for the proxy wrapper:

        public class CopyWSProxyWrapper : IDisposable
        {
            private readonly string siteUrl;

            public CopyWSProxyWrapper(string siteUrl)
            {
                this.siteUrl = siteUrl;
            }

            private readonly CopySoapClient proxy = new CopySoapClient();

            public void UploadFile(string testdoc2Pdf, string[] destinationUrls,
                byte[] testDoc, FieldInformation[] fieldInformations)
            {
                using (CopySoapClient proxy = new CopySoapClient())
                {
                    proxy.Endpoint.Address = new EndpointAddress(
                        String.Format("{0}/_vti_bin/copy.asmx", siteUrl));
                    proxy.ClientCredentials.Windows.ClientCredential =
                        CredentialCache.DefaultNetworkCredentials;
                    proxy.ClientCredentials.Windows.AllowedImpersonationLevel =
                        TokenImpersonationLevel.Impersonation;
                    CopyResult[] copyResults = null;
                    try
                    {
                        proxy.CopyIntoItems(testdoc2Pdf, destinationUrls,
                            fieldInformations, testDoc, out copyResults);
                    }
                    catch (Exception e)
                    {
                        System.Console.WriteLine(e);
                    }
                    if (copyResults != null)
                        System.Console.WriteLine(copyResults[0].ErrorMessage);
                    System.Console.ReadLine();
                }
            }

            public void Dispose()
            {
                proxy.Close();
            }
        }

    You can download the source code here.

    ****** Update **********

    It seems to be a bug that you cannot set the content type when creating a document item using copy.asmx. In SP2007 the field type was Choice; however, in SP2010 it is actually Computed. I have tried using the Computed field type with no luck. I have also tried sending the ContentTypeId, and this does not work. You might have to write your own web service to handle this. You can check my previous blog on how to get started with your own custom WCF service in SP2010 here.

    References: SharePoint 2010 Web Services, SharePoint 2007 Web Services, SharePoint MSDN Forum

    Read the article

  • Enhance Primavera Project Document Collaboration with AutoVue Enterprise Visualization

    Completing projects on time and within budget requires effective project planning, management and collaboration amongst a variety of stakeholders. By introducing Oracle's AutoVue document visualization and collaboration solutions into Primavera, users can visualize and collaborate on engineering and project documents. Tune in to this conversation with Guy Barlow, Industry Strategist for Primavera, and Thierry Bonfante, Director of Product Strategy for Oracle's AutoVue solutions, to learn how the combination of AutoVue and Primavera accelerates project delivery by providing the right documents to the right resources at the right time, increasing team response rates and supplying all critical information for improved decision making.

    Read the article

  • Atom feed validator keeps showing "Self reference doesn't match document location"

    - by Dino
    I am creating an Atom feed, but when I validate it I keep getting: "Self reference doesn't match document location", and the specific line that is causing the error is:

        <link rel="self" type="application/atom+xml" href="http://www.example.com/test.rss"/>

    Please can anyone advise what the error is? P.S. I noticed an up arrow just at the end of that line in the validator output (presumably something to do with that, but not sure).

    Read the article

  • How to show images saved in the documents directory as thumbnails in a cocos2d class

    - by Anil gupta
    I have just implemented multiple photo selection from the iPhone photo library, and I am saving all the selected photos in the documents directory as an array each time. Now I want to show all the saved images from the documents directory as thumbnails in my class. I have tried some logic, but my game keeps crashing. My code is below. Any help will be appreciated. Thanks in advance.

        -(id) init
        {
            // always call "super" init
            // Apple recommends to re-assign "self" with the "super" return value
            if( (self=[super init]))
            {
                CCSprite *photoalbumbg = [CCSprite spriteWithFile:@"photoalbumbg.png"];
                photoalbumbg.anchorPoint = ccp(0,0);
                [self addChild:photoalbumbg z:0];

                // Background sound
                // [[SimpleAudioEngine sharedEngine] playBackgroundMusic:@"Background Music.wav" loop:YES];

                CCSprite *photoalbumframe = [CCSprite spriteWithFile:@"photoalbumframe.png"];
                photoalbumframe.position = ccp(160,240);
                [self addChild:photoalbumframe z:2];

                CCSprite *frame = [CCSprite spriteWithFile:@"Photo-Frames.png"];
                frame.position = ccp(160,270);
                [self addChild:frame z:1];

                CCMenuItemImage *upgradebtn = [CCMenuItemImage itemFromNormalImage:@"AlbumUpgrade.png"
                                                                     selectedImage:@"AlbumUpgrade.png"
                                                                            target:self
                                                                          selector:@selector(Upgrade:)];
                CCMenu *fMenu = [CCMenu menuWithItems:upgradebtn, nil];
                fMenu.position = ccp(200,110);
                [self addChild:fMenu z:3];

                NSError *error;
                NSFileManager *fM = [NSFileManager defaultManager];
                NSString *documentsDirectory = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents"];
                NSLog(@"Documents directory: %@", [fM contentsOfDirectoryAtPath:documentsDirectory error:&error]);
                NSArray *allfiles = [fM contentsOfDirectoryAtPath:documentsDirectory error:&error];
                directoryList = [[NSMutableArray alloc] init];
                for (NSString *file in allfiles)
                {
                    NSString *path = [documentsDirectory stringByAppendingPathComponent:file];
                    [directoryList addObject:file];
                }
                NSLog(@"array file name value ==== %@", directoryList);

                // Note: directoryList holds bare file names, not full paths, so
                // spriteWithFile: will look for them in the app bundle and fail.
                CCSprite *temp = [CCSprite spriteWithFile:[directoryList objectAtIndex:0]];
                [temp setTextureRect:CGRectMake(160.0f, 240.0f, 50, 50)];
                // temp.anchorPoint = ccp(0,0);
                [self addChild:temp z:10];

                // Note: directoryList contains NSString objects, yet this loop types
                // them as UIImage and passes them to initWithImage: -- a likely crash cause.
                for (UIImage *file in directoryList)
                {
                    // NSData *pngData = [NSData dataWithContentsOfFile:file];
                    // image = [UIImage imageWithData:pngData];
                    NSLog(@"uiimage = %@", image);
                    // UIImage *image = [UIImage imageWithContentsOfFile:file];
                    for (int i = 1; i <= 3; i++)
                    {
                        for (int j = 1; j <= 3; j++)
                        {
                            CCTexture2D *tex = [[[CCTexture2D alloc] initWithImage:file] autorelease];
                            CCSprite *selectedimage = [CCSprite spriteWithTexture:tex rect:CGRectMake(0, 0, 67, 66)];
                            selectedimage.position = ccp(100*i, 350*j);
                            [self addChild:selectedimage];
                        }
                    }
                }
            }
            return self;
        }

    Read the article

  • Is Google a reliable document search engine?

    - by Miriam Schwab
    I have a site with PDFs and Word documents that I know have been indexed by Google, because they appear in search results with filetype:pdf (or filetype:doc), and if I search for some very specific terms in quotation marks, they appear as well. But they don't appear for general search terms that do occur in the documents. Is Google a reliable document search engine? If not, are there other options for managing many documents and making them searchable by users?

    Read the article

  • The importance of Document Freedom Day explained by Microsoft

    Stop: "These are only a few of the many resources that you can use to understand how important DFD is for you, even if, personally, you don't care at all about computers. The rest of this page, instead, explains how even a job offer from one of the greatest enemies of Document Freedom, Microsoft, proves the same point."

    Read the article

  • Dynamic Document Template

    - by ell
    I would like to create a "C++ class" document template. I know you can make static ones by putting them into ~/Templates, but I would like the content to change according to the file name on creation. For example, using a template like this (pseudocode, where $(filename) is substituted at creation time):

        #ifndef $(filename)_HPP_INCLUDED
        #define $(filename)_HPP_INCLUDED

        class $(filename)
        {
        public:
        };

        #endif // $(filename)_HPP_INCLUDED

    Is this possible? If so, how can I do it? Thanks in advance, ell.

    Read the article

  • Vattenfall Accelerates Projects and Cuts Costs with AutoVue Document Visualization

    Ringhals, a Swedish nuclear power plant that is part of the Vattenfall Group, produces 20 percent of the country's electricity and is the largest power station in the Nordic region. Ringhals has standardized on AutoVue for most of its engineering and asset document visualization requirements throughout its plant maintenance, design and engineering operations. As a result, it has cut IT maintenance costs, increased productivity, and improved maintenance operations.

    Read the article

  • In-document schema declarations and lxml

    - by shylent
    As per the official documentation of lxml, if one wants to validate an XML document against an XML Schema document, one has to:

    - construct the XMLSchema object (basically, parse the schema document)
    - construct the XMLParser, passing the XMLSchema object as its schema argument
    - parse the actual XML document (instance document) using the constructed parser

    There can be variations, but the essence is pretty much the same no matter how you do it: the schema is specified 'externally' (as opposed to being specified inside the actual XML document). If you follow this procedure, the validation occurs, sure enough, but if I understand it correctly, it completely ignores the whole idea of the schemaLocation and noNamespaceSchemaLocation attributes from xsi. This introduces a whole bunch of limitations, starting with the fact that you have to deal with the instance<-schema relation all by yourself (either store it externally or write some hack to retrieve the schema location from the root element of the instance document), you cannot validate a document using multiple schemata (say, when each schema governs its own namespace), and so on.

    So the question is: am I missing something completely trivial, or doing it wrong? Or are my statements about lxml's limitations regarding schema validation true? To recap, I'd like to be able to:

    - have the parser use the schema location declarations in the instance document at parse/validation time
    - use multiple schemata to validate an XML document
    - declare schema locations on non-root elements (not of extreme importance)

    Maybe I should look for a different library? Although that'd be a real shame - lxml is the de-facto XML processing library for Python, and is regarded by everyone as the best one in terms of performance/features/convenience (and rightfully so, to a certain extent).
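    For reference, the first behaviour I am describing does exist elsewhere: Java's javax.xml.validation API, for instance, can validate purely from the xsi:schemaLocation / xsi:noNamespaceSchemaLocation hints in the instance document when the Schema is created without any explicit sources. A sketch of that (assuming well-formed hints and reachable schema URLs; this is the Java API, not lxml):

        import javax.xml.XMLConstants;
        import javax.xml.transform.stream.StreamSource;
        import javax.xml.validation.Schema;
        import javax.xml.validation.SchemaFactory;
        import javax.xml.validation.Validator;
        import java.io.File;

        public class HintValidation {
            public static void main(String[] args) throws Exception {
                SchemaFactory factory =
                    SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
                // newSchema() with no arguments creates a special Schema that resolves
                // schemas from the location hints found in the instance document itself.
                Schema schema = factory.newSchema();
                Validator validator = schema.newValidator();
                validator.validate(new StreamSource(new File("instance.xml")));
                System.out.println("valid");
            }
        }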

    Read the article

  • Java's Scanner class: using the left and right arrow keys with Bash

    - by Bart K.
    I'm not too familiar with Linux/Bash, so I can't really find the right terms to search for. Take this snippet:

        public class Main {
            public static void main(String[] args) {
                java.util.Scanner keyboard = new java.util.Scanner(System.in);
                while (true) {
                    System.out.print("$ ");
                    String in = keyboard.nextLine();
                    if (in.equals("q"))
                        break;
                    System.out.println(" " + in);
                }
            }
        }

    If I run it on my Linux box using Bash, I can't use any of the arrow keys (I'm only interested in the left and right arrows, by the way). For example, if I type "test" and then try to go back by pressing the left arrow, ^[[D appears instead of my cursor going back one place:

        $ test^[[D

    I've tried the newer Console class as well, but the end result is the same. In Windows' cmd.exe shell, I don't have this problem. So, the question is: is there a way to change my Java code so that I can use the arrow keys without Bash transforming them into sequences like ^[[D, and actually move the cursor instead? I'm hoping I can solve this on a "programming level". If this is not possible, then I guess I'd better try my luck on Super User to see if there's something I need to change in my Bash console. Thanks in advance.
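    One direction I've seen suggested is a line-editing library such as JLine, which puts the terminal into raw mode and handles the arrow keys itself. A sketch against the JLine 2 API as I understand it (assumes the jline jar is on the classpath; wrapping the JVM invocation with the rlwrap utility is another option that needs no code changes):

        import jline.console.ConsoleReader;

        public class Main {
            public static void main(String[] args) throws Exception {
                // ConsoleReader handles raw-mode input, so left/right arrows edit the line
                ConsoleReader reader = new ConsoleReader();
                while (true) {
                    String in = reader.readLine("$ "); // prompt printed by JLine itself
                    if (in == null || in.equals("q"))
                        break;
                    System.out.println(" " + in);
                }
            }
        }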

    Read the article
