Search Results

Search found 6448 results on 258 pages for 'pdf reader'.


  • Allow opening a new tab with Ctrl+T on all websites in Firefox

    - by Martin J.H.
    In Firefox, certain websites and plugins (e.g. the Adobe PDF plugin) appear to "capture" the Ctrl key, so that when I try to open a new tab using Ctrl+T, nothing happens - or worse, something unexpected happens. Examples: on the Codecademy site, while editing code, Ctrl+T either does nothing or (when Flash is disabled) transposes the two characters next to the cursor. When viewing PDFs with the Adobe PDF plugin, Ctrl+T does nothing. Is there a way to disable this "feature"? I would like Ctrl+T to always "talk" to Firefox!
    Edit: After searching Super User more deeply, this question is very similar to "How to prevent keystroke grabbing/hijacking by websites in Firefox?" and "How do I prevent pages I visit from overriding selected Firefox shortcut keys?". The answers to those questions are interesting and relevant, but do not give a method for disabling combinations such as Ctrl+T. Maybe a modified Greasemonkey script is the easiest solution.
    Edit 2 - Attempt at a solution: The following user script (use Greasemonkey to install it) successfully captures Ctrl+T on some sites (the Google Search site, for instance - the "Gotcha" popup appears), but not on the Codecademy site. I found another question pertaining to this subject: "How to forbid keyboard shortcut stealing by websites in Firefox". It was raised in 2010, and the consensus was: it can't be done.

        // ==UserScript==
        // @name        Disable Ctrl T interceptions
        // @description Stop websites from hijacking keyboard shortcuts
        //
        // @run-at      document-start
        // @include     *
        // @grant       none
        // ==/UserScript==

        // Keycode for 't'. Add more to disable other Ctrl+X interceptions.
        keycodes = [84];
        var lastPressedButton = [0];

        document.addEventListener('keydown', function (e) {
            // uncomment to find out the keycode for any given key
            // alert(e.keyCode);
            if (keycodes.indexOf(e.keyCode) != -1 && e.ctrlKey) {
                e.cancelBubble = true;
                e.stopImmediatePropagation();
                alert("Gotcha!");
            }
            return false;
        });

    Read the article

  • IIS7 Binary Stream Error

    - by WDuffy
    I'm struggling to understand the problem I'm seeing, so please accept my apology if the question is vague. I'm running a classic ASP app in IIS7 and everything seems to work fine except for one issue that has me stumped. Basically, files can be downloaded from the server, which is done using the sendBinary method of the Persits ASP Upload component. This component works fine for uploading etc.; it's just the downloading I have a problem with. The strange thing is, I cannot have a pure ASP page that serves the binary file. Everything works fine in IIS 6, but in IIS 7 there is a strange problem. For example, this does not work:

        <%
        SET objUpload = server.createObject("Persits.Upload")
        objUpload.sendBinary "D:\sites\file.pdf", true, "application/octet-binary", true
        SET objUpload = NOTHING
        %>

    However, if I put anything in front of the ASP code, the file is served fine:

        serve this
        <%
        SET objUpload = server.createObject("Persits.Upload")
        objUpload.sendBinary "D:\sites\file.pdf", true, "application/octet-binary", true
        SET objUpload = NOTHING
        %>

    I can also write something to the response stream first and it works fine:

        <%
        Response.Write("server this")
        SET objUpload = server.createObject("Persits.Upload")
        objUpload.sendBinary "D:\sites\file.pdf", true, "application/octet-binary", true
        SET objUpload = NOTHING
        %>

    Does anyone have any ideas of what could be causing this, or has anyone run into a similar situation? I'm sure it has something to do with the setup in IIS 7.

    Read the article

  • NVIDIA Tesla K20C in Dell PowerEdge R720xd --- power cables

    - by CptSupermrkt
    I am trying to put an NVIDIA Tesla K20C into a Dell PowerEdge R720xd, and I'm having a bit of trouble understanding the power requirements of the card. First, here is a picture of two pages of the same manual, which seem contradictory to me: one page says only a single connector is required, while the next page says both are required. The entire manual for the card can be found here: http://www.nvidia.com/content/PDF/kepler/Tesla-K20-Active-BD-06499-001-v02.pdf
    Here is a photo of the power connections on the card, and a photo of where those connectors need to go, onto the PCI-E riser of the R720xd. Neither the R720xd nor the GPU came with the necessary cables, and given what appears to be a contradiction in the GPU manual (above), I'm not even sure at this point what we actually need. I have searched high and low online for things like "2x 6-pin PCI-E to 8-pin male-to-male" and so on, and for the life of me cannot find what we need. In case anyone needs it, the owner's manual of the R720xd can be found here: ftp://ftp.dell.com/Manuals/all-products/esuprt_ser_stor_net/esuprt_poweredge/poweredge-r720xd_Owner%27s%20Manual_en-us.pdf - the relevant page is page 68, which clearly indicates that the 8-pin female port on the riser card is for a GPU.
    The bottom line question: exactly what power cables do we need to buy, and where can we find them?

    Read the article

  • Cannot copy files after installing Windows

    - by Ali
    I am experiencing a weird problem. I was running Xubuntu on my laptop until yesterday, when I had to delete Xubuntu and install Windows. I had an NTFS partition in Xubuntu that I kept some files on. Today, after installing Windows, I wanted to move all the files from that partition to an external HDD. I selected all files and folders and clicked Copy, then went to the HDD and clicked Paste - but nothing happened. Wherever I click Paste, nothing happens, and I do not know why. If I try to copy the files and folders one by one, I can copy some of them, but some of them do not move.
    The other problem I have is that I cannot open some files, in particular PDF files. When I click on a PDF file I get this error: "There was an error opening this document. This file cannot be found." I also cannot play some MP4 files, and cannot open some JPG and TXT files; for those I get the error "The directory name is invalid."
    So in summary, after removing Xubuntu and installing Windows 7, I have the following problems with one of the NTFS partitions on my internal drive:
    - Cannot copy or cut all folders and files from that partition to any other partition - and I also do not get any errors.
    - Can copy some folders and files.
    - Cannot access some PDF, JPEG, TXT and MP4 files; I get the above errors.
    I should also mention I did not change anything on this partition during the installation or while formatting the other partitions.

    Read the article

  • Laptop authentication/logon via accelerometer tilt, flip, and twist

    - by wonsungi
    Looking for another application/technology: a number of years ago, I read about a novel way to authenticate and log on to a laptop. The user simply had to hold the laptop in the air and execute a simple series of tilts and flips. By logging accelerometer data, this creates a unique signature for the user. Even if an attacker watched and repeated the exact same motions, the attacker could not replicate the user's movements closely enough. I am looking for information about this technology again, but I can't find anything. It may have been an actual feature on a laptop, or it may have just been a research project. I think I read about it in a magazine like Wired. Does anyone have more information about authentication via unique accelerometer signatures? Here are the closest articles I have been able to find:
    - Knock-based commands for your Linux laptop
    - Shake Well Before Use: Authentication Based on Accelerometer Data [PDF]
    - Inferring Identity using Accelerometers in Television Remote Controls
    - User Evaluation of Lightweight User Authentication with a Single Tri-Axis Accelerometer
    - Identifying Users of Portable Devices from Gait Pattern with Accelerometers [PDF]
    - 3D Signature Biometrics Using Curvature Moments [PDF]
    - MoViSign: A novel authentication mechanism using mobile virtual signatures

    Read the article

  • Windows Server 2008 R2 file share file locking, OS X clients

    - by Keith Loughnane
    I've spent the last two weeks banging my head against this wall, but I think I'm starting to understand the problem. I manage IT for a design company with 5 Macs (OS X 10.5/10.6/10.7) connected over SMB to a Windows Server 2008 R2 file server; another machine functions as domain controller (that might not matter). All the Macs can connect OK - no issues finding the server or logging in - and for the most part things are fine. The problem is files locking up. I thought it was a permissions issue at first, but it seems to be file locking. The users open a file (.ind, .pdf, etc.); the file opens, the software reads it and closes it. That's fine, but the folder above the folder locks: it can't be moved and it can't be renamed. E.g., given:

        /Working/Project01/Imagefiles/image.pdf
        /Finished/

    The user opens image.pdf, closes it, and wants to move the whole Project01 folder into Finished. It shows a username/password dialogue and then does nothing - no error, just nothing. Trying to rename shows a dialogue saying you don't have permission. It looks like it's checking permissions locally, which is why I spent about a week looking at that. Eventually I found that Finder on the Macs seems to be keeping the folders open. I can work around it by killing Finder, remounting the shared drive, or closing the file through the server manager, but this just proves the theory - it's not a solution. Has anyone dealt with this problem?

    Read the article

  • repeated entries in website log file

    - by Reza
    I am writing an ad hoc log analyser for my website's log file. The following is the part of the log file in which file1.pdf appears to have been downloaded twice. Looking carefully, the timestamp and IP address are exactly the same in both entries. How can there be two downloads at the same time by the same person? Should I count it as 2 in my programme, or as 1? Any reply is appreciated.

        name_of_subdomain xxx.xxx.xx.xx - - [02/Apr/2012:09:13:31 +0100] "GET /file1.pdf HTTP/1.1" 206 3706 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; CMDTDF)"
        name_of_subdomain xxx.xxx.xx.xx - - [02/Apr/2012:09:13:31 +0100] "GET /file1.pdf HTTP/1.1" 206 425462 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; CMDTDF)"
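
    Note that both entries carry status 206 (Partial Content) with different byte counts (3,706 vs 425,462): the client fetched the PDF in byte ranges, which in-browser PDF readers commonly do, so this is almost certainly one logical download. A minimal C# sketch of counting it that way - grouping on IP, timestamp and path - where the field layout and the grouping key are assumptions based on the sample above:

        using System;
        using System.IO;
        using System.Linq;
        using System.Text.RegularExpressions;

        class LogDedupe
        {
            // Fields per the sample: subdomain, IP, -, -, [timestamp], "GET path ...", status, bytes.
            static readonly Regex Entry = new Regex(
                @"^\S+\s+(?<ip>\S+)\s+\S+\s+\S+\s+\[(?<ts>[^\]]+)\]\s+""GET\s+(?<path>\S+)[^""]*""\s+\d{3}\s+\d+");

            static void Main()
            {
                var downloads = File.ReadLines("access.log")          // hypothetical file name
                                    .Select(line => Entry.Match(line))
                                    .Where(m => m.Success)
                                    .GroupBy(m => new
                                    {
                                        Ip = m.Groups["ip"].Value,
                                        Ts = m.Groups["ts"].Value,
                                        Path = m.Groups["path"].Value
                                    });

                // Each group of range requests from one client in one second counts once.
                foreach (var d in downloads)
                    Console.WriteLine("{0} {1} {2}: 1 download ({3} entries)",
                                      d.Key.Ip, d.Key.Ts, d.Key.Path, d.Count());
            }
        }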

    Read the article

  • Adding array of images to Firebase using AngularFire

    - by user2833143
    I'm trying to allow users to upload images and then store the images, base64 encoded, in Firebase. I'm trying to structure my Firebase as follows:

        |-- Feed
        |---- Feed Item
        |------ username
        |------ epic
        |--------- name, etc.
        |------ images
        |--------- image1, image2, etc.

    However, I can't get the remote data in Firebase to mirror the local data in the client. When I print the array of images to the console in the client, it shows that the uploaded images have been added to the array of images... but these images never make it to Firebase. I've tried doing this multiple ways to no avail: implicit syncing, explicit syncing, and a mixture of both. I can't for the life of me figure out why this isn't working and I'm getting pretty frustrated. Here's my code:

        $scope.complete = function (epicName) {
            for (var i = 0; i < $scope.activeEpics.length; i++) {
                if ($scope.activeEpics[i].id === epicName) {
                    var epicToAdd = $scope.activeEpics[i];
                }
            }
            var epicToAddToFeed = { epic: epicToAdd, username: $scope.currUser.username,
                                    userImage: $scope.currUser.image, props: 0, images: ['empty'] };
            // connect to feed data
            var feedUrl = "https://epicly.firebaseio.com/feed";
            $scope.feed = angularFireCollection(new Firebase(feedUrl));
            // add epic
            var added = $scope.feed.add(epicToAddToFeed).name();
            // connect to added epic in firebase
            var addedUrl = "https://epicly.firebaseio.com/feed/" + added;
            var addedRef = new Firebase(addedUrl);
            angularFire(addedRef, $scope, 'added').then(function () {
                // for each image uploaded, add image to the added epic's array of images
                for (var i = 0, f; f = $scope.files[i]; i++) {
                    var reader = new FileReader();
                    reader.onload = (function (theFile) {
                        return function (e) {
                            var filePayload = e.target.result;
                            $scope.added.images.push(filePayload);
                        };
                    })(f);
                    reader.readAsDataURL(f);
                }
            });
        }

    EDIT: Figured it out - I had to connect to "https://epicly.firebaseio.com/feed/" + added + "/images".

    Read the article

  • C#: Can I return an HttpWebResponse result to an iframe? (Uses Digest authentication)

    - by chadsxe
    I am trying to figure out a way to display a cross-domain web page that uses Digest authentication. My initial thought was to make a web request and return the entire page source. I currently have no issues with authenticating and getting a response, but I am not sure how to properly return the needed data.

        // Create a request for the URL.
        WebRequest request = WebRequest.Create("http://some-url/cgi/image.php?type=live");
        // Set the credentials.
        request.Credentials = new NetworkCredential(username, password);
        // Get the response.
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        // Get the stream containing content returned by the server.
        Stream dataStream = response.GetResponseStream();
        // Open the stream using a StreamReader for easy access.
        StreamReader reader = new StreamReader(dataStream);
        // Read the content.
        string responseFromServer = reader.ReadToEnd();
        // Clean up the streams and the response.
        reader.Close();
        dataStream.Close();
        response.Close();
        return responseFromServer;

    My problems currently are:
    - responseFromServer is not returning the entire source of the page, i.e. it is missing the body and head tags.
    - The data in responseFromServer is encoded improperly. I believe this has something to do with the transfer encoding being chunked.
    Furthermore, I am not entirely sure this is even possible. If it matters, this is being done in ASP.NET MVC 4 with C#. Thanks, Chad
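
    Given that the URL ends in image.php?type=live, the endpoint may well be returning raw image bytes rather than HTML, which would explain both the "missing" head/body tags and the apparently broken encoding: reading binary data through a StreamReader mangles it. (HttpWebResponse decodes chunked transfer encoding for you, so that part is not the culprit.) One way to bridge a Digest-protected resource into a page is to proxy the raw bytes through an MVC action and point an img or iframe at that action. A sketch, where the controller name, credential source and URL are assumptions:

        using System.IO;
        using System.Net;
        using System.Web.Mvc;

        public class ProxyController : Controller
        {
            public ActionResult LiveImage()
            {
                var request = (HttpWebRequest)WebRequest.Create(
                    "http://some-url/cgi/image.php?type=live");
                // Digest is negotiated automatically from the credentials.
                request.Credentials = new NetworkCredential("username", "password");

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var stream = response.GetResponseStream())
                using (var buffer = new MemoryStream())
                {
                    stream.CopyTo(buffer);   // raw bytes - no text decoding, nothing gets mangled
                    return File(buffer.ToArray(), response.ContentType);
                }
            }
        }

    The page then embeds <img src="/Proxy/LiveImage" /> (or an iframe), and the browser never sees the Digest challenge.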

    Read the article

  • Difference between DataTable.Load() and DataTable = dataSet.Tables[];

    - by subbu
    Hi all, I have a doubt. I use the following piece of code to get data from a SQLite database and load it into a DataTable:

        SQLiteConnection cnn = new SQLiteConnection("Data Source=" + path);
        cnn.Open();
        SQLiteCommand mycommand = new SQLiteCommand(cnn);
        string sql = "select Company,Phone,Email,Address,City,State,Zip,Country,Fax,Web from RecordsTable";
        mycommand.CommandText = sql;
        SQLiteDataReader reader = mycommand.ExecuteReader();
        dt.Load(reader);
        reader.Close();
        cnn.Close();

    In some cases, when I try to load it, this gives me a "Failed to enable constraints" exception. But when I tried the piece of code below, for the same table and the same set of records, it worked:

        SQLiteConnection ObjConnection = new SQLiteConnection("Data Source=" + path);
        SQLiteCommand ObjCommand = new SQLiteCommand("select Company,Phone,Email,Address,City,State,Zip,Country,Fax,Web from RecordsTable", ObjConnection);
        ObjCommand.CommandType = CommandType.Text;
        SQLiteDataAdapter ObjDataAdapter = new SQLiteDataAdapter(ObjCommand);
        DataSet dataSet = new DataSet();
        ObjDataAdapter.Fill(dataSet, "RecordsTable");
        dt = dataSet.Tables["RecordsTable"];

    Can anyone tell me what the difference between the two is?
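
    The short version, as far as I know: DataTable.Load reads schema information from the reader including key and constraint details (AddWithKey-style behaviour) and validates rows while loading, whereas DataAdapter.Fill defaults to MissingSchemaAction.Add, which creates plain columns with no constraints - so rows that violate a unique or not-null constraint only blow up in the Load path. A sketch of loading with enforcement switched off, if you want Load without the validation (table and column names taken from the question):

        using System.Data;
        using System.Data.SQLite;

        DataSet ds = new DataSet();
        ds.EnforceConstraints = false;          // skip constraint validation during Load
        DataTable dt = new DataTable("RecordsTable");
        ds.Tables.Add(dt);

        using (var cnn = new SQLiteConnection("Data Source=" + path))
        using (var cmd = new SQLiteCommand(
            "select Company,Phone,Email,Address,City,State,Zip,Country,Fax,Web from RecordsTable", cnn))
        {
            cnn.Open();
            using (SQLiteDataReader reader = cmd.ExecuteReader())
                dt.Load(reader);
        }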

    Read the article

  • IDataRecord.IsDBNull causes an System.OverflowException (Arithmetic Overflow)

    - by Ciddan
    Hi! I have an OdbcDataReader that gets data from a database and returns a set of records. The code that executes the query looks as follows:

        OdbcDataReader reader = command.ExecuteReader();
        while (reader.Read())
        {
            yield return reader.AsMovexProduct();
        }

    The method returns an IEnumerable of a custom type (MovexProduct). The conversion from an IDataRecord to my custom type MovexProduct happens in an extension method that looks like this (abbrev.):

        public static MovexProduct AsMovexProduct(this IDataRecord record)
        {
            var movexProduct = new MovexProduct
            {
                ItemNumber = record.GetString(0).Trim(),
                Name = record.GetString(1).Trim(),
                Category = record.GetString(2).Trim(),
                ItemType = record.GetString(3).Trim()
            };

            if (!record.IsDBNull(4))
                movexProduct.Status1 = int.Parse(record.GetString(4).Trim());

            // Additional properties with IsDBNull checks follow here.
            return movexProduct;
        }

    As soon as I hit the if (!record.IsDBNull(4)) I get an OverflowException with the exception message "Arithmetic operation resulted in an overflow." Stack trace:

        System.OverflowException was unhandled by user code
          Message=Arithmetic operation resulted in an overflow.
          Source=System.Data
          StackTrace:
               at System.Data.Odbc.OdbcDataReader.GetSqlType(Int32 i)
               at System.Data.Odbc.OdbcDataReader.GetValue(Int32 i)
               at System.Data.Odbc.OdbcDataReader.IsDBNull(Int32 i)
               at JulaAil.DataService.Movex.Data.ExtensionMethods.AsMovexProduct(IDataRecord record)
               [...]

    I've never encountered this problem before and I cannot figure out why I get it. I have verified that the record exists, that it contains data, and that the indexes I provide are correct. I should also mention that I get the same exception if I change the if statement to if (record.GetString(4) != null). What does work is encapsulating the property assignment in a try {} catch (NullReferenceException) {} block - but that can lead to performance loss (can it not?). I am running the x64 version of Visual Studio and I'm using a 64-bit ODBC driver. Has anyone else come across this? Any suggestions as to how I could solve or get around this issue? Many thanks!
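
    Since the overflow happens inside the driver's metadata call (GetSqlType) rather than on the data itself, this looks like a 64-bit ODBC driver bug, and one way to sidestep it is to make sure NULL never reaches the client at all. A sketch of that approach - pushing the null handling into SQL with COALESCE so no IsDBNull or GetValue call is needed; the query and column names are assumptions modelled on the mapping above:

        using System;
        using System.Data;

        // Hypothetical query change: COALESCE collapses NULL to '' server-side.
        // SELECT ItemNumber, Name, Category, ItemType,
        //        COALESCE(Status1, '') AS Status1
        // FROM   MovexProducts

        public static class MovexRecordExtensions
        {
            public static MovexProduct AsMovexProduct(this IDataRecord record)
            {
                var movexProduct = new MovexProduct
                {
                    ItemNumber = record.GetString(0).Trim(),
                    Name = record.GetString(1).Trim(),
                    Category = record.GetString(2).Trim(),
                    ItemType = record.GetString(3).Trim()
                };

                // No IsDBNull: the query now guarantees a (possibly empty) string.
                var status = record.GetString(4).Trim();
                if (status.Length > 0)
                    movexProduct.Status1 = int.Parse(status);

                return movexProduct;
            }
        }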

    Read the article

  • SQL Query slow in .NET application but instantaneous in SQL Server Management Studio

    - by user203882
    Here is the SQL:

        SELECT tal.TrustAccountValue
        FROM   TrustAccountLog AS tal
               INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID
               INNER JOIN Users usr ON usr.UserID = ta.UserID
        WHERE  usr.UserID = 70402
          AND  ta.TrustAccountID = 117249
          AND  tal.trustaccountlogid =
               (
                   SELECT MAX(tal.trustaccountlogid)
                   FROM   TrustAccountLog AS tal
                          INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID
                          INNER JOIN Users usr ON usr.UserID = ta.UserID
                   WHERE  usr.UserID = 70402
                     AND  ta.TrustAccountID = 117249
                     AND  tal.TrustAccountLogDate < '3/1/2010 12:00:00 AM'
               )

    Basically, there is a Users table, a TrustAccount table and a TrustAccountLog table. Users contains users and their details; a User can have multiple TrustAccounts; and TrustAccountLog contains an audit of all TrustAccount "movements" (a TrustAccount is associated with multiple TrustAccountLog entries). Now this query executes in milliseconds inside SQL Server Management Studio, but for some strange reason it takes forever in my C# app and even times out (120 s) sometimes. Here is the code in a nutshell; it gets called multiple times in a loop and the statement gets prepared.

        cmd.CommandTimeout = Configuration.DBTimeout;
        cmd.CommandText = "SELECT tal.TrustAccountValue FROM TrustAccountLog AS tal INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID INNER JOIN Users usr ON usr.UserID = ta.UserID WHERE usr.UserID = @UserID1 AND ta.TrustAccountID = @TrustAccountID1 AND tal.trustaccountlogid = (SELECT MAX (tal.trustaccountlogid) FROM TrustAccountLog AS tal INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID INNER JOIN Users usr ON usr.UserID = ta.UserID WHERE usr.UserID = @UserID2 AND ta.TrustAccountID = @TrustAccountID2 AND tal.TrustAccountLogDate < @TrustAccountLogDate2))";
        cmd.Parameters.Add("@TrustAccountID1", SqlDbType.Int).Value = trustAccountId;
        cmd.Parameters.Add("@UserID1", SqlDbType.Int).Value = userId;
        cmd.Parameters.Add("@TrustAccountID2", SqlDbType.Int).Value = trustAccountId;
        cmd.Parameters.Add("@UserID2", SqlDbType.Int).Value = userId;
        cmd.Parameters.Add("@TrustAccountLogDate2", SqlDbType.DateTime).Value = TrustAccountLogDate;

        // And then...
        reader = cmd.ExecuteReader();
        if (reader.Read())
        {
            double value = (double)reader.GetValue(0);
            if (System.Double.IsNaN(value))
                return 0;
            else
                return value;
        }
        else
            return 0;
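
    "Instant in SSMS, times out from ADO.NET" is the classic signature of the two sessions compiling different plans: SSMS runs with SET ARITHABORT ON by default while ADO.NET runs with it OFF, so the app gets its own cached plan, and with parameterized queries that plan can be built around unrepresentative sniffed parameter values. A quick experiment (a diagnostic, not a permanent fix) is to align the session setting before preparing the command:

        // Run once per connection, before preparing/executing the query,
        // so the app shares SSMS's default session settings (and thus its plan shape).
        using (var setCmd = connection.CreateCommand())
        {
            setCmd.CommandText = "SET ARITHABORT ON;";
            setCmd.ExecuteNonQuery();
        }

    If that makes the app as fast as SSMS, the underlying issue is parameter sniffing; appending OPTION (RECOMPILE) to the SELECT, or using OPTIMIZE FOR, are the usual longer-term options (assuming SQL Server 2005 or later).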

    Read the article

  • .NET Fingerprint SDK

    - by Kishore A
    I am looking for a comparison of fingerprint reader SDKs available for .NET (we are using .NET 3.5). My requirements are:
    1. Associate more than one fingerprint with a person.
    2. Store the fingerprints itself (so I do not have to change the database for my program).
    3. Work in both event and no-event mode (event mode: give a notification if someone swipes a finger on the reader; no-event mode: I activate the reader in synchronous mode).
    4. Provide an API for either confirming or identifying a user (confirm API: I send the person's ID/unique number and it confirms that it is the same person; identify API: the SDK sends the person's ID after it looks up the person using the fingerprint).
    I would also like a comparison of fingerprint readers, if anybody knows of one available on the internet. Hope I was clear. Thanks, Kishore.

    Read the article

  • Trouble converting an MP3 file to a WAV file using NAudio

    - by WebDevHobo
    NAudio library: http://naudio.codeplex.com/
    I'm trying to convert an MP3 file to a WAV file, but I've run into a small error. I know what's going wrong, but I don't really know how to go about fixing it. Here's the piece of code I'm running:

        private void button1_Click(object sender, EventArgs e)
        {
            using (Mp3FileReader reader = new Mp3FileReader(@"path\to\MP3"))
            {
                using (WaveFileWriter writer = new WaveFileWriter(@"C:\test.wav", new WaveFormat()))
                {
                    int counter = 0;
                    while (reader.Read(test, counter, test.Length + counter) != 0)
                    {
                        writer.WriteData(test, counter, test.Length + counter);
                        counter += 512;
                    }
                }
            }
        }

    reader.Read() goes into the Mp3FileReader class, and the method looks like this:

        public override int Read(byte[] sampleBuffer, int offset, int numBytes)
        {
            if (numBytes % waveFormat.BlockAlign != 0)
                //throw new ApplicationException("Must read complete blocks");
                numBytes -= (numBytes % waveFormat.BlockAlign);
            return mp3Stream.Read(sampleBuffer, offset, numBytes);
        }

    mp3Stream is an object of the Stream class. The problem is: I'm getting an ArgumentException. MSDN says that this is because the sum of offset and numBytes is greater than the length of sampleBuffer (documentation: http://msdn.microsoft.com/en-us/library/system.io.stream.read.aspx). This happens because I increase the counter every time, but the size of the byte array test remains the same. What I've been wondering is: do I need to increase the size of the array dynamically, or do I need to find out the needed size at the beginning and set it right away? Also, instead of 512, the method in Mp3FileReader returns 365 the first time, which is the size of a whole block; but I'm writing the full 512. I'm basically just using the read to check that I'm not at the end of the file yet. Do I need to catch the return value and do something with that, or am I good here?
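
    Neither, as far as I can tell: in the Stream.Read contract, offset is where to start writing inside the buffer (usually 0 on every call) and the return value is how many bytes were actually produced - it is not a position in the file, so nothing needs to grow. A sketch of the conventional loop, assuming NAudio 1.x, with a PCM conversion step so the WAV contains decoded audio rather than raw MP3 frames:

        using NAudio.Wave;

        public static void Mp3ToWav(string mp3Path, string wavPath)
        {
            using (var reader = new Mp3FileReader(mp3Path))
            using (var pcm = WaveFormatConversionStream.CreatePcmStream(reader))
            using (var writer = new WaveFileWriter(wavPath, pcm.WaveFormat))
            {
                byte[] buffer = new byte[pcm.WaveFormat.AverageBytesPerSecond];
                int bytesRead;
                // offset stays 0; write exactly the number of bytes Read reported.
                while ((bytesRead = pcm.Read(buffer, 0, buffer.Length)) > 0)
                    writer.Write(buffer, 0, bytesRead);   // WriteData in older NAudio releases
            }
        }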

    Read the article

  • Lucene stop words not removed during searching: need a substitute for AnalyzingQueryParser

    - by iamrohitbanga
    I have created a Lucene index with the following analyzer:

        public class DocSpecAnalyzer extends Analyzer {
            private static CharArraySet stopSet;

            static {
                stopSet = new CharArraySet(FDConstants.stopwords, true);
                // uncommenting this displays all the stop words
                // for (String s : FDConstants.stopwords) {
                //     System.out.println(s);
                // }
            }

            /**
             * Specifies whether deprecated acronyms should be replaced with HOST type.
             * See {@linkplain https://issues.apache.org/jira/browse/LUCENE-1068}
             */
            private final boolean enableStopPositionIncrements;
            private final Version matchVersion;

            public DocSpecAnalyzer(Version matchVersion) {
                this.matchVersion = matchVersion;
                enableStopPositionIncrements = StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion);
            }

            public TokenStream tokenStream(String fieldName, Reader reader) {
                StandardTokenizer tokenStream = new StandardTokenizer(matchVersion, reader);
                tokenStream.setMaxTokenLength(DEFAULT_MAX_TOKEN_LENGTH);
                TokenStream result = new StandardFilter(tokenStream);
                result = new LowerCaseFilter(result);
                result = new StopFilter(enableStopPositionIncrements, result, stopSet);
                result = new PorterStemFilter(result);
                return result;
            }

            /** Default maximum allowed token length */
            public static final int DEFAULT_MAX_TOKEN_LENGTH = 255;
        }

    Now when I search for documents with a query containing stop words, I get hits for the stop words as well. It is because AnalyzingQueryParser (http://lucene.apache.org/java/2_9_2/api/contrib-misc/org/apache/lucene/queryParser/analyzing/AnalyzingQueryParser.html) does not handle stop words. Is there a substitute?
    Update: forgot to mention that I need to do a fuzzy search; that is why I am using an AnalyzingQueryParser. Here is the portion of code that invokes the AnalyzingQueryParser:

        AnalyzingQueryParser parser = new AnalyzingQueryParser(Version.LUCENE_CURRENT, "description", analyzer);
        // fuzzy matching preparation
        String fuzzyStr = TextQuery.prepareFuzzy(tq.text, fuzzyDist);
        Query query = parser.parse(fuzzyStr);
        TopScoreDocCollector collector = TopScoreDocCollector.create(numHits, true);
        searcher.search(query, collector);

    Read the article

  • How to get a file path from the C drive in Android

    - by Thomas Mathew
    I was trying to extract the content of a PDF file and display it in Android. I tried the code in Java and it worked, but when I run it in Android it displays nothing. I think it's not getting the file path. Is there any way to get a file stored on the C drive and store its path in a string? The code is given below:

        public class hello extends Activity {
            /** Called when the activity is first created. */
            private static String INPUTFILE = "FirstPdf.pdf";
            String str;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                try {
                    Document document = new Document();
                    document.open();
                    PdfReader reader = new PdfReader(INPUTFILE);
                    int n = reader.getNumberOfPages();
                    str = PdfTextExtractor.getTextFromPage(reader, 2);
                } catch (IOException e) {
                    //e.printStackTrace();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                TextView tv = new TextView(this);
                tv.setText(str);
                setContentView(tv);
            }
        }

    I would be very grateful if the problem were solved.

    Read the article

  • iText PdfReader extremely slow to open

    - by Wbmstrmjb
    I have some code that combines a few pages of AcroForms (with AcroFields intact) and then, at the end, writes some JS to the entire document. It is the PdfReader in the function adding the JS that takes extremely long to instantiate (about 12 seconds for a 1 MB file). Here is the code (pretty simple):

        public static byte[] AddJavascript(byte[] document, string js)
        {
            PdfReader reader = new PdfReader(new RandomAccessFileOrArray(document), null);
            MemoryStream msOutput = new MemoryStream();
            PdfStamper stamper = new PdfStamper(reader, msOutput);
            PdfWriter writer = stamper.Writer;
            writer.AddJavaScript(js);
            stamper.Close();
            reader.Close();
            byte[] withJS = msOutput.GetBuffer();
            return withJS;
        }

    I have benchmarked the above, and the line that is slow is the first one. I have tried reading it from a file instead of memory, and tried using a MemoryStream instead of the RandomAccessFileOrArray. Nothing makes it any faster. If I add JS to a single-page document, it is very fast. So my thought is that the code that combines the pages is somehow making the PDF slow to read for the PdfReader. Here is the combine code:

        public static byte[] CombineFiles(List<byte[]> sourceFiles)
        {
            MemoryStream output = new MemoryStream();
            PdfCopyFields copier = new PdfCopyFields(output);
            try
            {
                output.Position = 0;
                foreach (var fileBytes in sourceFiles)
                {
                    PdfReader fileReader = new PdfReader(fileBytes);
                    copier.AddDocument(fileReader);
                }
            }
            catch (Exception exception)
            {
                //throw
            }
            finally
            {
                copier.Close();
            }
            byte[] concat = output.GetBuffer();
            return concat;
        }

    I am using PdfCopyFields because I need to preserve the form fields, and so cannot use PdfCopy or PdfSmartCopy. This combine code is very fast (a few ms) and produces working documents. The AddJavascript code above is called after it, and the PdfReader open is the slow piece. Any ideas?
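
    One thing worth ruling out first: MemoryStream.GetBuffer() returns the stream's internal buffer, which is normally larger than what was actually written, so CombineFiles hands AddJavascript a PDF padded with trailing zero bytes. A PDF reader locates the cross-reference table by scanning backwards from the end of the file for "startxref", so padding after %%EOF can produce exactly this kind of slow open. The candidate fix is a one-liner in both methods (ToArray() copies, but only once per document):

        // was: byte[] concat = output.GetBuffer();
        byte[] concat = output.ToArray();   // exactly the bytes written, no trailing padding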

    Read the article

  • C# Find and Replace RegEx with wildcard search and addition of value

    - by fraXis
    Hello, the code below is from my other questions that I have asked here on SO. Everyone has been so helpful and I almost have a grasp of regexes, but I ran into another hurdle. This is what I basically need to do in a nutshell: I need to take this line from a text file that I load into my content variable:

        X17.8Y-1.Z0.1G0H1E1

    I need to do a wildcard search for the X value, Y value, Z value, and H value. When I am done, I need this written back to my text file (I know how to create the text file, so that is not the problem):

        X17.8Y-1.G54G0T2
        G43Z0.1H1M08

    I have code that the kind users here have given me, except I need to create the T value at the end of the first line, using the value from the H incremented by 1. For example:

        X17.8Y-1.Z0.1G0H5E1

    would translate as:

        X17.8Y-1.G54G0T6
        G43Z0.1H5M08

    The T value is 6 because the H value is 5. I have code that does everything (two regex operations that separate the line into two new lines and add some new G values), but I don't know how to add the T value to the first line, incremented by 1 from the H value. Here is my code:

        StreamReader reader = new StreamReader(fDialog.FileName.ToString());
        string content = reader.ReadToEnd();
        reader.Close();

        content = Regex.Replace(content, @"X[-\d.]+Y[-\d.]+", "$0G54G0");
        content = Regex.Replace(content, @"(Z(?:\d*\.)?\d+)[^H]*G0(H(?:\d*\.)?\d+)\w*", "\nG43$1$2M08"); // this must be created on a new line

    This code works great at taking X17.8Y-1.Z0.1G0H5E1 and turning it into:

        X17.8Y-1.G54G0
        G43Z0.1H5M08

    but I need it turned into:

        X17.8Y-1.G54G0T6
        G43Z0.1H5M08

    (notice the T value added to the first line, which is the H value + 1, i.e. T = H + 1). Can someone please modify my regex statements so I can do this automatically? I tried to combine my two regex statements into one line but I failed miserably. Thanks so much, Shawn
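
    Incrementing a captured number takes code, not just a substitution pattern, so one option is a single Regex.Replace with a MatchEvaluator that parses H and emits both output lines at once. A sketch against the sample line, assuming the field order shown above and an integer H value:

        using System;
        using System.Text.RegularExpressions;

        class GCodeRewrite
        {
            static void Main()
            {
                string content = "X17.8Y-1.Z0.1G0H5E1";

                string result = Regex.Replace(content,
                    @"(?<xy>X[-\d.]+Y[-\d.]+)(?<z>Z(?:\d*\.)?\d+)G0(?<h>H\d+)\w*",
                    m =>
                    {
                        // "H5" -> 5 -> T6
                        int h = int.Parse(m.Groups["h"].Value.Substring(1));
                        return m.Groups["xy"].Value + "G54G0T" + (h + 1) +
                               "\nG43" + m.Groups["z"].Value + m.Groups["h"].Value + "M08";
                    });

                Console.WriteLine(result);
                // X17.8Y-1.G54G0T6
                // G43Z0.1H5M08
            }
        }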

    Read the article

  • "This property cannot be set after writing has started!" on a C# WebRequest object

    - by EBAGHAKI
    I want to reuse a WebRequest object so that cookies and the session are preserved for later requests to the server. Below is my code. If I use the Post function twice, then the second time, at request.ContentLength = byteArray.Length, it throws the exception "This property cannot be set after writing has started!" But as you can see, dataStream.Close() should close the writing process! Does anybody know what's going on?

        static WebRequest request;

        public MainForm()
        {
            request = WebRequest.Create("http://localhost/admin/admin.php");
        }

        static string Post(string url, string data)
        {
            request.Method = "POST";
            byte[] byteArray = Encoding.UTF8.GetBytes(data);
            request.ContentType = "application/x-www-form-urlencoded";
            request.ContentLength = byteArray.Length;
            Stream dataStream = request.GetRequestStream();
            dataStream.Write(byteArray, 0, byteArray.Length);
            dataStream.Close();
            WebResponse response = request.GetResponse();
            Console.WriteLine(((HttpWebResponse)response).StatusDescription);
            dataStream = response.GetResponseStream();
            StreamReader reader = new StreamReader(dataStream);
            string responseFromServer = reader.ReadToEnd();
            Console.WriteLine(responseFromServer);
            reader.Close();
            dataStream.Close();
            response.Close();
            request.Abort();
            return responseFromServer;
        }
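
    A WebRequest represents a single HTTP exchange: once GetRequestStream/GetResponse have run, the object cannot be re-posted, and closing the stream ends the request body rather than resetting the request. The usual pattern is one fresh HttpWebRequest per call, with a shared CookieContainer carrying the session across calls. A sketch (the URL and form encoding come from the question; error handling omitted):

        using System.IO;
        using System.Net;
        using System.Text;

        class ApiClient
        {
            // Shared across requests: this is what preserves cookies/session.
            static readonly CookieContainer Session = new CookieContainer();

            static string Post(string url, string data)
            {
                // New request object per call.
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Method = "POST";
                request.ContentType = "application/x-www-form-urlencoded";
                request.CookieContainer = Session;

                byte[] body = Encoding.UTF8.GetBytes(data);
                request.ContentLength = body.Length;
                using (Stream rs = request.GetRequestStream())
                    rs.Write(body, 0, body.Length);

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                    return reader.ReadToEnd();
            }
        }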

    Read the article

  • Is SqlDataReader slower than using the command-line utility sqlcmd?

    - by Andrew
    I was recently advocating to a colleague that we replace some C# code that uses the sqlcmd command-line utility with a SqlDataReader. The old code uses:

        System.Diagnostics.ProcessStartInfo procStartInfo =
            new System.Diagnostics.ProcessStartInfo("cmd", "/c " + sqlCmd);

    where sqlCmd is something like:

        "sqlcmd -S " + serverName + " -y 0 -h-1 -Q " + "\"" + "USE [" + database + "]" + ";" + txtQuery.Text + "\"";

    The results are then parsed using regular expressions. I argued that using a SqlDataReader would be more in line with industry practices, easier to debug and maintain, and probably faster. However, the SqlDataReader approach is at best the same speed and quite possibly slower. I believe I'm doing everything correctly with the SqlDataReader. The code is:

        using (SqlConnection connection = new SqlConnection())
        {
            try
            {
                SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(connectionString);
                connection.ConnectionString = builder.ToString();
                SqlCommand command = new SqlCommand(queryString, connection);
                connection.Open();
                SqlDataReader reader = command.ExecuteReader();
                // do stuff w/ reader
                reader.Close();
            }
            catch (Exception ex)
            {
                outputMessage += (ex.Message);
            }
        }

    I've used System.Diagnostics.Stopwatch to time both approaches, and the command-line utility (called from C# code) does seem faster (20-40%?). The SqlDataReader has the neat feature that when the same code is called again, it's lightning fast, but for this application we don't anticipate that. I have already done some research on this problem. I note that the sqlcmd utility uses OLE DB technology to hit the database. Is that faster than ADO.NET? I'm really surprised, especially since the command-line approach involves starting up a process. I really thought it would be slower. Any thoughts? Thanks, Dave
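
    One measurement caveat worth checking before drawing conclusions: the very first ADO.NET call pays one-off costs (creating the connection pool, loading assemblies, compiling the query plan), so timing a single cold call per approach largely measures warm-up - which matches the observation that repeat calls are "lightning fast". A sketch of a fairer comparison (the iteration count is arbitrary):

        using System;
        using System.Data.SqlClient;
        using System.Diagnostics;

        static TimeSpan AverageQueryTime(string connectionString, string queryString, int iterations)
        {
            RunOnce(connectionString, queryString);      // warm-up: pool + plan compilation

            Stopwatch sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
                RunOnce(connectionString, queryString);
            sw.Stop();
            return TimeSpan.FromTicks(sw.Elapsed.Ticks / iterations);
        }

        static void RunOnce(string connectionString, string queryString)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(queryString, connection))
            {
                connection.Open();                        // cheap once the pool exists
                using (SqlDataReader reader = command.ExecuteReader())
                    while (reader.Read()) { /* drain rows; discard values */ }
            }
        }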

    Read the article

  • .NET FTP: get directory size

    - by Xaver
    Hi, I am writing a method that needs to determine the size of a specified directory on an FTP server. I get a response from the server that contains flags, file name, size and other info, but the format of the answer differs between FTP servers. How do I know which format the answer uses? Here is the method so far (C++/CLI):

        unsigned long long GetFtpDirSize(String^ ftpDir)
        {
            unsigned long long size = 0;
            int j = 0;
            StringBuilder^ result = gcnew StringBuilder();
            StreamReader^ reader;
            FtpWebRequest^ reqFTP;
            reqFTP = (FtpWebRequest^)FtpWebRequest::Create(gcnew Uri(ftpDir));
            reqFTP->UseBinary = true;
            reqFTP->Credentials = gcnew NetworkCredential("anonymous", "123");
            reqFTP->Method = WebRequestMethods::Ftp::ListDirectoryDetails;
            reqFTP->KeepAlive = false;
            reqFTP->UsePassive = false;
            try
            {
                WebResponse^ resp = reqFTP->GetResponse();
                Encoding^ code;
                code = Encoding::GetEncoding(1251);
                reader = gcnew StreamReader(resp->GetResponseStream(), code);
                String^ line = reader->ReadToEnd();
                array<Char>^ delimiters = gcnew array<Char>{ '\r', '\n' };
                array<Char>^ delimiters2 = gcnew array<Char>{ ' ' };
                array<String^>^ words = line->Split(delimiters, StringSplitOptions::RemoveEmptyEntries);
                array<String^>^ DetPr;
                System::Collections::IEnumerator^ myEnum = words->GetEnumerator();
                while (myEnum->MoveNext())
                {
                    String^ word = safe_cast<String^>(myEnum->Current);
                    DetPr = word->Split(delimiters2);
                }
            }
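
    There is no standard format for a LIST reply, so FTP clients generally sniff the dialect per line: Unix-style listings begin with a permissions string ("drwxr-xr-x ..."), while DOS/IIS-style listings begin with a date ("04-02-12 09:13AM ..."). A C# sketch of that heuristic (the two regexes cover only these two common dialects - an assumption; directory entries are reported as size 0):

        using System.Text.RegularExpressions;

        static class FtpListParser
        {
            // Unix: "-rw-r--r--   1 ftp ftp     4025 Apr 02 09:13 file1.pdf"
            static readonly Regex UnixEntry = new Regex(
                @"^(?<type>[dl-])[rwxsStT-]{9}\s+\d+\s+\S+\s+\S+\s+(?<size>\d+)\s+\S+\s+\d+\s+\S+\s+(?<name>.+)$");

            // DOS/IIS: "04-02-12  09:13AM             4025 file1.pdf"   ("<DIR>" for folders)
            static readonly Regex DosEntry = new Regex(
                @"^\d{2}-\d{2}-\d{2,4}\s+\d{2}:\d{2}(AM|PM)\s+(?<size><DIR>|\d+)\s+(?<name>.+)$");

            public static long EntrySize(string line)
            {
                Match m = UnixEntry.Match(line);
                if (m.Success)
                    return m.Groups["type"].Value == "d" ? 0 : long.Parse(m.Groups["size"].Value);

                m = DosEntry.Match(line);
                if (m.Success && m.Groups["size"].Value != "<DIR>")
                    return long.Parse(m.Groups["size"].Value);

                return 0;   // unrecognised dialect, or a directory entry
            }
        }

    Summing EntrySize over every line of the ListDirectoryDetails response gives the non-recursive directory size; recursing into entries flagged as directories is left out of the sketch.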

    Read the article

  • Python and csv help

    - by user353064
    I'm trying to create a script that will check the computer's host name, then search a master list for that value and return the corresponding value from the CSV file, then open another file and do a find-and-replace. I know this should be easy, but I haven't done much in Python before. Here is what I have so far.
    masterlist.txt (tab delimited):

        Name                    UID
        Bob-Smith.local         bobs
        Carmen-Jackson.local    carmenj
        David-Kathman.local     davidk
        Jenn-Roberts.local      jennr

    And the script I have created thus far:

        # GET CLIENT HOST NAME
        import socket
        host = socket.gethostname()
        print host

        # IMPORT MASTER DATA
        import csv, sys
        filename = "masterlist.txt"
        reader = csv.reader(open(filename, "rU"))

        # PRINT MASTER DATA
        for row in reader:
            print row

        # SEARCH ON HOSTNAME AND RETURN UID

        # REPLACE VALUE IN FILE WITH UID
        # import fileinput
        # for line in fileinput.FileInput("filetoreplace", inplace=1):
        #     line = line.replace("replacethistext", "UID")
        #     print line

    Right now, it's just set to print the master list. I'm not sure if the list needs to be parsed and placed into a dictionary or what. I really need to figure out how to search the first field for the hostname and then return the field in the second column. Thanks in advance for your help, Aaron

    Read the article
