Search Results

Search found 5055 results on 203 pages for 'pdf converter'.

Page 188 of 203

  • Python program to search for specific strings in hash values (coding help)

    - by Diego
    Trying to write a script that searches hash values for specific strings (input by the user) and returns the whole record if the search query is present in that line. I'm doing this partly just to learn Python a bit more, but it could be a real-world application used by an HR department to search a .csv resume database for specific words in each resume. I'd like the program to look through a .csv file that has three entries per line (id#;applicant name;resume text). I set it up so that each line is read into a hash (dictionary), then created a string from the resume-text entry, and am trying to use the .find() method to return the entire record for each match. What I'd like is: if the word "gpa" is used as a search query and it is found in s['resumetext'] for three applicants (rows in the .csv file), it prints the id, name, and resume for every row that contains it (all three applicants). As it stands, my program prints the first row in the .csv file (print resume['id'], resume['name'], resume['resumetext']) no matter what the search query is, whether it appears in the resume text or not. Lastly, are there better ways of doing this, such as searching Word documents, PDFs and .txt files in a folder for specific words using Python? (I've just started reading about the re module and am wondering if that may be the route, rather than putting everything in a .csv file.)

        def find_details(id2find):
            resumes_f = open("resume_data.csv")
            for each_line in resumes_f:
                s = {}
                (s['id'], s['name'], s['resumetext']) = each_line.split(";")
                resumetext = str(s['resumetext'])
                if resumetext.find(id2find):
                    return(s)
                else:
                    print "No data matches your search query. Please try again"

        searchquery = raw_input("please enter your search term")
        resume = find_details(searchquery)
        if resume:
            print resume['id'], resume['name'], resume['resumetext']
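
    A minimal sketch of one way the lookup could be restructured (same file name and field order as above, Python 2 syntax to match the question; untested against the real data). Note that .find() returns -1 rather than False when nothing matches, and returning from inside the loop stops at the first line, so the sketch collects every matching row instead:

        import csv

        def find_details(query):
            """Return every row whose resume text contains the query."""
            matches = []
            with open("resume_data.csv", "rb") as resumes_f:
                reader = csv.reader(resumes_f, delimiter=";")
                for row in reader:
                    if len(row) != 3:
                        continue  # skip malformed lines
                    record = {"id": row[0], "name": row[1], "resumetext": row[2]}
                    # membership test instead of find(): -1 is truthy in Python
                    if query in record["resumetext"]:
                        matches.append(record)
            return matches

        searchquery = raw_input("please enter your search term: ")
        results = find_details(searchquery)
        if results:
            for resume in results:
                print resume["id"], resume["name"], resume["resumetext"]
        else:
            print "No data matches your search query. Please try again."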

    Read the article

  • Embedded CSS Media Queries Not Working

    - by Greg
    I am new to CSS media queries, and I was first trying to get the PDF/MP3/MP4 buttons centered on this page whenever a mobile device views it (http://www.mannachurch.org/portfolio-type/recycled-junk/). Keep in mind that I am using a heavily modified WordPress theme. So I tried experimenting to isolate the issue; however, I don't seem to have any control over the media queries and can't get anything to take effect, even in this simple HTML file:

        <!DOCTYPE html>
        <html>
        <head>
            <title>Title of the document</title>
            <style type="text/css">
                body { background-color: blue; }
                @media only screen and (min-device-width: 599px) and (max-device-width: 600px) {
                    body { background-color: black; }
                }
            </style>
        </head>
        <body>
            <p>This is an experiment</p>
        </body>
        </html>

    What am I doing wrong?

    Read the article

  • Disabling browser print options (headers, footers, margins) from page?

    - by Anthony
    I have seen this question asked in a couple of different ways on SO and several other websites, but most of the answers are either too specific or out of date. I'm hoping someone can provide a definitive answer here without pandering to speculation. Is there a way, either with CSS or JavaScript, to change the default printer settings when someone prints from their browser? And of course by "prints from their browser" I mean some form of HTML, not PDF or some other plug-in-reliant MIME type. Please note: if some browsers offer this and others don't (or if you only know how to do it for some browsers), I welcome browser-specific solutions. Similarly, if you know of a mainstream browser that has specific restrictions against EVER doing this, that is also helpful, but some fairly up-to-date documentation would be appreciated (simply saying "that goes against XYZ's security policy" isn't very convincing when XYZ has made significant changes to said policy in the last three years). Finally, when I say "change default print settings" I don't mean forever, just for my page, and I am referring specifically to print margins, headers, and footers. I am well aware that CSS offers the option of changing the page orientation as well as the page margins. One of the many struggles is with Firefox: if I set the page margins to 1 inch, it ADDS this to the half inch it already puts in place. I very much want to reduce the use of PDFs on my client's site, but the infringement on presentation (as well as the lack of reliability) is their main concern.

    Read the article

  • Exited event of Process is not raised?

    - by Kanags.Net
    In my application, I am opening an Excel sheet to show one of my Excel documents to the user. Before showing it, I save it to a folder on my local machine, and that local copy is what actually gets shown. When the user closes the application I wish to close the opened Excel files and delete all the Excel files present in my local folder. For this, in the logout event I have written code to close all the opened files, like so:

        Process[] processes = Process.GetProcessesByName(fileType);
        foreach (Process p in processes)
        {
            IntPtr pFoundWindow = p.MainWindowHandle;
            if (p.MainWindowTitle.Contains(documentName))
            {
                p.CloseMainWindow();
                p.Exited += new EventHandler(p_Exited);
            }
        }

    In the process Exited event I wish to delete the Excel file whose process has exited, like so:

        void p_Exited(object sender, EventArgs e)
        {
            string file = strOriginalPath;
            if (File.Exists(file))
            {
                // Pdf issue fix
                FileStream fs = new FileStream(file, FileMode.Open, FileAccess.Read);
                fs.Flush();
                fs.Close();
                fs.Dispose();
                File.Delete(file);
            }
        }

    The problem is that this Exited event is not called at all. On the other hand, if I delete the file right after closing the main window of the process, I get the exception "File already used by another process". Could anyone help me achieve my objective, or explain why the process Exited event is not being called?
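
    For reference, Process.Exited is only raised when EnableRaisingEvents has been set to true before the process ends. A hedged sketch of the loop above with that change, reusing the same fileType, documentName and p_Exited from the question:

        Process[] processes = Process.GetProcessesByName(fileType);
        foreach (Process p in processes)
        {
            if (p.MainWindowTitle.Contains(documentName))
            {
                // Opt in to exit notifications and attach the handler
                // before asking the window to close.
                p.EnableRaisingEvents = true;
                p.Exited += new EventHandler(p_Exited);
                p.CloseMainWindow();
            }
        }

    A synchronous alternative is to call p.WaitForExit() after CloseMainWindow() and delete the file once it returns.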

    Read the article

  • TableTools only exporting single header row

    - by Rakeesh
    I am using the DataTables jQuery plugin from http://datatables.net/. I have a DataTable with a multi-row table header that uses colspan, something like:

        <thead>
            <tr>
                <th>Level 1</th>
                <th colspan='2'>Level 1 - Item 1</th>
                <th colspan='2'>Level 1 - Item 2</th>
            </tr>
            <tr>
                <th>Level 2</th>
                <th>Level 2 - Item 1a</th>
                <th>Level 2 - Item 1b</th>
                <th>Level 2 - Item 2a</th>
                <th>Level 2 - Item 2b</th>
            </tr>
        </thead>

    However, when I export with the TableTools plugin, everything except the "Print" option (Excel, CSV, PDF, etc.) includes only the "Level 2" header row and not the "Level 1" row. Any suggestions on how to get it to export Level 1 as well?

    Read the article

  • SQL query error while trying to put a file in the database

    - by DaGhostman Dimitrov
    Hey guys, I have a big problem and no idea why it happens. I have a few forms that upload files to the database; all of them work fine except one. I use the same (simple insert) query in all of them, so I think it has something to do with the files I am trying to upload, but I am not sure. Here is the code:

        if ($_POST['action'] == 'hndlDocs') {
            $ref = $_POST['reference']; // numeric value
            $doc = file_get_contents($_FILES['doc']['tmp_name']);
            $link = mysqli_connect('localhost', 'XXXXX', 'XXXXX', 'documents');
            mysqli_query($link, "SET autocommit = 0");
            $query = "INSERT INTO documents ({$ref}, '{$doc}', '{$_FILES['doc']['type']}') ;";
            mysqli_query($link, $query);
            if (mysqli_error($link)) {
                var_dump(mysqli_error($link));
                mysqli_rollback($link);
            } else {
                print("<script> window.history.back(); </script>");
                mysqli_commit($link);
            }
        }

    The table has only these fields:

        documents (
            reference INT(5) NOT NULL,   -- unsigned zerofill
            doc LONGBLOB NOT NULL,       -- the binary data
            mime_type TEXT NOT NULL      -- the MIME type; PHP allows only application/pdf and image/jpeg
        );

    And the error I get is: "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '00001, '????' at line 1". I will appreciate any help. Cheers!
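
    For what it's worth, the INSERT above never names the target columns or uses a VALUES clause, and the raw file bytes are interpolated straight into the SQL string, which is what triggers the syntax error. A hedged sketch of the same insert with a prepared statement (column names taken from the schema above; not tested against the original code):

        if ($_POST['action'] == 'hndlDocs') {
            $ref  = (int) $_POST['reference'];
            $doc  = file_get_contents($_FILES['doc']['tmp_name']);
            $mime = $_FILES['doc']['type'];

            $link = mysqli_connect('localhost', 'XXXXX', 'XXXXX', 'documents');

            // Name the columns, add the missing VALUES clause, and keep the
            // binary data out of the SQL string entirely.
            $stmt = mysqli_prepare($link,
                'INSERT INTO documents (reference, doc, mime_type) VALUES (?, ?, ?)');
            mysqli_stmt_bind_param($stmt, 'iss', $ref, $doc, $mime);
            mysqli_stmt_execute($stmt);

            if (mysqli_stmt_error($stmt)) {
                var_dump(mysqli_stmt_error($stmt));
            } else {
                print("<script> window.history.back(); </script>");
            }
        }

    Large files may also need max_allowed_packet raised on the MySQL server.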

    Read the article

  • How to add data to a table in a Jasper Report using Java

    - by Areeb Gillani
    I have what is probably a simple question: I am trying to pass data to a Jasper report from Java, but I don't know how. The table data is very dynamic, so I cannot just pass an SQL query. I have a 2-D array of type Object holding all the data; how can I pass that? Thanks in advance!

        ConnectionManager con = new ConnectionManager();
        con.establishConnection();
        String fileName = "Pmc_Bill.jrxml";
        String outFileName = "OutputReport.pdf";
        HashMap params = new HashMap();
        params.put("PName", pname);
        params.put("PSerial", psrl);
        params.put("PGender", pgen);
        params.put("PPhone", pph);
        params.put("PAge", page);
        params.put("PRefer", pref);
        params.put("PDateR", dateNow);
        try {
            JasperReport jasperReport = JasperCompileManager.compileReport(fileName);
            if (jasperReport != null)
                System.out.println("so far so good ");
            // Fill the report using the table model as the data source
            JasperPrint jasperPrint = JasperFillManager.fillReport(jasperReport, params,
                    new JRTableModelDataSource(tbl.getModel())); // con.connection);
            try {
                JasperExportManager.exportReportToPdfFile(jasperPrint, outFileName);
                System.out.printf("File exported successfully");
            } catch (Exception e) {
                e.printStackTrace();
            }
            JasperViewer.viewReport(jasperPrint);
        } catch (JRException e) {
            JOptionPane.showMessageDialog(null, e);
            e.printStackTrace();
            System.exit(1);
        }
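
    One hedged option for the 2-D array, given that the code above already feeds a JRTableModelDataSource: wrap the array in a DefaultTableModel whose column names match the field names declared in Pmc_Bill.jrxml (the names below are placeholders), and fill the report from that.

        // requires javax.swing.table.DefaultTableModel and
        // net.sf.jasperreports.engine.data.JRTableModelDataSource

        // Placeholder column names; they must match the report's field names.
        Object[] columnNames = { "item", "quantity", "price" };
        Object[][] rows = billData;   // the existing 2-D Object array (assumed variable)

        DefaultTableModel model = new DefaultTableModel(rows, columnNames);
        JasperPrint jasperPrint = JasperFillManager.fillReport(
                jasperReport, params, new JRTableModelDataSource(model));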

    Read the article

  • Python programming - Windows focus and program process

    - by Zack
    I'm working on a Python program that will automatically combine sets of files based on their names. Being a newbie, I wasn't quite sure how to go about it, so I decided to just brute-force it with the win32api and virtual key presses. I run the script, it selects the top file (after arranging them by name), sends a right-click command, selects 'Combine as Adobe PDF', and then presses Enter. This launches the Acrobat combine window, where I send another Enter. Here's where I hit the problem: the folder where I'm converting these things loses focus, and I'm unsure how to get it back. Sending Alt+Tab commands seems somewhat unreliable; it sometimes switches to the wrong thing. A much bigger issue for me: different combinations of files take different times to combine. Though I haven't gotten this far in my code, my plan was to set some arbitrarily long time.sleep() before sending the last Enter to confirm the file name and complete the combination process. Is there a way to monitor another program's progress? Is there a way to have Python not execute any more code until something else has finished?
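
    On the last question only, a crude but hedged sketch: instead of a fixed time.sleep(), poll the combined PDF until it exists and its size stops changing, then carry on. The output path is hypothetical, and this assumes Acrobat grows the target file while combining.

        import os
        import time

        def wait_for_output(path, poll_seconds=2, quiet_polls=3):
            """Block until `path` exists and its size has been stable for a while."""
            last_size = -1
            stable = 0
            while stable < quiet_polls:
                if os.path.exists(path):
                    size = os.path.getsize(path)
                    if size > 0 and size == last_size:
                        stable += 1
                    else:
                        stable = 0
                    last_size = size
                time.sleep(poll_seconds)

        wait_for_output(r"C:\scans\combined.pdf")  # hypothetical output location
        # ...now send the final Enter...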

    Read the article

  • Help me find article on Multi-threading and Event Handling in Java

    - by JDR
    I once read an article on how to properly write event handlers for multithreaded code in Java, but I can't for the life of me find it anymore. It described the pitfalls and potential for deadlocks that can occur when firing events (not Swing events, mind you, but general events like model update notifications). To clarify, the situation is like this:

        // let's say this is code from an MVC model somewhere
        public void setSomeProperty(String myProperty) {
            if (!this.myProperty.equals(myProperty)) {
                this.myProperty = myProperty;
                fireMyPropertyChangedEvent(...);
            }
        }

    The article described how passing control to arbitrary external listener code is a potential cause of deadlock. I now find myself in a situation where I need to fire such events in a multithreaded environment, and I would very much like to read the article again before I continue. Does anyone know the article I'm referring to? I believe it came as a (fairly short) PDF. It started off with an initial naive implementation and incrementally pointed out flaws and improved upon it, ending with a sort of final proper way to fire multithreaded events. I've searched endlessly in my browser history and on Google, but all I could find were endless topics on the Swing event dispatch thread. Thank you.
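
    Not the article, but for context, the pattern such write-ups usually converge on is: mutate state under the lock, then notify a snapshot of the listeners outside it, so arbitrary listener code can never deadlock against the model. A hedged Java sketch of that end state (names are illustrative):

        import java.util.List;
        import java.util.concurrent.CopyOnWriteArrayList;

        public class Model {
            public interface PropertyListener {
                void propertyChanged(String newValue);
            }

            private final List<PropertyListener> listeners = new CopyOnWriteArrayList<PropertyListener>();
            private final Object lock = new Object();
            private String myProperty = "";

            public void addListener(PropertyListener l) { listeners.add(l); }

            public void setSomeProperty(String value) {
                boolean changed = false;
                synchronized (lock) {          // mutate state while holding the lock
                    if (!myProperty.equals(value)) {
                        myProperty = value;
                        changed = true;
                    }
                }
                if (changed) {                 // notify outside the lock
                    for (PropertyListener l : listeners) {
                        l.propertyChanged(value);  // CopyOnWriteArrayList iterates a snapshot
                    }
                }
            }
        }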

    Read the article

  • Fastest reliable way for Clojure (Java) and Ruby apps to communicate

    - by jkndrkn
    Hi there, we have cloud-hosted (Rackspace Cloud) Ruby and Java apps that will interact as follows: the Ruby app sends a request to the Java app, consisting of a map structure containing strings, integers, other maps, and lists (analogous to JSON); the Java app analyzes the data and sends a reply to the Ruby app. We are interested in evaluating both messaging formats (JSON, Protocol Buffers, Thrift, etc.) and message transmission channels/techniques (sockets, message queues, RPC, REST, SOAP, etc.). Our criteria: short round-trip time; low round-trip-time standard deviation (we understand that garbage-collection pauses and network usage spikes can affect this value); high availability; scalability (we may want multiple instances of the Ruby and Java apps exchanging point-to-point messages in the future); ease of debugging and profiling; good documentation and community support; bonus points for Clojure support. What combination of message format and transmission method would you recommend, and why? I've gathered here some materials we have already collected for review: a comparison of various Java serialization options; a comparison of Thrift and Protocol Buffers (old); a comparison of various data interchange formats; another comparison of Thrift and Protocol Buffers; "Fallacies of Protocol Buffers RPC features"; a discussion of RPC in the context of AMQP (message queueing); a comparison of RPC and message-passing in distributed systems (PDF); a criticism of RPC from the perspective of a message-passing fan; and an overview of Avro from a Ruby programmer's perspective.

    Read the article

  • Handling large datasets with PHP/Drupal

    - by jo
    Hi all, I have a report page that deals with ~700k records from a database table. I can display this on a web page using paging to break up the results. However, my export-to-PDF/CSV functions rely on processing the entire data set at once, and I hit my 256 MB memory limit at around 250k rows. I don't feel comfortable increasing the memory limit, and I don't have the ability to use MySQL's SELECT ... INTO OUTFILE to just serve a pre-generated CSV. However, I can't really see a way of serving up large data sets with Drupal using something like:

        $form = array();
        $table_headers = array();
        $table_rows = array();
        $data = db_query("a query to get the whole dataset");
        while ($row = db_fetch_object($data)) {
            $table_rows[] = $row->some_attribute;
        }
        $form['report'] = array('#value' => theme('table', $table_headers, $table_rows));
        return $form;

    Is there a way of getting around what is essentially appending to a giant array of arrays? At the moment I don't see how I can offer any meaningful report pages with Drupal because of this. Thanks.
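
    For the CSV export specifically, one hedged sketch (Drupal 6-style APIs, matching db_fetch_object above; the callback and field names are placeholders) is to stream rows straight to the client instead of accumulating them:

        <?php
        function mymodule_report_csv() {
          header('Content-Type: text/csv');
          header('Content-Disposition: attachment; filename="report.csv"');

          $out = fopen('php://output', 'w');
          fputcsv($out, array('Column A', 'Column B'));  // assumed header row

          $result = db_query("a query to get the whole dataset");
          while ($row = db_fetch_object($result)) {
            // One row in, one row out: nothing accumulates in memory.
            fputcsv($out, array($row->field_a, $row->field_b));
          }
          fclose($out);
          exit;
        }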

    Read the article

  • Multithreading for loop while maintaining order

    - by David
    I started messing around with multithreading for a CPU-intensive batch process I'm running. Essentially I'm trying to condense multiple single-page TIFFs into single PDF documents. This works fine with a foreach loop or standard iteration, but can be very slow for documents of several hundred pages. I tried the following, based on some examples I found, to use multithreading, and it gives a significant performance improvement; however, it obliterates the page order: instead of 1,2,3,4 it becomes 1,3,4,2,6,5 depending on which thread completes first. My question is how I can use this technique while maintaining the page order, and whether doing so will negate the performance benefit of the multithreading. Thank you in advance.

        PdfDocument doc = new PdfDocument();
        string mail = textBox1.Text;
        string[] split = mail.Split(new string[] { Environment.NewLine }, StringSplitOptions.None);
        int counter = split.Count();

        // Source must be array or IList.
        var source = Enumerable.Range(0, 100000).ToArray();

        // Partition the entire source array.
        var rangePartitioner = Partitioner.Create(0, counter);
        double[] results = new double[counter];

        // Loop over the partitions in parallel.
        Parallel.ForEach(rangePartitioner, (range, loopState) =>
        {
            // Loop over each range element without a delegate invocation.
            for (int i = range.Item1; i < range.Item2; i++)
            {
                f_prime = split[i].Replace(" ", "");
                PdfPage page = doc.AddPage();
                XGraphics gfx = XGraphics.FromPdfPage(page);
                XImage image = XImage.FromFile(f_prime);
                double x = 0;
                gfx.DrawImage(image, x, 0);
            }
        });
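
    A hedged sketch of one way to keep the order (same PdfSharp types as above; PdfSharp's thread-safety for image loading isn't verified here, so treat this as a sketch): render each TIFF into its own single-page document in parallel, store the result in a slot indexed by its position, then merge the slots sequentially in index order.

        // requires System.IO, System.Threading.Tasks and PdfSharp.Pdf.IO
        var pageBytes = new byte[counter][];

        Parallel.For(0, counter, i =>
        {
            string path = split[i].Replace(" ", "");
            using (var single = new PdfDocument())
            {
                PdfPage page = single.AddPage();
                using (XGraphics gfx = XGraphics.FromPdfPage(page))
                using (XImage image = XImage.FromFile(path))
                {
                    gfx.DrawImage(image, 0, 0);
                }
                using (var ms = new MemoryStream())
                {
                    single.Save(ms);
                    pageBytes[i] = ms.ToArray();   // slot i preserves page order
                }
            }
        });

        // The sequential merge runs in index order, regardless of which
        // thread finished first.
        var ordered = new PdfDocument();
        foreach (byte[] bytes in pageBytes)
        {
            using (var ms = new MemoryStream(bytes))
            using (PdfDocument piece = PdfReader.Open(ms, PdfDocumentOpenMode.Import))
            {
                ordered.AddPage(piece.Pages[0]);
            }
        }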

    Read the article

  • How to style "form" field labels in Windows Phone 7?

    - by Jeremy Bell
    Is there any standards guidance on how to style field labels next to form fields in Windows Phone 7 Silverlight applications? For example, let's say I have a StackPanel with a TextBlock label and a TextBox for data entry. Currently I am using the default TextBlock margin included in PhoneTextSubtleStyle ("12,0,12,0"), and a margin of "0,-12,0,0" to pull the TextBox up closer to the label:

        <StackPanel HorizontalAlignment="Left">
            <TextBlock VerticalAlignment="Center" Text="Name" Style="{StaticResource PhoneTextSubtleStyle}" />
            <TextBox Text="{Binding ItemName, Mode=TwoWay}" TextChanged="TextBox_TextChanged"
                     VerticalAlignment="Center" Width="433" Margin="0,-12,0,0" />
        </StackPanel>

    Note that the TextBox seems to have some internal padding of 12 pixels on the left and right, so the TextBlock label and the TextBox control line up perfectly on the left. The problem is, I see existing apps with widely varying conventions for field-label styling. Some do the negative margin adjustment like I have above; some don't. Some appear to override the label TextBlock margin so that it is indented an additional 12 pixels on the left (i.e. "24,0,12,0" instead of the default "12,0,12,0"). Some apps put the labels to the left of the fields themselves (I hate that). Is there some standard design guidance on field labels in Windows Phone 7? I read through the design template PDF and could only determine that field labels should have only the first word capitalized (preferably one-word labels) and should NOT end with a colon. I didn't see anything with regard to margins or alignment between the label and the field.

    Read the article

  • rails + paperclip: Is a generic "Attachment" model a good idea?

    - by egarcia
    In my application I have several things with attachments on them, using Paperclip. Clients have one logo. Stores can have one or more pictures; these pictures can also carry other information, such as the date on which they were taken. Products can have one or more pictures of them, categorized (from the front, from the back, etc.). For now, each of my models either has its own paperclip fields (Client has_attached_file) or has_many models that have attached files (Store has_many StorePictures, Product has_many ProductPictures). My client has also told me that in the future we might be adding more attachments to the system (e.g. PDF documents for the clients to download). My application has a rather complex authorization system implemented with declarative_authorization: one cannot, for example, download pictures from a product one is not allowed to 'see'. I'm considering refactoring my code so I can have a generic "Attachment" model, so any model can has_many :attachments. In this context, does that sound like a good idea? Or should I continue making Foos and FooPictures?
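
    A minimal sketch of what the generic model might look like with a polymorphic association (Paperclip-era syntax; the metadata column is illustrative):

        # app/models/attachment.rb
        class Attachment < ActiveRecord::Base
          belongs_to :attachable, :polymorphic => true
          has_attached_file :file
          # extra metadata (taken_on, category, ...) can live on this model
        end

        class Client < ActiveRecord::Base
          has_one :logo, :as => :attachable, :class_name => 'Attachment'
        end

        class Store < ActiveRecord::Base
          has_many :attachments, :as => :attachable
        end

        class Product < ActiveRecord::Base
          has_many :attachments, :as => :attachable
        end

    The attachments table would need attachable_id and attachable_type columns for the polymorphic lookup.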

    Read the article

  • PDFBox page break: strange NullPointerException

    - by schneiti
    I am currently trying to print text across multiple pages. To do this, I count the number of rows, and when they reach a fixed amount a method called pageBreak is executed. After the first page break, when I try setting a font using contentStream.setFont(PDType1Font.HELVETICA, 12); it yields the following error, occurring at the setFont line:

        java.lang.NullPointerException
            at org.apache.pdfbox.pdmodel.edit.PDPageContentStream.setFont(PDPageContentStream.java:321)
            at com.xy.deu.xy.abc.db.schemavergleich.PDFDocumenter.drawBGCS(PDFDocumenter.java:781)
            at com.xy.deu.xy.abc.db.schemavergleich.PDFDocumenter.createPDFDocumentation(PDFDocumenter.java:205)
            at com.xy.deu.xy.abc.db.schemavergleich.MainClass.createPDFandOutput(MainClass.java:361)
            at com.xy.deu.xy.abc.db.schemavergleich.MainClass.start(MainClass.java:231)
            at com.xy.deu.xy.abc.db.schemavergleich.MainClass.main(MainClass.java:180)

    Below is the code that gets executed when the error occurs:

        ...
        // If the table gets too long for a page -> page break:
        if (currentLines > 37) {
            pageBreak(currentLinePos); // TODO Currently causing app to crash
        }
        ...

        private void pageBreak(int currentLine) throws Exception {
            contentStream.endText();
            contentStream.close();

            // Create new page
            page = new PDPage(PDPage.PAGE_SIZE_A4);
            doc.addPage(page);

            // Create a new font object selecting one of the PDF base fonts
            font = PDType1Font.HELVETICA;

            // Start a new content stream which will "hold" the to-be-created content
            contentStream = new PDPageContentStream(doc, page);
            currentLines = 0;
            mediabox = page.findMediaBox();
            contentStream.beginText();
            contentStream.moveTextPositionByAmount(startX, startY);
            contentStream.setFont(PDType1Font.HELVETICA, 12);
        }

    Now comes the strange thing: debugging shows nothing that is unset. I'll attach a screenshot taken right at the position where the error occurs.

    Read the article

  • Need advice - Developing a flexible documentation system, heavily focused on localization

    - by inkedmn
    I've been charged with building a documentation system/platform. Here's a short list of the major requirements:

    - Easily localized: this will need to support a dozen or so languages out of the gate (the ability for non-technical personnel to add/update translations would be a big plus, though not 100% required).
    - Flexibility in output formats: at the bare minimum, I need to output the documents (either as a whole or in selected chunks) as PDF and HTML. Bonus points for native formats like Windows Help files.
    - Managed and deployed via an intuitive user interface (web, ideally).

    I'm wondering if you folks know of any systems out there that already support this type of thing? I'm not averse to writing this from scratch, but I'd rather not reinvent the wheel if I can help it. The two major candidates I've come across thus far are DocBook and reST. The former seems to have garnered a reputation for, well, sucking. I'm unfamiliar with either, but I'm told that reST would get me a good portion of the way there. Any other suggestions? Would I be better off building this from scratch?

    Read the article

  • Template access of symbol in unnamed namespace

    - by Fred Larson
    We are upgrading our XL C/C++ compiler from V8.0 to V10.1 and found some code that now gives an error, even though it compiled under V8.0. Here's a minimal example. test.h:

        #include <iostream>
        #include <string>

        template <class T>
        void f()
        {
            std::cout << TEST << std::endl;
        }

    test.cpp:

        #include <string>
        #include "test.h"

        namespace {
            std::string TEST = "test";
        }

        int main()
        {
            f<int>();
            return 0;
        }

    Under V10.1, we get the following error:

        "test.h", line 7.16: 1540-0274 (S) The name lookup for "TEST" did not find a declaration.
        "test.cpp", line 6.15: 1540-1303 (I) "std::string TEST" is not visible.
        "test.h", line 5.6: 1540-0700 (I) The previous message was produced while processing "f<int>()".
        "test.cpp", line 11.3: 1540-0700 (I) The previous message was produced while processing "main()".

    We found a similar difference between g++ 3.3.2 and 4.3.2. I also found that in g++, if I move the #include "test.h" to be after the unnamed-namespace declaration, the compile error goes away. So here's my question: what does the Standard say about this? When a template is instantiated, is that instance considered to be declared at the point where the template itself was declared, or is the Standard not that clear on this point? I did some looking through the n2461.pdf draft, but didn't really come up with anything definitive.
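
    For what it's worth, TEST is a non-dependent name inside f, and non-dependent names are looked up at the point where the template is defined rather than at the point of instantiation (two-phase lookup), which appears to be what the newer compilers are enforcing. A minimal sketch of one workaround under that assumption, passing the string in so the template no longer relies on a later declaration:

        // test.h
        #include <iostream>
        #include <string>

        template <class T>
        void f(const std::string& text)   // TEST is now a parameter, not a lookup
        {
            std::cout << text << std::endl;
        }

        // test.cpp
        #include <string>
        #include "test.h"

        namespace {
            std::string TEST = "test";
        }

        int main()
        {
            f<int>(TEST);
            return 0;
        }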

    Read the article

  • Quantifying the Performance of Garbage Collection vs. Explicit Memory Management

    - by EmbeddedProg
    I found this article: "Quantifying the Performance of Garbage Collection vs. Explicit Memory Management" (http://www.cs.umass.edu/~emery/pubs/gcvsmalloc.pdf). In the conclusion section, it reads:

        Comparing runtime, space consumption, and virtual memory footprints over a range of benchmarks, we show that the runtime performance of the best-performing garbage collector is competitive with explicit memory management when given enough memory. In particular, when garbage collection has five times as much memory as required, its runtime performance matches or slightly exceeds that of explicit memory management. However, garbage collection's performance degrades substantially when it must use smaller heaps. With three times as much memory, it runs 17% slower on average, and with twice as much memory, it runs 70% slower. Garbage collection also is more susceptible to paging when physical memory is scarce. In such conditions, all of the garbage collectors we examine here suffer order-of-magnitude performance penalties relative to explicit memory management.

    So, if my understanding is correct: if I have an app written in native C++ requiring 100 MB of memory, then to achieve the same performance with a "managed" (i.e. garbage-collector-based) language such as Java or C#, the app should require 5 * 100 MB = 500 MB? (And with 2 * 100 MB = 200 MB, the managed app would run 70% slower than the native app?) Do you know whether current garbage collectors (i.e. the latest Java VMs and .NET 4.0) suffer from the same problems described in the article? Has the performance of modern garbage collectors improved? Thanks.

    Read the article

  • Google Sites API - File Cabinets: Spaces and extension separator (.) are removed from file names

    - by user1299447
    We have a series of internal reports that we update regularly from our internal databases, and we built a C# application that uploads these reports to a Google Site. Everything works fine, except that the name of the file shown to the end user in the File Cabinet does not include the original spaces or the extension separator (.). For example, "Stock per warehouse.pdf" is shown as "Stockperwarehousepdf". Below is a simplified version of the code:

        private AtomEntry UploadAttachment(string filename, AtomEntry parent, string title, string description)
        {
            SiteEntry entry = new SiteEntry();
            AtomCategory category = new AtomCategory(SitesService.ATTACHMENT_TERM, SitesService.KIND_SCHEME);
            category.Label = "attachment";
            entry.Categories.Add(category);

            AtomLink parentLink = new AtomLink(AtomLink.ATOM_TYPE, SitesService.PARENT_REL);
            parentLink.HRef = parent.SelfUri;
            entry.Links.Add(parentLink);

            entry.MediaSource = new MediaFileSource(filename, MediaFileSource.GetContentTypeForFileName(filename));
            entry.Content.Type = MediaFileSource.GetContentTypeForFileName(filename);
            entry.Title.Text = title;
            entry.Summary.Text = description;

            AtomEntry newEntry = null;
            newEntry = service.Insert(new Uri(makeFeedUri("content")), entry);
            return newEntry;
        }

    The key line is where the MediaFileSource object is created. Any idea what we are missing? I've tried all sorts of changes :(

    Read the article

  • C vs C++ function questions

    - by james
    I am learning C after starting out with C++ as my first compiled language, having decided to "go back to basics". There are two questions I have concerning the way each language deals with functions. Firstly, why does C "not care" whether a function has been declared before it is used, whereas C++ does? For example,

        int main() {
            donothing();
            return 0;
        }

        void donothing() { }

    the above will not compile with a C++ compiler, whereas it will compile with a C compiler. Why is this? Isn't C++ mostly just an extension of C, and shouldn't it be mostly "backward compatible"? Secondly, the book that I found (Link to pdf) does not seem to state a return type for the main function. I checked around and found other books and websites that also commonly omit the return type of main. If I try to compile a program that does not specify a return type for main, it compiles fine (although with some warnings) with a C compiler, but it doesn't compile with a C++ compiler. Again, why is that? Is it better style to always specify the return type as int rather than leaving it out? Thanks for any help, and just as a side note, if anyone can suggest a better book that I should buy, that would be great!
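
    For comparison, a sketch of the same program with an explicit prototype and return type, which both C and C++ compilers accept cleanly (C89 allowed implicit function declarations and an implicit int return type for main, which is why the original builds as C with warnings; C++ never has):

        void donothing(void);   /* declaration visible before first use */

        int main(void)
        {
            donothing();
            return 0;
        }

        void donothing(void)
        {
        }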

    Read the article

  • Passing huge amounts of data as a hexadecimal (0x123AB...) parameter of a CLR stored procedure in SQL Server

    - by user193655
    I am posting this question as a follow-up to this question, since that thread is no longer receiving answers. I'm trying to understand whether it is possible to pass, as a parameter of a CLR stored procedure, a large amount of data as "0x5352532F...". The idea is to send the data directly to the CLR stored procedure instead of writing it to a temporary DB field first and passing it from there as a varbinary(max) parameter. I have three questions: 1) Is it possible, and if so, how? Let's say I want to pass a PDF file to the CLR stored procedure (not the path, the full bits that make up the file). Something like:

        exec MyCLRStoredProcs.dbo.insertfile
            @file_remote_path = 'c:\temp\test_file.txt',
            @file_contents = 0x4D5A90000300000004000....  -- (this long list is the file content)

    where insertfile is a stored proc that writes the binary data I pass as @file_contents to the server path given by @file_remote_path. 2) Is there a corruption risk in adopting this approach (or is it the same approach SQL Server uses behind the scenes)? 3) How do I convert the contents of a file into the "0x23423..." hexadecimal representation?
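
    On question 3 only, a minimal C# sketch of producing that hexadecimal literal from a file (this says nothing about whether passing such a literal is advisable; the path is the one from the example above):

        using System;
        using System.IO;

        class HexLiteral
        {
            static void Main()
            {
                byte[] bytes = File.ReadAllBytes(@"c:\temp\test_file.txt");
                // BitConverter yields "4D-5A-90-..."; strip the dashes and prefix 0x.
                string literal = "0x" + BitConverter.ToString(bytes).Replace("-", "");
                Console.WriteLine(literal);
            }
        }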

    Read the article

  • Trouble installing R-2.15.2 on Fedora 14

    - by user1896007
    I need to install R-2.15.2, the latest version. I tried using

        blah> sudo yum install R

    but for whatever reason (maybe because it's an old version of Fedora?) my system thinks R version 2.13 is the most recent. So I downloaded the .tar.gz file from R's site and unpacked it with:

        blah> tar -xvf R-2.15.2.tar.gz

    I then ran:

        blah> ./configure
        blah/R-2.15.2> ls
        ChangeLog  COPYING  Makeconf.in  ONEWS  src  VERSION-NICK  config.log  doc  Makefile.fw
        OONEWS  SVN-REVISION  config.site  etc  Makefile.in  po  tests  configure  INSTALL
        NEWS  README  tools  configure.ac  m4  NEWS.pdf  share  VERSION

    As you can see, configure and the Makefile templates (Makefile.in, Makefile.fw) are present. However, when I run make inside the R folder, I get the following error:

        blah/R-2.15.2> make
        make: No targets specified and no makefile found.  Stop.

    Is there any way I can fix this issue? I'm guessing people will recommend updating Fedora, but is there another way?

    Read the article

  • definition of wait-free (referring to parallel programming)

    - by tecuhtli
    In Maurice Herlihy's paper "Wait-free synchronization" (www.cs.brown.edu/~mph/Herlihy91/p124-herlihy.pdf) he defines wait-free as follows: "A wait-free implementation of a concurrent data object is one that guarantees that any process can complete any operation in a finite number of steps, regardless of the execution speeds of the other processes." Let's take one operation op from the universe. (1) Does the definition mean: "Every process completes a certain operation op in the same finite number n of steps."? (2) Or does it mean: "Every process completes a certain operation op in some finite number of steps, so one process may complete op in k steps and another process in j steps, where k != j."? Just by reading the definition I would take meaning (2). However, that makes no sense to me: a process executing op once in k steps and another time in k + m steps still meets the definition, but those m extra steps could be a waiting loop. If meaning (2) is right, can anybody explain to me why it describes wait-freedom? In contrast, meaning (1) would guarantee that op is executed in the same number of steps k, so there can't be any additional m steps that arise, e.g., from a waiting loop. Which meaning is right, and why? Thanks a lot, Sebastian

    Read the article

  • Manipulating gl_LightSource[gl_MaxLights]

    - by pion
    The OpenGL® Shading Language spec, version 1.20 (http://www.opengl.org/registry/doc/GLSLangSpec.Full.1.20.8.pdf), shows the following:

        struct gl_LightSourceParameters {
            vec4 ambient;               // Acli
            vec4 diffuse;               // Dcli
            vec4 specular;              // Scli
            vec4 position;              // Ppli
            vec4 halfVector;            // Derived: Hi
            vec3 spotDirection;         // Sdli
            float spotExponent;         // Srli
            float spotCutoff;           // Crli
                                        // (range: [0.0,90.0], 180.0)
            float spotCosCutoff;        // Derived: cos(Crli)
                                        // (range: [1.0,0.0],-1.0)
            float constantAttenuation;  // K0
            float linearAttenuation;    // K1
            float quadraticAttenuation; // K2
        };
        uniform gl_LightSourceParameters gl_LightSource[gl_MaxLights];

    Also, http://www.lighthouse3d.com/opengl/glsl/index.php?ogldir1 shows code snippets such as:

        lightDir = normalize(vec3(gl_LightSource[0].position));

    https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/doc/spec/WebGL-spec.html#5.10 shows many uniform* functions, but nothing seems to deal with an array of uniform variables like gl_LightSource[0]. How do we set the gl_LightSource[0] fields in WebGL using JavaScript? For example, gl_LightSource[0].position. Thanks in advance for your help. Note: cross-posted at http://www.khronos.org/message_boards/viewtopic.php?f=43&t=3373
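
    As a point of reference, WebGL's shading language (GLSL ES) does not provide fixed-function built-ins such as gl_LightSource at all, so the usual route is to declare an equivalent uniform yourself and set each member from JavaScript. A hedged sketch with made-up uniform names, assuming gl is the WebGLRenderingContext and program the linked shader program:

        // In the shader:
        //   struct LightSource { vec4 position; vec4 diffuse; };
        //   uniform LightSource uLight[2];

        // In JavaScript, each struct/array member is located by its full name:
        var posLoc = gl.getUniformLocation(program, "uLight[0].position");
        var difLoc = gl.getUniformLocation(program, "uLight[0].diffuse");

        gl.useProgram(program);
        gl.uniform4fv(posLoc, [0.0, 10.0, 5.0, 1.0]);  // example values
        gl.uniform4fv(difLoc, [1.0, 1.0, 1.0, 1.0]);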

    Read the article

  • DPM 2010 PowerShell Script to Easily Restore Multiple Files

    - by bmccleary
    I’ve got what I thought would be a simple task with Data Protection Manager 2010 that is turning out to be quite frustrating. I have a file server on one server and it is the only server in a protection group. This file server is the repository for a document management application which stores the files according to the data within a SQL database. Sometimes users inadvertently delete files from within our application and we need to restore them. We have all the information needed to restore the files to include the file name, the folder that the file was stored in and the exact date that the file was deleted. It is easy for me to restore the file from within the DPM console since we have a recovery point created every day, I simply go to the day before the delete, browse to the proper folder and restore the file. The problem is that using the DPM console, the cumbersome wizard requires about 20 mouse clicks to restore a single file and it takes 2-4 minutes to get through all the windows. This becomes very irritating when a client needs 100’s of files restored… it takes all day of redundant mouse clicks to restore the files. Therefore, I want to use a PowerShell script (and I’m a novice at PowerShell) to automate this process. I want to be able to create a script that I pass in a file name, a folder, a recovery point date (and a protection group/server name if needed) and simply have the file restored back to its original location with some sort of success/failure notification. I thought it was a simple basic task of a backup solution, but I am having a heck of a time finding the right code. I have seen the sample code at http://social.technet.microsoft.com/wiki/contents/articles/how-to-use-a-windows-powershell-script-to-recover-an-item-in-data-protection-manager.aspx that I have tried to follow, but it doesn’t accomplish what I really want to do (it’s too simplistic) and there are errors in the sample code. Therefore, I would like to get some help writing a script to restore these files. An example of the known values to restore the data are: DPM Server: BACKUP01 Protection Group: Document Repository Data Protected Server: FILER01 File Path: R:\DocumentRepository\ToBackup\ClientName\Repository\2010\07\24\filename.pdf Date Deleted: 8/2/2010 (last recovery point = 8/1/2010) Bonus Points: If you can help me not only create this script, but also show me how to automate by providing a text file with the above information that the PowerShell script loops through, or even better, is able to query our SQL server for the needed data, then I would be more than willing to pay for this development.

    Read the article
