Search Results

Search found 1411 results on 57 pages for 'openoffice writer'.

Page 51/57 | < Previous Page | 47 48 49 50 51 52 53 54 55 56 57  | Next Page >

  • How do I set the proxy and SOCKS in libcurl?

    - by acidzombie24
    I am trying to configure my .NET app to use a proxy. My source is in C#, but I learned cURL via C++. My question is: where do I put the SOCKS IP and port? I looked through the documentation and didn't see it. I believe that is what is causing me these problems. When I run this code it will quite literally time out and never call my header function or writer function. If I comment out the first two curlopt lines (the two proxy lines), my code runs with no problems. In Firefox I set the HTTP proxy and SOCKS host separately; they are different IPs and ports. The code below has the dummy proxy set, but I can't figure out the SOCKS part.

        static void Main(string[] args)
        {
            SeasideResearch.LibCurlNet.Curl.GlobalInit((int)SeasideResearch.LibCurlNet.CURLinitFlag.CURL_GLOBAL_ALL);
            var curl = new Easy();
            {
                curl.SetOpt(CURLoption.CURLOPT_PROXY, "http://127.0.0.1:1234");
                curl.SetOpt(CURLoption.CURLOPT_PROXYTYPE, CURLproxyType.CURLPROXY_SOCKS5);
                curl.SetOpt(CURLoption.CURLOPT_URL, "http://whatismyipaddress.com/ip-lookup");
                curl.SetOpt(CURLoption.CURLOPT_FOLLOWLOCATION, 1);
                curl.SetOpt(CURLoption.CURLOPT_USERAGENT, @"Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2b5) Gecko/20091204 Firefox/3.6b5");
                curl.SetOpt(CURLoption.CURLOPT_HEADERFUNCTION, hf);
                curl.SetOpt(CURLoption.CURLOPT_HEADERDATA, data);
                curl.SetOpt(CURLoption.CURLOPT_WRITEFUNCTION, wf);
                curl.SetOpt(CURLoption.CURLOPT_WRITEDATA, sw);
                curl.SetOpt(CURLoption.CURLOPT_SSL_VERIFYPEER, 0);
                curl.Perform();
                var sz = sw.ToString();
                var myrealip = sz.IndexOf("12.34.56.78") != -1;
            }
            //Console.WriteLine(sz);
            SeasideResearch.LibCurlNet.Curl.GlobalCleanup();
        }
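    A hedged reading of what goes wrong: libcurl has a single proxy slot, and CURLOPT_PROXYTYPE only declares which protocol the host named in CURLOPT_PROXY speaks. So for SOCKS5 you point CURLOPT_PROXY at the SOCKS host and port (the values from Firefox's "SOCKS Host" field), not at the HTTP proxy; the address below is a placeholder:

        // sketch, not the original code: aim CURLOPT_PROXY at the SOCKS host itself
        curl.SetOpt(CURLoption.CURLOPT_PROXY, "127.0.0.1:5678");  // placeholder SOCKS host:port
        curl.SetOpt(CURLoption.CURLOPT_PROXYTYPE, CURLproxyType.CURLPROXY_SOCKS5);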

  • Pre Project Documentation

    - by DeanMc
    I have an issue that I feel many programmers can relate to... I have worked on many small-scale projects. After my initial paper brainstorm I tend to start coding. What I come up with is usually a rough working model of the actual application. I design in a disconnected fashion, so I am talking about underlying code libraries; user interfaces come last, as the library usually dictates what is needed in the UI. As my projects get bigger, I worry that my "spec" or design document should grow with them. The above paragraph, from my investigations, is echoed all across the internet in one fashion or another. Where UIs are concerned there is a bit more information, but it is UI-specific and does not relate to code libraries. What I am beginning to realise is that maybe code is code is code. It seems from my extensive research that there is no 1:1 mapping between a design document and the code. When I need to research a topic I dump information into OneNote, and from there I prioritise features into versions and then into related chunks so that development runs in a fairly linear fashion. My tasks tend to look like so:

        Implement Binary File Reader
        Implement Binary File Writer
        Create Object to encapsulate Data for expression to the caller

    Now any programmer worth his salt is aware that between those three to-do items lies a potential wall of code that could expand out to multiple files. I have tried to map the complete code process for each task, but I simply don't think it can be done effectively. By the time one mangles pseudocode it is essentially code anyway, so the time investment is negated. So my question is this: am I right in assuming that the best documentation is the code itself? We are all in agreement that a high-level overview is needed. How high should this be? Do you design to statement, class or concept level? What works for you?

  • Quantifying the amount of change in a git diff?

    - by Alex Feinman
    I use git for a slightly unusual purpose--it stores my text as I write fiction. (I know, I know...geeky.) I am trying to keep track of productivity, and want to measure the degree of difference between subsequent commits. The writer's proxy for "work" is "words written", at least during the creation stage. I can't use straight word count as it ignores editing and compression, both vital parts of writing. I think I want to track: (words added)+(words removed) which will double-count (words changed), but I'm okay with that. It'd be great to type some magic incantation and have git report this distance metric for any two revisions. However, git diffs are patches, which show entire lines even if you've only twiddled one character on the line; I don't want that, especially since my 'lines' are paragraphs. Ideally I'd even be able to specify what I mean by "word" (though \W+ would probably be acceptable). Is there a flag to git-diff to give diffs on a word-by-word basis? Alternately, is there a solution using standard command-line tools to compute the metric above?
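    A hedged sketch of one command-line route: newer git can diff by words instead of lines, and the porcelain flavour is easy to post-process; the word regex and the revision range below are placeholders, and the count is approximate:

        # roughly (words added) + (words removed) between two commits
        git diff --word-diff=porcelain --word-diff-regex='\w+' HEAD~1 HEAD -- draft.txt \
          | grep -E '^[+-][^+-]' | wc -w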

  • XP Leveling System - PHP

    - by Michael Rich
    Rank Table:

        ID        - primary key
        RANK      - the rank or level, 1 being the highest and 3 the lowest
        MIN_SCORE - the minimum score or XP needed to reach the rank
        NAME      - the associated name of the rank

        +----+------+-----------+-------------------------+
        | ID | RANK | MIN_SCORE | NAME                    |
        +----+------+-----------+-------------------------+
        | 1  | 1    | 18932     | Editor-in-Chief         |
        | 2  | 2    | 15146     | Senior Technical Writer |
        | 3  | 3    | 12116     | Senior Copywriter       |
        +----+------+-----------+-------------------------+

    Ranking Table:

        ID           - primary key
        FK_MEMBER_ID - foreign key to the member's primary key
        FK_RANK      - foreign key to the Author Rank Table's RANK column (above)
        SCORE        - the member's current earned score or XP

        +-----+--------------+---------+-------+
        | ID  | FK_MEMBER_ID | FK_RANK | SCORE |
        +-----+--------------+---------+-------+
        | 1   | 1            | 1       | 17722 |
        | 2   | 2            | 2       | 16257 |
        | 3   | 3            | 3       | 12234 |
        +-----+--------------+---------+-------+

    In my class I have stored the ranks -- matching those in the Rank Table -- and the correlating minimum scores, with RANK as key and MIN_SCORE as value. When a member's score (XP) is updated (up or down) I want to test the updated score against the array below to determine whether their rank needs updating too.

        private $scores = array('3' => '12116', '2' => '15146', '1' => '18932',);

    Using the updated score, how could I determine the correlating rank from the above array? Everything is open to scrutiny; this is my first time creating a ranking system, so I hope to get it right :)
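    A hedged sketch of the lookup, assuming $scores keeps its ascending-threshold order from the question (rank '3' first, rank '1' last): walk the thresholds upward and remember the hardest rank whose MIN_SCORE the member has reached.

        // returns the rank key ('1'..'3'), or null when below every threshold
        private function rankForScore($score)
        {
            $rank = null;
            foreach ($this->scores as $r => $minScore) {
                if ($score >= (int) $minScore) {
                    $rank = $r;  // later entries have higher thresholds
                }
            }
            return $rank;
        }

        // e.g. a score of 17722 clears 12116 and 15146 but not 18932 => rank '2'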

  • .net mvc2 custom HtmlHelper extension unit testing

    - by alex
    My goal is to be able to unit test some custom HtmlHelper extensions which use RenderPartial internally. http://ox.no/posts/mocking-htmlhelper-in-asp-net-mvc-2-and-3-using-moq I've tried using the method above to mock the HtmlHelper. However, I'm running into null-value exceptions: "Parameter name: view". Anyone have any idea? Thanks. Below is the idea of the code:

        [TestMethod]
        public void TestMethod1()
        {
            var helper = CreateHtmlHelper(new ViewDataDictionary());
            helper.RenderPartial("Test"); // supposedly this line is within a method to be tested
            Assert.AreEqual("test", helper.ViewContext.Writer.ToString());
        }

        public static HtmlHelper CreateHtmlHelper(ViewDataDictionary vd)
        {
            Mock<ViewContext> mockViewContext = new Mock<ViewContext>(
                new ControllerContext(
                    new Mock<HttpContextBase>().Object,
                    new RouteData(),
                    new Mock<ControllerBase>().Object),
                new Mock<IView>().Object,
                vd,
                new TempDataDictionary(),
                new StringWriter());

            var mockViewDataContainer = new Mock<IViewDataContainer>();
            mockViewDataContainer.Setup(v => v.ViewData).Returns(vd);

            return new HtmlHelper(mockViewContext.Object, mockViewDataContainer.Object);
        }
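    A hedged alternative that sidesteps the deep ViewContext mock: RenderPartial resolves the partial through ViewEngines.Engines, so registering a stub view engine whose fake view writes a known marker lets the assertion pass without touching real views (all names below are illustrative):

        var mockView = new Mock<IView>();
        mockView.Setup(v => v.Render(It.IsAny<ViewContext>(), It.IsAny<TextWriter>()))
                .Callback<ViewContext, TextWriter>((vc, w) => w.Write("test"));

        var mockEngine = new Mock<IViewEngine>();
        mockEngine.Setup(e => e.FindPartialView(It.IsAny<ControllerContext>(), "Test", It.IsAny<bool>()))
                  .Returns(new ViewEngineResult(mockView.Object, mockEngine.Object));

        ViewEngines.Engines.Clear();
        ViewEngines.Engines.Add(mockEngine.Object);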

  • BufferedWriter overwriting itself

    - by Danson
    I want to read in a file and create a duplicate of it, but my code only writes the last line of the file. How do I make it so that whenever I call write(), it writes to a new line? I want to create a new file for the duplicate, so I can't add true to the FileWriter constructor. This is my code:

        // create file reader
        BufferedReader iReader = new BufferedReader(new FileReader(args[1]));
        // create file writer
        BufferedWriter oWriter = new BufferedWriter(new FileWriter(args[2], true));

        String strLine;

        // reading file
        int iterate = 0;
        while ((strLine = iReader.readLine()) != null) {
            instructions[iterate] = strLine;
        }

        // creating duplicate
        for (int i = 0; i < instructions.length; i++) {
            if (instructions[i] != null) {
                oWriter.write(instructions[i]);
                oWriter.newLine();
            } else {
                break;
            }
        }

        try {
            iReader.close();
            oWriter.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
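    A hedged guess at the root cause: the read loop never advances iterate, so every line lands in instructions[0] and only the last line read survives to be written out. (Incidentally, the true passed to the FileWriter enables append mode, which contradicts the stated goal of a fresh duplicate.) The missing increment:

        while ((strLine = iReader.readLine()) != null) {
            instructions[iterate] = strLine;
            iterate++;  // without this, each line overwrites instructions[0]
        }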

  • String doesn't match regex when read from keyboard.

    - by athspk
        public static void main(String[] args) throws IOException {
            String str1 = "??123456";
            System.out.println(str1 + "-" + str1.matches("^\\p{InGreek}{2}\\d{6}")); // ??123456-true

            BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
            String str2 = br.readLine(); // ??123456, same as str1
            System.out.println(str2 + "-" + str2.matches("^\\p{InGreek}{2}\\d{6}")); // ?”??123456-false
            System.out.println(str1.equals(str2)); // false
        }

    The same string doesn't match the regex when read from the keyboard. (The string literal is two Greek letters followed by six digits; the characters did not survive posting here.) What causes this problem, and how can we solve it? Thanks in advance. EDIT: I used System.console() for input and output.

        public static void main(String[] args) throws IOException {
            PrintWriter pr = System.console().writer();
            String str1 = "??123456";
            pr.println(str1 + "-" + str1.matches("^\\p{InGreek}{2}\\d{6}") + "-" + str1.length());
            String str2 = System.console().readLine();
            pr.println(str2 + "-" + str2.matches("^\\p{InGreek}{2}\\d{6}") + "-" + str2.length());
            pr.println("str1.equals(str2)=" + str1.equals(str2));
        }

    Output:

        ??123456-true-8
        ??123456
        ??123456-true-8
        str1.equals(str2)=true
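    A hedged explanation of why the edit works: System.console() decodes input using the actual console encoding, while new InputStreamReader(System.in) falls back to the platform-default charset, so the Greek letters arrive mangled and the match fails. Naming the console's code page explicitly (cp737 here is an assumption; it varies by setup) would fix the BufferedReader path too:

        BufferedReader br = new BufferedReader(
                new InputStreamReader(System.in, "cp737"));  // assumed console code page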

  • How to sort a Pandas DataFrame according to multiple criteria?

    - by user1715271
    I have the following DataFrame containing song names, their peak chart positions and the number of weeks they spent on the chart:

             Song                                   Peak  Weeks
        76   Paperback Writer                          1     16
        117  Lady Madonna                              1      9
        118  Hey Jude                                  1     27
        22   Can't Buy Me Love                         1     17
        29   A Hard Day's Night                        1     14
        48   Ticket To Ride                            1     14
        56   Help!                                     1     17
        109  All You Need Is Love                      1     16
        173  The Ballad Of John And Yoko               1     13
        85   Eleanor Rigby                             1     14
        87   Yellow Submarine                          1     14
        20   I Want To Hold Your Hand                  1     24
        45   I Feel Fine                               1     15
        60   Day Tripper                               1     12
        61   We Can Work It Out                        1     12
        10   She Loves You                             1     36
        155  Get Back                                  1      6
        8    From Me To You                            1      7
        115  Hello Goodbye                             1      7
        2    Please Please Me                          2     20
        92   Strawberry Fields Forever                 2     12
        93   Penny Lane                                2     13
        107  Magical Mystery Tour                      2     16
        176  Let It Be                                 2     14
        0    Love Me Do                                4     26
        157  Something                                 4      9
        166  Come Together                             4     10
        58   Yesterday                                 8     21
        135  Back In The U.S.S.R.                     19      3
        164  Here Comes The Sun                       58     19
        96   Sgt. Pepper's Lonely Hearts Club Band    63     12
        105  With A Little Help From My Friends       63      7

    I'd like to rank these songs in order of popularity, so I'd like to sort them according to the following criteria: songs that reached the highest position come first, but if there is a tie, the songs that remained in the charts the longest come first. I can't seem to figure out how to do this in Pandas.
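    A hedged sketch of the usual one-liner, assuming the frame is called df: sort on both columns at once, ascending on Peak (a lower chart position is better) and descending on Weeks to break ties in favour of longer chart runs:

        import pandas as pd

        # modern pandas; very old versions spell this df.sort(...)
        ranked = df.sort_values(['Peak', 'Weeks'], ascending=[True, False])
        print(ranked.head())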

  • Creating UTF-8 files in Java from a runnable Jar

    - by RuntimeError
    I have a little Java project where I've set the encoding of the source files to UTF-8 (I use a lot of foreign characters not found in the default CP1252). The goal is to create a text file (in Windows) containing a list of items. When running the class files from Eclipse itself (hitting Ctrl+F11) it creates the file flawlessly, and opening it in another editor (I'm using Notepad++) I can see the characters as I wanted:

        +--------------------------------------------------+
        ¦                        Universidade2010 (18/18)  ¦
        ¦                                        hidden: 0 ¦
        +--------------------------------------------------¦

    But when I export the project (using Eclipse) as a runnable Jar and run it using 'javaw -jar project.jar', the new file created is a mess of question marks:

        ????????????????????????????????????????????????????
        ?                        Universidade2010 (19/19)  ?
        ?                                        hidden: 0 ?
        ????????????????????????????????????????????????????

    I've followed some tips on how to use UTF-8 (which seems to be broken by default on Java) to try to correct this, so now I'm using

        Writer w = new OutputStreamWriter(fos, "UTF-8");

    and writing the BOM header to the file as in this question already answered, but still without luck when exporting to a Jar. Am I missing some property or command-line option so Java knows I want to create UTF-8 files by default?
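    A hedged observation: Eclipse run configurations typically launch the JVM with the workspace encoding (-Dfile.encoding=UTF-8), while a bare 'javaw -jar' inherits the platform default, so anything still using the default charset regresses outside Eclipse. Passing the property explicitly reproduces the Eclipse behaviour:

        javaw -Dfile.encoding=UTF-8 -jar project.jar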

  • Send email from webpage without postback/refresh

    - by jb
    Hi all. Long-time reader, first-time writer. Quick query I hope someone can help me with. I have a webpage (PHP, JavaScript) which has a small contact form for a user to fill in and email us. Not a problem in itself; I have a working form already. Nothing noteworthy, just a form posting to a PHP page which sends an email:

        <form id="myform" name="myform" method="post" action="Mailer.php">

    So the user fills in the form, hits submit, the page changes to Mailer.php, and they find their way back to where they were. What I want instead is for the page to stay the same when submit is pushed, and for the form div to update itself to just say 'message sent' or something. I just want to avoid a full page refresh, much like how on, say, Facebook, commenting on a status only updates that div. Hope that is phrased clearly enough. Cheers all,
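    A hedged sketch of the usual approach, assuming jQuery is available on the page (an assumption; it is not mentioned above): intercept the submit, post the fields to Mailer.php in the background, and swap the form for a confirmation:

        $('#myform').submit(function () {
            $.post('Mailer.php', $(this).serialize(), function () {
                $('#myform').replaceWith('<p>Message sent.</p>');
            });
            return false;  // suppress the normal postback/refresh
        });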

  • best REGEXP friendly Text Editors + most powerful REGEXP syntax?

    - by John
    I am fluent with Microsoft Visual Studio 2005 regular expressions and they are a big time-saver. I seem to learn them best by having a vaguely organized cheat sheet thrown at me, at which point I read just a little and play with them until I understand what's going on. That learning approach has worked well for me, for now. I would really like to take this to the next level though. Basically: what is the regex dialect that is generally regarded as the most open-ended and powerful? VS2005 regexes seem kind of gimped, so maybe I'm a kid playing in a sandbox. Are there text editors out there that can highlight all matches, list all lines containing a match, or perform some similarly powerful function, in conjunction with the very strongest regex language? If not I can just use multiple programs and a weird technique, but I'd like to avoid that. I wonder if a stronger regex language (or a stronger regex writer) could make a search match all results on all lines, even from a single "find next", by adding some simple criteria to the search. Anyway, please provide advice!
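    A hedged aside for context: the dialects usually described as the most expressive are the Perl-compatible (PCRE) ones used by Perl, PHP and most modern editors; they support constructs the VS2005 find/replace syntax lacks, such as lookarounds. A tiny illustrative pattern:

        # PCRE: match 'writer' only when preceded by 'openoffice '
        (?<=openoffice )writer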

  • parsing python to csv

    - by user185955
    I'm trying to download some game stats to do some analysis; the only problem is that each season the data there isn't 100% consistent. I grab the JSON file from the site, then wish to save it to a CSV with the first line in the CSV containing the heading for each column, so the heading would essentially be the key from the Python data type.

        #!/usr/bin/env python
        import requests
        import json
        import csv

        base_url = 'http://www.afl.com.au/api/cfs/afl/'
        token_url = base_url + 'WMCTok'
        player_url = base_url + 'matchItems/round'

        def printPretty(data):
            print(json.dumps(data, sort_keys=True, indent=2, separators=(',', ': ')))

        session = requests.Session()  # session makes it simple to use the token across the requests
        token = session.post(token_url).json()['token']  # get the token
        session.headers.update({'X-media-mis-token': token})  # set the token

        Season = 2014
        Roundno = 4
        if Roundno < 10:
            strRoundno = '0' + str(Roundno)
        else:
            strRoundno = str(Roundno)

        # get some data (could easily be a for loop; might want to put in a delay
        # using sleep so that you don't get IP blocked)
        data = session.get(player_url + '/CD_R' + str(Season) + '014' + strRoundno)

        # print everything
        printPretty(data.json())

        with open('stats_game_test.csv', 'w', newline='') as csvfile:
            spamwriter = csv.writer(csvfile, delimiter="'", quotechar='|', quoting=csv.QUOTE_ALL)
            for profile in data.json()['items']:
                spamwriter.writerow(['%s' % (profile)])

        #for key in data.json().keys():
        #    print("key: %s , value: %s" % (key, data.json()[key]))

    The above code grabs the JSON and writes it to a CSV, but it puts the key in each individual cell next to the value (e.g. 'venueId': 'CD_V190'); the keys need to go just across the first row as headings. It gives me a CSV file with data in the cells like this:

        Column A                   B
        'tempInCelsius': 17.0      'totalScore': 32
        'tempInCelsius': 16.0      'totalScore': 28

    What I want is the data like this:

        tempInCelsius   totalScore
        17              32
        16              28

    As I mentioned up top, the data isn't always consistent, so if I define which fields to grab with

        spamwriter.writerow([profile['tempInCelsius'], profile['totalScore']])

    then it will error out on certain data grabs. This is why I'm now trying the above method, so it just grabs everything regardless of what data is there.
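    A hedged sketch of the column-per-key layout, assuming each item is a flat dict (nested values would need flattening first): collect the union of keys across all items, then let csv.DictWriter head the columns and leave blanks where an item lacks a field:

        import csv

        items = data.json()['items']
        fieldnames = sorted({key for item in items for key in item})
        with open('stats_game_test.csv', 'w', newline='') as csvfile:
            writer = csv.DictWriter(csvfile, fieldnames=fieldnames, restval='')
            writer.writeheader()
            writer.writerows(items)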

  • Why is no encoding set in the response by Tomcat? How can I deal with it?

    - by Dishayloo
    I recently had a problem with the encoding of websites generated by a servlet that occurred when the servlets were deployed under Tomcat, but not under Jetty. I did a little research and simplified the problem to the following servlet:

        public class TestServlet extends HttpServlet implements Servlet {
            @Override
            public void service(HttpServletRequest request, HttpServletResponse response)
                    throws IOException {
                response.setContentType("text/plain");
                Writer output = response.getWriter();
                output.write("öäüÖÄÜß");
                output.flush();
                output.close();
            }
        }

    If I deploy this under Jetty and direct the browser to it, it returns the expected result. The data is returned as ISO-8859-1, and if I take a look at the headers, Jetty returns:

        Content-Type: text/plain; charset=iso-8859-1

    The browser detects the encoding from this header. If I deploy the same servlet in Tomcat, the browser shows strange characters. Tomcat also returns the data as ISO-8859-1; the difference is that no header says so. So the browser has to guess the encoding, and that goes wrong. My question is: is that behaviour of Tomcat correct, or a bug? And if it is correct, how can I avoid the problem? Sure, I can always add response.setCharacterEncoding("UTF-8"); to the servlet, but that means I set a fixed encoding that the browser might or might not understand. The problem is more relevant if no browser but another service accesses the servlet. So how should I deal with the problem in the most flexible way?
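    For what it's worth, a hedged middle ground: the servlet spec defaults the writer to ISO-8859-1, and whether the container advertises that default in the Content-Type header is exactly where Jetty and Tomcat diverge. Declaring the charset inside the content type keeps the header and the writer in agreement on any container:

        response.setContentType("text/plain; charset=UTF-8");
        Writer output = response.getWriter();  // now writes UTF-8, and the header says so
        output.write("öäüÖÄÜß");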

  • Saving information in the IO System

    - by djTeller
    Hi kernel gurus, I need to write a kernel module that simulates a "multicaster" using the /proc file system. Basically it needs to support the following scenarios:

        1) Allow one write access to the /proc file and many read accesses to it.
        2) The module should keep a buffer with the contents of the last successful
           write. Each write should be matched by a read from every reader.

    Consider scenario 2: a writer wrote something and there are two readers (A and B). A read the contents of the buffer and then tried to read again; in this case it should go onto a wait queue and wait for the next message. It should not get the same buffer again. I need to keep a map of all the PIDs that have already read the current buffer, and in case they try to read again while the buffer is unchanged, they should be blocked until there is a new buffer. I'm trying to figure out whether there is a way I can save that info without a map. I heard there are some spare fields inside the I/O system that I can use to flag a process that has already read the current buffer. Can someone give me a tip on where to look for that field? How can I save info on the current process without keeping a map of PIDs and buffers? Thanks!
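    A hedged pointer rather than a definitive answer: instead of a global PID-to-buffer map, per-reader state can ride on the open file itself; struct file's private_data field exists for exactly this kind of driver bookkeeping. A sketch (names illustrative):

        #include <linux/fs.h>
        #include <linux/slab.h>

        struct reader_state {
            unsigned long seen_version;  /* last buffer generation this reader consumed */
        };

        static int mcast_open(struct inode *inode, struct file *filp)
        {
            struct reader_state *st = kmalloc(sizeof(*st), GFP_KERNEL);
            if (!st)
                return -ENOMEM;
            st->seen_version = 0;
            filp->private_data = st;  /* travels with this open file descriptor */
            return 0;
        }

    The read handler would then compare seen_version against the buffer's current generation and sleep on a wait queue until they differ.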

  • Problem reading in file written with xdr using c

    - by Inga
    I am using Ubuntu 10.4 and have two (long) C programs, one that writes a file using XDR and one that uses this file as input. However, the second program does not manage to read the written file. Everything looks perfectly fine; it just does not work. More specifically, it fails at the last line added here with the error message xdr_string(), which indicates that it cannot read the first line of the input file. I do not see any obvious errors. The input file is written out, has content, and I can see the right strings using strings -a -n 2 "inputfile". Anyone have any idea what is going wrong? Relevant parts of program 1 (the writer):

        /*
         * create compressed XDR output stream
         */
        output_file = open_write_pipe(output_filename);
        xdrstdio_create(&xdrs, output_file, XDR_ENCODE);

        /*
         * print material name
         */
        if (xdr_string(&xdrs, &name, _POSIX_NAME_MAX) == FALSE)
            xdr_err("xdr_string()");

    Relevant parts of program 2 (the reader):

        /*
         * open data file
         */
        input_file = open_data_file(input_filename, "r");
        if (input_file == NULL) {
            ERROR(input_filename);
            exit(EXIT_FAILURE);
        }

        /*
         * create input XDR stream
         */
        xdrstdio_create(&xdrs, input_file, XDR_DECODE);

        /*
         * read material name
         */
        if (xdr_string(&xdrs, &name, _POSIX_NAME_MAX) == FALSE)
            XDR_ERR("xdr_string()");
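    Two hedged hypotheses, since the posted fragments look sound. First, the writer goes through open_write_pipe() into what its comment calls a compressed stream, while the reader opens the file directly; if the bytes on disk really are compressed, the reader needs the matching decompression pipe before XDR ever sees them. Second, in XDR_DECODE mode xdr_string() allocates the target string itself, but only when the pointer it receives starts out NULL:

        /* make the decode target start NULL so xdr_string() allocates it,
         * instead of following an uninitialized pointer */
        char *name = NULL;
        if (xdr_string(&xdrs, &name, _POSIX_NAME_MAX) == FALSE)
            XDR_ERR("xdr_string()");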

  • Can you write files in Chrome 8?

    - by greggory.hz
    I'm wondering if, with the new File API exposed in Chrome (I'm not concerned with cross-browser support at this time), it would be possible to write back to files opened via a file input. You can see an example of what I'm trying to accomplish here: http://www.grehz.com/ide. I know I can use server-side scripts to dynamically create the files and allow the user to download them normally. I'm hoping there's a way to accomplish this purely client-side. I had read somewhere that you can write to files opened via a file input. I haven't been able to find any examples of this, though I have seen passing references to a FileWriter class. I would be completely unsurprised if this weren't possible, though (it seems likely that there are security issues with it). Just looking for some guidance or resources. UPDATE: I was reading here: http://dev.w3.org/2009/dap/file-system/file-writer.html As I was playing around in Chrome, it looks like FileSaver and FileWriter are not implemented, but BlobBuilder is. I can call getBlob() on the BB object; is there any way I can then save that without FileSaver or FileWriter?
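    A hedged sketch of a fallback that needs neither FileSaver nor FileWriter: browsers of that vintage cannot write back to the user's original file, but the edited text can still be pushed out as a manual download, e.g. through a data: URL (the function name is illustrative):

        function offerDownload(text) {
            // opens the contents in a tab the user can save as a file
            window.open('data:text/plain;charset=utf-8,' + encodeURIComponent(text));
        }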

  • .Net file writing and string splitting issues

    - by sagar
    I have a requirement that a file should be split on a given character; the default splitting options are CRLF and LF, in which cases I split on \r\n and \n respectively. I also have a requirement that a file of any size should be processed (processing is basically inserting a given string into the file at a given position). For this I read the file in chunks of 1024 bytes, then apply the string.Split() method. Split() offers options for ignoring empty entries, or none. I have to add the line-break characters back to each line; for this I use a binary writer and write the byte array to the new file. Issues:

        1) When the line break is CRLF and the split option is None, empty entries
           are also added to the resulting array. With the second option (ignore
           empty entries) CRLF works properly.
        2) But the ignore-empty-entries option creates other problems: as I am
           reading the file byte by byte, I can't afford to drop anything.
        3) When the line-break character is something other than the defaults
           (e.g. '|'), a null value is prepended to the resulting line.

    Can anybody suggest solutions to these issues?
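    A hedged note on issue 1: with StringSplitOptions.None, two consecutive delimiters produce an empty string between them; those are empty entries rather than whitespace, and keeping them is what allows the file to be reassembled byte for byte:

        // sketch: split a chunk on CRLF, keeping empty entries
        string[] lines = chunk.Split(new[] { "\r\n" }, StringSplitOptions.None);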

  • 500 internal server error on certain page after a few hours

    - by Brian Leach
    I am getting a 500 Internal Server Error on a certain page of my site after a few hours of being up. I restart the uWSGI instance with

        uwsgi --ini /home/metheuser/webapps/ers_portal/ers_portal_uwsgi.ini

    and it works again for a few hours. The rest of the site seems to be working. When I navigate to my_table, I am directed to the login page. But I get the 500 error on my table page on login. I followed the instructions here to set up my nginx and uwsgi configs. That is, I have ers_portal_nginx.conf located in my app folder, symlinked to /etc/nginx/conf.d/. I start my uWSGI "instance" (not sure what exactly to call it) in a Screen instance as mentioned above, with the .ini file located in my app folder. My ers_portal_nginx.conf:

        server {
            listen 80;
            server_name www.mydomain.com;
            location / { try_files $uri @app; }
            location @app {
                include uwsgi_params;
                uwsgi_pass unix:/home/metheuser/webapps/ers_portal/run_web_uwsgi.sock;
            }
        }

    My ers_portal_uwsgi.ini:

        [uwsgi]
        #user info
        uid = metheuser
        gid = ers_group

        #application's base folder
        base = /home/metheuser/webapps/ers_portal

        #python module to import
        app = run_web
        module = %(app)
        home = %(base)/ers_portal_venv
        pythonpath = %(base)

        #socket file's location
        socket = /home/metheuser/webapps/ers_portal/%n.sock

        #permissions for the socket file
        chmod-socket = 666

        #uwsgi variable only, does not relate to your flask application
        callable = app

        #location of log files
        logto = /home/metheuser/webapps/ers_portal/logs/%n.log

    Relevant parts of my views.py:

        data_modification_time = None
        data = None

        def reload_data():
            global data_modification_time, data, sites, column_names
            filename = '/home/metheuser/webapps/ers_portal/app/static/' + ec.dd_filename
            mtime = os.stat(filename).st_mtime
            if data_modification_time != mtime:
                data_modification_time = mtime
                with open(filename) as f:
                    data = pickle.load(f)
            return data

        # a bunch of authentication stuff...

        @app.route('/')
        @app.route('/index')
        def index():
            return render_template("index.html", title = 'Main',)

        @app.route('/login', methods = ['GET', 'POST'])
        def login():
            # login stuff...

        @app.route('/my_table')
        @login_required
        def my_table():
            print 'trying to access data table...'
            data = reload_data()
            return render_template("my_table.html", title = "Rundata Viewer",
                                   sts = sites, cn = column_names,
                                   data = data)  # dictionary of data

    I installed nginx via yum as described here (yesterday). I am using uWSGI installed in my venv via pip. I am on CentOS 6. My uwsgi log shows:

        Wed Jun 11 17:20:01 2014 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 287] during GET /whm-server-status (127.0.0.1)
        IOError: write error
        [pid: 9586|app: 0|req: 135/135] 127.0.0.1 () {24 vars in 292 bytes} [Wed Jun 11 17:20:01 2014] GET /whm-server-status => generated 0 bytes in 3 msecs (HTTP/1.0 404) 2 headers in 0 bytes (0 switches on core 0)

    When it's working, the print statement in the "my_table" route prints into the log file, but not once it stops working. Any ideas?
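    Not a fix, but a hedged way to make the next hang tell you something: uWSGI can kill and log workers stuck past a deadline, so a silent 500 turns into a timestamped traceback in the log (the timeout value below is illustrative):

        # add to ers_portal_uwsgi.ini
        harakiri = 60
        harakiri-verbose = true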

  • Effect of HOME on libreoffice to convert to pdf as non-root user

    - by user1032531
    I installed libreoffice-headless and can convert documents when logged on as root. I then tried doing so as another user; it didn't show an error, but didn't convert the file. I then found that if I get rid of HOME=/tmp/ayb, it works for the other user. Doesn't HOME=/tmp/ayb just make files default to this directory when none is specified? (Sorry, I tried to search "Linux HOME", but as you probably expect, got a bunch of irrelevant results.) If not, what is the purpose of specifying HOME? Why does setting HOME prevent it from converting for non-root users? Note that /tmp and /tmp/ayb are both 0777. Thank you.

        [root@desktop ~]# yum install libreoffice-headless
        [root@desktop ~]# yum install libreoffice-writer
        [root@desktop ~]# ls -l
        total 48
        -rwxrwxrwx. 1 NotionCommotion NotionCommotion 48128 Jul 30 02:38 document_34.doc
        [root@desktop ~]# HOME=/tmp/ayb; /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
        convert /tmp/ayb/document_34.doc -> /tmp/ayb/document_34.pdf using writer_pdf_Export
        [root@desktop ~]# rm d*.pdf
        rm: remove regular file `document_34.pdf'? y
        [root@desktop ~]# /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
        convert /tmp/ayb/document_34.doc -> /tmp/ayb/document_34.pdf using writer_pdf_Export
        [root@desktop ~]# rm d*.pdf
        rm: remove regular file `document_34.pdf'? y
        [root@desktop ~]# su NotionCommotion
        sh-4.1$ HOME=/tmp/ayb; /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
        sh-4.1$ rm d*.pdf
        rm: cannot remove `d*.pdf': No such file or directory
        sh-4.1$ /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
        sh-4.1$ rm d*.pdf
        rm: cannot remove `d*.pdf': No such file or directory
        sh-4.1$ exit
        exit
        [root@desktop ~]# su NotionCommotion
        sh-4.1$ /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
        convert /tmp/ayb/document_34.doc -> /tmp/ayb/document_34.pdf using writer_pdf_Export
        sh-4.1$ rm d*.pdf
        sh-4.1$ HOME=/tmp/ayb; /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
        sh-4.1$ rm d*.pdf
        rm: cannot remove `d*.pdf': No such file or directory
        sh-4.1$ /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
        sh-4.1$ rm d*.pdf
        rm: cannot remove `d*.pdf': No such file or directory
        sh-4.1$
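    A hedged reading of the transcript: HOME=/tmp/ayb; (with the semicolon) reassigns HOME for the rest of the shell session, not just the next command, and soffice keeps its user profile under $HOME. Once root has converted with HOME=/tmp/ayb, the profile tree under /tmp/ayb is root-owned, so the non-root user fails silently whenever its HOME points there; a fresh su resets HOME to the user's own home, which would explain why the first conversion after re-entering the shell succeeds. Scoping the variable to a single command, with a per-user profile directory, avoids both traps:

        # one-shot environment (no semicolon), per-user profile directory
        HOME=/tmp/ayb-$USER /usr/bin/libreoffice --headless -convert-to pdf \
            --outdir /tmp/ayb /tmp/ayb/document_34.doc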

  • Abnormal hangs and restarts Ubuntu 8.04

    - by jai-ho
    Hi, I am using Ubuntu 8.04 LTS and seeing the following behaviours:

        * The system hangs after a while and becomes completely unresponsive.
        * The system sometimes restarts itself!

    Can you please help me identify the problem? Please also mention where I should look for the possible cause. Thanks. EDIT: Got the following from the dmesg output (the system hung and had to be restarted):

        [   15.452015] Driver 'sr' needs updating - please use bus_type methods
        [   15.456882] Driver 'sd' needs updating - please use bus_type methods
        [   15.457987] sr0: scsi3-mmc drive: 52x/52x writer cd/rw xa/form2 cdda tray
        [   15.457993] Uniform CD-ROM driver Revision: 3.20
        [   15.458058] sr 0:0:1:0: Attached scsi CD-ROM sr0
        [   15.463028] sd 1:0:0:0: [sda] 156301488 512-byte hardware sectors (80026 MB)
        [   15.463051] sd 1:0:0:0: [sda] Write Protect is off
        [   15.463055] sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00
        [   15.463083] sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        [   15.463151] sd 1:0:0:0: [sda] 156301488 512-byte hardware sectors (80026 MB)
        [   15.463167] sd 1:0:0:0: [sda] Write Protect is off
        [   15.463171] sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00
        [   15.463197] sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        [   15.463202] sda:<5sr 0:0:1:0: Attached scsi generic sg0 type 5
        [   15.464634] sd 1:0:0:0: Attached scsi generic sg1 type 0
        [   15.470120] sda1 sda2 < sda5
        [   15.495536] sd 1:0:0:0: [sda] Attached SCSI disk
        [   15.759549] Attempting manual resume
        [   15.759554] swsusp: Resume From Partition 8:5
        [   15.759556] PM: Checking swsusp image.
        [   15.759742] PM: Resume from disk failed.
        [   15.779964] EXT3-fs: INFO: recovery required on readonly filesystem.
        [   15.779970] EXT3-fs: write access will be enabled during recovery.
        [   19.904204] kjournald starting.  Commit interval 5 seconds
        [   19.904235] EXT3-fs: sda1: orphan cleanup on readonly fs
        [   19.904245] ext3_orphan_cleanup: deleting unreferenced inode 303260
        [   19.904304] ext3_orphan_cleanup: deleting unreferenced inode 303329
        [   19.932763] ext3_orphan_cleanup: deleting unreferenced inode 3801871
        [   19.932785] ext3_orphan_cleanup: deleting unreferenced inode 3801874
        [   19.932798] ext3_orphan_cleanup: deleting unreferenced inode 3801910
        [   19.951253] ext3_orphan_cleanup: deleting unreferenced inode 3801912
        [   19.951266] ext3_orphan_cleanup: deleting unreferenced inode 3801914
        [   19.951278] ext3_orphan_cleanup: deleting unreferenced inode 3959212
        [   19.951299] ext3_orphan_cleanup: deleting unreferenced inode 3959213
        [   19.960335] ext3_orphan_cleanup: deleting unreferenced inode 3959215
        [   19.963531] ext3_orphan_cleanup: deleting unreferenced inode 3801875
        [   19.963545] ext3_orphan_cleanup: deleting unreferenced inode 3663727
        [   19.963565] ext3_orphan_cleanup: deleting unreferenced inode 3663708
        [   19.963577] ext3_orphan_cleanup: deleting unreferenced inode 4072122
        [   19.963597] ext3_orphan_cleanup: deleting unreferenced inode 4072157
        [   19.968616] ext3_orphan_cleanup: deleting unreferenced inode 4072159
        [   19.970252] ext3_orphan_cleanup: deleting unreferenced inode 4072160
        [   19.970264] ext3_orphan_cleanup: deleting unreferenced inode 4072161
        [   19.992889] ext3_orphan_cleanup: deleting unreferenced inode 4072264
        [   19.992903] ext3_orphan_cleanup: deleting unreferenced inode 4072267
        [   19.999585] ext3_orphan_cleanup: deleting unreferenced inode 4072268
        [   20.008329] ext3_orphan_cleanup: deleting unreferenced inode 4072270
        [   20.008343] ext3_orphan_cleanup: deleting unreferenced inode 4072123
        [   20.008360] ext3_orphan_cleanup: deleting unreferenced inode 4072452
        [   20.008374] ext3_orphan_cleanup: deleting unreferenced inode 4072453
        [   20.008385] ext3_orphan_cleanup: deleting unreferenced inode 4072124
        [   20.008398] ext3_orphan_cleanup: deleting unreferenced inode 311574
        [   20.008413] ext3_orphan_cleanup: deleting unreferenced inode 967890
        [   20.008420] EXT3-fs: sda1: 28 orphan inodes deleted
        [   20.008423] EXT3-fs: recovery complete.
        [   20.082622] EXT3-fs: mounted filesystem with ordered data mode.
        [   29.025379] input: PC Speaker as /devices/platform/pcspkr/input/input2
        [   29.187133] Linux agpgart interface v0.102
        [   29.225338] iTCO_vendor_support: vendor-support=0
        [   29.259662] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.02 (26-Jul-2007)

  • Linux RAID-0 performance doesn't scale up over 1 GB/s

    - by wazoox
    I am having trouble getting the maximum throughput out of my setup. The hardware is as follows:

        * dual quad-core AMD Opteron 2376 processors
        * 16 GB DDR2 ECC RAM
        * dual Adaptec 52245 RAID controllers
        * 48 1 TB SATA drives set up as 2 RAID-6 arrays (256 KB stripe) + spares

    Software:

        * plain vanilla 2.6.32.25 kernel, compiled for AMD64, optimized for NUMA;
          Debian Lenny userland
        * benchmarks run: disktest, bonnie++, dd, etc. All give the same results;
          no discrepancy here.
        * io scheduler used: noop. Yeah, no trick here.

    Up until now I basically assumed that striping (RAID 0) several physical devices should scale performance roughly linearly. However this is not the case here:

        * each RAID array achieves about 780 MB/s write, sustained, and 1 GB/s read, sustained
        * writing to both RAID arrays simultaneously with two different processes gives 750 + 750 MB/s, and reading from both gives 1 + 1 GB/s
        * however, when I stripe both arrays together, using either mdadm or lvm, the performance is about 850 MB/s writing and 1.4 GB/s reading: at least 30% less than expected!
        * running two parallel writer or reader processes against the striped arrays doesn't improve the figures; in fact it degrades performance even further

    So what's happening here? Basically I ruled out bus or memory contention, because when I run dd on both drives simultaneously, aggregate write speed actually reaches 1.5 GB/s and reading speed tops 2 GB/s. So it's not the PCIe bus. I suppose it's not the RAM. It's not the filesystem, because I get exactly the same numbers benchmarking against the raw device or using XFS. And I also get exactly the same performance using either LVM striping or md striping. What's wrong? What's preventing a process from reaching the maximum possible throughput? Is Linux striping defective? What other tests could I run?
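    A couple of hedged things to rule out from the command line (device names are assumptions): the chunk size of the md stripe relative to the 256 KB RAID-6 stripe underneath, and the readahead on the combined device, which is set independently of the members':

        blockdev --getra /dev/md0            # readahead in 512-byte sectors
        blockdev --setra 65536 /dev/md0      # try a much larger readahead
        mdadm --detail /dev/md0 | grep -i chunk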

  • DVD Drive Failing on Windows 7

    - by Seth Spearman
    Hello, I have x64 Windows 7 running on an ASUS M50VM. The DVD drive works completely unreliably, if at all. But the story is not that simple, so bear with me... here are the gory details. When I first got the machine it came with Windows XP, and I upgraded it to Windows Vista x64; the DVD worked fine. When Windows 7 RC2 came out I tried it on a virtual machine and I liked it so much that I upgraded the machine to Win7 RC1. The DVD worked fine. Of course, RC1 was going to start spontaneously rebooting, so when Windows 7 was released I DID A CLEAN INSTALL of Windows 7. Just to clarify... by clean install I mean I did a FORMAT of the HARD DRIVE and INSTALLED it from scratch. EVER since then the DVD mostly doesn't work. I can sometimes read from a disk, but that will often hang. (Please see my description below of HANG for details.) CD or DVD writes ALWAYS fail with a HANG (I have done a successful write only one time). Here is what I mean by HANG:

        * The Explorer window is unresponsive.
        * Any software accessing the DVD drive is unresponsive.
        * The DVD tray will not eject.
        * Using a paper clip will eject, but the disk is usually spinning real hard.
        * Attempting to shut down Windows will fail. I have waited as long as ten
          minutes, but the whole OS seems to hang. I do a hard shutdown.
        * Sometimes accessing the DVD (when it does not cause a HANG) will still
          fail, and the device will actually seem to disappear from the system
          until I reboot.

    A couple of other things. It is NOT a hardware failure. It is the Windows OS. I know this because I swapped out my DVD drive with a friend's of the same model... his machine is fine (he is still running Vista x64) and my machine still fails. For what it is worth, I swapped out my primary disk with the INTEL 160GB SSD. EDIT: Here is what System Information shows about my DVD drive:

        Drive            D:
        Description      CD-ROM Drive
        Media Loaded     No
        Media Type       DVD Writer
        Name             HL-DT-ST DVDRAM GSA-T50N ATA Device
        Manufacturer     (Standard CD-ROM drives)
        Status           OK
        Transfer Rate    -1.00 kbytes/sec
        SCSI Target ID   0
        PNP Device ID    IDE\CDROMHL-DT-ST_DVDRAM_GSA-T50N________________RR04____\5&2B5B7F1D&0&1.0.0
        Driver           c:\windows\system32\drivers\cdrom.sys (6.1.7600.16385, 144.00 KB (147,456 bytes), 7/13/2009 7:19 PM)

    Any ideas? HELP! Seth B Spearman

  • Why is my new Phenom II 965 BE not significantly faster than my old Athlon 64 X2 4600+?

    - by Software Monkey
    I recently rebuilt my 5-year-old computer. I upgraded all core components, in particular from an Athlon 64 X2 4600+ at 2.4 GHz with DDR2 800 to a Phenom II 965 BE (quad core) at 3.6 GHz with DDR3 1333 (actually 1600, but testing consistently detected memory errors at 1600). The motherboard is also much newer and better. The HDDs (x3), DVD writer and card reader are the same. The BIOS memory config is auto-everything except the base timing, which I overrode to 1T instead of 2T. The BIOS CPU multiplier is slightly overclocked to 3.6 GHz from the stock 3.4 GHz.

    I noticed that compiling Java is slower than I expected. As it happens, I have some (single-threaded) Java pattern-matching code which is CPU- and memory-bound and for which I have performance numbers recorded on a number of hardware platforms, including my old system. So I did a test run on the new equipment and was stunned to find that the numbers are only slightly better than on my old system, about 25%. The data set it operates on is a 148,975-character array, which should easily fit in caches, but in any event the new CPU has larger caches all around. The system was, of course, otherwise idle for the test, and the test run is a timed 10 seconds to eliminate scheduling anomalies. A long while ago, when I upgraded only the memory from DDR2 667 to DDR2 800, there was no change in the performance of this test, which subjectively supports that the test cycle does not need to (significantly) access main memory, but yes, it is creating and garbage-collecting a large number of objects in the process of this test (low millions of matches are found for the pattern set). I am about 99.999% certain the code hasn't changed since I last ran it on 2009-03-17, but I can't easily retest the old hardware, because it is currently in pieces on my work-bench waiting to be built into a new computer for my kids.

    Note that Windows (XP) reports a CPU speed of 795 MHz unless I have something running. With stuff running it seems to jump all over the place each time I use ALT-Pause to display the system properties, everywhere from 795 MHz to 3.4 GHz. So why might my shiny new hardware be under-performing so badly? EDIT: The old memory was Mushkin DDR2 800 with timings set to auto, which should have been 5-5-5-12. The new memory is Corsair DDR3 1600, running at 1333 with timings also on auto, which are 9-9-9-21. In both cases they are a paired set of dual-channel DIMMs.

  • Why MySQL sat for 2 minutes doing nothing?

    - by Alex R
    This was a one-time thing, not reproducible... but I saved the SHOW INNODB STATUS output. Can anybody tell what's going on here? The simple insert took almost 3 minutes to complete.

        =====================================
        110201 15:58:10 INNODB MONITOR OUTPUT
        =====================================
        Per second averages calculated from the last 34 seconds
        ----------
        SEMAPHORES
        ----------
        OS WAIT ARRAY INFO: reservation count 11963, signal count 11766
        --Thread 1824 has waited at .\btr\btr0cur.c line 443 for 118.00 seconds the semaphore:
        S-lock on RW-latch at 09D6453C created in file .\buf\buf0buf.c line 550
        a writer (thread id 1824) has reserved it in mode wait exclusive
        number of readers 1, waiters flag 1
        Last time read locked in file .\buf\buf0flu.c line 599
        Last time write locked in file .\btr\btr0cur.c line 443
        Mutex spin waits 0, rounds 527817, OS waits 7133
        RW-shared spins 2532, OS waits 1226; RW-excl spins 1652, OS waits 1118
        ------------
        TRANSACTIONS
        ------------
        Trx id counter 0 95830
        Purge done for trx's n:o < 0 95814 undo n:o < 0 0
        History list length 11
        LIST OF TRANSACTIONS FOR EACH SESSION:
        ---TRANSACTION 0 0, not started, OS thread id 3704
        MySQL thread id 551, query id 2702112 localhost 127.0.0.1 root
        show innodb status
        ---TRANSACTION 0 95829, not started, OS thread id 3132
        MySQL thread id 534, query id 2702020 localhost 127.0.0.1 root
        ---TRANSACTION 0 95828, not started, OS thread id 3152
        MySQL thread id 527, query id 2701973 localhost 127.0.0.1 root
        ---TRANSACTION 0 95827, ACTIVE 118 sec, OS thread id 1824 inserting, thread declared inside InnoDB 500
        mysql tables in use 1, locked 1
        1 lock struct(s), heap size 320, 0 row lock(s)
        MySQL thread id 526, query id 2701972 localhost 127.0.0.1 root update
        INSERT INTO log_searchcriteria (userid,search_criteria,date,search_type) VALUES ( NAME_CONST('userid',NULL), NAME_CONST('search_criteria',_latin1' SELECT SQL_CALC_FOUND_ROWS idx_search.CTCX_LATITUDE, idx_search.CTCX_LONGITUDE, idx_search.building_id, idx_search.LN_LIST_NUMBER, idx_search.LP_LIST_PRICE, idx_search.HSN_ADRESS_HOUSE_NUMBER, idx_search.STR_ADDRESS_STREET, idx_search.CP_ADDRESS_COMPASS_POINT, idx_search.UN_UNIT, idx_search.CIT_CITY, idx_search.ZP_ZIP_CODE, idx_search.AR_AREA_NAME, idx_search.BR_BEDROOMS, idx_search.BTH_BATHS, idx_search.ST_STATUS, idx_search.CTCX_STYLE_TYPE, idx_s
        --------
        FILE I/O
        --------
        I/O thread 0 state: wait Windows aio (insert buffer thread)
        I/O thread 1 state: wait Windows aio (log thread)
        I/O thread 2 state: wait Windows aio (read thread)
        I/O thread 3 state: wait Windows aio (write thread)
        Pending normal aio reads: 0, aio writes: 1,
         ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
        Pending flushes (fsync) log: 0; buffer pool: 0
        151006 OS file reads, 120758 OS file writes, 6844 OS fsyncs
        0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s
        -------------------------------------
        INSERT BUFFER AND ADAPTIVE HASH INDEX
        -------------------------------------
        Ibuf: size 1, free list len 5, seg size 7,
        24664 inserts, 24664 merged recs, 4612 merges
        Hash table size 553253, node heap has 629 buffer(s)
        0.00 hash searches/s, 0.00 non-hash searches/s
        ---
        LOG
        ---
        Log sequence number 5 2318193115
        Log flushed up to   5 2318193115
        Last checkpoint at  5 2318129891
        0 pending log writes, 0 pending chkp writes
        3036 log i/o's done, 0.00 log i/o's/second
        ----------------------
        BUFFER POOL AND MEMORY
        ----------------------
        Total memory allocated 213459462; in additional pool allocated 1720192
        Dictionary memory allocated 240416
        Buffer pool size   8192
        Free buffers       0
        Database pages     7563
        Modified db pages  18
        Pending reads 0
        Pending writes: LRU 0, flush list 18, single page 0
        Pages read 150973, created 28788, written 115137
        0.00 reads/s, 0.00 creates/s, 0.00 writes/s
        No buffer pool page gets since the last printout
        --------------
        ROW OPERATIONS
        --------------
        1 queries inside InnoDB, 0 queries in queue
        1 read views open inside InnoDB
        Main thread id 2992, state: flushing buffer pool pages
        Number of rows inserted 794294, updated 89203, deleted 13698, read 1453084305
        0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
        ----------------------------
        END OF INNODB MONITOR OUTPUT
        ============================

    Thanks
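    A hedged reading rather than an answer: the insert spent 118 seconds waiting on a buffer-pool page latch while the main thread was "flushing buffer pool pages", and with a buffer pool of 8192 pages (8192 x 16 KB = 128 MB) showing 0 free buffers, a flush storm on an undersized pool looks plausible. If the box has memory to spare, raising the pool is the first thing worth trying (the value below is illustrative):

        # my.cnf
        [mysqld]
        innodb_buffer_pool_size = 1G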
