Search Results

Search found 2282 results on 92 pages for 'filesystem'.


  • file.createNewFile() creates files with last-modified time before actual creation time

    - by Kaleb Pederson
    I'm using JPoller to detect changes to files in a specific directory, but it's missing files because they end up with a timestamp earlier than their actual creation time. Here's how I test: public static void main(String [] files) { for (String file : files) { File f = new File(file); if (f.exists()) { System.err.println(file + " exists"); continue; } try { // find out the current time, I would hope to assume that the last-modified // time on the file will definitely be later than this System.out.println("-----------------------------------------"); long time = System.currentTimeMillis(); // create the file System.out.println("Creating " + file + " at " + time); f.createNewFile(); // let's see what the timestamp actually is (I've only seen it <time) System.out.println(file + " was last modified at: " + f.lastModified()); // well, ok, what if I explicitly set it to time? f.setLastModified(time); System.out.println("Updated modified time on " + file + " to " + time + " with actual " + f.lastModified()); } catch (IOException e) { System.err.println("Unable to create file"); } } } And here's what I get for output: ----------------------------------------- Creating test.7 at 1272324597956 test.7 was last modified at: 1272324597000 Updated modified time on test.7 to 1272324597956 with actual 1272324597000 ----------------------------------------- Creating test.8 at 1272324597957 test.8 was last modified at: 1272324597000 Updated modified time on test.8 to 1272324597957 with actual 1272324597000 ----------------------------------------- Creating test.9 at 1272324597957 test.9 was last modified at: 1272324597000 Updated modified time on test.9 to 1272324597957 with actual 1272324597000 The result is a race condition: JPoller records time of last check as xyz...123 File created at xyz...456 File last-modified timestamp actually reads xyz...000 JPoller looks for new/updated files with timestamp greater than xyz...123 JPoller ignores newly added file because xyz...000 is less than xyz...123 I pull my hair out for a while I tried digging into the code but both lastModified() and createNewFile() eventually resolve to native calls so I'm left with little information. For test.9, I lose 957 milliseconds. What kind of accuracy can I expect? Are my results going to vary by operating system or file system? Suggested workarounds? NOTE: I'm currently running Linux with an XFS filesystem. I wrote a quick program in C and the stat system call shows st_mtime as truncate(xyz...000/1000).
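
    A minimal sketch (in the question's Java) of one defensive workaround, assuming the poller's reference time can be adjusted: since the stat output shows st_mtime truncated to whole seconds, truncate the poller's own reference time the same way before comparing, or equivalently allow a full second of slack.

        import java.io.File;

        public class PollerCheck {
            // On this platform lastModified() comes back truncated to whole seconds,
            // so it can read up to ~999 ms earlier than the creation wall-clock time.
            // Truncating the poller's own reference time the same way removes the gap.
            public static boolean modifiedSinceLastCheck(File f, long lastCheckMillis) {
                long truncatedCheck = (lastCheckMillis / 1000) * 1000;
                return f.lastModified() >= truncatedCheck;
            }
        }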

    Read the article

  • (resolved) empty response body in ajax (or 206 Partial Content)

    - by Nikita Rybak
    Hi guys, I'm feeling completely stupid because I've spent two hours solving task which should be very simple and which I solved many times before. But now I'm not even sure in which direction to dig. I fail to fetch static content using ajax from local servers (Apache and Mongrel). I get responses 200 and 206 (depending on the server), empty response text (although Content-Length header is always correct), firebug shows request in red. Javascript is very generic, I'm getting same results even here: http://www.w3schools.com/ajax/tryit.asp?filename=tryajax_first (just change document location to 'http://localhost:3000/whatever') So, it's probably not the cause. Well, now I'm out of ideas. I can also post http headers, if it'll help. Thanks! Response Headers Connection close Date Sat, 01 May 2010 21:05:23 GMT Last-Modified Sun, 18 Apr 2010 19:33:26 GMT Content-Type text/html Content-Length 7466 Request Headers Host localhost:3000 User-Agent Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 115 Connection keep-alive Referer http://www.w3schools.com/ajax/tryit_view.asp Origin http://www.w3schools.com Response Headers Date Sat, 01 May 2010 21:54:59 GMT Server Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_jk/1.2.28 Etag "3d5cbdb-fb4-4819c460d4a40" Accept-Ranges bytes Content-Length 4020 Cache-Control max-age=7200, public, proxy-revalidate Expires Sat, 01 May 2010 23:54:59 GMT Content-Range bytes 0-4019/4020 Keep-Alive timeout=5, max=100 Connection Keep-Alive Content-Type application/javascript Request Headers Host localhost User-Agent Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 115 Connection keep-alive Origin null UPDATED: I've found a problem, it was about cross-domain requests. I knew that there are restrictions, but thought they're relaxed for local filesystem and local servers. (and expected more descriptive error message, anyway) Thanks everybody!
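
    The poster's own update names the cause: the same-origin policy compares scheme, host and port, so a page served from www.w3schools.com (or opened from file://) cannot read a response from http://localhost:3000 unless that server opts in. A rough JavaScript illustration of the symptom; the URL is the poster's example and the CORS header is an assumption about how the local server could opt in:

        var xhr = new XMLHttpRequest();
        xhr.open("GET", "http://localhost:3000/whatever", true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4) {
                // Cross-origin without CORS: the request may "succeed" on the wire
                // (200/206 with a correct Content-Length) but the browser withholds
                // the body, so responseText is empty, exactly as described above.
                console.log(xhr.status, xhr.responseText.length);
            }
        };
        xhr.send();
        // Serving the page and the data from the same origin, or having the local
        // server send "Access-Control-Allow-Origin: <origin>", avoids this.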

    Read the article

  • PHP max_execution_time not timing out

    - by Joey Ezekiel
    This is not one of the regular questions about whether sleep is counted toward the timeout or anything like that. OK, here's the problem: I've set max_execution_time for PHP to 15 seconds, and ideally the script should time out when it crosses that limit, but it doesn't. Apache has been restarted after the change to the php.ini file and an ini_get('max_execution_time') is all fine. Sometimes the script runs for up to 200 seconds, which is crazy. I have no database communication whatsoever. All the script does is look for files on the Unix filesystem and, in some cases, redirect to another JSP page. There is no sleep() in the script. I calculate the total execution time of the PHP script like this: At the start of the script I set: $_mtime = microtime(); $_mtime = explode(" ",$_mtime); $_mtime = $_mtime[1] + $_mtime[0]; $_gStartTime = $_mtime; and the end time ($_gEndTime) is calculated similarly. The total time is calculated in a shutdown function that I've registered: register_shutdown_function('shutdown'); ............. function shutdown() { .............. .............. $_total_time = $_gEndTime - $_gStartTime; .............. switch (connection_status ()) { case CONNECTION_NORMAL: .... break; .... case CONNECTION_TIMEOUT: .... break; ...... } } Note: I cannot use $_SERVER['REQUEST_TIME'] because my PHP version is incompatible. That sucks - I know. 1) Well, my first question obviously is: why is my PHP script still executing after the set timeout limit? 2) Apache has the Timeout directive, which is 300 seconds, but the PHP binary does not read the Apache config, so this should not be a problem. 3) Is there a possibility that something is sending PHP into a sleep mode? 4) Am I calculating the execution time the wrong way? Is there a better way to do this? I'm stumped at this point. PHP wizards - please help.
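
    Worth noting: on Linux, max_execution_time only counts time PHP itself spends executing; time spent in system calls, stream operations, sleep() and similar activity is not charged against it, which is the usual reason a file-scanning script sails past the limit. A small PHP sketch of a wall-clock watchdog; the 15-second figure mirrors the poster's setting and the function name is illustrative:

        <?php
        $_gStartTime = microtime(true);   // simpler than exploding microtime()

        function check_deadline($start, $limit = 15) {
            if (microtime(true) - $start > $limit) {
                error_log('Wall-clock limit exceeded after ' . (microtime(true) - $start) . 's');
                exit(1);
            }
        }

        // call check_deadline($_gStartTime) periodically inside the file-scanning loop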

    Read the article

  • How can I load a file into a DataBag from within a Yahoo PigLatin UDF?

    - by Cervo
    I have a Pig program where I am trying to compute the minimum center between two bags. In order for it to work, I found I need to COGROUP the bags into a single dataset. The entire operation takes a long time. I want to either open one of the bags from disk within the UDF, or to be able to pass another relation into the UDF without needing to COGROUP...... Code: # **** Load files for iteration **** register myudfs.jar; wordcounts = LOAD 'input/wordcounts.txt' USING PigStorage('\t') AS (PatentNumber:chararray, word:chararray, frequency:double); centerassignments = load 'input/centerassignments/part-*' USING PigStorage('\t') AS (PatentNumber: chararray, oldCenter: chararray, newCenter: chararray); kcenters = LOAD 'input/kcenters/part-*' USING PigStorage('\t') AS (CenterID:chararray, word:chararray, frequency:double); kcentersa1 = CROSS centerassignments, kcenters; kcentersa = FOREACH kcentersa1 GENERATE centerassignments::PatentNumber as PatentNumber, kcenters::CenterID as CenterID, kcenters::word as word, kcenters::frequency as frequency; #***** Assign to nearest k-mean ******* assignpre1 = COGROUP wordcounts by PatentNumber, kcentersa by PatentNumber; assignwork2 = FOREACH assignpre1 GENERATE group as PatentNumber, myudfs.kmeans(wordcounts, kcentersa) as CenterID; basically my issue is that for each patent I need to pass the sub relations (wordcounts, kcenters). In order to do this, I do a cross and then a COGROUP by PatentNumber in order to get the set PatentNumber, {wordcounts}, {kcenters}. If I could figure a way to pass a relation or open up the centers from within the UDF, then I could just GROUP wordcounts by PatentNumber and run myudfs.kmeans(wordcount) which is hopefully much faster without the CROSS/COGROUP. This is an expensive operation. Currently this takes about 20 minutes and appears to tack the CPU/RAM. I was thinking it might be more efficient without the CROSS. I'm not sure it will be faster, so I'd like to experiment. Anyway it looks like calling the Loading functions from within Pig needs a PigContext object which I don't get from an evalfunc. And to use the hadoop file system, I need some initial objects as well, which I don't see how to get. So my question is how can I open a file from the hadoop file system from within a PIG UDF? I also run the UDF via main for debugging. So I need to load from the normal filesystem when in debug mode. Another better idea would be if there was a way to pass a relation into a UDF without needing to CROSS/COGROUP. This would be ideal, particularly if the relation resides in memory.. ie being able to do myudfs.kmeans(wordcounts, kcenters) without needing the CROSS/COGROUP with kcenters... But the basic idea is to trade IO for RAM/CPU cycles. Anyway any help will be much appreciated, the PIG UDFs aren't super well documented beyond the most simple ones, even in the UDF manual.
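
    On the narrower "open a file from HDFS inside a UDF" part: inside an EvalFunc the Hadoop filesystem can usually be reached directly through the org.apache.hadoop.fs API, since the task's classpath already carries the cluster configuration. A sketch that also falls back to the local filesystem for the run-from-main debugging case; the class name and path handling are illustrative, not part of the question's code:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.util.ArrayList;
        import java.util.List;
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class CenterLoader {
            // reads every line of the given location, from HDFS on the cluster or
            // from the local filesystem when debugging via main()
            public static List<String> readLines(String location, boolean localDebug)
                    throws IOException {
                Configuration conf = new Configuration();
                FileSystem fs = localDebug ? FileSystem.getLocal(conf) : FileSystem.get(conf);
                List<String> lines = new ArrayList<String>();
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(fs.open(new Path(location))));
                try {
                    String line;
                    while ((line = in.readLine()) != null) {
                        lines.add(line);
                    }
                } finally {
                    in.close();
                }
                return lines;
            }
        }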

    Read the article

  • Loading jar file using JCL(JarClassLoader ) : classpath in manifest is ignored ..

    - by Xinus
    I am trying to load jar file using JCL using following code FileInputStream fis = new FileInputStream(new File( "C:\\Users\\sunils\\glassfish-tests\\working\\test.jar") ); JarClassLoader jc = new JarClassLoader( ); jc.add(fis); Class main = jc.loadClass( "highmark.test.Main" ); String[] str={}; main.getMethod("test").invoke(null);//.getDeclaredMethod("main",String[].class).invoke(null,str); fis.close(); But when I try to run this program I get Exception as Exception in thread "main" java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at Main.main(Main.java:21) Caused by: java.lang.RuntimeException: Embedded startup not found, classpath is probably incomplete at org.glassfish.api.embedded.Server.<init>(Server.java:292) at org.glassfish.api.embedded.Server.<init>(Server.java:75) at org.glassfish.api.embedded.Server$Builder.build(Server.java:185) at org.glassfish.api.embedded.Server$Builder.build(Server.java:167) at highmark.test.Main.test(Main.java:33) ... 5 more According to this it is not able to locate class, But when I run the jar file explicitly it runs fine. It seems like JCL is ignoring other classes present in the jar file, MANIFEST.MF file in jar file shows: Manifest-Version: 1.0 Class-Path: . Main-Class: highmark.test.Main It seems to be ignoring Class-Path: . , This jar file runs fine when I run it using Java explicitly, This is just a test, in reality this jar file is coming as a InputStream and it cannot be stored in filesystem, How can I overcome this problem , Is there any workaround ? Thanks for any help . UNDATE: Here is a jar Main class : package highmark.test; import org.glassfish.api.embedded.*; import java.io.*; import org.glassfish.api.deployment.*; import com.sun.enterprise.universal.io.FileUtils; public class Main { public static void main(String[] args) throws IOException, LifecycleException, ClassNotFoundException { test(); } public static void test() throws IOException, LifecycleException, ClassNotFoundException{ Server.Builder builder = new Server.Builder("test"); Server server = builder.build(); server.createPort(8080); ContainerBuilder containerBuilder = server.createConfig(ContainerBuilder.Type.web); server.addContainer(containerBuilder); server.start(); File war=new File("C:\\Users\\sunils\\maventests\\simple-webapp\\target\\simple-webapp.war");//(File) inputStream.readObject(); EmbeddedDeployer deployer = server.getDeployer(); DeployCommandParameters params = new DeployCommandParameters(); params.contextroot = "simple"; deployer.deploy(war, params); } }
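
    One workaround sketch, assuming it is acceptable to spill the incoming stream to a temporary file: hand the jar to a URLClassLoader instead. The manifest Class-Path is then resolved relative to the temporary file's directory, so any sibling jars the embedded GlassFish code expects would have to be copied there or added to the URL array explicitly; everything below is illustrative rather than a drop-in fix:

        import java.io.File;
        import java.io.FileOutputStream;
        import java.io.InputStream;
        import java.net.URL;
        import java.net.URLClassLoader;

        public class StreamJarLoader {
            // spill the jar stream to a temp file and load a class from it
            public static Class<?> load(InputStream jarStream, String mainClass) throws Exception {
                File tmp = File.createTempFile("embedded", ".jar");
                tmp.deleteOnExit();
                FileOutputStream out = new FileOutputStream(tmp);
                try {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = jarStream.read(buf)) != -1) {
                        out.write(buf, 0, n);
                    }
                } finally {
                    out.close();
                }
                URLClassLoader cl = new URLClassLoader(
                        new URL[] { tmp.toURI().toURL() },
                        StreamJarLoader.class.getClassLoader());
                return cl.loadClass(mainClass);
            }
        }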

    Read the article

  • Mercurial local repository backup

    - by Ricket
    I'm a big fan of backing things up. I keep my important school essays and such in a folder of my Dropbox. I make sure that all of my photos are duplicated to an external drive. I have a home server where I keep important files mirrored across two drives inside the server (like a software RAID 1). So for my code, I have always used Subversion to back it up. I keep the trunk folder with a stable copy of my application, but then I create a branch named with my username, and inside there is my working copy. I make very few changes between commits to that branch, with the understanding that the code in there is my backup. Now I'm looking into Mercurial, and I must admit I haven't truly used it yet so I may have this all wrong. But it seems to me that you have a server-side repository, and then you clone it to a working directory in the form of a local repository. Then as you work on something, you make commits to that local repository, and when things are in a state to be shared with others, you hg push to the parent repository on the server. Between pushes of stable, tested, bug-free code, where is the backup? After doing some thinking, I've come to the conclusion that it is not meant for backup purposes and it assumes you've handled that on your own. I guess I need to keep my Mercurial local repositories in my dropbox or some other backed-up location, since my in-progress code is not pushed to the server. Is this pretty much it, or have I missed something? If you use Mercurial, how do you backup your local repositories? If you had turned on your computer this morning and your hard drive went up in flames (or, more likely, the read head went bad, or the OS corrupted itself, ...), what would be lost? If you spent the past week developing a module, writing test cases for it, documenting and commenting it, and then a virus wipes your local repository away, isn't that the only copy? So then on the flip side, do you create a remote repository for every local repository and push to it all the time? How do you find a balance? How do you ensure your code is backed up? Where is the line between using Mercurial as backup, and using a local filesystem backup utility to keep your local repositories safe?
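
    One common answer to the backup question is to treat a second repository as the backup target and push to it automatically after every commit. A sketch using a clone inside an already backed-up folder; the paths and the hook are illustrative, and any reachable path or ssh/http URL works the same way:

        # one-time setup: a backup clone inside a folder that is already backed up
        hg clone myproject ~/Dropbox/hg-backups/myproject

        # in the working repository's .hg/hgrc
        [paths]
        backup = ~/Dropbox/hg-backups/myproject

        [hooks]
        # push every new commit to the backup clone as soon as it is made
        commit = hg push backup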

    Read the article

  • dynamic module creation

    - by intuited
    I'd like to dynamically create a module from a dictionary, and I'm wondering if adding an element to sys.modules is really the best way to do this. EG context = { a: 1, b: 2 } import types test_context_module = types.ModuleType('TestContext', 'Module created to provide a context for tests') test_context_module.__dict__.update(context) import sys sys.modules['TestContext'] = test_context_module My immediate goal in this regard is to be able to provide a context for timing test execution: import timeit timeit.Timer('a + b', 'from TestContext import *') It seems that there are other ways to do this, since the Timer constructor takes objects as well as strings. I'm still interested in learning how to do this though, since a) it has other potential applications; and b) I'm not sure exactly how to use objects with the Timer constructor; doing so may prove to be less appropriate than this approach in some circumstances. EDITS/REVELATIONS/PHOOEYS/EUREKAE: I've realized that the example code relating to running timing tests won't actually work, because import * only works at the module level, and the context in which that statement is executed is that of a function in the testit module. In other words, the globals dictionary used when executing that code is that of main, since that's where I was when I wrote the code in the interactive shell. So that rationale for figuring this out is a bit botched, but it's still a valid question. I've discovered that the code run in the first set of examples has the undesirable effect that the namespace in which the newly created module's code executes is that of the module in which it was declared, not its own module. This is like way weird, and could lead to all sorts of unexpected rattlesnakeic sketchiness. So I'm pretty sure that this is not how this sort of thing is meant to be done, if it is in fact something that the Guido doth shine upon. The similar-but-subtly-different case of dynamically loading a module from a file that is not in python's include path is quite easily accomplished using imp.load_source('NewModuleName', 'path/to/module/module_to_load.py'). This does load the module into sys.modules. However this doesn't really answer my question, because really, what if you're running python on an embedded platform with no filesystem? I'm battling a considerable case of information overload at the moment, so I could be mistaken, but there doesn't seem to be anything in the imp module that's capable of this. But the question, essentially, at this point is how to set the global (ie module) context for an object. Maybe I should ask that more specifically? And at a larger scope, how to get Python to do this while shoehorning objects into a given module?
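
    On the narrower question of making dynamically created code execute in its own module's namespace (rather than in the namespace of whichever module built it), one sketch is to exec the source against the new module's __dict__. The names here reuse the poster's TestContext example; nothing below is specific to timeit:

        import sys
        import types

        def make_module(name, source, context=None):
            module = types.ModuleType(name, 'Module created to provide a context for tests')
            if context:
                module.__dict__.update(context)
            # executing with module.__dict__ as globals makes the module itself
            # the global namespace seen by the dynamically created code
            exec(source, module.__dict__)
            sys.modules[name] = module
            return module

        mod = make_module('TestContext', 'c = a + b', {'a': 1, 'b': 2})
        print(mod.c)  # -> 3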

    Read the article

  • Can MySQL reasonably perform queries on billions of rows?

    - by haxney
    I am planning on storing scans from a mass spectrometer in a MySQL database and would like to know whether storing and analyzing this amount of data is remotely feasible. I know performance varies wildly depending on the environment, but I'm looking for the rough order of magnitude: will queries take 5 days or 5 milliseconds? Input format Each input file contains a single run of the spectrometer; each run is comprised of a set of scans, and each scan has an ordered array of datapoints. There is a bit of metadata, but the majority of the file is comprised of arrays 32- or 64-bit ints or floats. Host system |----------------+-------------------------------| | OS | Windows 2008 64-bit | | MySQL version | 5.5.24 (x86_64) | | CPU | 2x Xeon E5420 (8 cores total) | | RAM | 8GB | | SSD filesystem | 500 GiB | | HDD RAID | 12 TiB | |----------------+-------------------------------| There are some other services running on the server using negligible processor time. File statistics |------------------+--------------| | number of files | ~16,000 | | total size | 1.3 TiB | | min size | 0 bytes | | max size | 12 GiB | | mean | 800 MiB | | median | 500 MiB | | total datapoints | ~200 billion | |------------------+--------------| The total number of datapoints is a very rough estimate. Proposed schema I'm planning on doing things "right" (i.e. normalizing the data like crazy) and so would have a runs table, a spectra table with a foreign key to runs, and a datapoints table with a foreign key to spectra. The 200 Billion datapoint question I am going to be analyzing across multiple spectra and possibly even multiple runs, resulting in queries which could touch millions of rows. Assuming I index everything properly (which is a topic for another question) and am not trying to shuffle hundreds of MiB across the network, is it remotely plausible for MySQL to handle this? UPDATE: additional info The scan data will be coming from files in the XML-based mzML format. The meat of this format is in the <binaryDataArrayList> elements where the data is stored. Each scan produces = 2 <binaryDataArray> elements which, taken together, form a 2-dimensional (or more) array of the form [[123.456, 234.567, ...], ...]. These data are write-once, so update performance and transaction safety are not concerns. My naïve plan for a database schema is: runs table | column name | type | |-------------+-------------| | id | PRIMARY KEY | | start_time | TIMESTAMP | | name | VARCHAR | |-------------+-------------| spectra table | column name | type | |----------------+-------------| | id | PRIMARY KEY | | name | VARCHAR | | index | INT | | spectrum_type | INT | | representation | INT | | run_id | FOREIGN KEY | |----------------+-------------| datapoints table | column name | type | |-------------+-------------| | id | PRIMARY KEY | | spectrum_id | FOREIGN KEY | | mz | DOUBLE | | num_counts | DOUBLE | | index | INT | |-------------+-------------| Is this reasonable?
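
    A rough size estimate puts the proposed datapoints table in perspective; the byte counts below are assumptions about InnoDB row overhead rather than measurements, and the packed-array table is just one commonly suggested alternative, not the poster's schema:

        -- per-row estimate for datapoints: id(8) + spectrum_id(8) + mz(8) +
        -- num_counts(8) + index(4) = 36 bytes of data, plus InnoDB row overhead
        -- and a (spectrum_id, index) secondary index, so realistically 50-100 bytes.
        -- 200e9 rows * 50-100 bytes is roughly 10-20 TB, i.e. most or all of the
        -- 12 TiB array before anything else is stored.
        -- Alternative: one row per scan, with the point arrays packed into BLOBs.
        CREATE TABLE spectra_packed (
          id           BIGINT UNSIGNED NOT NULL PRIMARY KEY,
          run_id       BIGINT UNSIGNED NOT NULL,
          num_points   INT UNSIGNED    NOT NULL,
          mz_array     LONGBLOB        NOT NULL,  -- packed 64-bit floats
          counts_array LONGBLOB        NOT NULL   -- packed 64-bit floats
        ) ENGINE=InnoDB;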

    Read the article

  • organizing external libraries and include files

    - by stijn
    Over the years my projects have come to use more and more external libraries, and the way I handle them is starting to feel more and more awkward (although, it has to be said, it does work flawlessly). I use VS on Windows, CMake on others, and CodeComposer for targeting DSPs on Windows. Except for the DSPs, both 32-bit and 64-bit platforms are used. Here's a sample of what I am doing now; note that, as shown, the different external libraries themselves are not always organized in the same way. Some have separate lib/include/src folders, others have a single src folder. Some came ready to use with static and/or shared libraries, others had to be built. /path/to/projects /projectA /projectB /path/to/apis /apiA /src /include /lib /apiB /include /i386/lib /amd64/lib /path/to/otherapis /apiC /src /path/to/sharedlibs /apiA_x86.lib -->some libs were built in all possible configurations /apiA_x86d.lib /apiA_x64.lib /apiA_x64d.lib /apiA_static_x86.lib /apiB.lib -->other libs have just one import library /path/to/dlls -->most of this directory also gets distributed to clients /apiA_x86.dll and it's in the PATH /apiB.dll Each time I add an external library, I roughly use this process: 1) build it, if needed, for the different configurations (release/debug/platform); 2) copy its static and/or import libraries to 'sharedlibs'; 3) copy its shared libraries to 'dlls'; 4) add an environment variable, e.g. 'API_A_DIR', that points to the root for ApiA, like '/path/to/apis/apiA'; 5) create a VS property sheet and a CMake file to state the include path and eventually the library name, like include = '$(API_A_DIR)/Include' and lib = apiA.lib; 6) add the property sheet/CMake file to the project needing the library. It's especially steps 4 and 5 that bother me. I am pretty sure I am not the only one facing this problem, and I would like to see how others deal with it. I was thinking of getting rid of the per-library environment variables and using just one 'API_INCLUDE_DIR', populating it with the include files in an organized way: /path/to/api/include /apiA /apiB /apiC This way I would not need the include path in the property sheets or the environment variables. For libs that are only used on Windows I wouldn't even need a property sheet at all, as I can use #pragmas to tell the linker which library to link to. Also, in the code it will be clearer what gets included, with no need for wrappers to include files that have the same name but come from different libraries: #include <apiA/header.h> #include <apiB/header.h> #include <apiC_version1/header.h> The drawback is of course that I have to copy include files, and possibly** introduce duplicates on the filesystem, but that looks like a minor price to pay, doesn't it? ** actually, once libraries are built, the only things I need from them are the include files and the libs. Since each of those would have a dedicated directory, the original source tree is not needed anymore and can be deleted.
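
    The single-include-root idea maps fairly directly onto imported targets in CMake, which would also replace the per-library property sheets on the CMake side. A sketch only; the paths, names and the x64-only library file are illustrative:

        # one shared include root, one prebuilt-library root
        set(API_INCLUDE_DIR "/path/to/api/include" CACHE PATH "Shared include root")
        set(API_LIB_DIR     "/path/to/sharedlibs"  CACHE PATH "Prebuilt libraries")

        # one imported target per external library
        add_library(apiA STATIC IMPORTED GLOBAL)
        set_target_properties(apiA PROPERTIES
            IMPORTED_LOCATION             "${API_LIB_DIR}/apiA_x64.lib"
            INTERFACE_INCLUDE_DIRECTORIES "${API_INCLUDE_DIR}")

        # a consuming project then only needs:
        # target_link_libraries(projectA PRIVATE apiA)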

    Read the article

  • Is there a way to efficiently yield every file in a directory containing millions of files?

    - by Josh Smeaton
    I'm aware of os.listdir, but as far as I can gather, that gets all the filenames in a directory into memory and then returns the list. What I want is a way to yield a filename, work on it, and then yield the next one, without reading them all into memory. Is there any way to do this? I worry about the case where filenames change, new files are added, and files are deleted while using such a method. Some iterators prevent you from modifying the collection during iteration, essentially by taking a snapshot of the state of the collection at the beginning and comparing that state on each move operation. If there is an iterator capable of yielding filenames from a path, does it raise an error if there are filesystem changes (files added, removed or renamed within the iterated directory) that modify the collection? There could potentially be a few cases that cause the iterator to fail, and it all depends on how the iterator maintains state. Using S.Lott's example: filea.txt fileb.txt filec.txt Iterator yields filea.txt. During processing, filea.txt is renamed to filey.txt and fileb.txt is renamed to filez.txt. When the iterator attempts to get the next file, if it were to use the filename filea.txt to find its current position in order to find the next file, and filea.txt is not there, what would happen? It may not be able to recover its position in the collection. Similarly, if the iterator were to fetch fileb.txt while yielding filea.txt, it could look up the position of fileb.txt, fail, and produce an error. If the iterator instead were able to somehow maintain an index, dir.get_file(0), then maintaining positional state would not be affected, but some files could be missed, as their indexes could be moved to an index 'behind' the iterator. This is all theoretical of course, since there appears to be no built-in (Python) way of iterating over the files in a directory. There are some great answers below, however, that solve the problem by using queues and notifications. Edit: The OS of concern is Red Hat. My use case is this: Process A is continuously writing files to a storage location. Process B (the one I'm writing) will be iterating over these files, doing some processing based on the filename, and moving the files to another location. Edit: Definition of valid: Adjective 1. Well grounded or justifiable, pertinent. (Sorry S.Lott, I couldn't resist.) I've edited the paragraph in question above.
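
    At the time this was asked there was indeed no built-in lazy listing, but os.scandir() (Python 3.5+, or the standalone scandir backport) wraps readdir() and yields entries one at a time. Whether entries added, removed or renamed mid-iteration are seen is left to the underlying filesystem, exactly as with readdir(). A sketch; the process() handler and path are hypothetical:

        import os

        def iter_files(path):
            # yields one name at a time instead of building the whole list in memory
            for entry in os.scandir(path):
                if entry.is_file(follow_symlinks=False):
                    yield entry.name

        for name in iter_files('/path/to/storage/location'):
            process(name)  # hypothetical: do the filename-based work, then move the file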

    Read the article

  • Cannot get libcurl-devl on OpenSUSE 11.3

    - by Dai
    I have a server running OpenSUSE 11.3 that I can't really upgrade to a newer version of OpenSUSE (it's a managed appliance). I have some PHP shell scripts that need to run on the server that have a dependency on both cURL and OpenSSL. I discovered that the PHP 5.3.3 binaries on the server did not include OpenSSL but did include cURL I downloaded the latest PHP sources, extracted them, and ran ./configure --with-openssl --with-zlib --with-bcmath --with-curl --with-readline --with-libxml --enable-sockets This failed: the configure script complained that it couldn't find cURL: checking for cURL support... yes checking for cURL in default path... not found configure: error: Please reinstall the libcurl distribution - easy.h should be in /include/curl/ I tried to install libcurl by running zypper install libcurl-devl This failed too: doom:~/phpworksite/php-5.5.15 # zypper install libcurl-devl Loading repository data... Warning: Repository 'Updates for openSUSE 11.3 11.3-1.82' appears to outdated. Consider using a different mirror or server. Warning: Repository 'openSUSE_11.3_Updates' appears to outdated. Consider using a different mirror or server. Reading installed packages... 'libcurl-devl' not found in package names. Trying capabilities. No provider of 'libcurl-devl' found. Resolving package dependencies... Nothing to do. However, libcurl-devl is listed when I run zypper search curl. doom:~/phpworksite/php-5.5.15 # zypper search curl Loading repository data... Warning: Repository 'Updates for openSUSE 11.3 11.3-1.82' appears to outdated. Consider using a different mirror or server. Warning: Repository 'openSUSE_11.3_Updates' appears to outdated. Consider using a different mirror or server. Reading installed packages... S | Name | Summary | Type --+-----------------------------+----------------------------------------------------------+-------- i | curl | A Tool for Transferring Data from URLs | package | curlftpfs | Filesystem for mounting FTP hosts using FUSE and libcurl | package | libcurl-devel | A Tool for Transferring Data from URLs | package i | libcurl4 | cURL shared library version 4 | package i | perl-WWW-Curl | Perl extension interface for libcurl | package i | php5-curl | PHP5 Extension Module | package | python-curl | Python module interface to the cURL library | package | python-curl-doc | Documentation for python-curl | package | xmms2-plugin-curl | Curl Support for xmms2 | package | xmms2-plugin-curl-debuginfo | Debug information for package xmms2-plugin-curl | package doom:~/phpworksite/php-5.5.15 # Here are the current repositories. doom:~/phpworksite/php-5.5.15 # zypper repos # | Alias | Name | Enabled | Refresh ---+----------------------------------------------+----------------------------------------------+---------+-------- 1 | PHP_extensions_(openSUSE_11.3) | PHP_extensions_(openSUSE_11.3) | No | Yes 2 | Packman_11.3 | Packman_11.3 | Yes | Yes 3 | Updates for openSUSE 11.3 11.3-1.82 | Updates for openSUSE 11.3 11.3-1.82 | Yes | Yes 4 | openSUSE_11.3_OSS | openSUSE_11.3_OSS | Yes | Yes 5 | openSUSE_11.3_Updates | openSUSE_11.3_Updates | Yes | Yes 6 | openSUSE_BuildService_-_devel:languages:perl | openSUSE_BuildService_-_devel:languages:perl | No | Yes 7 | repo-debug | openSUSE-11.3-Debug | No | Yes 8 | repo-non-oss | openSUSE-11.3-Non-Oss | Yes | Yes 9 | repo-oss | openSUSE-11.3-Oss | Yes | Yes 10 | repo-source | openSUSE-11.3-Source | No | Yes BTW, I did try building PHP without cURL, however it broke a lot of things, so apparently I really need cURL. 
My question: how can I install libcurl-devl (or just install cURL) so that I can build PHP?
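
    The zypper search output above lists the development package as libcurl-devel (with the second "e"), not libcurl-devl, so the install command and the original configure line would look roughly like this; the /usr/include/curl location is the usual default and is stated here as an assumption:

        zypper install libcurl-devel
        # curl/easy.h should then be under /usr/include/curl/, and the original
        # configure invocation should get past the cURL check:
        ./configure --with-openssl --with-zlib --with-bcmath --with-curl \
                    --with-readline --with-libxml --enable-sockets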

    Read the article

  • Ubuntu: Failure to login with multiple video adapters

    - by tsilb
    Forgive my ignorance, for I am a complete linux noob. I have a computer with three video cards and six monitors. Works great on Windows. Trying to get it to run Ubuntu as well. It loads fine when I have it configured to run on one adapter; detects both screens, runs ok. But I want to turn the other 4 monitors on and run the whole thing as one extended desktop (one session, etc). So I downloaded and installed the newest ATI driver for Linux, which seems to work, kinda. I ran this to set up the screens: aticonfig --adapter=all --initial -f Now when I boot, Ubuntu seems to turn on all the screens (3 viewports, each with two cloned displays from what I can tell). When I enter my login info OR move the mouse off the main screen, the screens freeze and the kbd/ms become unresponsive. aticonfig generated xorg.conf included below. Have tried the following: aticonfig -initial -f - works, but only detects the primary adapter and 2 screens aticccle - Tells me I have to reboot after enabling the other cards. Then goes into above described freezing state. aticonfig --adapter=all --initial -f - see above Manually editing xorg.conf file with my limited knowledge - Was able to get two adapters running, but only the second adapter initialized while the primary stopped at the Ubuntu boot screen. Was unable to see the login prompt. Froze after I logged in blindly (was able to hear the login sound). Using generic "radeon" driver instead of ATI Proprietary driver with the above init attempts Toggling xinerama Various combinations of the above Hardware: Intel Core 2 Quad q6600 8GB DDR2 (3x) ATI Radeon HD 4680 5 monitors (21W, 21W, 22W Portrait, 22W Portrait, 19")and an HDTV (26"W, HDMI) in a horizontal arrangement I know next to nothing about Linux/Ubuntu aside from basic filesystem navigation, editing text files, and accessing my local and networked Windows stores and shares. Basically this is the most advanced thing I've had to do. I installed today. Please advise how to make this configuration work. 
my xorg.conf: Section "ServerLayout" Identifier "Layout0" Screen 0 "aticonfig-Screen[0]-0" 0 0 Screen "aticonfig-Screen[1]-0" RightOf "aticonfig-Screen[0]-0" Screen "aticonfig-Screen[2]-0" RightOf "aticonfig-Screen[1]-0" Option "RenderAccel" "true" Option "AllowGLXWithComposite" "true" EndSection Section "Files" EndSection Section "Module" EndSection Section "ServerFlags" Option "Xinerama" "0" EndSection Section "Monitor" Identifier "aticonfig-Monitor[0]-0" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" EndSection Section "Monitor" Identifier "aticonfig-Monitor[1]-0" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" EndSection Section "Monitor" Identifier "aticonfig-Monitor[2]-0" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" EndSection Section "Device" Identifier "aticonfig-Device[0]-0" Driver "fglrx" BusID "PCI:1:0:0" EndSection Section "Device" Identifier "aticonfig-Device[1]-0" Driver "fglrx" BusID "PCI:3:0:0" EndSection Section "Device" Identifier "aticonfig-Device[2]-0" Driver "fglrx" BusID "PCI:4:0:0" EndSection Section "Screen" Identifier "aticonfig-Screen[0]-0" Device "aticonfig-Device[0]-0" Monitor "aticonfig-Monitor[0]-0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection Section "Screen" Identifier "aticonfig-Screen[1]-0" Device "aticonfig-Device[1]-0" Monitor "aticonfig-Monitor[1]-0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection Section "Screen" Identifier "aticonfig-Screen[2]-0" Device "aticonfig-Device[2]-0" Monitor "aticonfig-Monitor[2]-0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection

    Read the article

  • Increase samba space on open suse 12.1

    - by Kapil Sharma
    I know linux basics but not an expert. IT guy left the job here and there is some time before new hire. So sorry if question is very basic. We have local testing server based on Open SUSE 12.1, which also act as shared drive between dev/mgmt team here and using Samba for that. Now we are running out of space on samba, even though server's 2*1TB harddisk is nearly 90% free. My question is, what is limiting Samba and how can I increase its limit? We need around at least 500 GB as shared drive but currently its just 25 GB. I don't need step by step answer, just a link to any helpful article would be sufficient. Probably I'm putting wrong keywords in google so not getting any helpful link. EDIT: Output of commands in the first comment. All commands were run as root user df -h (getting error with df -ht) Filesystem Size Used Avail Use% Mounted on rootfs 30G 5.1G 23G 19% / devtmpfs 2.0G 36K 2.0G 1% /dev tmpfs 2.0G 1.1M 2.0G 1% /dev/shm tmpfs 2.0G 676K 2.0G 1% /run /dev/sda2 30G 5.1G 23G 19% / tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup tmpfs 2.0G 676K 2.0G 1% /var/run tmpfs 2.0G 0 2.0G 0% /media tmpfs 2.0G 676K 2.0G 1% /var/lock /dev/sda3 36G 31G 3.3G 91% /home fdisk -l /dev/[hmsv]d* Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x2d4a2d49 Device Boot Start End Blocks Id System /dev/sda1 2048 16771071 8384512 82 Linux swap / Solaris /dev/sda2 * 16771072 79681535 31455232 83 Linux /dev/sda3 79681536 156301311 38309888 83 Linux Disk /dev/sda1: 8585 MB, 8585740288 bytes 255 heads, 63 sectors/track, 1043 cylinders, total 16769024 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/sda1 doesn't contain a valid partition table Disk /dev/sda2: 32.2 GB, 32210157568 bytes 255 heads, 63 sectors/track, 3915 cylinders, total 62910464 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System Disk /dev/sda3: 39.2 GB, 39229325312 bytes 255 heads, 63 sectors/track, 4769 cylinders, total 76619776 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/sda3 doesn't contain a valid partition table vgs No volume groups found lvs No volume groups found output of vi /etc/samba/smb.conf # smb.conf is the main Samba configuration file. You find a full commented # version at /usr/share/doc/packages/samba/examples/smb.conf.SUSE if the # samba-doc package is installed. 
# Date: 2011-11-02 [global] workgroup = WORKGROUP passdb backend = tdbsam printing = cups printcap name = cups printcap cache time = 750 cups options = raw map to guest = Bad User include = /etc/samba/dhcp.conf logon path = \\%L\profiles\.msprofile logon home = \\%L\%U\.9xprofile logon drive = P: usershare allow guests = Yes [homes] comment = Home Directories valid users = %S, %D%w%S browseable = No read only = No inherit acls = Yes [profiles] comment = Network Profiles Service path = %H read only = No store dos attributes = Yes create mask = 0600 directory mask = 0700 [users] comment = All users path = /home read only = No inherit acls = Yes veto files = /aquota.user/groups/shares/ [groups] comment = All groups path = /home/groups read only = No inherit acls = Yes [printers] comment = All Printers path = /var/tmp printable = Yes create mask = 0600 browseable = No [print$] comment = Printer Drivers path = /var/lib/samba/drivers write list = @ntadmin root force group = ntadmin create mask = 0664 directory mask = 0775 [allusers] comment = All Users path = /home/shares/allusers valid users = @users force group = users create mask = 0660 directory mask = 0771 writable = yes
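
    Samba itself is not imposing the cap here: every share in the smb.conf above lives under /home, and df shows /home as a 36 GB partition that is already 91% full, so the limit is the partition size. A sketch of one way forward, assuming a second, larger disk really is present and shows up as /dev/sdb; note that the df/fdisk output above only shows an 80 GB /dev/sda, so verify the hardware first and treat every device name below as an assumption:

        fdisk -l                       # confirm the larger disk exists and note its device name
        mkfs.ext4 /dev/sdb1            # assumption: the big disk is /dev/sdb, partition 1
        mkdir -p /srv/shares
        mount /dev/sdb1 /srv/shares    # add a matching /etc/fstab entry to make it persistent
        rsync -a /home/shares/ /srv/shares/
        # then either bind-mount it back or change the "path =" lines in smb.conf
        mount --bind /srv/shares /home/shares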

    Read the article

  • What are the pitfalls of hardlinked files on my desktop PC?

    - by MountainX
    All the identical-content files on my PC are now hardlinked. (My data is completely de-duplicated. It is a consequence of the way I copied my data from my old computer.) What pitfalls do I need to be aware of now that certain actions on one file could silently affect a number of other files? I know that deleting the file I'm working on is not a problem (assuming I deleted it on purpose). It doesn't affect any of the other hardlinked files and I don't see that the delete action would lead to unexpected side effects. Moving or renaming the file is not a problem. I don't see any unexpected consequences. I don't think copying hardlinked files is a problem, but I'm not as confident about any unexpected consequences in this regard. What I have seen is that making a copy (to the same disk) of a hardlinked file with cp keeps the copy hardlinked (i.e., inode number doesn't change in the copy). Copying to another filesystem obviously breaks the hardlink. (I guess one pitfall is forgetting this fact, given that my PC has 3 hard disks.) Changing permissions does affect all linked files. So far this has proven handy. (I made a large number of the hardlinked files read-only.) None of the operations above seem to produce any major unexpected consequences. However, as was pointed out to me by Daniel Beck in a comment, editing or modifying a file can sometimes be a problem. It depends on the tool and maybe the type of edit. (For example, editing small text files using sed seems to always break the link while using nano doesn't.) This introduces the chance that editing one file could affect all the hardlinked files (i.e., alter the original inode). My proposed solution to this is to make all hardlinked files read-only (and that is already mostly the case). If I can't do that for some files, I will unlink those particular files. Is there any problem with this read-only approach? I'm assuming that if I go to edit a file and find it to be read-only, I'll remember to unlink that filename while making it writable. So one pitfall might be forgetting this rule. In that case, I'll have to rely on my backups. Am I correct in the above statements? And what else do I need to know? BTW, I'm running Kubuntu 12.04. I'm also using btrfs. (I have 2 SSD's and 1 HDD in the PC. I will also be adding an external USB HDD. I'm also connected to a network and I mount some NFS shares. I don't assume any of these last bits are relevant to the question, but I'm adding them just in case.) BTW, since I have more than one drive (with separate file systems), to unlink any file all I have to do is copy it to another drive, then move it back. However, using sed also works (in my testing). Here's my script: sed -i 's/\(.\)/\1/' file1 Surprisingly, this even unlinks zero byte files. In my testing it also appears to work on non-text files without any special options. (But I understand that the --binary option might be needed on Windows, MS-DOS and Cygwin.) However, copying to another disk and moving back may be the best way to unlink. For my use-case, unlink command doesn't really "unlink", rather it "removes".
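
    Two small additions to the toolkit, since the copy-to-another-drive-and-back dance isn't strictly necessary: a hardlink can be broken in place by copying to a temporary name and renaming over the original, and the other names sharing an inode can be listed before editing. Filenames below are illustrative:

        # break the hardlink for one name without leaving the drive
        cp -p file file.tmp && mv file.tmp file

        # inspect before editing: hard link count, inode number, name
        stat -c '%h %i %n' file

        # list every path on this filesystem that shares the inode
        find / -xdev -samefile file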

    Read the article

  • Ubuntu: Move fsbackup backups to Amazon S3

    - by Alexander Gladysh
    I have a legacy server (Ubuntu 9.10 Karmic x86), where previous admin set up backups with fsbackup. This server lives in a VPS (under some kind of Xen), and it is low on HDD space (16 GB total). Now it came to a point, where fsbackup backups take more space than the rest of data in the system. The filesystem is 100% filled, and I already cleaned up all that I could, aside from actual backups. I do not have any experience managing fsbackup, and I do not want to break or lose the backups. Googling fsbackup gives surprisingly low quality results... Here is how my backups look like: $ sudo ls -lh /var/archives total 8.1G -rw-rw---- 1 root root 318 2011-01-06 06:26 myserver-20110106.md5 -rw-rw---- 1 root root 258 2011-01-07 06:26 myserver-20110107.md5 -rw-rw---- 1 root root 318 2011-01-08 06:26 myserver-20110108.md5 -rw-rw---- 1 root root 318 2011-01-09 06:26 myserver-20110109.md5 -rw-rw---- 1 root root 346 2011-01-10 06:43 myserver-20110110.md5 -rw-rw---- 1 root root 14M 2011-01-06 06:26 myserver-all-mysql-databases.20110106.sql.bz2 -rw-rw---- 1 root root 14M 2011-01-07 06:26 myserver-all-mysql-databases.20110107.sql.bz2 -rw-rw---- 1 root root 14M 2011-01-08 06:26 myserver-all-mysql-databases.20110108.sql.bz2 -rw-rw---- 1 root root 14M 2011-01-09 06:26 myserver-all-mysql-databases.20110109.sql.bz2 -rw-rw---- 1 root root 862 2011-01-10 06:43 myserver-all-mysql-databases.20110110.sql.bz2 -rw-rw---- 1 root root 827K 2011-01-03 06:25 myserver-etc.20110103.master.tar.gz -rw-rw---- 1 root root 16K 2011-01-06 06:25 myserver-etc.20110106.tar.gz -rw-rw---- 1 root root 16K 2011-01-07 06:25 myserver-etc.20110107.tar.gz -rw-rw---- 1 root root 16K 2011-01-08 06:25 myserver-etc.20110108.tar.gz -rw-rw---- 1 root root 16K 2011-01-09 06:25 myserver-etc.20110109.tar.gz -rw-rw---- 1 root root 827K 2011-01-10 06:25 myserver-etc.20110110.master.tar.gz -rw------- 1 root root 36K 2011-01-10 06:25 myserver-etc.incremental.bin -rw-rw---- 1 root root 29M 2011-01-03 06:25 myserver-home.20110103.master.tar.gz -rw-rw---- 1 root root 11K 2011-01-06 06:25 myserver-home.20110106.tar.gz -rw-rw---- 1 root root 14K 2011-01-07 06:25 myserver-home.20110107.tar.gz -rw-rw---- 1 root root 11K 2011-01-08 06:25 myserver-home.20110108.tar.gz -rw-rw---- 1 root root 11K 2011-01-09 06:25 myserver-home.20110109.tar.gz -rw-rw---- 1 root root 2.0M 2011-01-10 06:25 myserver-home.20110110.master.tar.gz -rw------- 1 root root 27K 2011-01-10 06:25 myserver-home.incremental.bin -rw-rw---- 1 root root 1.5G 2011-01-03 06:29 myserver-opt.20110103.master.tar.gz -rw-rw---- 1 root root 1.5M 2011-01-06 06:25 myserver-opt.20110106.tar.gz -rw-rw---- 1 root root 1.5M 2011-01-07 06:25 myserver-opt.20110107.tar.gz -rw-rw---- 1 root root 1.5M 2011-01-08 06:25 myserver-opt.20110108.tar.gz -rw-rw---- 1 root root 1.5M 2011-01-09 06:25 myserver-opt.20110109.tar.gz -rw-rw---- 1 root root 1.5G 2011-01-10 06:30 myserver-opt.20110110.master.tar.gz -rw------- 1 root root 201K 2011-01-10 06:30 myserver-opt.incremental.bin -rw-rw---- 1 root root 2.3G 2011-01-03 06:41 myserver-srv.20110103.master.tar.gz -rw-rw---- 1 root root 44M 2011-01-06 06:26 myserver-srv.20110106.tar.gz -rw-rw---- 1 root root 27M 2011-01-07 06:25 myserver-srv.20110107.tar.gz -rw-rw---- 1 root root 39M 2011-01-08 06:26 myserver-srv.20110108.tar.gz -rw-rw---- 1 root root 2.0M 2011-01-09 06:25 myserver-srv.20110109.tar.gz -rw-rw---- 1 root root 2.7G 2011-01-10 06:42 myserver-srv.20110110.master.tar.gz -rw------- 1 root root 3.4M 2011-01-10 06:42 myserver-srv.incremental.bin I'm thinking about 
moving backups to Amazon S3, but before that I have to free some space, so the server can work. Perhaps I can mount /var/archives to an Amazon S3 bucket somehow... Any advice?
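
    The archives can be copied to S3 with any standalone client before anything is deleted, which frees the space without touching the fsbackup configuration. A sketch using s3cmd; the bucket name is illustrative, and s3fs could later mount the bucket at /var/archives if a mounted view is still wanted:

        s3cmd mb s3://myserver-fsbackup-archive
        s3cmd put /var/archives/*.master.tar.gz s3://myserver-fsbackup-archive/
        s3cmd put /var/archives/*.md5 s3://myserver-fsbackup-archive/
        # after verifying the uploads, remove the oldest local master archives, e.g.
        rm /var/archives/myserver-*.20110103.master.tar.gz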

    Read the article

  • rpm rollback ignoring rpms - no error output

    - by John H
    Issue rpm rollback is not working with a set of repackaged rpms created in the last couple days, but does work with more recent ones. [root@host1 repackage]# ls -l zsh-4.2.6-* -rw-r--r-- 1 root root 1788283 Apr 10 2011 zsh-4.2.6-3.el5.i386.rpm -rw-r--r-- 1 root root 1788691 Aug 18 04:38 zsh-4.2.6-5.el5.i386.rpm [root@host1 repackage]# rpm -q zsh zsh-4.2.6-6.el5 [root@host1 repackage]# rpm --test -Uvh --rollback 'Aug 18 01:00' [root@host1 repackage]# rpm -e zsh [root@host1 repackage]# [root@host1 repackage]# ls -l zsh* -rw-r--r-- 1 root root 1788283 Apr 10 2011 zsh-4.2.6-3.el5.i386.rpm -rw-r--r-- 1 root root 1788691 Aug 18 04:38 zsh-4.2.6-5.el5.i386.rpm -rw-r--r-- 1 root root 1789064 Aug 20 09:06 zsh-4.2.6-6.el5.i386.rpm [root@host1 repackage]# cp zsh-4.2.6-6.el5.i386.rpm /tmp [root@host1 repackage]# rpm --test -Uvh --rollback 'Aug 18 01:00' Rollback packages (+1/-0) to Mon Aug 20 09:02:16 2012 (0x50323558): Preparing... ########################################### [100%] Cleaning up repackaged packages: Removing /var/spool/repackage/zsh-4.2.6-6.el5.i386.rpm: [root@host1 repackage]# ls -l zsh-4.2.6-* -rw-r--r-- 1 root root 1788283 Apr 10 2011 zsh-4.2.6-3.el5.i386.rpm -rw-r--r-- 1 root root 1788691 Aug 18 04:38 zsh-4.2.6-5.el5.i386.rpm [root@host1 repackage]# cp /tmp/zsh-4.2.6-6.el5.i386.rpm . [root@host1 repackage]# rpm -Uvh --rollback 'Aug 18 01:00' Rollback packages (+1/-0) to Mon Aug 20 09:06:05 2012 (0x5032363d): Preparing... ########################################### [100%] 1:zsh ########################################### [ 50%] Cleaning up repackaged packages: Removing /var/spool/repackage/zsh-4.2.6-6.el5.i386.rpm: [root@host1 repackage]# rpm --test -Uvh --rollback 'April 9' [root@host1 repackage]# Now, if I run my test commands with -Uvvh I get debug messages to stderror which shows me that rpm reads each of the rpm files in /var/spool/repackage. The only interesting bit is the "expected size" but after searching, the expected size should be different, as it records the files as they are on the filesystem. D: opening db environment /var/lib/rpm/Packages joinenv D: opening db index /var/lib/rpm/Packages rdonly mode=0x0 D: locked db index /var/lib/rpm/Packages D: opening db index /var/lib/rpm/Installtid rdonly mode=0x0 D: opening db index /var/lib/rpm/Pubkeys rdonly mode=0x0 D: read h# 769 Header sanity check: OK D: ========== DSA pubkey id 53268101 37017186 (h#769) D: read h# 32 Header V3 DSA signature: OK, key ID 37017186 D: read h# 40 Header V3 DSA signature: OK, key ID 37017186 ... D: read h# 1753 Header V3 DSA signature: OK, key ID 37017186 D: Expected size: 3628918 = lead(96)+sigs(344)+pad(0)+data(3628478) D: Actual size: 3583695 D: /var/spool/repackage/Deployment_Guide-en-US-5.2-11.noarch.rpm: Header V3 DSA signature: OK, key ID 37017186 D: Expected size: 1100789 = lead(96)+sigs(344)+pad(0)+data(1100349) D: Actual size: 1109281 D: /var/spool/repackage/NetworkManager-0.7.0-10.el5_5.2.i386.rpm: Header V3 DSA signature: OK, key ID 37017186 D: Expected size: 1098167 = lead(96)+sigs(344)+pad(0)+data(1097727) D: Actual size: 1106179 D: /var/spool/repackage/NetworkManager-0.7.0-9.el5.i386.rpm: Header V3 DSA signature: OK, key ID 37017186 D: Expected size: 84351 = lead(96)+sigs(344)+pad(0)+data(83911) D: Actual size: 85378 ... 
D: Expected size: 1788276 = lead(96)+sigs(344)+pad(0)+data(1787836) D: Actual size: 1788691 D: /var/spool/repackage/zsh-4.2.6-5.el5.i386.rpm: Header V3 DSA signature: OK, key ID 37017186 D: --- erase h#1758 D: closed db index /var/lib/rpm/Pubkeys D: closed db index /var/lib/rpm/Installtid D: closed db index /var/lib/rpm/Packages D: closed db environment /var/lib/rpm/Packages D: May free Score board((nil)) I am able to copy these rpms out of the repackage directory and if I run them through cpio, extract the files. I also tried backing up and rebuilding the rpm database - no change. System Information: RHEL 5.8 rpm 4.4.2.3 /etc/yum.conf tsflags=repackage /etc/rpm/macros %_repackage_all_erasures 1

    Read the article

  • Windows 7 .NET 3.5.1 - 2.0 Slightly Corrupted, How to Repair?

    - by Quinxy von Besiex
    My Windows 7 included .NET installation (3.5 to 2.0) appears very slightly and particularly corrupted and I am trying to fix it without reinstalling Windows or trying to revert to backups. Everything was working and then my hard drive started corrupting a few files and checkdisk found bad clusters so I imaged the drive to a new one. As soon as I booted on the new drive everything worked except programs which call the System.Net.NetworkInformation methods within .NET 3.5 to 2.0 (like Ping() and IsNetworkAvailable()), which immediately crash the app in which the calls are (those calls in .NET 4.0 works fine). Those methods are found inside System.dll, and I assume call native methods which I believe are inside winnsi.dll or iphlpapi.dll or something else (I've not found this yet); I assume it calls native methods because the exception which causes the crash is Fatal Execution Engine Error which people mention is usually related to calling native methods and marshaling data between them. A huge clue about the culprit is likely found in the fact that when I launch the exact same crashing application through a code profiler (which executes the exe and captures stats on which methods took the longest) the app works fine, no crash at all! How could running it within the profiler work and running it outside not work? That seems the key to the mystery. I've used procmon to catch all the registry, filesystem, and network events from the crashing execution and the profiler-run successful execution and compared the two outputs but didn't learn much (I see the moment at which the non-profiled app crashes, but up until then they behave the same, loaded the same modules, ). The only big difference seems to be that at the moment before the app crash the profiler-executed code creates 4-6 new threads and the directly executed code only creates 1-2. I have diffed the files/directories which seemed most relevant (the .NET stuff under Windows and Program Files) pre- and post- disk trouble and seen no changes where I didn't expect any (no obvious file corruption). I have diffed the software and system registry hives pre- and post- disk trouble and seen no changes which seemed relevant. I have created a new user account and cleaned up any environment variables in case environment was related. No change. I did "sfc /scannow" and it found no integrity problems. I tried "ngen update" to regenerate pre-compiled code in case I missed something that might be damaged and nothing changed. I assume I need to repair my .NET installation but because Windows 7 included .NET 3.5 - 2.0 you can't just re-run a .NET installer to redo it. I do not have access to the Windows disks to try to re-install Windows over itself (the computer has a recovery partition but it is unusable); also the drive uses a whole-disk encryption solution and re-installing would be difficult. I absolutely do not want to start from scratch here and install a fresh Windows, reinstall dozens of software packages, try and remember dozens of development-related customizations/etc. Given all that... does anyone have any helpful advice? I need .NET 3.5 - 2.0 working as I am a developer and need to build and test against it. Thanks! Quinxy

    Read the article

  • Fedora 17 keeps using fedora 16 kernel

    - by MTilsted
    I did run preupgrade to upgrade my Fedora 16(x64) to Fedora 17. And it seemed to work fine. So I got the new gimp 2.8, gcc 4.7.0 and so on. But the system keeps using the old kernel from fc16. Uname -a gives me: Linux localhost.localdomain 3.3.6-3.fc16.x86_64 #1 SMP Wed May 16 21:43:01 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux The system downloaded the new kernel, so I got /boot/vmlinuz-3.3.7-1.fc17.x86_64 /boot/System.map-3.3.7-1.fc17.x86_64 /boot/initramfs-3.3.7-1.fc17.x86_64.img /boot/config-3.3.7-1.fc17.x86_64 But the system keeps using the old kernel from fc16. If i look at my /boot/grub2/grub.cfg file, it looks like this: # # DO NOT EDIT THIS FILE # # It is automatically generated by grub2-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } set timeout=5 ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/10_linux ### menuentry 'Fedora (3.3.6-3.fc16.x86_64)' --class fedora --class gnu-linux --class gnu --class os { load_video set gfxpayload=keep insmod gzio insmod part_gpt insmod ext2 set root='(hd0,gpt2)' search --no-floppy --fs-uuid --set=root 3521a578-5829-4fb4-a485-8c097df77d07 echo 'Loading Fedora (3.3.6-3.fc16.x86_64)' linux /vmlinuz-3.3.6-3.fc16.x86_64 root=UUID=57459a16-97a0-46a4-8e71-cc3ec0ca4a3e ro KEYTABLE=dvorak rd.lvm=0 rd.dm=0 quiet SYSFONT=latarcyrheb-sun16 rhgb rd.md.uuid=60956781:734d95ba:424311e2:796702a7 rd.luks=0 LANG=en_US.UTF-8 echo 'Loading initial ramdisk ...' initrd /initramfs-3.3.6-3.fc16.x86_64.img } menuentry 'Fedora (3.3.5-2.fc16.x86_64)' --class fedora --class gnu-linux --class gnu --class os { load_video set gfxpayload=keep insmod gzio insmod part_gpt insmod ext2 set root='(hd0,gpt2)' search --no-floppy --fs-uuid --set=root 3521a578-5829-4fb4-a485-8c097df77d07 echo 'Loading Fedora (3.3.5-2.fc16.x86_64)' linux /vmlinuz-3.3.5-2.fc16.x86_64 root=UUID=57459a16-97a0-46a4-8e71-cc3ec0ca4a3e ro KEYTABLE=dvorak rd.lvm=0 rd.dm=0 quiet SYSFONT=latarcyrheb-sun16 rhgb rd.md.uuid=60956781:734d95ba:424311e2:796702a7 rd.luks=0 LANG=en_US.UTF-8 echo 'Loading initial ramdisk ...' initrd /initramfs-3.3.5-2.fc16.x86_64.img } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/30_os-prober ### ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### ### BEGIN /etc/grub.d/90_persistent ### ### END /etc/grub.d/90_persistent ### Anyone got a clue about why it still only references the fc16 kernel, and how I can upgrade it. My system is using raid1 on 2 disks, but /boot is not using raid. 
Mount for /boot is: /dev/sda2 on /boot type ext2 (rw,relatime,seclabel,user_xattr,acl,barrier=1) And / (The only other filesystem I have) is mounted as /dev/md0 on / type ext4 (rw,relatime,seclabel,user_xattr,acl,barrier=1,data=ordered)
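
    The grub.cfg shown above contains only fc16 menu entries even though the fc17 kernel files are in /boot, so one plausible explanation is simply that the GRUB 2 configuration was never regenerated after the new kernel was installed. Regenerating it by hand is a low-risk first check:

        grub2-mkconfig -o /boot/grub2/grub.cfg
        grep menuentry /boot/grub2/grub.cfg   # should now list vmlinuz-3.3.7-1.fc17.x86_64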

    Read the article

  • Where's the Swap File/Partition?

    - by chrisbunney
    I'm investigating the virtual memory configuration of a Debian based Amazon EC2 instance, and as my background isn't in system admin, I'm slightly confused by what I'm seeing. We're using MongoDB, and the monitoring server we have indicates that the Mongo process is using about 20GB of swap space, however I can't figure out where this is located on the server. As far as I can tell from using the various suggested methods from Google, there is either a much smaller amount, or none at all. top indicates that there is 1.8GB of swap memory: top - 15:35:21 up 6 days, 3:23, 1 user, load average: 1.60, 1.43, 1.37 Tasks: 47 total, 2 running, 45 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 1.3%sy, 0.0%ni, 14.7%id, 83.8%wa, 0.0%hi, 0.0%si, 0.1%st Mem: 3928924k total, 2855572k used, 1073352k free, 640564k buffers Swap: 0k total, 0k used, 0k free, 1887788k cached swapon -s doesn't seem to think there's any swap space: Filename Type Size Used Priority free -m doesn't think there's any swap either: total used free shared buffers cached Mem: 3836 3663 172 0 626 2701 -/+ buffers/cache: 336 3500 Swap: 0 0 0 And neither does vmstat: procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 3 0 66224 641372 2874744 0 0 21 5012 21 33 2 2 76 19 But cat /etc/fstab thinks there is a swap partition: /dev/xvda1 / ext3 defaults 1 1 /dev/xvda2 /mnt ext3 defaults 0 0 /dev/xvda3 swap swap defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 However df -k gives no indication of the xvda3 partition: Filesystem 1K-blocks Used Available Use% Mounted on /dev/xvda1 16513960 15675324 0 100% / tmpfs 1964460 8 1964452 1% /lib/init/rw udev 1914148 28 1914120 1% /dev tmpfs 1964460 4 1964456 1% /dev/shm So I really don't know what to make of this, because I appear to have a process using about 10 times more virtual memory than what might be available, and I have no idea where this virtual memory is on the system. I'm probably misinterpreting the output of the tools, so I'd be grateful if someone would be able to set me straight: What have I got wrong, what's the right interpretation, and how do you reach that interpretation? EDIT0: We use 10gen's MMS for monitoring the database, the relevant section for memory from the last data point is: "mem": { "virtual": 20749, "bits": 64, "supported": true, "mappedWithJournal": 20376, "mapped": 10188, "resident": 1219 }, This JSON is specific to the database process (I believe) rather than the system as a whole. fdisk -l /dev/xvda outputs... nothing? 
I tried each of the 3 xvda entries in /etc/fstab as well: root@ip:~# fdisk -l /dev/xvda1 Disk /dev/xvda1: 34.4 GB, 34359738368 bytes 255 heads, 63 sectors/track, 4177 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/xvda1 doesn't contain a valid partition table root@ip:~# fdisk -l /dev/xvda2 root@ip:~# fdisk -l /dev/xvda3 root@ip:~# Edit1: Output of cat /proc/meminfo for the sake of completeness: MemTotal: 3928924 kB MemFree: 726600 kB Buffers: 648368 kB Cached: 2216556 kB SwapCached: 0 kB Active: 1945100 kB Inactive: 994016 kB Active(anon): 60476 kB Inactive(anon): 12952 kB Active(file): 1884624 kB Inactive(file): 981064 kB Unevictable: 0 kB Mlocked: 0 kB SwapTotal: 0 kB SwapFree: 0 kB Dirty: 387180 kB Writeback: 0 kB AnonPages: 73380 kB Mapped: 1188260 kB Shmem: 48 kB Slab: 149768 kB SReclaimable: 146076 kB SUnreclaim: 3692 kB KernelStack: 1104 kB PageTables: 16096 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 1964460 kB Committed_AS: 305572 kB VmallocTotal: 34359738367 kB VmallocUsed: 16760 kB VmallocChunk: 34359721448 kB HardwareCorrupted: 0 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 3932160 kB DirectMap2M: 0 kB
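    A rough, non-authoritative sketch of how one could reconcile the fstab entry with the missing swap — assuming /dev/xvda3 really exists on this instance and holds nothing else (both assumptions, since fdisk printed nothing for it):

      # Does the kernel even see a third partition on xvda?
      ls -l /dev/xvda3
      grep xvda /proc/partitions
      # If the device node exists but was never initialised/enabled as swap:
      sudo mkswap /dev/xvda3    # destroys whatever is on xvda3 - only if it is truly unused
      sudo swapon /dev/xvda3
      swapon -s && free -m      # should now report non-zero swap

    Separately, the ~20GB reported by MMS is mongod's virtual address space (largely memory-mapped data files plus the journal), not swap consumption, so a large "virtual" figure does not by itself imply 20GB of swap in use.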

    Read the article

  • No device file for partition on logical volume (Linux LVM)

    - by Brian
    I created a logical volume (scandata) containing a single ext3 partition. It is the only logical volume in its volume group (case4t). Said volume group is comprised by 3 physical volumes, which are three primary partitions on a single block device (/dev/sdb). When I created it, I could mount the partition via the block device /dev/mapper/case4t-scandatap1. Since last reboot the aforementioned block device file has disappeared. It may be of note -- I'm not sure -- that my superior (a college professor) had prompted this reboot by running sudo chmod -R [his name] /usr/bin, which obliterated all suid in its path, preventing the both of us from sudo-ing. That issue has been (temporarily) rectified via this operation. Now I'll cut the chatter and get started with the terminal dumps: $ sudo pvs; sudo vgs; sudo lvs Logging initialised at Sat Jan 8 11:42:34 2011 Set umask to 0077 Scanning for physical volume names PV VG Fmt Attr PSize PFree /dev/sdb1 case4t lvm2 a- 819.32G 0 /dev/sdb2 case4t lvm2 a- 866.40G 0 /dev/sdb3 case4t lvm2 a- 47.09G 0 Wiping internal VG cache Logging initialised at Sat Jan 8 11:42:34 2011 Set umask to 0077 Finding all volume groups Finding volume group "case4t" VG #PV #LV #SN Attr VSize VFree case4t 3 1 0 wz--n- 1.69T 0 Wiping internal VG cache Logging initialised at Sat Jan 8 11:42:34 2011 Set umask to 0077 Finding all logical volumes LV VG Attr LSize Origin Snap% Move Log Copy% Convert scandata case4t -wi-a- 1.69T Wiping internal VG cache $ sudo vgchange -a y Logging initialised at Sat Jan 8 11:43:14 2011 Set umask to 0077 Finding all volume groups Finding volume group "case4t" 1 logical volume(s) in volume group "case4t" already active 1 existing logical volume(s) in volume group "case4t" monitored Found volume group "case4t" Activated logical volumes in volume group "case4t" 1 logical volume(s) in volume group "case4t" now active Wiping internal VG cache $ ls /dev | grep case4t case4t $ ls /dev/mapper case4t-scandata control $ sudo fdisk -l /dev/case4t/scandata Disk /dev/case4t/scandata: 1860.5 GB, 1860584865792 bytes 255 heads, 63 sectors/track, 226203 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x00049bf5 Device Boot Start End Blocks Id System /dev/case4t/scandata1 1 226203 1816975566 83 Linux $ sudo parted /dev/case4t/scandata print Model: Linux device-mapper (linear) (dm) Disk /dev/mapper/case4t-scandata: 1861GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 32.3kB 1861GB 1861GB primary ext3 $ sudo fdisk -l /dev/sdb Disk /dev/sdb: 1860.5 GB, 1860593254400 bytes 255 heads, 63 sectors/track, 226204 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x00000081 Device Boot Start End Blocks Id System /dev/sdb1 1 106955 859116006 83 Linux /dev/sdb2 113103 226204 908491815 83 Linux /dev/sdb3 106956 113102 49375777+ 83 Linux Partition table entries are not in disk order $ sudo parted /dev/sdb print Model: DELL PERC 6/i (scsi) Disk /dev/sdb: 1861GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 32.3kB 880GB 880GB primary reiserfs 3 880GB 930GB 50.6GB primary 2 930GB 1861GB 930GB primary I find it a bit strange that partition one above is said to be reiserfs, or if it matters -- it was previously reiserfs, but LVM recognizes it as a PV. To reiterate, neither /dev/mapper/case4t-scandatap1 (which I had used previously) nor /dev/case4t/scandata1 (as printed by fdisk) exists. 
And /dev/case4t/scandata (no partition number) cannot be mounted: $sudo mount -t ext3 /dev/case4t/scandata /mnt/new mount: wrong fs type, bad option, bad superblock on /dev/mapper/case4t-scandata, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so All I get on syslog is: [170059.538137] VFS: Can't find ext3 filesystem on dev dm-0. Thanks in advance for any help you can offer, Brian P.S. I am on Ubuntu GNU/Linux 2.6.28-11-server (Jaunty) (out of date, I know -- that's on the laundry list).
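    One possible direction, sketched with standard tools and offered as a guess rather than a confirmed fix: since the logical volume itself carries an msdos partition table, the partition inside it typically needs to be mapped explicitly, which kpartx (from the device-mapper/multipath tools) can do:

      # Create device-mapper nodes for partitions found inside the LV
      sudo kpartx -av /dev/case4t/scandata
      ls /dev/mapper/ | grep scandata    # expect something like case4t-scandata1 (a trailing 'p' may appear)
      # Then try mounting the mapped partition rather than the whole LV
      sudo mount -t ext3 /dev/mapper/case4t-scandata1 /mnt/new
      # Remove the mappings again later
      sudo kpartx -dv /dev/case4t/scandata

    The exact mapped name above is an assumption; the kpartx -av output shows what was actually created.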

    Read the article

  • MySQL 5.5 (Percona) assertion failure log.. what would cause this?

    - by Tom Geee
    256GB, 64 Core , AMD running Ubuntu 12.04 with Percona MySQL 5.5.28. Below is the assertion failure. We just had a second assertion failure (different "in file", position, etc) while running a large set of inserts. After the first failure, MySQL restarted after a reboot only - after continuously looping on the same error after trying to recover. I decided to do a mysqlcheck with -o for optimize. Since these are all Innodb tables (very large tables, 60+GB) this would do an alter table on all tables. In the middle of this , the below assertion failure happened again: 121115 22:30:31 InnoDB: Assertion failure in thread 140086589445888 in file btr0pcur.c line 452 InnoDB: Failing assertion: btr_page_get_prev(next_page, mtr) == buf_block_get_page_no(btr_pcur_get_block(cursor)) InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html InnoDB: about forcing recovery. 03:30:31 UTC - mysqld got signal 6 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. Please help us make Percona Server better by reporting any bugs at http://bugs.percona.com/ key_buffer_size=536870912 read_buffer_size=131072 max_used_connections=404 max_threads=500 thread_count=90 connection_count=90 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1618416 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. Thread pointer: 0x14edeb710 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 7f687366ce80 thread_stack 0x30000 /usr/sbin/mysqld(my_print_stacktrace+0x2e)[0x7b52ee] /usr/sbin/mysqld(handle_fatal_signal+0x484)[0x68f024] /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f9cbb23fcb0] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35)[0x7f9cbaea6425] /lib/x86_64-linux-gnu/libc.so.6(abort+0x17b)[0x7f9cbaea9b8b] /usr/sbin/mysqld[0x858463] /usr/sbin/mysqld[0x804513] /usr/sbin/mysqld[0x808432] /usr/sbin/mysqld[0x7db8bf] /usr/sbin/mysqld(_Z13rr_sequentialP11READ_RECORD+0x1d)[0x755aed] /usr/sbin/mysqld(_Z17mysql_alter_tableP3THDPcS1_P24st_ha_create_informationP10TABLE_LISTP10Alter_infojP8st_orderb+0x216b)[0x60399b] /usr/sbin/mysqld(_Z20mysql_recreate_tableP3THDP10TABLE_LIST+0x166)[0x604bd6] /usr/sbin/mysqld[0x647da1] /usr/sbin/mysqld(_ZN24Optimize_table_statement7executeEP3THD+0xde)[0x64891e] /usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x1168)[0x59b558] /usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x30c)[0x5a132c] /usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1620)[0x5a2a00] /usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x14f)[0x63ce6f] /usr/sbin/mysqld(handle_one_connection+0x51)[0x63cf31] /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f9cbb237e9a] /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f9cbaf63cbd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (7f6300004b60): is an invalid pointer Connection ID (thread ID): 876 Status: NOT_KILLED You may download the Percona Server operations manual by visiting http://www.percona.com/software/percona-server/. You may find information in the manual which will help you identify the cause of the crash. 121115 22:31:07 [Note] Plugin 'FEDERATED' is disabled. 121115 22:31:07 InnoDB: The InnoDB memory heap is disabled 121115 22:31:07 InnoDB: Mutexes and rw_locks use GCC atomic builtins .. Then it recovered , without a reboot this time. from the log, what would cause this? I am currently running a dump to see if the problem resurfaces. edit: data partition is all in / since this is a hosted, defaulted file system unfortunately: Filesystem Size Used Avail Use% Mounted on /dev/vda3 742G 445G 260G 64% / udev 121G 4.0K 121G 1% /dev tmpfs 49G 248K 49G 1% /run none 5.0M 0 5.0M 0% /run/lock none 121G 0 121G 0% /run/shm /dev/vda1 99M 54M 40M 58% /boot my.cnf: [client] port = 3306 socket = /var/run/mysqld/mysqld.sock [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] skip-name-resolve innodb_file_per_table default_storage_engine=InnoDB user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /data/mysql tmpdir = /tmp skip-external-locking key_buffer = 512M max_allowed_packet = 128M thread_stack = 192K thread_cache_size = 64 myisam-recover = BACKUP max_connections = 500 table_cache = 812 table_definition_cache = 812 #query_cache_limit = 4M #query_cache_size = 512M join_buffer_size = 512K innodb_additional_mem_pool_size = 20M innodb_buffer_pool_size = 196G #innodb_file_io_threads = 4 #innodb_thread_concurrency = 12 innodb_flush_log_at_trx_commit = 1 innodb_log_buffer_size = 8M innodb_log_file_size = 1024M innodb_log_files_in_group = 2 innodb_max_dirty_pages_pct = 90 innodb_lock_wait_timeout = 120 log_error = /var/log/mysql/error.log long_query_time = 5 slow_query_log = 1 slow_query_log_file = /var/log/mysql/slowlog.log [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] [isamchk] key_buffer = 16M
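    Not a diagnosis, but since the error log itself points at the InnoDB forced-recovery documentation, a cautious dump-and-rebuild sketch might look like the following (the config path and service name are assumptions for this Ubuntu/Percona install, and innodb_force_recovery should start at 1 and only be raised if the server still refuses to stay up):

      # Add a recovery level under [mysqld] (assumes /etc/mysql/my.cnf is the active config)
      sudo sed -i '/^\[mysqld\]/a innodb_force_recovery = 1' /etc/mysql/my.cnf
      sudo service mysql restart
      # Dump everything while the server is in recovery mode (backup path is a placeholder)
      mysqldump --all-databases --routines > /data/backup/all-databases.sql
      # Remove the option again before rebuilding/reimporting
      sudo sed -i '/innodb_force_recovery/d' /etc/mysql/my.cnf
      sudo service mysql restart

    A hardware check (memtester, SMART on the underlying disks) would also be worth scheduling, since the crash handler output above explicitly lists malfunctioning hardware as a possible cause.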

    Read the article

  • Linux not buffering block I/O when the device is not "in use" (i.e. mounted)

    - by Radek Hladík
    I am installing a new server and I've found an interesting issue. The server is running Fedora 19 (3.11.7-200.fc19.x86_64 kernel) and is supposed to host a few KVM/Qemu virtual servers (mail server, file server, etc.). The HW is Intel(R) Xeon(R) CPU 5160 @ 3.00GHz with 16GB RAM. One of the most important features will be a Samba server, and we have decided to make it a virtual machine with almost direct access to the disks. So the real HDD is cached on an SSD (via bcache), then raided with md, and the final device is exported into the virtual machine via virtio. The virtual machine is again Fedora 19 with the same kernel. One important thing to find out is whether the virtualization layer introduces significant overhead into disk I/O. So far I've been able to get up to 180MB/s in the VM and up to 220MB/s on real HW (on the SSD disk). I am still not sure why the overhead is so big, but it is more than the network can handle, so I do not care so much. The interesting thing is that I've found that disk reads are not buffered in the VM unless I create and mount a FS on the disk or use the disk somehow. Simply put: let's do a dd to read the disk for the first time (/dev/vdd is an old Raptor disk; 70MB/s is its real speed): [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 36.8038 s, 71.2 MB/s Buffers: 14444 kB Rereading the data shows that it is cached somewhere, but not in the VM's buffers. Also, the speed increased to "only" 500MB/s. The VM has 4GB of RAM (more than the test file) [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 5.16016 s, 508 MB/s Buffers: 14444 kB [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 5.05727 s, 518 MB/s Buffers: 14444 kB Now let's mount the FS on /dev/vdd and try the dd again: [root@localhost ~]# mount /dev/vdd /mnt/tmp [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 4.68578 s, 559 MB/s Buffers: 2574592 kB [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 1.50504 s, 1.7 GB/s Buffers: 2574592 kB While the first read was the same, all 2.6GB got buffered and the next read ran at 1.7GB/s. And when I unmount the device: [root@localhost ~]# umount /mnt/tmp [root@localhost ~]# cat /proc/meminfo | grep Buffers Buffers: 14452 kB [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 5.10499 s, 514 MB/s Buffers: 14468 kB The bcache was disabled while testing, and the results are the same on faster (newer) HDDs and on the SSD (except for the initial read speed, of course). To sum it up: when I read from the device via dd the first time, it gets read from the disk. The next time I reread it, it gets cached in the host but not in the guest (that's actually the same issue, more on that later). When I mount the filesystem but read the device directly, it gets cached in the VM (via buffers). As soon as I stop "using" it, the buffers are discarded and the device is no longer cached in the VM. When I looked at the buffers value on the host, I realized the situation is the same: block I/O gets buffered only while the disk is in use, which in this case means "exported to a VM". 
On the host, after all the measurements were done: 3165552 buffers. On the host, after the VM shutdown: 119176 buffers. I know it is not important, as the disks will be mounted all the time, but I am curious and would like to know why it works like this.
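    A small, speculative experiment (not from the original post) that might separate "device open" from "filesystem mounted": hold the block device open with an idle reader and repeat the dd runs. If buffering depends only on the device having an active holder, the second read should come from the guest's page cache even without any filesystem mounted:

      # Keep /dev/vdd open read-only for the duration of the test
      sleep 600 < /dev/vdd &
      HOLDER=$!
      dd if=/dev/vdd of=/dev/null bs=256k count=10000   # first read, from disk
      grep Buffers /proc/meminfo
      dd if=/dev/vdd of=/dev/null bs=256k count=10000   # re-read: cached this time?
      grep Buffers /proc/meminfo
      kill $HOLDER
      grep Buffers /proc/meminfo                        # buffers dropped after the last close?

    This only probes the observed behaviour; the presumed mechanism (the kernel invalidating a block device's page cache once its last opener goes away) is an assumption to verify, not something stated in the question.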

    Read the article

  • Minecraft server Rkit ubuntu upstart [closed]

    - by user1637491
    I have an Intel server running Ubuntu Server 12.04.1 I am working on moving my CraftBukkit Minecraft Server to the new platform. I read the upstart ubuntu cookbook and wrote a .conf file I have a minecraft user (named minecraft) and its home Directory is /home/minecraft it contains prwxrwxrwx 1 minecraft minecraft 0 Sep 19 14:49 command-fifo drwx------ 8 minecraft minecraft 4096 Sep 19 14:50 HDsaves drwx------ 2 minecraft minecraft 4096 Aug 31 15:13 logrolls -rw-r--r-- 1 root root 5 Sep 19 14:49 minecraft.pid drwxrwxrwx 8 minecraft minecraft 180 Sep 19 14:49 ramdisk -rw------- 1 minecraft minecraft 119 Sep 19 10:34 save.sh drwxrwxrwx 9 minecraft minecraft 4096 Sep 19 14:50 server -rw-rw-r-- 1 minecraft minecraft 44 Aug 31 11:40 shutdown.sh the server directory contains drwxrwxrwx 6 minecraft minecraft 4096 Aug 30 13:32 Backups -rwxrwxrwx 1 minecraft minecraft 0 Sep 18 12:26 banned-ips.txt -rwxrwxrwx 1 minecraft minecraft 17 Sep 18 12:26 banned-players.txt drwxrwxrwx 4 minecraft minecraft 4096 Aug 30 12:26 buildcraft -rwxrwxrwx 1 minecraft minecraft 1447 Sep 18 12:26 bukkit.yml -rwxrwxrwx 1 minecraft minecraft 0 Aug 30 11:05 command-fifo drwxrwxrwx 2 minecraft minecraft 4096 Aug 30 12:26 config lrwxrwxrwx 1 minecraft minecraft 23 Sep 19 14:49 craftbukkit.jar -> ramdisk/craftbukkit.jar -rwxrwxrwx 1 minecraft minecraft 17419 Sep 18 12:26 ForgeModLoader-0.log -rwxrwxrwx 1 minecraft minecraft 17420 Sep 18 12:24 ForgeModLoader-1.log -rwxrwxrwx 1 minecraft minecraft 17420 Sep 18 11:53 ForgeModLoader-2.log -rwxrwxrwx 1 minecraft minecraft 2576 Aug 30 11:05 help.yml drwxrwxrwx 2 minecraft minecraft 4096 Aug 30 12:31 lib drwxrwxrwx 3 minecraft minecraft 4096 Sep 19 14:49 logrolls -rwxrwxrwx 1 minecraft minecraft 200035 Sep 4 17:58 Minecraft_RKit.jar lrwxrwxrwx 1 minecraft minecraft 12 Sep 19 14:49 mods -> ramdisk/mods -rwxrwxrwx 1 minecraft minecraft 5 Sep 18 12:26 ops.txt -rwxrwxrwx 1 minecraft minecraft 0 Aug 30 11:05 permissions.yml lrwxrwxrwx 1 minecraft minecraft 15 Sep 19 14:49 plugins -> ramdisk/plugins lrwxrwxrwx 1 minecraft minecraft 16 Sep 19 14:49 redpower -> ramdisk/redpower -rw-r--r-- 1 root root 255 Sep 19 15:10 server.log -rwxrwxrwx 1 minecraft minecraft 464 Sep 8 11:09 server.properties drwxrwxrwx 3 minecraft minecraft 4096 Sep 5 16:05 SpaceModule drwxrwxrwx 3 minecraft minecraft 4096 Aug 30 13:07 toolkit -rwxrwxrwx 1 minecraft minecraft 1433 Sep 14 21:04 wepif.yml -rwxrwxrwx 1 minecraft minecraft 0 Sep 18 12:26 white-list.txt lrwxrwxrwx 1 minecraft minecraft 13 Sep 19 14:49 world -> ramdisk/world lrwxrwxrwx 1 minecraft minecraft 20 Sep 19 14:49 world_nether -> ramdisk/world_nether lrwxrwxrwx 1 minecraft minecraft 21 Sep 19 14:49 world_the_end -> ramdisk/world_the_end the startup .conf file: # Starts the minecraft server after loading JRE from ramdisk # # for now im still working on it description "minecraft-server" start on filesystem or runlevel [2345] stop on runlevel [!2345] oom score -999 kill timeout 60 pre-start script sh /usr/lib/jvm/java.sh end script script cd /home/minecraft echo "$(date) Starting minecraft" sudo cp -r /home/minecraft/HDsaves/* ramdisk sudo chown -R minecraft:minecraft ramdisk sudo chmod -R 777 ramdisk sudo ln -sf ramdisk/* server sudo chown -R minecraft:minecraft server sudo chmod -R 777 server sudo mv server/server.log server/logrolls/ zip server/logrolls/temp.zip server/logrolls/server.log sudo mv server/logrolls/temp.zip server/logrolls/"$(date)".log.zip sudo rm server/logrolls/server.log sudo rm -f command-fifo sudo mkfifo command-fifo sudo chown 
minecraft:minecraft command-fifo sudo chmod 777 command-fifo echo "$(date) Root commands finished" echo "$(date) Starting Wrapper" cd server sudo -u minecraft java -Xmx30M -Xms30M -XX:MaxPermSize=40M -Djava.awt.headless=true -jar Minecraft_RKit.jar timv:*spoilers* <> /home/minecraft/command-fifo & sudo echo $! >| /home/minecraft/minecraft.pid echo "$(date) Minecraft Started" end script pre-stop script cd /home/minecraft PID=`cat minecraft.pid` if [ "$PID" != "" ]; then echo "Stopping MineCraft Server PID=$PID" sudo echo save-all >> command-fifo sudo echo .stopwrapper >> command-fifo wait $PID sudo rm minecraft.pid sudo rsync -rt --delete ramdisk/* HDsaves/ echo "$(date) ramdisk save complete" echo "MineCraft save-shutdown complete." else echo "MineCraft not running" fi end script So when I start it up, the upstart-generated log says: Wed Sep 19 14:49:30 CDT 2012 Starting minecraft adding: server/logrolls/server.log (stored 0%) Wed Sep 19 14:49:56 CDT 2012 Root commands finished Wed Sep 19 14:49:56 CDT 2012 Starting Wrapper Wed Sep 19 14:49:56 CDT 2012 Minecraft Started
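    A few generic commands for poking at an Upstart job like this one — assuming the file above is saved as /etc/init/minecraft-server.conf (the filename is a guess; Upstart derives the job name from the file name, not from the description stanza):

      # Pick up changes to /etc/init/*.conf without rebooting
      sudo initctl reload-configuration
      # Start and inspect the job by its file name
      sudo start minecraft-server
      sudo status minecraft-server
      # On Ubuntu 12.04, job stdout/stderr usually ends up here
      sudo tail -n 50 /var/log/upstart/minecraft-server.log

    Checking that log (rather than only the script's own echo output) is often the quickest way to see where a job's script section exits early.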

    Read the article

  • Does this prove a network bandwidth bottleneck?

    - by Yuji Tomita
    I've incorrectly assumed that my internal AB testing means my server can handle 1k concurrency @3k hits per second. My theory at the moment is that the network is the bottleneck: the server can't send data fast enough. External testing from blitz.io at 1k concurrency shows my hits/s capping off at 180, with pages taking longer and longer to respond as the server is only able to return 180 per second. I've served a blank file from nginx and benched it: it scales 1:1 with concurrency. Now to rule out IO / memcached bottlenecks (nginx normally pulls from memcached), I serve up a static version of the cached page from the filesystem. The results are very similar to my original test; I'm capped at around 180 RPS. Splitting the HTML page in half gives me double the RPS, so it's definitely limited by the size of the page. If I internally ApacheBench from the local server, I get consistent results of around 4k RPS on both the Full Page and the Half Page, at high transfer rates. Transfer rate: 62586.14 [Kbytes/sec] received If I AB from an external server, I get around 180RPS - same as the blitz.io results. How do I know it's not intentional throttling? If I benchmark from multiple external servers, all results become poor, which leads me to believe the problem is in MY server's outbound traffic, not a download speed issue with my benchmarking servers / blitz.io. So I'm back to my conclusion that my server can't send data fast enough. Am I right? Are there other ways to interpret this data? Is the solution/optimization to set up multiple servers + load balancing that can each serve 180 hits per second? I'm quite new to server optimization, so I'd appreciate any confirmation interpreting this data. Outbound traffic Here's more information about the outbound bandwidth: The network graph shows a maximum output of 16 Mb/s: 16 megabits per second. Doesn't sound like much at all. Due to a suggestion about throttling, I looked into this and found that Linode has a 50mbps cap (which I'm not even close to hitting, apparently). I had it raised to 100mbps. Since Linode caps my traffic, and I'm not even hitting it, does this mean that my server should indeed be capable of outputting up to 100mbps but is limited by some other internal bottleneck? I just don't understand how networks at this large of a scale work; can they literally send data as fast as they can read from the HDD? Is the network pipe that big? In conclusion 1: Based on the above, I'm thinking I can definitely raise my 180RPS by adding an nginx load balancer on top of a multi-nginx-server setup, at exactly 180RPS per server behind the LB. 2: If Linode has a 50/100mbit limit that I'm not hitting at all, there must be something I can do to hit that limit with my single-server setup. If I can read / transmit data fast enough locally, and Linode even bothers to have a 50mbit/100mbit cap, there must be an internal bottleneck that's not allowing me to hit those caps, and I'm not sure how to detect it. Correct? I realize the question is huge and vague now, but I'm not sure how to condense it. Any input is appreciated on any conclusion I've made.
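    Two quick, hedged checks that would separate a raw-bandwidth ceiling from an nginx/app limit. First, a back-of-envelope from the numbers above: the local AB run moved ~62,586 KB/s at roughly 4k requests/s, i.e. about 15-16 KB per page, so 180 req/s externally works out to roughly 2.8 MB/s ≈ 22-23 Mbit/s — the same ballpark as the 16 Mb/s peak on the graph (the per-page size is derived here, not measured). Second, a raw TCP throughput test that leaves nginx out entirely:

      # On the Linode (server side)
      iperf -s
      # From one of the external benchmarking hosts (hostname is a placeholder)
      iperf -c your.server.example.com -t 30 -P 4

    If iperf tops out near the same 16-25 Mbit/s range, the limit is in the network path rather than in nginx; if it gets close to 100 Mbit/s, the bottleneck is back inside the stack serving the page.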

    Read the article

  • XFS disk becomes unavailable after a while

    - by Guard
    Ubuntu 12.04 (but the same happened on 11.10 before upgrading). WD MyBook, 2TB, no RAID (or RAID0, not completely sure; in any case no mirroring, both 1TB disks are in use, mounted as a single device). Formatted as XFS, normally used for big movie files. Connected via FireWire 800. At some point the LED started going up and down as if the drive were constantly reading/writing, and the device started giving access errors. When unplugged (cable, then holding the power button for a while, then unplugging the power) and re-connected, it becomes available again. xfs_check ran with no results. xfs_repair did something, but it doesn't look like it fixed any errors. Then, after a massive read (checking a 1.5GB torrent file for integrity), it becomes unavailable again. Any ideas what's wrong? Drives? Cables? Motherboard? OS? UPD: not sure how relevant this is, but here is the dmesg output [14380.632816] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled [14380.633356] SGI XFS Quota Management subsystem [14421.812220] firewire_core: phy config: card 0, new root=ffc1, gap_count=5 [14441.890596] firewire_core: phy config: card 0, new root=ffc1, gap_count=5 [14441.896858] firewire_core: phy config: card 0, new root=ffc1, gap_count=5 [14453.895347] firewire_core: created device fw1: GUID 0090a99500a35518, S400, 9 config ROM retries [14453.904818] scsi6 : SBP-2 IEEE-1394 [14453.905014] scsi7 : SBP-2 IEEE-1394 [14454.139993] firewire_sbp2: fw1.0: logged in to LUN 0000 (0 retries) [14454.158769] scsi 6:0:0:0: Direct-Access WD My Book 1015 PQ: 0 ANSI: 4 [14454.159251] sd 6:0:0:0: Attached scsi generic sg3 type 0 [14454.162391] firewire_sbp2: fw1.1: logged in to LUN 0001 (0 retries) [14454.167453] sd 6:0:0:0: [sdc] 3907017568 512-byte logical blocks: (2.00 TB/1.81 TiB) [14454.178822] sd 6:0:0:0: [sdc] Write Protect is off [14454.178826] sd 6:0:0:0: [sdc] Mode Sense: 10 00 00 00 [14454.186830] scsi 7:0:0:1: Enclosure WD My Book Device 1015 PQ: 0 ANSI: 4 [14454.186995] scsi 7:0:0:1: Attached scsi generic sg4 type 13 [14454.190078] sd 6:0:0:0: [sdc] Cache data unavailable [14454.190087] sd 6:0:0:0: [sdc] Assuming drive cache: write through [14454.202176] sd 6:0:0:0: [sdc] Cache data unavailable [14454.202185] sd 6:0:0:0: [sdc] Assuming drive cache: write through [14454.239940] sdc: [mac] sdc1 sdc2 sdc3 sdc4 [14454.271262] sd 6:0:0:0: [sdc] Cache data unavailable [14454.271270] sd 6:0:0:0: [sdc] Assuming drive cache: write through [14454.271354] sd 6:0:0:0: [sdc] Attached SCSI disk [14454.272149] ses 7:0:0:1: Attached Enclosure device [14606.090024] XFS (sdc3): Mounting Filesystem [14612.048343] XFS (sdc3): Starting recovery (logdev: internal) [14620.697636] XFS (sdc3): Ending recovery (logdev: internal) [14748.120957] e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: Rx/Tx [14748.120963] e1000e 0000:00:19.0: eth0: 10/100 speed: disabling TSO [14752.568382] uhci_hcd 0000:00:1a.0: PCI INT A disabled [14752.568579] uhci_hcd 0000:00:1a.1: PCI INT B disabled [14752.568738] ehci_hcd 0000:00:1a.7: PCI INT C disabled [14752.568779] ehci_hcd 0000:00:1a.7: PME# enabled [14752.584526] uhci_hcd 0000:00:1d.1: PCI INT B disabled [14752.584689] uhci_hcd 0000:00:1d.2: PCI INT C disabled [14752.680079] ehci_hcd 0000:00:1a.7: BAR 0: set to [mem 0xe4641000-0xe46413ff] (PCI address [0xe4641000-0xe46413ff]) [14752.680104] ehci_hcd 0000:00:1a.7: restoring config space at offset 0xf (was 0x300, writing 0x30b) [14752.680136] ehci_hcd 0000:00:1a.7: restoring config space at offset 0x1 (was 0x2900000, writing 0x2900002) [14752.680170] ehci_hcd 
0000:00:1a.7: PME# disabled [14752.680182] ehci_hcd 0000:00:1a.7: PCI INT C -> GSI 18 (level, low) -> IRQ 18 [14752.680190] ehci_hcd 0000:00:1a.7: setting latency timer to 64 [14752.710334] uhci_hcd 0000:00:1a.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [14752.710342] uhci_hcd 0000:00:1a.0: setting latency timer to 64 [14752.749186] uhci_hcd 0000:00:1a.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17 [14752.749194] uhci_hcd 0000:00:1a.1: setting latency timer to 64 [14752.790231] uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 22 (level, low) -> IRQ 22 [14752.790239] uhci_hcd 0000:00:1d.1: setting latency timer to 64 [14752.829170] uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18 [14752.829178] uhci_hcd 0000:00:1d.2: setting latency timer to 64
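    For completeness, a cautious first-pass checklist for a drive that drops out under heavy reads — all generic tools, none of it specific to this MyBook, and the smartctl call in particular is a guess because SMART passthrough over a FireWire bridge often does not work:

      # With the volume unmounted, a read-only repair pass (reports, changes nothing)
      sudo umount /dev/sdc3
      sudo xfs_repair -n /dev/sdc3
      # Kernel messages around the moment the device disappears
      dmesg | tail -n 100
      # Drive health, if the bridge passes SMART through at all
      sudo smartctl -a /dev/sdc

    If xfs_repair -n comes back clean and the kernel log shows FireWire/SBP-2 resets rather than filesystem errors at the failure point, suspicion shifts from the filesystem to the enclosure, cable, or the drive itself.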

    Read the article
