Search Results

Search found 26263 results on 1051 pages for 'linux guest'.


  • How to Implement Web Based Find File Database Via Text Search

    - by neversaint
    I have a series of files like these:

        foo1.txt.gz
        foo2.txt.gz
        bar1.txt.gz
        ...etc...

    and a tabular file that describes them:

        foo1 - Explain foo1
        foo2 - Explain foo2
        bar1 - Explain bar1
        ...etc...

    What I want is a website with a simple search bar that lets people type foo1, or just foo, and get back the matching gzipped file(s) together with their explanations. What's the best way to implement this, and what kind of tools should I use? Sorry, I am totally new to this area.
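
    The lookup itself is trivial once the description file exists; a minimal shell sketch (the file name descriptions.txt and the one-entry-per-line layout are assumptions for illustration, not anything given in the question):

        #!/bin/sh
        # search.sh QUERY -- list every entry whose file name starts with QUERY.
        # Assumes descriptions.txt holds lines like: foo1 - Explain foo1
        # and that the matching gzipped files live alongside it as NAME.txt.gz.
        grep -i "^$1" descriptions.txt

    Wrapping that lookup (or an equivalent in-language string match) in any small web framework, plus a download link per matching NAME.txt.gz, would give the search bar described above.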

  • Setting up separate ctags db's for C/C++ standard libs, boost, and third party libs

    - by Robert S. Barnes
    I want to set up separate ctags databases for the various libraries in /usr/include/ for use with OmniCppComplete. The idea is to be able to pull in only the libraries needed for a particular project in the target language - C or C++. For example, I'd like to have one database for the standard C libraries; one for system libraries that might be used by either C or C++ programs (sockets/networking come to mind); one for the standard C++ libs, STL, and Boost; and then other databases for various third-party libraries such as Qt or glib. Then I could pull something in simply by typing :set tags+=~/.vim/somelib.tags in vim. I assume that everything related to the C++ stdlib and STL is under /usr/include/c++ and that Boost is all under /usr/include/boost. Unfortunately, the standard C libs and system libs seem to be dumped directly into /usr/include/ along with a variety of other stuff. How can I get a list of which files and directories belong to which libs? I'm on Ubuntu 8.04.
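
    A sketch of building one such database with Exuberant Ctags, plus the Ubuntu way of mapping headers back to the packages that own them (the output file names are illustrative):

        # Standalone tags database for Boost, with the extra fields
        # OmniCppComplete generally wants:
        ctags -R -f ~/.vim/boost.tags --languages=c++ \
              --c++-kinds=+p --fields=+iaS --extra=+q /usr/include/boost/

        # Which package owns a given header in /usr/include?
        dpkg -S /usr/include/sys/socket.h
        # All headers shipped by one package, e.g. the C library:
        dpkg -L libc6-dev | grep '^/usr/include'

    Repeating the dpkg -L query per development package gives exactly the files-per-library list asked about, and each resulting list can be fed to ctags -L - to build that library's own database.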

  • How to Plot a "Reverse" Cumulative Frequency Graph With ECDF

    - by neversaint
    I have no problem plotting the following cumulative frequency graph:

        library(Hmisc)
        pre.test <- rnorm(100,50,10)
        post.test <- rnorm(100,55,10)
        x <- c(pre.test, post.test)
        g <- c(rep('Pre',length(pre.test)),rep('Post',length(post.test)))
        Ecdf(x, group=g, what="f", xlab='Test Results', label.curves=list(keys=1:2))

    But I want the graph to show the "reverse" cumulative frequency of the values in x (i.e. something equivalent to what="1-f"). Is there a way to do it? Suggestions in R outside of Hmisc are also very much welcome.

  • Comparing two files and merging the data

    - by Ganz Ricanz
    I have the files below.

    total.txt:

        order1,5,item1
        order2,6,item2
        order3,7,item3
        order4,6,item4
        order8,9,item8

    changed.txt:

        order3,8,item3
        order8,12,item8

    total.txt is the full order data and changed.txt holds the recently changed rows. I want to merge the recent changes into the total, producing this output:

    Output.txt:

        order1,5,item1
        order2,6,item2
        order3,8,item3
        order4,6,item4
        order8,12,item8

    Note: the 2nd column of the 3rd and 5th rows of total.txt is updated from changed.txt. I have used the nawk below to compare the first column, but I am not able to print the merged result. Please help complete the command:

        nawk -F"," 'NR==FNR {a[$1]=$2;next} ($1 in a) "print??"' total.txt changed.txt
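
    One way to finish it (a sketch, not the only idiom): read changed.txt first and remember each changed row by its key, then walk total.txt printing either the replacement or the original line. Note that the file order is reversed relative to the attempt above:

        nawk -F"," 'NR==FNR { a[$1] = $0; next }   # pass 1: stash changed rows by order id
                    $1 in a { print a[$1]; next }  # pass 2: key was changed -- print new row
                    { print }                      # otherwise keep the original row
                   ' changed.txt total.txt > Output.txt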

  • Kernel error causing cpu to go into shutdown state

    - by EpsilonVector
    What kind of kernel error can cause the CPU to go into a shut-down state? I'm doing a homework assignment in my OS course, and we made changes in sched.c (adding a new scheduling policy, which involved adding another prio_array to the queue and switching between them when needed). Processes using this policy cause the CPU to enter a shut-down state when they finish. Any suggestions where to look?

  • overview/history of resident memory usage

    - by kapet
    I have a fairly complicated program (Python with SWIG'ed C++ code, a long-running server) that shows constantly growing resident memory usage. I've been digging for the leak with the usual tools (valgrind, Python's gc module, etc.) but to no avail so far. I'm a bit afraid that the actual problem is memory fragmentation within Python and/or libc-managed memory. Anyway, my question is more specific right now: is there a tool to visualize resident memory usage and, ideally, show how it develops over time? I think the raw data is in /proc/$PID/smaps, but I was hoping for some tool that shows me a nice graph of the amounts used by mmap'ed files vs. anonymous mmap'ed memory vs. heap over time, so that it's easier to see (literally) what's changing. I couldn't find anything, though. Does anybody know of a ready-to-use tool that graphs memory usage over space and time in an intuitive way?
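
    For collecting the raw numbers over time, something as simple as the following sketch works (it assumes a kernel whose /proc/$PID/smaps reports Rss: lines per mapping; extending the awk pattern to Private_Dirty:, Shared_Clean:, etc. splits the total by category):

        # Log the total resident size (kB) of process $PID once a minute.
        while sleep 60; do
            printf '%s %s\n' "$(date +%s)" \
                "$(awk '/^Rss:/ { sum += $2 } END { print sum }' /proc/$PID/smaps)"
        done >> rss-over-time.log

    The resulting two-column file plots directly in gnuplot or any spreadsheet, though it only answers the "over time" half of the question, not the per-mapping breakdown.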

  • PHP Default Timezone issue on Fedora + Zend Server CE

    - by Dave Morris
    I have Zend Server CE (PHP 5.2) installed on a Fedora VM, and I have the system timezone set to 'America/Chicago'. I have date.timezone = 'UTC' in my php.ini file, and when I call date_default_timezone_get(), or display date('T') on a web page, it says 'CDT'. The documentation on php.net for date_default_timezone_get() says it follows this order when choosing a default timezone:

        - Reading the timezone set using the date_default_timezone_set() function (if any)
        - Reading the TZ environment variable (if non empty)
        - Reading the value of the date.timezone ini option (if set)
        - Querying the host operating system (if supported and allowed by the OS)

    If I change the system timezone through the 'setup' GUI and reboot the server, date('T') returns whatever I changed the system timezone to, regardless of what php.ini says. I also don't have a TZ environment variable, and I am not currently using date_default_timezone_set() anywhere in my code. Any idea what might be going on? I realize I can always override the system timezone by calling date_default_timezone_set('UTC'), but I'd rather rely on the php.ini file if possible. Thanks for the help, Dave
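
    A quick diagnostic from the shell narrows this down (a sketch; note in particular that Zend Server ships its own php.ini, so the file being edited may not be the one the web server actually loads):

        php --ini                         # which php.ini (if any) the CLI reads
        php -i | grep -i 'date.timezone'  # the value PHP really has after startup
        echo "TZ=$TZ"                     # any environment override in this shell

    Comparing the date.timezone line in a phpinfo() page against the CLI output shows immediately whether the web server is loading a different configuration file than the one that was edited.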

  • CVS in cmd/gui works only the third time I run a command.

    - by Somebody still uses you MS-DOS
    I'm using CVS on the command line, inside my repository folder. When I run a CVS command, I get the following... twice:

        cvs [log aborted]: unrecognized auth response from localhost: -f
        [pserver aborted]: /opt/cvs/XXXXXX: no such repository

    The third time I run the command, it works with no problems. I tried a GUI client (CrossVC) and the same problem occurs. I tried inside gVim and Vim using VCSCommand and I have the same issue there as well. I've tested with different delays between each command, but I still get the same behaviour. I'm using a CVS configuration with stunnel. Why am I having problems with this setup? And why is it always exactly the third attempt that works?

  • EXMPP Building Error

    - by pradeepchhetri
    I am trying to install exmpp, but while building it I get the following error:

        exmpp_tls_openssl.c: In function 'init_library':
        exmpp_tls_openssl.c:622: error: 'SSL_OP_NO_TICKET' undeclared (first use in this function)
        exmpp_tls_openssl.c:622: error: (Each undeclared identifier is reported only once
        exmpp_tls_openssl.c:622: error: for each function it appears in.)
        make[2]: *** [exmpp_tls_openssl_la-exmpp_tls_openssl.lo] Error 1

    I have both openssl and openssl-dev installed. Can someone please tell me what the problem is?
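
    SSL_OP_NO_TICKET arrived with session-ticket support in OpenSSL 0.9.8f, so the usual culprit is development headers from an older release. Two quick checks (a diagnostic sketch):

        openssl version                                   # runtime library version
        grep -rn SSL_OP_NO_TICKET /usr/include/openssl/   # is the macro in the installed headers?

    If the grep finds nothing, the headers predate 0.9.8f and need upgrading (or the build needs pointing at a newer OpenSSL prefix).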

  • Are there any platforms where using structure copy on an fd_set (for select() or pselect()) causes problems?

    - by Jonathan Leffler
    The select() and pselect() system calls modify their arguments (the 'struct fd_set *' arguments), so the input value tells the system which file descriptors to check and the return values tell the programmer which file descriptors are currently usable. If you are going to call them repeatedly for the same set of file descriptors, you need to ensure that you have a fresh copy of the descriptors for each call. The obvious way to do that is to use a structure copy:

        struct fd_set ref_set_rd;
        struct fd_set ref_set_wr;
        struct fd_set ref_set_er;
        ...
        ...code to set the reference fd_set_xx values...
        ...
        while (!done)
        {
            struct fd_set act_set_rd = ref_set_rd;
            struct fd_set act_set_wr = ref_set_wr;
            struct fd_set act_set_er = ref_set_er;
            int bits_set = select(max_fd, &act_set_rd, &act_set_wr,
                                  &act_set_er, &timeout);
            if (bits_set > 0)
            {
                ...process the output values of act_set_xx...
            }
        }

    My question: are there any platforms where it is not safe to do a structure copy of the struct fd_set values as shown? I'm concerned lest there be hidden memory allocation or anything unexpected like that. (There are macros/functions FD_SET(), FD_CLR(), FD_ZERO() and FD_ISSET() to mask the internals from the application.) I can see that MacOS X (Darwin) is safe; other BSD-based systems are likely to be safe, therefore. You can help by documenting other systems that you know are safe in your answers. (I do have minor concerns about how well the struct fd_set would work with more than 8192 open file descriptors - the default maximum number of open files is only 256, but the maximum number is 'unlimited'. Also, since the structures are 1 KB, the copying code is not dreadfully efficient, but then running through a list of file descriptors to recreate the input mask on each cycle is not necessarily efficient either. Maybe you can't do select() when you have that many file descriptors open, though that is when you are most likely to need the functionality.) There's a related SO question - asking about 'poll() vs select()' - which addresses a different set of issues from this question.

  • Solr: What does this mean?

    - by Camran
    At the end of the README.txt file in the example directory under Solr, I find this note:

        NOTE: This Solr example server references SolrCell jars outside of the
        server directory with statements in the solrconfig.xml. If you make a
        copy of this example server and wish to use the
        ExtractingRequestHandler (SolrCell), you will need to copy the required
        jars into solr/lib or update the paths to the jars in your
        solrconfig.xml

    What does this mean? Do I have to make some adjustment before uploading Solr to my server? Also, if you know: how does solr-nightly differ from regular Solr? The tutorial refers to "solr-nightly.zip", but I can't find it in their download section.
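
    If the ExtractingRequestHandler is actually needed, the copy step looks roughly like this (a sketch based on the stock layout of a Solr 1.x download; directory and jar names vary between releases, so check the paths referenced in solrconfig.xml first):

        # From the root of the unpacked Solr distribution:
        mkdir -p example/solr/lib
        cp dist/apache-solr-cell-*.jar example/solr/lib/
        cp contrib/extraction/lib/*.jar example/solr/lib/

    If SolrCell is never used, the note can be ignored and nothing needs adjusting before deployment.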

  • Cannot set password for MySQL server on CentOS 6.2

    - by HarshanaD
    I have installed mysql and then mysql-server. I then start the mysql daemon and follow the steps below:

        # chkconfig --level 2345 mysqld on
        # mysqladmin -u root password testpassword

    But I cannot set the password, because it gives me this error:

        Access denied for user root@localhost (using password: no)

    I logged in as the root user and performed those steps. I even uninstalled the MySQL server and reinstalled it, but the same problem occurred.
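
    That error means the server believes root already has a password (mysqladmin was run without -p, so none was sent). One common recovery sketch for MySQL 5.x on CentOS, assuming root access to the box:

        service mysqld stop
        mysqld_safe --skip-grant-tables &     # start without privilege checks
        mysql -u root -e "UPDATE mysql.user SET Password=PASSWORD('testpassword')
                          WHERE User='root'; FLUSH PRIVILEGES;"
        service mysqld restart                # back to normal operation

    If an earlier install attempt set a known password, plain mysqladmin -u root -p password newpass (entering the old password at the prompt) also works, without the skip-grant dance.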

  • Optimal directory structure for filesystem

    - by Pankaj
    We have a large-scale web application with millions of customers. Each customer can have documents, organised by document type; we may have 20-30 types of documents. We are planning to use GlusterFS to store these documents. I'm trying to find out:

        - What are the limitations of Gluster as far as the number of files/directories goes?
        - Do we need a hierarchical directory structure?
        - What would the optimal directory structure be?

    Does this make sense?

        CustomerId/
            DocumentType/
                File1
                File2
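
    Independent of Gluster's own limits, huge flat directories are the classic pain point on any filesystem, so a hashed fan-out above the customer directories is a common pattern (a sketch; the 256-bucket width and paths are arbitrary illustrative choices, not a Gluster requirement):

        # Spread customers over 256 buckets so no single directory grows unbounded.
        id=1234567
        bucket=$(printf '%02x' $((id % 256)))
        mkdir -p "/data/docs/$bucket/$id/invoices"   # .../<bucket>/<customer>/<doctype>/

    With a layout like that, each directory level stays small, and the path for any document is still computable directly from the customer id and document type.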

  • How to store and echo multiple lines elegantly in bash?

    - by EmpireJones
    I'm trying to capture a block of text into a variable, with newlines maintained, and then echo it. However, the newlines don't seem to be maintained when I am either capturing the text or displaying it. Any ideas how I can accomplish this?

    Example:

        #!/bin/bash
        read -d '' my_var <<"BLOCK"
        this is
        a test
        BLOCK
        echo $my_var

    Output:

        this is a test

    Desired output:

        this is
        a test
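
    For what it's worth, the newlines do survive the read; they are lost at the unquoted expansion, where word splitting collapses all whitespace. Quoting the expansion preserves them:

        echo "$my_var"    # double quotes keep the embedded newlines intact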

  • How to grep lines having a specific format

    - by Nitin
    I have got a file with the following format:

        1234, 'US', 'IN',......
        324, 'US', 'IN',......
        ...
        ...
        53434, 'UK', 'XX', ....
        ...
        ...
        253, 'IN', 'UP',....
        253, 'IN', 'MH',....

    I want to extract only those lines having 'IN' as the 2nd field, i.e.:

        253, 'IN', 'UP',....
        253, 'IN', 'MH',....

    Can anyone please tell me a command to grep them?
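
    Two sketches that anchor the match to the second comma-separated field (both assume the single space after each comma shown in the sample):

        grep "^[^,]*, 'IN'," file.txt
        awk -F', ' '$2 == "\047IN\047"' file.txt   # \047 is the single-quote character

    A plain grep "'IN'" would also match lines where 'IN' appears in the third or a later field, which is why the patterns above pin it to the start of the line.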

  • How to implement/debug a sensor driver in ANDROID

    - by CVS-2600Hertz-wordpress-com
    Does anyone know of a walk-through or any example code for setting up sensors in Android? I have the drivers available to me, and I have implemented the sensors library as instructed in the Android reference, following the sensors.h template. I am still unable to get any response at the app level. How do I trace this issue? What might be the problem? Thanks in advance.

    UPDATE: Jorgesys's link below points to a great app for testing whether the sensor drivers are functioning properly. Not that I know they aren't functioning. Any ideas on where to dig?

  • filter log file by defining regexes

    - by fmpdmb
    I have some HUGE log files (50 MB; ~500K lines) that I need to start filtering some of the crap out of. The log files are produced by log4j and follow the basic pattern:

        [log-level] date-time class etc, etc log-message

    I'm looking for a way to specify a start regex and an end regex (or something similar) that will filter the matching entries out of the file, so I can more easily wade through these massive files. I'm sure I could write a Java program to accomplish this task, but I thought I'd ask the community before going down that path. Thanks in advance.
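
    Before reaching for Java, sed's range addressing may already be enough (a sketch; START_RE and END_RE stand in for whatever regexes delimit an unwanted entry):

        sed '/START_RE/,/END_RE/d' huge.log > filtered.log   # drop matching entries
        sed -n '/START_RE/,/END_RE/p' huge.log               # or print only the matches
        grep -v '^\[DEBUG\]' huge.log > filtered.log         # single-line entries are simpler

    The range form matters for multi-line entries such as stack traces, where a plain per-line grep -v would leave the continuation lines behind.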

  • Multi-threaded random_r is slower than the single-threaded version

    - by Nixuz
    The following program is essentially the same as the one described here. When I compile and run it with two threads (NTHREADS == 2), I get these run times:

        real    0m14.120s
        user    0m25.570s
        sys     0m0.050s

    When it is run with just one thread (NTHREADS == 1), I get significantly better run times even though it is only using one core:

        real    0m4.705s
        user    0m4.660s
        sys     0m0.010s

    My system is dual core, I know random_r is thread-safe, and I am pretty sure it is non-blocking. When the same program is run without random_r, using a calculation of cosines and sines as a replacement, the dual-threaded version runs in about half the time, as expected.

        #include <pthread.h>
        #include <stdlib.h>
        #include <stdio.h>

        #define NTHREADS 2
        #define PRNG_BUFSZ 8
        #define ITERATIONS 1000000000

        void* thread_run(void* arg)
        {
            int r1, i, totalIterations = ITERATIONS / NTHREADS;
            for (i = 0; i < totalIterations; i++) {
                random_r((struct random_data*)arg, &r1);
            }
            printf("%i\n", r1);
        }

        int main(int argc, char** argv)
        {
            struct random_data* rand_states = (struct random_data*)calloc(NTHREADS, sizeof(struct random_data));
            char* rand_statebufs = (char*)calloc(NTHREADS, PRNG_BUFSZ);
            pthread_t* thread_ids;
            int t = 0;
            thread_ids = (pthread_t*)calloc(NTHREADS, sizeof(pthread_t));
            /* create threads */
            for (t = 0; t < NTHREADS; t++) {
                initstate_r(random(), &rand_statebufs[t], PRNG_BUFSZ, &rand_states[t]);
                pthread_create(&thread_ids[t], NULL, &thread_run, &rand_states[t]);
            }
            for (t = 0; t < NTHREADS; t++) {
                pthread_join(thread_ids[t], NULL);
            }
            free(thread_ids);
            free(rand_states);
            free(rand_statebufs);
        }

    I am confused why, when generating random numbers, the two-threaded version performs much worse than the single-threaded one, considering random_r is meant to be used in multi-threaded applications.

  • ios::nocreate error while compiling a C++ code

    - by Mohit Nanda
    While compiling a package written in C++ on RHEL 5.0, I am getting the following error:

        error: nocreate is not a member of std::ios

    The source code it corresponds to is:

        ifstream tempStr(argv[4], ios::in|ios::nocreate);

    I have tried:

        # g++ -O -Wno-deprecated <file.cpp> -o <file>

    as well as:

        # g++ -O -o <file>

    Please suggest a solution.

  • Would it be simply better to use the system's functions rather than use the language?

    - by Nullw0rm
    There are many scenarios where I've questioned PHP's performance with some of its functions, and wondered whether I should build a complex class to handle specific things using its seemingly slow tools. For example, complex regular expression work handed to sed, with processing in awk, would seemingly be far faster than waiting for PHP's regular expression and string functions to parse the same data and eventually finish. If I were to do a lot of network tasks (such as MX lookups, DIGging, or simultaneous retrievals) I would rather pass them via system() and let the OS handle it itself. There are simply too many functions in PHP that are inefficient, resulting in slow pages, or that can be handled more easily by the OS. What are your opinions? Do you think I should do the hard work with the OS in my own custom functions?
