Search Results

Search found 24755 results on 991 pages for 'linux mom'.

Page 458/991 | < Previous Page | 454 455 456 457 458 459 460 461 462 463 464 465  | Next Page >

  • Cannot open /dev/rfcomm1 : Host is down

    - by srj0408
    I am working on raspberry PI and on Bluetooth. I am using old raspberry pi kernel as the new one has got some bugs that were not resolved with respect to the bluez daemon. At present my kernel version is 3.6.11. I am using a USB bluetooth dongle and my sole purpose is to auto connect the bluetooth dongle when ever it is in range. For that i think i have to run a script in the backend on RPI that will keep on checking the existence of usb bluetooth dongle. I started from the very scratch. I installed bluez daemon using apt-get install bluetooth bluez utils blueman and then i used hciconfig which gives me that my bluetooth usb dongle is working fine. But when i did hcitool scan , it give me no device in range even though my Serial bluetooth Device was on. I wasn't able to find any device in vicinity. Also when i unplugged and plug the USB dongle again, i was able to scan the serial device , but when i repeat the process, i find the earlier condition of not finding any deice. I had find another useful link, but that need address of the bluetooth device that need to be connected. I want to automate this using hcitool scan, storing the output to the a file and then comparing it with already paired devices and their name. For that i need to figure out why hcitool scan is sometime working and sometime not. ? Can some one help me in figuring out why this is happening. Is there any problem on hardware side i.e Bluetooth dongle is buggy or i had some problem in bluez utils. Edit 1: While as of now, hcitool scan is giving me my remote device address but still i am getting the same issue of HOUST IS DOWN, '/dev/rfcomm1'. I am really not getting any idea of what to be done.

    Read the article

  • how to find out what is causing huge dentry_cache usage?

    - by Piavlo
    Note that inode_cache & ext3_inode_cache slabs are very small compared to dentry_cache. What happens is that slowly and steadily the within a week dentry_cache grows from 1M to ~5-6G Then I need to run echo 2 > /proc/sys/vm/drop_caches && echo 0 > /proc/sys/vm/drop_caches This started to happening one day on all servers hosting some web code - the developers are saying that they have not changed anything related to filesystem access pattern around the time then the problem started. The system is centos5 with 2.6.18 kernel so I don't have any instrumentation features available th newer kernels. Any I idea how I can debug the problem? maybe with systemtap? This is a ec2 instance - so not even sure that systemtap will work there. Thanks Alex

    Read the article

  • can't open large rss remote files

    - by anarchOi
    I'm setting up a rss feed in vbulletin to import the entries into the forum. The rss is very large (2000+ entries) and it is hosted on a different server (i have control on it). Problem is that i cannot add this feed into vBulletin, i am getting a weird error. If i edit the feed and remove half of the entries then it will work correctly, which makes me think it is because the file is too large. I suppose i have to change a setting in php.ini or something to allow bigger files to be opened, but what do i need to look for ? Thanks I'm on Debian.

    Read the article

  • Why is this server redirecting to another page???

    - by Mike L.
    I am building a site for a client. For a reason unknown to me www.domain.com forwards to www.domain.com/directory/home.html. If i type www.domain.com/index.php it works correctly. I have checked .htaccess there was nothing there, so I set the index to index.php which works fine in every directory other than the root directory. I have root access and have checked the httpd.conf (did a search in VI for the document that I was being redirected to) and anything else I could think of. Where should I look next? The server is a VPS running CentOS 5.5 with multiple domains, has CPanel WHM 11 for root access and CPanel X installed for each domain.

    Read the article

  • How to run script from root as another user (with user PATH)

    - by Sandra
    I would like to have these commands run as the ss user from root mkdir bin cp -r /opt/gitolite . gitolite/install -ln gitolite setup -pk ss.pub mkdir -p .gitolite/hooks/common ln -s /opt/pre-receive .gitolite/hooks/common/ so everything is executed in /home/ss. The 4th line requires $HOME/bin as you can see from the 3rd line. The only way I can get it to work is by adding su -c "command" ss to each line, which is not a nice hack. This is an extension to my previous question, where I wasn't precise enough. Question How do I run all these commands as a script in a practical way?

    Read the article

  • Ubuntu 10.10, taskbar

    - by Alex
    I launched system monitor to kill one program, which didn't responded on any mouse clicks, etc. But i occasionally killed another process. so, taskbar was killed. (it was in the bottom of the screen, in the top all is good) reboot didnt help o_O. Now I use Alt-Tab and alt-controll-arrows to switch between programs and desktops (it works). How to launch taskbar again? its very strange that reboot didnt help me.

    Read the article

  • Apache mod_proxy_ajp and tomcat7 (TomEE). Telnet 8009 from localhost works, but from other machine connection refused

    - by exabrial
    In my tomcat config, I have the following: <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> Once I start tomcat, on that same box, I can telnet localhost 8009 and get a connection. However, on the load balancer, I cannot telnet to that port. I've disabled the firewalls on both boxes. I'm able to connect on port 8080. What gives???

    Read the article

  • Xubuntu 12.04: Slow Login

    - by naxchange
    I'm running Xubuntu 12.04, and after I ran the update, my login started slowing down big time. I've dabbled a little bit with the programs in Settings - Settings Manager - Session and Startup, and tried closing and opening these programs, and then rebooting. Anyways, I narrowed it down to a combination of 2 processes, "Network" (the connection manager) and "Xfce Volume Daemon". Their corresponding terminal commands are: ~$ nm-applet ~$ xfce4-volumed If I disable them, and then run them after login is complete, everything works just fine. Is there a way for me to run these commands automatically at start up? I want to maybe write a shell script and save it somewhere, or at least edit the order things happen after login, so the desktop can load before these processes. Hope I've been clear enough, and thanks in advance! NAX

    Read the article

  • The amount of memory used by each process

    - by tuxsmouf
    I have a mysql server running debian with 2GO of RAM. I would like to know the amount of memory used by each process. I thought ps -aux was the command and options for it. But I only see 90 MO used by several processes and free -m tells me that 1400 MO are used. Is there a way to have a better view with the processes and the memory used by them ?

    Read the article

  • GitHub updating repository?

    - by user1804933
    I am trying to setup GitHub on my server and gotten to the point where I am running the command "git push -u origin master". However, a large file was detected and the following error was received: remote: error: GH001: Large files detected. remote: error: Trace: 5520a70fd2eeaa2eafd7de049a590fb5 remote: error: See http://git.io/iEPt8g for more information. remote: error: File app/logs/dev.log is 2041.59 MB; this exceeds GitHub's file size limit of 100 MB I ended up deleting that file and tried adding the git again but I keep running into that error. Any ideas on how to work around this?

    Read the article

  • what TERM to use to get rid of color escape codes?

    - by slivu
    Is there a way to get rid of escape codes in terminal output? Say even if the script are sending that codes they are ignored by terminal and text displayed as is, without colors, bolds etc. I need to display terminal output on a HTML page. For now i'm using javascript to remove escape codes, but it becomes clunky cause i receive output by chars, and have to wait until all content received then update it, leading in weird effects.

    Read the article

  • command line LVM issue on CentOS 5

    - by alex-M
    I am able to create from using lvm GUI to do as follows: /dev/var-v0l/var /var ext3 defaults 1 2 /dev/varopt-vol/var-opt /var/opt ext3 defaults 1 2 $df /dev/mapper/var-v0l 103208224 1881092 96084460 2% /var /dev/mapper/varopt-vol 103208224 192252 97773300 1% /var/opt but using command line LVM I created I can not do as the above $ df df: `/var/opt': No such file or directory /dev/mapper/var-v0l 103208224 1881092 96084460 2% /var What am I missing.

    Read the article

  • Slow data transfer using SSH

    - by Floste
    The server is an ubuntu server 11.04 with sshd. SSH works fine for console programs. But data transfer is slow, which is very annoying when transferring large files. I tried two different client programs and changed the port, but the speed is always the same. I know the server can transfer data a lot faster over SSL, which afaik uses AES. I configured my SSH client to use AES, too, but no effect. Why is using SSH multiple times slower than SSL and is there a way to improve transfer speed of SSH?

    Read the article

  • How to change default user (ubuntu) via CloudInit on AWS

    - by Gui Ambros
    I'm using CloudInit to automate the startup of my instances on AWS. I followed the (scarce) documentation available at http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/annotate/head%3A/doc/examples/cloud-config.txt and examples on /usr/share/doc/cloud-init, but still haven't figured out how to change the default username (ubuntu, id:1000). I know I can create a script to manually delete the default ubuntu and add my user, but seems counter intuitive given that CloudInit exist exactly to automate the initial setup. Any ideas?

    Read the article

  • Why is ksoftirqd using 100% of the CPU?

    - by Yegor
    Running FC release 12. Im alaways seeing ksoftirqd/x (x being 0-9) at the top of the processlist, with 100% cpu. The server has a bonded 2gbit connection, serving files from an SSD array. Currently its using 1.6gbit. Server load is ~ 1.5 (dual quad core). iowait is non-existent.

    Read the article

  • Automatically kill processes that over time uses 95%+ of resources? Ubuntu

    - by data_jepp
    I don't know about you're computer but when mine is working properly no process is sucking 95%+ over time. I would like to have some failsafe that kills any processes behaving like that. This comes to mind because when I woke up this morning my laptop had been crunching all night long on a stray chromium child process. This can probably be done as a cron job, but before I make it a full time job creating something like this I'd thought I should check here. :) I hate reinventing the wheel.

    Read the article

  • C++ Simple thread with parameter (no .net)

    - by Marc Vollmer
    I've searched the internet for a while now and found different solutions but then all don't really work or are to complicated for my use. I used C++ until 2 years ago so it might be a bit rusty :D I'm currently writing a program that posts data to an URL. It only posts the data nothing else. For posting the data I use curl, but it blocks the main thread and while the first post is still running there will be a second post that should start. In the end there are about 5-6 post operations running at the same time. Now I want to push the posting with curl into another thread. One thread per post. The thread should get a string parameter with the content what to push. I'm currently stuck on this. Tried the WINAPI for windows but that crashes on reading the parameter. (the second thread is still running in my example while the main thread ended (waiting on system("pause")). It would be nice to have a multi plattform solution, because it will run under windows and linux! Heres my current code: #define CURL_STATICLIB #include <curl/curl.h> #include <curl/easy.h> #include <cstdlib> #include <iostream> #include <stdio.h> #include <stdlib.h> #include <string> #if defined(WIN32) #include <windows.h> #else //#include <pthread.h> #endif using namespace std; void post(string post) { // Function to post it to url CURL *curl; // curl object CURLcode res; // CURLcode object curl = curl_easy_init(); // init curl if(curl) { // is curl init curl_easy_setopt(curl, CURLOPT_URL, "http://10.8.27.101/api.aspx"); // set url string data = "api=" + post; // concat post data strings curl_easy_setopt(curl, CURLOPT_POSTFIELDS, data.c_str()); // post data res = curl_easy_perform(curl); // execute curl_easy_cleanup(curl); // cleanup } else { cerr << "Failed to create curl handle!\n"; } } #if defined(WIN32) DWORD WINAPI thread(LPVOID data) { // WINAPI Thread string pData = *((string*)data); // convert LPVOID to string [THIS FAILES] post(pData); // post it with curl } #else // Linux version #endif void startThread(string data) { // FUnction to start the thread string pData = data; // some Test #if defined(WIN32) CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)thread, &pData, 0, NULL); // Start a Windows thread with winapi #else // Linux version #endif } int main(int argc, char *argv[]) { // The post data to send string postData = "test1234567890"; startThread(postData); // Start the thread system("PAUSE"); // Dont close the console window return EXIT_SUCCESS; } Has anyone a suggestion? Thanks for the help!

    Read the article

  • How to force two process to run on the same CPU?

    - by kovan
    Context: I'm programming a software system that consists of multiple processes. It is programmed in C++ under Linux. and they communicate among them using Linux shared memory. Usually, in software development, is in the final stage when the performance optimization is made. Here I came to a big problem. The software has high performance requirements, but in machines with 4 or 8 CPU cores (usually with more than one CPU), it was only able to use 3 cores, thus wasting 25% of the CPU power in the first ones, and more than 60% in the second ones. After many research, and having discarded mutex and lock contention, I found out that the time was being wasted on shmdt/shmat calls (detach and attach to shared memory segments). After some more research, I found out that these CPUs, which usually are AMD Opteron and Intel Xeon, use a memory system called NUMA, which basically means that each processor has its fast, "local memory", and accessing memory from other CPUs is expensive. After doing some tests, the problem seems to be that the software is designed so that, basically, any process can pass shared memory segments to any other process, and to any thread in them. This seems to kill performance, as process are constantly accessing memory from other processes. Question: Now, the question is, is there any way to force pairs of processes to execute in the same CPU?. I don't mean to force them to execute always in the same processor, as I don't care in which one they are executed, altough that would do the job. Ideally, there would be a way to tell the kernel: If you schedule this process in one processor, you must also schedule this "brother" process (which is the process with which it communicates through shared memory) in that same processor, so that performance is not penalized.

    Read the article

  • POSIX AIO Library and Callback Handlers

    - by Charles Salvia
    According to the documentation on aio_read/write, there are basically 2 ways that the AIO library can inform your application that an async file I/O operation has completed. Either 1) you can use a signal, 2) you can use a callback function I think that callback functions are vastly preferable to signals, and would probably be much easier to integrate into higher-level multi-threaded libraries. Unfortunately, the documentation for this functionality is a mess to say the least. Some sources, such as the man page for the sigevent struct, indicate that you need to set the sigev_notify data member in the sigevent struct to SIGEV_CALLBACK and then provide a function handler. Presumably, the handler is invoked in the same thread. Other documentation indicates you need to set sigev_notify to SIGEV_THREAD, which will invoke the callback handler in a newly created thread. In any case, on my Linux system (Ubuntu with a 2.6.28 kernel) SIGEV_CALLBACK doesn't seem to be defined anywhere, but SIGEV_THREAD works as advertised. Unfortunately, creating a new thread to invoke the callback handler seems really inefficient, especially if you need to invoke many handlers. It would be better to use an existing pool of threads, similar to the way most network I/O event demultiplexers work. Some versions of UNIX, such as QNX, include a SIGEV_SIGNAL_THREAD flag, which allows you to invoke handlers using a specified existing thread, but this doesn't seem to be available on Linux, nor does it seem to even be a part of the POSIX standard. So, is it possible to use the POSIX AIO library in a way that invokes user handlers in a pre-allocated background thread/threadpool, rather than creating/destroying a new thread everytime a handler is invoked?

    Read the article

  • Creating customized .dmg files upon download

    - by Marten
    I want to distribute a cross-platform application for which the executable file is slightly different, depending on the user who downloaded it. This is done by having a placeholder string somewhere in the executable that is replaced with something user-specific upon download. The webserver that has to do these string replacements is a Linux machine. For Windows, the executable is not compressed in the installer .exe, so the string replacement is easy. For uncompressed Mac OS X .dmg files, this is also easy. However, .dmg files that are compressed with either gzip or bzip2 are not so easy. For example, in the latter case, the compressed .dmg is not one big bzip2-compressed disk image, but instead consists of a few different bzip2-compressed parts (with different block sizes) and a plist suffix. Also, decompressing and recompressing the different parts with bzip2 does not result in the original data, so I'm guessing Apple uses some different parameters to bzip2 than the command-line tool. Is there a way to generate a compressed .dmg from an uncompressed one on Linux (which does not have hdiutil)? Or maybe another suggestion for creating customized applications without pregenerating them? It should work without any input by the user.

    Read the article

  • Including variables inside curly braces in a Zend config ini file on Linux

    - by Dave Morris
    I am trying to include a variable in a .ini file setting by surrounding it with curly braces, and Zend is complaining that it cannot parse it properly on Linux. It works properly on Windows, though: welcome_message = Welcome, {0}. This is the error that is being thrown on Linux: : Uncaught exception 'Zend_Config_Exception' with message 'Error parsing /var/www/html/portal/application/configs/language/messages.ini on line 10 ' in /usr/local/zend/share/ZendFramework/library/Zend/Config/Ini.php:181 Stack trace: 0 /usr/local/zend/share/ZendFramework/library/Zend/Config/Ini.php(201): Zend_Config_Ini-&gt;_parseIniFile('/var/www/html/p...') 1 /usr/local/zend/share/ZendFramework/library/Zend/Config/Ini.php(125): Zend_Config_Ini-&gt;_loadIniFile('/var/www/html/p...') 2 /var/www/html/portal/library/Ingrain/Language/Base.php(49): Zend_Config_Ini-&gt;__construct('/var/www/html/p...', NULL) 3 /var/www/html/portal/library/Ingrain/Language/Base.php(23): Ingrain_Language_Base-&gt;setConfig('messages.ini', NULL, NULL) 4 /var/www/html/portal/library/Ingrain/Language/Messages.php(7): Ingrain_Language_Base-&gt;__construct('messages.ini', NULL, NULL, NULL) 5 /var/www/html/portal/library/Ingrain/Helper/Language.php(38): Ingrain_Language_Messages-&gt;__construct() 6 /usr/local/zend/share/ZendFramework/library/Zend/Contr in We are able to get the error to go away on Linux if we surround the braces with quotes, but that seems like a strange solution: welcome_message = Welcome, "{"0"}". Is there a better way to solve this issue for all platforms? Thanks for your help, Dave

    Read the article

  • Finding missing symbols in libstd++ on Debian/squeeze

    - by Florian Le Goff
    I'm trying to use a pre-compiled library provided as a .so file. This file is dynamically linked against a few librairies : $ ldd /usr/local/test/lib/libtest.so linux-gate.so.1 = (0xb770d000) libstdc++-libc6.1-1.so.2 = not found libm.so.6 = /lib/i686/cmov/libm.so.6 (0xb75e1000) libc.so.6 = /lib/i686/cmov/libc.so.6 (0xb7499000) /lib/ld-linux.so.2 (0xb770e000) libgcc_s.so.1 = /lib/libgcc_s.so.1 (0xb747c000) Unfortunately, in Debian/squeeze, there is no libstdc++-libc6.1-1.so.* file. Only a libstdc++.so.* file provided by the libstdc++6 package. I tried to link (using ln -s) libstdc++-libc6.1-1.so.2 to the libstdc++.so.6 file. It does not work, a batch of symbols seems to be lacking when I'm trying to ld my .o files with this lib. /usr/local/test/lib/libtest.so: undefined reference to `__builtin_vec_delete' /usr/local/test/lib/libtest.so: undefined reference to `istrstream::istrstream(int, char const *, int)' /usr/local/test/lib/libtest.so: undefined reference to `__rtti_user' /usr/local/test/lib/libtest.so: undefined reference to `__builtin_new' /usr/local/test/lib/libtest.so: undefined reference to `istream::ignore(int, int)' What would you do ? How may I find in which lib those symbols are exported ?

    Read the article

< Previous Page | 454 455 456 457 458 459 460 461 462 463 464 465  | Next Page >