Search Results

Search found 26947 results on 1078 pages for 'util linux'.


  • Using pipes in Linux with C

    - by Dave
    Hi, I'm doing a course in Operating Systems and we're supposed to learn how to use pipes to transfer data between processes. We were given this simple piece of code which demonstrates how to use pipes, but I'm having difficulty understanding it.

        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        int main(void)
        {
            int pipefd[2], n;
            char buff[100];

            if (pipe(pipefd) < 0)
                printf("can not create pipe \n");
            printf("read fd = %d, write fd = %d \n", pipefd[0], pipefd[1]);
            if (write(pipefd[1], "hello world\n", 12) != 12)
                printf("pipe write error \n");
            if ((n = read(pipefd[0], buff, sizeof(buff))) <= 0)
                printf("pipe read error \n");
            write(1, buff, n);
            exit(0);
        }

    What does the write function do? It seems to send data to the pipe and also print it to the screen (at least it seems like the second time the write function is called it does this). Does anyone have any suggestions of good websites for learning about topics such as this, FIFO, signals, and other basic Linux system calls used in C?
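
    The two write() calls go to different descriptors: write(pipefd[1], ...) pushes bytes into the pipe, while write(1, buff, n) writes to file descriptor 1, which is standard output, so that second call is the one that prints to the screen. Since the exercise is about moving data between processes, a minimal sketch of the same pipe used across a fork() may make the flow clearer (illustrative only, not part of the original hand-out):

        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            int pipefd[2];
            char buff[100];

            if (pipe(pipefd) < 0)
                return 1;

            if (fork() == 0) {                 /* child: reads from the pipe */
                close(pipefd[1]);              /* close the unused write end */
                ssize_t n = read(pipefd[0], buff, sizeof(buff));
                if (n > 0)
                    write(STDOUT_FILENO, buff, (size_t)n);   /* fd 1: prints to the terminal */
                _exit(0);
            }

            close(pipefd[0]);                  /* parent: writes into the pipe */
            write(pipefd[1], "hello from parent\n", 18);
            close(pipefd[1]);
            wait(NULL);
            return 0;
        }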

    Read the article

  • Linux USB debug connection to LuminaryMicro evaluation board

    - by mikelong
    Hi, I am trying to connect a Stellaris LM3S8962 evaluation kit to a linux host machine. I am using the CodeSourcery G++ for the development toolchain. When I try to run a helloworld example the connection fails with this message: arm-stellaris-eabi-sprite: error: E104. I/O Error communicating with USB Device. arm-stellaris-eabi-sprite: waiting for GDB connection, to pass error along warning: Remote failure reply: E.fatal.E104. I/O Error communicating with USB Device. arm-stellaris-eabi-sprite: error: E002. Not initialized When I connect the evaluation board with the USB cable it seems the device is made available to the system: Mar 24 14:37:16 n6-ws2 kernel: usb 5-2: USB disconnect, address 5 Mar 24 14:37:18 n6-ws2 kernel: usb 5-2: new full speed USB device using uhci_hcd and address 6 Mar 24 14:37:19 n6-ws2 kernel: usb 5-2: configuration #1 chosen from 1 choice Also, it seems that I can connect in some way via the command line tool (but I do get some strange characters): [mlong@n6-ws2 bin]$ ./arm-stellaris-eabi-sprite -i CodeSourcery ARM Debug Sprite (Sourcery G++ 4.4-104) armusb: [speed=] ARMUSB device armusb:///?? - ?? (??) Does anyone have any suggestions I could try? Thanks a lot, Mike

    Read the article

  • Window message procedures in Linux vs Windows

    - by mizipzor
    In Windows, when you create a window you must define a (C++) LRESULT CALLBACK message_proc(HWND Handle, UINT Message, WPARAM WParam, LPARAM LParam); to handle all the messages sent from the OS to the window, like keypresses and such. I'm looking to do some reading on how the same system works in Linux. Maybe it is because I fall a bit short on the terminology, but I fail to find anything on this through Google (although I'm sure there must be plenty!). Is it still just one single C function that handles all the communication? Does the function definition differ on different WMs (GNOME, KDE), or is it handled at a lower level in the OS? Edit: I've looked into tools like Qt and wxWidgets, but those frameworks seem to be geared more towards developing GUI-extensive applications. I'm rather looking for a way to create a basic window (restrict resize, borders/decorations) for my OpenGL graphics and retrieve input on more than one platform. And according to my initial research, this kind of function is the only way to retrieve that input. What would be the best route? Reading up, learning and then using Qt or wxWidgets? Or learning how the systems work and implementing those few basic features I want myself?
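
    For the X11 case specifically (which is what GNOME and KDE sit on top of), there is no callback registered with the OS: the client connects to the X server and pulls events out of a queue in an explicit loop, and toolkits such as Qt or wxWidgets (or lighter OpenGL-oriented ones like SDL and GLFW) simply wrap that loop. A rough Xlib sketch of the shape of it, with error handling trimmed (link with -lX11):

        #include <X11/Xlib.h>
        #include <stdio.h>

        int main(void)
        {
            Display *dpy = XOpenDisplay(NULL);          /* connect to the X server */
            if (!dpy)
                return 1;

            Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                             0, 0, 640, 480, 0, 0, 0);
            XSelectInput(dpy, win, KeyPressMask | StructureNotifyMask);
            XMapWindow(dpy, win);

            for (;;) {                                  /* the "message loop" lives in the client */
                XEvent ev;
                XNextEvent(dpy, &ev);                   /* blocks until the server sends an event */
                if (ev.type == KeyPress)
                    printf("key pressed\n");
                else if (ev.type == DestroyNotify)
                    break;
            }

            XCloseDisplay(dpy);
            return 0;
        }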

    Read the article

  • Correct initialization sequence for Linux serial port

    - by whitequark
    I wrote an application that must use serial ports on Linux, especially ttyUSB ones. Reading and writing operations are performed with a standard select()/read() loop and write(), and there is probably nothing wrong with them, but the initialization code (or the absence of some part of it) damages something in the tty subsystem. Here it is:

        vuxboot(string filename, unsigned baud = B115200) : _debug(false) {
            _fd = open(filename.c_str(), O_RDWR | O_NOCTTY);
            if(!_fd) throw new io_error("cannot open port");

            // Serial initialization was written with FTDI USB-to-serial converters
            // in mind. Anyway, who wants to use non-8n1 protocol?
            tcgetattr(_fd, &_termios);

            termios tio = {0};
            tio.c_iflag = IGNPAR;
            tio.c_oflag = 0;
            tio.c_cflag = baud | CLOCAL | CREAD | CS8;
            tio.c_lflag = 0;

            tcflush(_fd, TCIFLUSH);
            tcsetattr(_fd, TCSANOW, &tio);
        }

    Another tcsetattr(_fd, TCSANOW, &_termios) sits in the destructor, but it is irrelevant. With or without this termios initialization, strange things happen in the system after the application exits. Sometimes plain cat (or hd) exits immediately, printing nothing or the same stuff each time; sometimes it is waiting and not displaying any of the data that is surely sent onto the port; and close() (read() too, but not every time) emits a strange WARNING to dmesg referring to usb-serial.c. I checked the hardware and firmware tens of times (even on different machines) and I am sure they are working as intended; moreover, I stripped the firmware to just print the same message over and over. How can I use the serial port without destroying anything? Thanks.
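
    A pattern that tends to avoid leaving the tty in a strange state is to start from the attributes the port already has, change only what is needed using the cf* helpers (rather than or-ing the baud constant into c_cflag by hand), set VMIN/VTIME explicitly, check open() against -1 (0 is a valid descriptor), and restore the saved termios on exit, which the destructor here already does. A rough sketch, assuming raw 8N1 and a made-up helper name, not the poster's class:

        #include <fcntl.h>
        #include <termios.h>
        #include <unistd.h>

        int open_serial_raw(const char *path, speed_t baud)
        {
            int fd = open(path, O_RDWR | O_NOCTTY);
            if (fd < 0)
                return -1;

            struct termios tio;
            if (tcgetattr(fd, &tio) < 0) {      /* start from the current settings */
                close(fd);
                return -1;
            }

            cfmakeraw(&tio);                    /* 8N1, no echo, no line editing, no translations */
            tio.c_cflag |= CLOCAL | CREAD;
            cfsetispeed(&tio, baud);
            cfsetospeed(&tio, baud);
            tio.c_cc[VMIN]  = 1;                /* read() returns after at least one byte */
            tio.c_cc[VTIME] = 0;

            tcflush(fd, TCIOFLUSH);
            if (tcsetattr(fd, TCSANOW, &tio) < 0) {
                close(fd);
                return -1;
            }
            return fd;
        }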

    Read the article

  • pthread condition variables on Linux, odd behaviour.

    - by janesconference
    Hi. I'm synchronizing reader and writer processes on Linux. I have 0 or more processes (the readers) that need to sleep until they are woken up, read a resource, go back to sleep and so on. Please note I don't know how many reader processes are up at any moment. I have one process (the writer) that writes on a resource, wakes up the readers and does its business until another resource is ready (in detail, I developed a no-starve readers-writers solution, but that's not important). To implement the sleep / wake up mechanism I use a POSIX condition variable, pthread_cond_t. The clients call pthread_cond_wait() on the variable to sleep, while the server does a pthread_cond_broadcast() to wake them all up. As the manual says, I surround these two calls with a lock/unlock of the associated pthread mutex. The condition variable and the mutex are initialized in the server and shared between processes through a shared memory area (because I'm not working with threads, but with separate processes), and I'm sure my kernel / syscalls support it (because I checked _POSIX_THREAD_PROCESS_SHARED). What happens is that the first client process sleeps and wakes up perfectly. When I start the second process, it blocks on its pthread_cond_wait() and never wakes up, even if I'm sure (by the logs) that pthread_cond_broadcast() is called. If I kill the first process and launch another one, it works perfectly. In other words, pthread_cond_broadcast() on the shared condition variable seems to wake up only one process at a time. If more than one process waits on the very same shared condition variable, only the first one manages to wake up correctly, while the others just seem to ignore the broadcast. Why this behaviour? If I send a pthread_cond_broadcast(), every waiting process should wake up, not just one (and, however, not always the same one).
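
    One thing worth double-checking when the mutex and condition variable live in shared memory is whether both objects were actually initialized with the process-shared attribute; checking _POSIX_THREAD_PROCESS_SHARED only tells you the option exists, not that it was requested, and leaving the default (private) attribute can produce symptoms very much like "only one waiter ever wakes up". A sketch of the initialisation on the server side (struct and function names invented for illustration):

        #include <pthread.h>

        struct shared_sync {
            pthread_mutex_t mtx;
            pthread_cond_t  cond;
        };

        int shared_sync_init(struct shared_sync *s)   /* s points into the shared memory area */
        {
            pthread_mutexattr_t ma;
            pthread_condattr_t  ca;

            pthread_mutexattr_init(&ma);
            pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
            if (pthread_mutex_init(&s->mtx, &ma) != 0)
                return -1;

            pthread_condattr_init(&ca);
            pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
            if (pthread_cond_init(&s->cond, &ca) != 0)
                return -1;

            pthread_mutexattr_destroy(&ma);
            pthread_condattr_destroy(&ca);
            return 0;
        }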

    Read the article

  • Trying to compile a linux-based app on Mac OS X

    - by Scott
    I'm just trying to compile the linux-based FCEUX (NES emulator) on my mac, OS X 10.5 Leopard. I got all the dependencies (SDL, GTK+ 2) going and everything but of all things this is now my problem: Undefined symbols: "_compress", referenced from: SaveSnapshot() in video.o "_gzclose", referenced from: FCEU_fopen(char const*, char const*, char*, char*, int, char const**)in file.o "_crc32", referenced from: CalcCRC32(unsigned int, unsigned char*, unsigned int)in crc32.o _unzReadCurrentFile in unzip.o _unzReadCurrentFile in unzip.o "_uncompress", referenced from: NetplayUpdate(unsigned char*)in netplay.o FCEUSS_LoadFP(EMUFILE*, ENUM_SSLOADPARAMS) in state.o "_compress2", referenced from: FCEUNET_SendFile(unsigned char, char*)in netplay.o FCEUSS_SaveMS(EMUFILE*, int) in state.o "_inflateEnd", referenced from: _unzCloseCurrentFile in unzip.o "_inflate", referenced from: _unzReadCurrentFile in unzip.o "inflateInit2", referenced from: _unzOpenCurrentFile in unzip.o "_gzgetc", referenced from: FCEU_fopen(char const*, char const*, char*, char*, int, char const**)in file.o "_gzopen", referenced from: FCEU_fopen(char const*, char const*, char*, char*, int, char const**)in file.o "_gzread", referenced from: FCEU_fopen(char const*, char const*, char*, char*, int, char const**)in file.o "_gzseek", referenced from: FCEU_fopen(char const*, char const*, char*, char*, int, char const*)in file.o ld: symbol(s) not found collect2: ld returned 1 exit status scons: ** [src/fceux] Error 1 scons: building terminated because of errors. Those are zlib functions. It seems like it is loading the zlib.h ok, but the symbols aren't being linked in? Just to make sure I downloaded the latest zlib and did a make install, no help. I have no clue what's going on here, it seems like it should be pretty basic, that library is nothing special. Help would be appreciated. Thanks.

    Read the article

  • Maximum number of files in one ext3 directory while still getting acceptable performance?

    - by knorv
    I have an application writing to an ext3 directory which over time has grown to roughly three million files. Needless to say, reading the file listing of this directory is unbearably slow. I don't blame ext3. The proper solution would have been to let the application write to sub-directories such as ./a/b/c/abc.ext rather than just ./abc.ext. I'm changing to such a sub-directory structure and my question is simply: roughly how many files should I expect to store in one ext3 directory while still getting acceptable performance? Or in other words; assuming that I need to store three million files in the structure, how many levels deep should the ./a/b/c/abc.ext structure be? Obviously this is a question that cannot be answered exactly, but I'm looking for a ball park estimate.
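
    A rough back-of-the-envelope way to frame the depth question: if a few thousand entries per directory is taken as the comfortable ceiling, a single level of 256 sub-directories (one two-character prefix) already brings three million files down to about 3,000,000 / 256 ≈ 11,700 per directory, and a second level (256 × 256 = 65,536 leaf directories) brings it to roughly 46 per directory, so one or two levels is usually the practical range. The acceptable per-directory count also depends in part on whether dir_index (hashed directory lookups) is enabled on the filesystem, which speeds up individual lookups but not full listings.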

    Read the article

  • How to check the backtrace of a "USER process" in the Linux Kernel Crash Dump

    - by Biswajit
    I was trying to debug a USER Process in a Linux crash dump. The normal steps to go to the crash dump are:

        1. Go to the path where the dump is located.
        2. Use the command crash kernel_link dump.201104181135, where kernel_link is a soft link I have created for the vmlinux image.
        3. Now you will be in the CRASH prompt.

    If you run the command foreach <PID of the process> bt, e.g.:

        crash> foreach 6920 bt
        PID: 6920   TASK: ffff88013caaa800   CPU: 1   COMMAND: "climmon"
         #0 [ffff88012d2cd9c8] schedule at ffffffff8130b76a
         #1 [ffff88012d2cdab0] schedule_timeout at ffffffff8130bbe7
         #2 [ffff88012d2cdb50] schedule_timeout_uninterruptible at ffffffff8130bc2a
         #3 [ffff88012d2cdb60] __alloc_pages_nodemask at ffffffff810b9e45
         #4 [ffff88012d2cdc60] alloc_pages_current at ffffffff810e1c8c
         #5 [ffff88012d2cdc90] __page_cache_alloc at ffffffff810b395a
         #6 [ffff88012d2cdcb0] __do_page_cache_readahead at ffffffff810bb592
         #7 [ffff88012d2cdd30] ra_submit at ffffffff810bb6ba
         #8 [ffff88012d2cdd40] filemap_fault at ffffffff810b3e4e
         #9 [ffff88012d2cdda0] __do_fault at ffffffff810caa5f
        #10 [ffff88012d2cde50] handle_mm_fault at ffffffff810cce69
        #11 [ffff88012d2cdf00] do_page_fault at ffffffff8130f560
        #12 [ffff88012d2cdf50] page_fault at ffffffff8130d3f5
            RIP: 00007fd02b7e9071  RSP: 0000000040e86ea0  RFLAGS: 00010202
            RAX: 0000000000000000  RBX: 0000000000000000  RCX: 00007fd02b7e9071
            RDX: 0000000000000000  RSI: 0000000000000000  RDI: 0000000040e86ec0
            RBP: 0000000040e87140  R8:  0000000000000800  R9:  0000000000000000
            R10: 0000000000000000  R11: 0000000000000202  R12: 00007fff16ec43d0
            R13: 00007fd02bcadf00  R14: 0000000040e87950  R15: 0000000000001000
            ORIG_RAX: ffffffffffffffff  CS: 0033  SS: 002b

    If you check the above backtrace, it shows the kernel functions used for scheduling/handling the page fault but not the functions that were executed in the USER process (here, e.g., climmon). So I am not able to debug this process as I am not able to see the functions executed in that process. Can any one help me with this case?

    Read the article

  • How to extract block of XML from a log file on Linux

    - by dragonmantank
    I have a log file that looks like the following:

        2010-05-12 12:23:45 Some sort of log entry
        2010-05-12 01:45:12 Request XML: <RootTag>
          <Element>Value</Element>
          <Element>Another Value</Element>
        </RootTag>
        2010-05-12 01:45:32 Response XML: <ResponseRoot>
          <Element>Value</Element>
        </ResponseRoot>
        2010-05-12 01:45:49 Another log entry

    What I want to do is extract the Request and Response XML (and ultimately dump them into their own single files). I had a similar parser that used egrep but the XML was all on one line, not multiple ones like above. The log files are also somewhat large, hitting 500-600 megs a log. Smaller logs I would read in via a PHP script and use regex matching, but the amount of memory required for such a large file would more than likely kill the script. Is there an easy way using the built-in tools on a Linux box (CentOS in this case) to extract multiple lines or am I going to have to bite the bullet and use Perl or PHP to read in the entire file to extract it?

    Read the article

  • How to change a Linux user password from python

    - by Vaulor
    I'm having problems with changing a Linux user's password from Python. I've tried so many things, but I couldn't manage to solve the issue. Here is a sample of what I've already tried: sudo_password is the password for sudo, sudo_command is the command I want the system to run, user is taken from a list and is the user whose password I want to change, and newpass is the pass I want to assign to 'user'.

        user = list.get(ANCHOR)
        sudo_command = 'passwd'
        f = open("passwordusu.tmp", "w")
        f.write("%s\n%s" % (newpass, newpass))
        f.close()
        A = os.system('echo -e %s|sudo -S %s < %s %s' % (sudo_password, sudo_command, 'passwordusu.tmp', user))
        print A
        windowpass.destroy()

    'A' is the return value for the execution of os.system, in this case 256. I tried also:

        A = os.system('echo %s|sudo -S %s < %s %s' % (sudo_password, sudo_command, 'passwordusu.tmp', user))

    but it returns the same error code. I tried several other ways with the 'passwd' command, but without success. With the 'chpasswd' command I've tried this:

        user = list.get(ANCHOR)
        sudo_command = 'chpasswd'
        f = open("passwordusu.tmp", "w")
        f.write("%s:%s" % (user, newpass))
        f.close()
        A = os.system('echo %s|sudo -S %s < %s %s' % (sudo_password, sudo_command, 'passwordusu.tmp', user))
        print A
        windowpass.destroy()

    and also with:

        A = os.system('echo %s|sudo -S %s:%s|%s' % (sudo_password, user, newpass, sudo_command))   # which returns 32512
        A = os.system("echo %s | sudo -S %s < \"%s\"" % (sudo_password, sudo_command, "passwordusu.tmp"))   # which returns 256

    I tried 'mkpasswd' and 'usermod' too, like this:

        user = list.get(ANCHOR)
        sudo_command = 'mkpasswd -m sha-512'
        os.system("echo %s | sudo -S %s %s > passwd.tmp" % (sudo_password, sudo_command, newpass))
        sudo_command = "usermod -p"
        f = open('passwd.tmp', 'r')
        for line in f.readlines():
            newpassencryp = line
        f.close()
        A = os.system("echo %s | sudo -S %s %s %s" % (sudo_password, sudo_command, newpassencryp, user))   # which returns 32512

    but, if you go to https://www.mkpasswd.net, hash the 'newpass' and substitute it for 'newpassencryp', it returns 0, which theoretically means it has gone right, but so far it doesn't change the password. I've searched the internet and Stack Overflow for this issue or similar ones and tried the solutions exposed there, but again, without success. I would really appreciate any help, and of course, if you need more info I'll be glad to supply it! Thanks in advance.

    Read the article

  • linux raw socket programming question

    - by user194420
    Hi all, I am trying to create a raw socket which sends and receives messages with the IP/TCP headers under Linux. I can successfully bind to a port and receive a TCP message (i.e. a SYN). However, the message seems to be handled by the OS, not by me; I am just a reader of it (like Wireshark). My raw socket binds to port 8888, and then I try to telnet to that port. In Wireshark, it shows that port 8888 replies with a "RST ACK" when it receives the "SYN" request. In my program, it shows that it receives a new message and it does not reply with any message. Is there any way to actually bind to that port (and prevent the OS from handling it)? Here is part of my code; I cut the error checking for easy reading:

        sockfd = socket(AF_INET, SOCK_RAW, IPPROTO_TCP);

        int tmp = 1;
        const int *val = &tmp;
        setsockopt(sockfd, IPPROTO_IP, IP_HDRINCL, val, sizeof(tmp));

        servaddr.sin_family = AF_INET;
        servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
        servaddr.sin_port = htons(8888);
        bind(sockfd, (struct sockaddr*)&servaddr, sizeof(servaddr));

        // call recv in loop

    Read the article

  • Oracle Linux Partner Pavilion Spotlight - Part IV

    - by Ted Davis
    Welcome to the final Oracle Linux Partner Pavilion Spotlight Part IV.  Two days left till the Big Show. You are gearing up. We are gearing up. You can feel the excitement.  We can feel the excitement. This. Will. Be. The. Best. Show. EVER. See you at the Partner Pavilion (Moscone south # 1033) at Oracle OpenWorld. - Oracle Linux / Oracle VM Team HP and Oracle are pleased to announce another Oracle Validated Configuration based on the ProLiant DL980 server. Many choose to deploy Oracle workloads on the ProLiant DL980 based on the cost/performance ratio they achieve running Oracle Linux Unbreakable Enterprise Kernel. You can be confident that Oracle Validated Configurations based on ProLiant servers will help you achieve your most demanding performance goals. QLogic The QLogic-Oracle partnership spans over 20 years resulting in the most comprehensive line of Oracle Linux I/O adapter technology. Interface options include Ethernet, Fibre-Channel, and FCoE. Host side connectivity is offered in both low profile PCIe and Express Module PCIe form factors. QLogic software drives are jointly qualified and “in-box” with Oracle Linux 5.x, 6,x and Oracle VM enabling simplified installation and management while simultaneously taking risk out of the solution. Bringing innovations such as NPIV, T10-PI, and intelligent caching adapter technology to the Oracle Linux environment further strengthens the QLogic advantage. A big thank you to all of our Oracle Linux Partner Pavilion participants. We - they- look forward to meeting you next week at Oracle OpenWorld. If you've missed our three previous Partner Spotlight's - here are the links: Part I, Part II, Part III. 

    Read the article

  • Linux network stack : adding protocols with an LKM and dev_add_pack

    - by agent0range
    Hello, I have recently been trying to familiarize myself with the Linux networking stack and device drivers (I have both similarly named O'Reilly books) with the eventual goal of offloading UDP. I have already implemented UDP on the NIC, but now the hard part... Rather than ask for assistance on this larger goal, I was hoping someone could clarify for me a particular snippet I found that is part of an LKM which registers a new protocol (OTP) that acts as a filter between the device driver and the network stack. http://www.phrack.org/archives/55/p55_0x0c_Building%20Into%20The%20Linux%20Network%20Layer_by_lifeline%20&%20kossak.txt (Note: this Phrack article contains three different modules; the code for the OTP is at the bottom of the page.) In the init function of his example he has:

        otp_proto.type = htons(ETH_P_ALL);
        otp_proto.func = otp_func;
        dev_add_pack(&otp_proto);

    which (if I understand correctly) should register otp_proto as a packet sniffer and put it into the ptype_all data structure. My question is about dev_add_pack. Is it the case that the protocol being registered as a filter will always be placed at this layer between L2 and the device driver? Or, for instance, could I make such filtering occur between the application and transport layers (analyze socket parameters) using the same process? I apologize if this is confusing - I am having some trouble wrapping my head around the bigger picture when it comes to modules altering kernel stack functionality. Thanks
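
    As far as the placement question goes, dev_add_pack() always attaches at the packet-type layer of the receive path, between the drivers and the protocol handlers (per EtherType, or ETH_P_ALL for everything), so it never sees socket-level parameters; filtering at the transport/application boundary is closer to netfilter or socket-layer hooks. For reference, a registration in a more recent kernel looks roughly like the sketch below; the handler signature has changed across kernel versions, and this follows the four-argument form, so treat it as an assumption to check against your target kernel:

        #include <linux/module.h>
        #include <linux/netdevice.h>
        #include <linux/skbuff.h>
        #include <linux/if_ether.h>

        static int otp_rcv(struct sk_buff *skb, struct net_device *dev,
                           struct packet_type *pt, struct net_device *orig_dev)
        {
            /* called for every received frame delivered to this tap */
            pr_info("otp: got %u bytes on %s\n", skb->len, dev->name);
            kfree_skb(skb);                   /* release the reference handed to us */
            return 0;
        }

        static struct packet_type otp_proto = {
            .type = htons(ETH_P_ALL),         /* all protocols, like the Phrack example */
            .func = otp_rcv,
        };

        static int __init otp_init(void)
        {
            dev_add_pack(&otp_proto);
            return 0;
        }

        static void __exit otp_exit(void)
        {
            dev_remove_pack(&otp_proto);
        }

        module_init(otp_init);
        module_exit(otp_exit);
        MODULE_LICENSE("GPL");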

    Read the article

  • How switch between screen inside screen?

    - by André Andrade
    I have to work inside two environments, one Windows (local) and one Linux (remote). I've installed the screen utility in both. I'm able to open a screen session on my Windows side, then in one tab I open an ssh connection to the remote Linux box and start another screen there. Sample layout:

        linux   -- |0 linux remote 0|  1 linux remote 1
        windows -- |0 linux|  9 windows

    I can switch between "linux remote 0" and "linux remote 1" using Alt+<number>. This is configured in .screenrc (bindkey "^[0" select 0). How could I switch to "9 windows"?

    Read the article

  • Fake serial communication under Linux

    - by kigurai
    I have an application where I want to simulate the connection between a device and a "modem". The device will be connected to a serial port and will talk to the software modem through that. For testing purposes I want to be able to use a mock software device to test sending and receiving data. Example Python code:

        device = Device()
        modem = Modem()
        device.connect(modem)
        device.write("Hello")
        modem_reply = device.read()

    Now, in my final app I will just pass /dev/ttyS1 or COM1 or whatever for the application to use. But how can I do this in software? I am running Linux and the application is written in Python. I have tried making a FIFO (mkfifo ~/my_fifo) and that does work, but then I'll need one FIFO for writing and one for reading. What I want is to open ~/my_fake_serial_port and read and write to that. I have also played with the pty module, but can't get that to work either. I can get a master and slave file descriptor from pty.openpty(), but trying to read or write to them only causes an IOError Bad File Descriptor error message.
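
    The pseudo-terminal approach is the usual way to fake a serial line: the slave side shows up as a path like /dev/pts/N that the application can open exactly as it would open /dev/ttyS1, while the test harness reads and writes the master side. A sketch of the underlying calls in C (Python's pty/os modules wrap the same thing; note the slave may echo and line-buffer by default, so a real harness would also set raw mode with termios, just like a real port):

        #define _XOPEN_SOURCE 600
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        int main(void)
        {
            /* create a pseudo-terminal master; the slave side acts as the fake port */
            int master = posix_openpt(O_RDWR | O_NOCTTY);
            if (master < 0 || grantpt(master) < 0 || unlockpt(master) < 0) {
                perror("pty setup");
                return 1;
            }

            /* hand this path to the application instead of /dev/ttyS1 */
            printf("fake serial port: %s\n", ptsname(master));

            /* the test side talks through the master descriptor
             * (assumes the application has opened the slave path by now) */
            write(master, "Hello\r\n", 7);

            char buf[128];
            ssize_t n = read(master, buf, sizeof(buf));   /* whatever comes back */
            if (n > 0)
                write(STDOUT_FILENO, buf, (size_t)n);

            close(master);
            return 0;
        }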

    Read the article

  • mmap only needed pages of kernel buffer to user space

    - by axeoth
    See also this answer: http://stackoverflow.com/a/10770582/1284631 I need something similar, but without having to allocate a buffer: the buffer is large, in theory, but the user space program only needs to access some parts of it (it mocks some registers of a hardware). As I cannot allocate with vmalloc_user() such a large buffer (kernel 32 bit, in embedded environment, no swap...), I followed the same approach as in the quoted answer, trying to allocate only those pages that are really requested by the user space. So: I use a my_mmap() function for the device file (actually, is the .mmap field of a struct uio_info) to set up the fields of the vma, then, in the vm_area_struct's .fault field (also named my_fault()), I should return a page. except that: In the my_fault() method of vm_area_struct, I cannot obtain a page through: vmf->page=vmalloc_to_page(my_buf + (vmf->pgoff << PAGE_SHIFT)); since there is no allocated buffer: my_buf = vmalloc_user(MY_BUF_SIZE); fails with "allocation failed: out of vmalloc space - use vmalloc= to increase size." (and there is no room or swap to increase that vmalloc= parameter). So, I would need to get a page from the kernel and fill the vmf->page field. How to allocate a page (I assume that the offset of the page is known, as it is vm->pgoff). What base memory should I use instead of my_buf? PS: I also did set up the vma->flags |= VM_NORESERVE; (in the my_mmap()), but not sure if it helps. Is there any vmalloc_user_unreserved()-like function? (let's say, lazy allocation) Also, writing 1 to /proc/sys/vm/overcommit_memory and large values (eg 500) to /proc/sys/vm/overcommit_ratio before trying to my_buf=vmalloc_user(<large_size>) didn't work.
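
    If the goal is purely lazy, page-at-a-time backing with no big allocation up front, one shape the fault handler can take is to allocate an individual page on demand and remember it in driver-private bookkeeping so later faults and cleanup can find it again. This is only a sketch: it uses the older .fault prototype (pre-4.11 kernels, which seems to match the generation discussed here), and the table size and names are invented for illustration:

        #include <linux/mm.h>

        #define MY_MAX_PAGES 1024                     /* illustrative region size in pages */
        static struct page *my_pages[MY_MAX_PAGES];   /* driver-private bookkeeping */

        static int my_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
        {
            pgoff_t off = vmf->pgoff;

            if (off >= MY_MAX_PAGES)
                return VM_FAULT_SIGBUS;

            if (!my_pages[off]) {
                my_pages[off] = alloc_page(GFP_KERNEL | __GFP_ZERO);   /* one page, on demand */
                if (!my_pages[off])
                    return VM_FAULT_OOM;
            }

            get_page(my_pages[off]);   /* extra reference consumed when the page is mapped */
            vmf->page = my_pages[off];
            return 0;
        }

        static const struct vm_operations_struct my_vm_ops = {
            .fault = my_fault,
        };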

    Read the article

  • Can't open device

    - by yoavstr
    I have a little problem. I installed this module into my kernel and it is registered under /proc. When I try to open() it from user mode I get the following message: "Can't open device file: my_dev".

        static int module_permission(struct inode *inode, int op, struct nameidata *foo)
        {
            // if it's a write
            if ((op == 2) && (writer == DOESNT_EXIST)) {
                writer = EXIST;
                return 0;
            }
            // if it's a read
            if (op == 4) {
                numOfReaders++;
                return 0;
            }
            return -EACCES;
        }

        int procfs_open(struct inode *inode, struct file *file)
        {
            try_module_get(THIS_MODULE);
            return 0;
        }

        static struct file_operations File_Ops_4_Our_Proc_File = {
            .read = procfs_read,
            .write = procfs_write,
            .open = procfs_open,
            .release = procfs_close,
        };

        static struct inode_operations Inode_Ops_4_Our_Proc_File = {
            .permission = module_permission,   /* check for permissions */
        };

        int init_module()
        {
            /* create the /proc file */
            Our_Proc_File = create_proc_entry(PROC_ENTRY_FILENAME, 0644, NULL);

            /* check if the /proc file was created successfully */
            if (Our_Proc_File == NULL) {
                printk(KERN_ALERT "Error: Could not initialize /proc/%s\n", PROC_ENTRY_FILENAME);
                return -ENOMEM;
            }

            Our_Proc_File->owner = THIS_MODULE;
            Our_Proc_File->proc_iops = &Inode_Ops_4_Our_Proc_File;
            Our_Proc_File->proc_fops = &File_Ops_4_Our_Proc_File;
            Our_Proc_File->mode = S_IFREG | S_IRUGO | S_IWUSR;
            Our_Proc_File->uid = 0;
            Our_Proc_File->gid = 0;
            Our_Proc_File->size = 80;

            /* i added: init the writer status */
            writer = DOESNT_EXIST;
            numOfReaders = 0;

            printk(KERN_INFO "/proc/%s created\n", PROC_ENTRY_FILENAME);
            return 0;
        }
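
    One quick step that helps here is printing the actual errno from the failing user-space open(); "Can't open device file" on its own hides whether the kernel returned EACCES (coming from the permission hook above), ENOENT (wrong path), or something else. A trivial sketch of the user-mode side, where the /proc path is an assumption standing in for PROC_ENTRY_FILENAME:

        #include <errno.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            int fd = open("/proc/my_dev", O_RDWR);   /* adjust to the real entry name */
            if (fd < 0) {
                fprintf(stderr, "open failed: %s (errno %d)\n", strerror(errno), errno);
                return 1;
            }
            /* ... read/write tests ... */
            return 0;
        }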

    Read the article

  • Storage drives causing system crash

    - by Chad
    I'm running Centos 5.4 with 750GB(ntfs) and 2TB drives for storage. Originally I installed the 750, everything seemed fine and then I installed the 2TB drive with NTFS already partitioned. I noticed when I would copy a lot of videos it would crash (no mouse or response from server) about 20min into it. After doing some troubleshooting I noticed the 750 would also crash when doing the same task so I decided that NTFS may be the problem. I unmounted the 2TB drive and tried to partition and format it using ext2 but when using parted it would crash at this point "writing inode tables". Looking at the dmesg logs I believe this is the error "mtrr: type mismatch for e0000000,10000000 old: write-back new: write-combining". Any idea as to what could be causing this?

    Read the article

  • Is memory allocation in linux non-blocking?

    - by Mark
    I am curious to know if allocating memory using the default new operator is a non-blocking operation. E.g.:

        struct Node {
            int a, b;
        };
        ...
        Node *foo = new Node();

    If multiple threads tried to create a new Node and if one of them was suspended by the OS in the middle of allocation, would it block other threads from making progress? The reason why I ask is because I had a concurrent data structure that created new nodes. I then modified the algorithm to recycle the nodes. The throughput performance of the two algorithms was virtually identical on a 24-core machine. However, I then created an interference program that ran on all the system cores in order to create as much OS pre-emption as possible. The throughput performance of the algorithm that created new nodes decreased by a factor of 5 relative to the algorithm that recycled nodes. I'm curious to know why this would occur. Thanks. Edit: pointing me to the code for the C++ memory allocator for Linux would be helpful as well. I tried looking before posting this question, but had trouble finding it.
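
    For what it's worth, the recycling variant described above can keep the allocator off the hot path entirely with a per-thread free list, which is one reason it stays insensitive to whatever locking or arena contention the default allocator hits under heavy pre-emption. A small C sketch of that idea (type and function names invented for illustration, not the poster's data structure):

        #include <stdlib.h>

        struct node {
            int a, b;
            struct node *next_free;   /* intrusive link used only while on the free list */
        };

        static __thread struct node *free_list;   /* one list per thread, no locking needed */

        static struct node *node_get(void)
        {
            struct node *n = free_list;
            if (n) {
                free_list = n->next_free;   /* recycle without touching the allocator */
                return n;
            }
            return malloc(sizeof(*n));      /* fall back to the allocator when empty */
        }

        static void node_put(struct node *n)
        {
            n->next_free = free_list;       /* push back for reuse by this thread */
            free_list = n;
        }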

    Read the article

  • Capabilities & Linux & Java

    - by Marek Jelen
    Hi, I am experimenting with Linux capabilities for a Java application... I do not want to add capabilities to the interpreter (JVM), so I tried to write a simple wrapper (with debugging information printed to stdout):

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/capability.h>
        #include <unistd.h>

        int main(int argc, char *argv[]){
            cap_t cap = cap_get_proc();
            if (!cap) {
                perror("cap_get_proc");
                exit(1);
            }
            printf("%s: running with caps %s\n", argv[0], cap_to_text(cap, NULL));
            return execlp("/usr/bin/java", "-server", "-jar", "project.jar", (char *)NULL);
        }

    This way, I can see that the capability is set for this executable:

        ./runner: running with caps = cap_net_bind_service+p

    and getcap shows:

        runner = cap_net_bind_service+ip

    I have the capability set to be inheritable, so there should be no problem. However, Java still doesn't want to bind to privileged ports :-( I am getting this error:

        sun/nio/ch/Net.java:-2:in `bind': java.net.SocketException: Permission denied (NativeException)

    Can someone help me to resolve this? Thanks in advance

    Read the article

  • Kernel module for /proc

    - by sb2367
    How do I write a kernel module that creates a directory in /proc named mymod and a file in it named mymodfile? This file should accept a number ranging from 1 to 3 when written into it, and return the following messages when read, based on the number already written into it:

        • 1: Current system time (in microsecond precision)
        • 2: System uptime
        • 3: Number of processes currently in the system

    The output should look like this:

        root@Paradise# echo 1 > /proc/mymod/mymodfile
        root@Paradise# cat /proc/mymod/mymodfile
        08:30:24 342us
        root@Paradise# echo 2 > /proc/mymod/mymodfile
        root@Paradise# cat /proc/mymod/mymodfile
        up 1 day, 8 min
        root@Paradise# echo 3 > /proc/mymod/mymodfile
        root@Paradise# cat /proc/mymod/mymodfile
        process count: 48

    Please give me some hints on how to write a kernel module and how to compile and install it.
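
    A skeleton for the /proc plumbing might look like the sketch below. It uses the same older procfs interface that appears elsewhere on this page (create_proc_entry and read_proc/write_proc were removed in later kernels in favour of proc_create, so treat this as an assumption about the kernel generation); the three reports are left as stubs to fill in. It would be built with the usual obj-m Makefile and loaded with insmod.

        #include <linux/module.h>
        #include <linux/proc_fs.h>
        #include <linux/uaccess.h>

        static struct proc_dir_entry *mymod_dir, *mymod_file;
        static int selected = 1;                 /* last number written, 1..3 */

        static int mymod_read(char *page, char **start, off_t off,
                              int count, int *eof, void *data)
        {
            int len;

            switch (selected) {
            case 2:  len = sprintf(page, "uptime report goes here\n");        break;
            case 3:  len = sprintf(page, "process count report goes here\n"); break;
            default: len = sprintf(page, "time report goes here\n");          break;
            }
            *eof = 1;
            return len;
        }

        static int mymod_write(struct file *file, const char __user *buf,
                               unsigned long count, void *data)
        {
            char kbuf[4] = "";
            unsigned long len = count < sizeof(kbuf) - 1 ? count : sizeof(kbuf) - 1;

            if (copy_from_user(kbuf, buf, len))
                return -EFAULT;
            if (kbuf[0] >= '1' && kbuf[0] <= '3')
                selected = kbuf[0] - '0';
            return count;
        }

        static int __init mymod_init(void)
        {
            mymod_dir  = proc_mkdir("mymod", NULL);
            mymod_file = create_proc_entry("mymodfile", 0644, mymod_dir);
            if (!mymod_file)
                return -ENOMEM;
            mymod_file->read_proc  = mymod_read;
            mymod_file->write_proc = mymod_write;
            return 0;
        }

        static void __exit mymod_exit(void)
        {
            remove_proc_entry("mymodfile", mymod_dir);
            remove_proc_entry("mymod", NULL);
        }

        module_init(mymod_init);
        module_exit(mymod_exit);
        MODULE_LICENSE("GPL");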

    Read the article

  • Maximum number of files in one ext3 directory while still getting acceptable performance?

    - by knorv
    I have an application writing to an ext3 directory which over time has grown to roughly three million files. Needless to say, reading the file listing of this directory is unbearably slow. I don't blame ext3. The proper solution would have been to let the application code write to sub-directories such as ./a/b/c/abc.ext rather than using only ./abc.ext. I'm changing to such a sub-directory structure and my question is simply: roughly how many files should I expect to store in one ext3 directory while still getting acceptable performance? What's your experience? Or in other words; assuming that I need to store three million files in the structure, how many levels deep should the ./a/b/c/abc.ext structure be? Obviously this is a question that cannot be answered exactly, but I'm looking for a ball park estimate.

    Read the article

  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes are especially critical and they're all using RAID. To date, I've therefore been doing backups of the boxes by having a cron job upload tarballs containing the contents of /etc, MySQL dumps and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however:

        - the tarballs are sufficient to rebuild from, but it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade of one of the boxes); long-term, the process isn't sustainable
        - each of the boxes is currently producing a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day
        - partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5GB file is inflating the size of the tarball and kill it)
        - again due to the size issue, I'm leaving stuff out which it would be nice to include - the contents of users' home directories, for example. There's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway
        - there must be a better way

    So, my question is, how should I be doing this properly? The requirements are:

        - needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever)
        - should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job)
        - should continue to scale with a couple more boxes, slightly more data, etc.
        - preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing)
        - an option to produce some kind of DVD/Blu-Ray/whatever backup from time to time wouldn't be bad

    My first thought was that this kind of incremental backup was what tar was created for - create a tar file once each month, add incrementally to it, and rsync the results to the remote box. But others probably have better suggestions.

    Read the article

  • Boost program will not work on Linux

    - by Martin Lauridsen
    Hi SOF, I have this program which uses Boost::Asio for sockets. I pretty much altered some code from the Boost examples. The program compiles and runs just like it should on Windows in VS. However, when I compile the program on Linux and run it, I get a segmentation fault. I posted the code here. The command I use to compile it is this:

        c++ -I/appl/htopopt/Linux_x86_64/NTL-5.4.2/include \
            -I/appl/htopopt/Linux_x86_64/boost_1_43_0/include \
            mpqs.cpp mpqs_polynomial.cpp mpqs_host.cpp -o mpqs_host \
            -L/appl/htopopt/Linux_x86_64/NTL-5.4.2/lib -lntl \
            -L/appl/htopopt/Linux_x86_64/gmp-4.2.1/lib -lgmp -lm \
            -L/appl/htopopt/Linux_x86_64/boost_1_43_0/lib -lboost_system -lboost_thread \
            -static -lpthread

    By commenting out code, I have found out that I get the segmentation fault due to the following line:

        boost::asio::io_service io_service;

    Can anyone provide any assistance as to what may be the problem (and the solution)? Thanks! Edit: I tried changing the program to a minimal example, using no other libraries or headers, just boost/asio.hpp:

        #define DEBUG 0

        #include <boost/asio.hpp>

        int main(int argc, char* argv[])
        {
            boost::asio::io_service io_service;
            return 0;
        }

    I also removed the other library inclusions and linking on compilation; however, this minimal example still gives me a segmentation fault.

    Read the article

  • NULL pointer dereference in swiotlb_unmap_sg_attrs() on disk IO

    - by Inductiveload
    I'm getting an error I really don't understand when reading or writing files using a PCIe block device driver. I seem to be hitting an issue in swiotlb_unmap_sg_attrs(), which appears to be doing a NULL dereference of the sg pointer, but I don't know where this is coming from, as the only scatterlist I use myself is allocated as part of the device info structure and persists as long as the driver does. There is a stack trace to go with the problem. It tends to vary a bit in exact details, but it always crashes in swiotlb_unmap_sg_attrs(). I think it's likely I have a locking issue, as I am not sure how to handle the locks around the IO functions. The lock is already held when the request function is called; I release it before the IO functions themselves are called, as they need an (MSI) IRQ to complete. The IRQ handler updates a "status" value, which the IO function is waiting for. When the IO function returns, I then take the lock back up and return to request queue handling. The crash happens in blk_fetch_request() during the following:

        if (!__blk_end_request(req, res, bytes)) {
            printk(KERN_ERR "%s next request\n", DRIVER_NAME);
            req = blk_fetch_request(q);
        } else {
            printk(KERN_ERR "%s same request\n", DRIVER_NAME);
        }

    where bytes is updated by the request handler to be the total length of IO (summed length of each scatter-gather segment).

    Read the article
