Search Results

Search found 7936 results on 318 pages for 'kernel modules'.


  • How do you extend a Ruby module with macro-like metaprogramming methods?

    - by Ian Terrell
    Consider the following extension (the pattern popularized by several Rails plugins over the years):

        module Extension
          def self.included(recipient)
            recipient.extend ClassMethods
            recipient.class_eval { include InstanceMethods }
          end

          module ClassMethods
            def macro_method
              puts "Called macro_method within #{self.name}"
            end
          end

          module InstanceMethods
            def instance_method
              puts "Called instance_method within #{self.object_id}"
            end
          end
        end

    If you wish to expose this to every class, you can do the following:

        Object.send :include, Extension

    Now you can define any class and use the macro method:

        class FooClass
          macro_method
        end
        #=> Called macro_method within FooClass

    And instances can use the instance methods:

        FooClass.new.instance_method
        #=> Called instance_method within 2148182320

    But even though Module.is_a?(Object), you cannot use the macro method in a module:

        module FooModule
          macro_method
        end
        #=> undefined local variable or method `macro_method' for FooModule:Module (NameError)

    This is true even if you explicitly include the original Extension into Module with Module.send(:include, Extension). How do you add macro-like methods to Ruby modules?

    Read the article

  • How to view the GDTR's value?

    - by Mehdi Asgari
    Hi. The book "Rootkit Arsenal", on page 84 (Chapter 3), mentions: "..., we can view the contents of the target machine's descriptor registers using the command with the 0x100 mask: kd> rM 0x100", and a paragraph below: "Note that the same task can be accomplished by specifying the GDTR components explicitly: kd> r gdtr ....". I run WinDbg on my Win XP (inside VMware) and choose Kernel Debug - Local. My problem is that the first command errors with:

        lkd> rM 0x100
             ^ Operation not supported in current debug session 'rM 0x100'

    and the second command with:

        lkd> r gdtr
             ^ Bad register error in 'r gdtr'

    Can anyone guide me?

    Read the article

  • Joomla - Warning! Failed to move file error

    - by Sixfoot Studio
    Hi guys, I have found some solutions to this error and tried implementing them, but none of them has worked, so I hope someone here at SO might have a different answer. I get the error "Warning! Failed to move file" when I try to install modules into my new installation of Joomla here: http://sun-eng.sixfoot.co.za Here are some solutions I have tried to no avail: http://forum.joomla.org/viewtopic.php?f=199&t=223206 http://www.saibharadwaj.com/blog/2008/03/warning-failed-to-move-file-joomla-10x-joomla-15x/ Does anyone know of another solution to this, please? Thanks!

    Read the article

  • PHP Code (modules) included via MySQL database, good idea?

    - by ionFish
    The main script includes "modules" which add functionality to it. Each module is set up like this:

        <?php
        // data collection stuff
        // (...) approx 80 lines of code
        // end data collection
        $var1 = 'some data';
        $var2 = 'more data';
        $var3 = 'other data';
        ?>

    Each module has exactly the same variables; only the data collection is different. I was wondering if it is a reasonable idea to store the module data in MySQL like this:

        [database]
        |_modules
          |_name
          |_function (the raw PHP data from above)
          |_description
          |_author
          |_update-url
          |_version
          |_enabled

    ...and then include the PHP data from the database and execute it? Something like a tab-navigation system at the top of the page with one tab per module name; inside each tab, the page content would work by parsing the database-stored code of the module from the function field. The purpose would be to save code space (fewer lines), allow for easy updates, and include/exclude modules based on the enabled option. This is how many other web apps work, including some of my own, but I had never thought about it this deeply before. Are there any drawbacks or security risks to this?

    Read the article

  • Slow NFS and GFS2 performance

    - by Tiago
    Recently I've designed and configured a 4-node cluster for a webapp that does lots of file handling. The cluster has been broken down into 2 main roles, webserver and storage. Each role is replicated to a second server using DRBD in active/passive mode. The webserver does an NFS mount of the data directory of the storage server, and the latter also has a webserver running to serve files to browser clients. In the storage servers I created a GFS2 FS to hold the data, which is wired to DRBD. I chose GFS2 mainly because of the announced performance and also because the volume size has to be pretty high. Since we entered production I have been facing two problems that I think are deeply connected. First of all, the NFS mount on the webservers keeps hanging for a minute or so and then resumes normal operations. By analyzing the logs I found out that NFS stops answering for a while and outputs the following log lines:

        Oct 15 18:15:42 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:44 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:46 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:51 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:58 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK

    In this case, the hang lasted for 16 seconds, but sometimes it takes 1 or 2 minutes to resume normal operations. My first guess was that this was happening due to heavy load on the NFS mount and that by increasing RPCNFSDCOUNT to a higher value this would become stable.

    I've increased it several times and apparently, after a while, the log lines started appearing less often. The value is now 32. After further investigating the issue, I came across a different hang, even though the NFS messages still appear in the logs. Sometimes the GFS2 FS simply hangs, which causes both NFS and the storage webserver to stop serving files. Both stay hung for a while and then resume normal operations. These hangs leave no trace on the client side (no NFS "... not responding" messages either) and, on the storage side, the logs appear to be empty even though rsyslogd is running. The nodes connect to each other through a 10Gbps non-dedicated connection, but I don't think this is an issue because the GFS2 hang is confirmed even when connecting directly to the active storage server. I've been trying to solve this for a while now, and I tried different NFS configuration options before I found out that the GFS2 FS is also hanging. The NFS mount is exported as such:

        /srv/data/ <ip_address>(rw,async,no_root_squash,no_all_squash,fsid=25)

    And the NFS client mounts with:

        mount -o "async,hard,intr,wsize=8192,rsize=8192" active.storage.vlan:/srv/data /srv/data

    After some tests, these were the configurations that yielded the best performance for the cluster. I am desperate to find a solution for this as the cluster is already in production mode; I need to fix this so that these hangs won't happen in the future, and I don't really know for sure what and how I should be benchmarking. What I can tell is that this is happening due to heavy loads, as I tested the cluster earlier and these problems weren't happening at all. Please tell me if you need me to provide configuration details of the cluster, and which ones you want me to post. As a last resort I can migrate the files to a different FS, but I need some solid pointers on whether that will solve these problems, as the volume size is extremely large at this point. The servers are hosted by a third-party enterprise and I don't have physical access to them. Best regards. EDIT 1: The servers are physical servers and their specs are:

        Webservers:
          Intel Bi Xeon E5606 2x4 2.13GHz
          24GB DDR3
          Intel SSD 320 2 x 120GB Raid 1

        Storage:
          Intel i5 3550 3.3GHz
          16GB DDR3
          12 x 2TB SATA

    Initially there was a VRack setup between the servers, but we upgraded one of the storage servers to have more RAM and it wasn't inside the VRack. They connect through a shared 10Gbps connection between them. Please note that it is the same connection that is used for public access. They use a single IP (using IP failover) to connect between them and to allow for a graceful failover. NFS is therefore over a public connection and not under any private network (it was before the upgrade, when the problem still existed). The firewall was configured and tested thoroughly, but I disabled it for a while to see if the problem still occurred, and it did. From my knowledge the hosting provider isn't blocking or limiting the connections, either between the servers or to the public domain (at least under a given bandwidth consumption threshold that hasn't been reached yet). Hope this helps in figuring out the problem.

    EDIT 2: Relevant software versions:

        CentOS 2.6.32-279.9.1.el6.x86_64
        nfs-utils-1.2.3-26.el6.x86_64
        nfs-utils-lib-1.1.5-4.el6.x86_64
        gfs2-utils-3.0.12.1-32.el6_3.1.x86_64
        kmod-drbd84-8.4.2-1.el6_3.elrepo.x86_64
        drbd84-utils-8.4.2-1.el6.elrepo.x86_64

    DRBD configuration on the storage servers:

        #/etc/drbd.d/storage.res
        resource storage {
            protocol C;
            on <server1 fqdn> {
                device /dev/drbd0;
                disk /dev/vg_storage/LV_replicated;
                address <server1 ip>:7788;
                meta-disk internal;
            }
            on <server2 fqdn> {
                device /dev/drbd0;
                disk /dev/vg_storage/LV_replicated;
                address <server2 ip>:7788;
                meta-disk internal;
            }
        }

    NFS configuration on the storage servers:

        #/etc/sysconfig/nfs
        RPCNFSDCOUNT=32
        STATD_PORT=10002
        STATD_OUTGOING_PORT=10003
        MOUNTD_PORT=10004
        RQUOTAD_PORT=10005
        LOCKD_UDPPORT=30001
        LOCKD_TCPPORT=30001

    (can there be any conflict in using the same port for both LOCKD_UDPPORT and LOCKD_TCPPORT?) GFS2 configuration:

        # gfs2_tool gettune <mountpoint>
        incore_log_blocks = 1024
        log_flush_secs = 60
        quota_warn_period = 10
        quota_quantum = 60
        max_readahead = 262144
        complain_secs = 10
        statfs_slow = 0
        quota_simul_sync = 64
        statfs_quantum = 30
        quota_scale = 1.0000 (1, 1)
        new_files_jdata = 0

    Storage network environment:

        eth0      Link encap:Ethernet  HWaddr <mac address>
                  inet addr:<ip address>  Bcast:<bcast address>  Mask:<ip mask>
                  inet6 addr: <ip address> Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:957025127 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1473338731 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:2630984979622 (2.3 TiB)  TX bytes:1648430431523 (1.4 TiB)

        eth0:0    Link encap:Ethernet  HWaddr <mac address>
                  inet addr:<ip failover address>  Bcast:<bcast address>  Mask:<ip mask>
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

    The IP addresses are statically assigned with the given network configurations:

        DEVICE="eth0"
        BOOTPROTO="static"
        HWADDR=<mac address>
        ONBOOT="yes"
        TYPE="Ethernet"
        IPADDR=<ip address>
        NETMASK=<net mask>

    and

        DEVICE="eth0:0"
        BOOTPROTO="static"
        HWADDR=<mac address>
        IPADDR=<ip failover>
        NETMASK=<net mask>
        ONBOOT="yes"
        BROADCAST=<bcast address>

    Hosts file to allow for a graceful NFS failover in conjunction with NFS option fsid=25 set on both storage servers:

        #/etc/hosts
        <storage ip failover address> active.storage.vlan
        <webserver ip failover address> active.service.vlan

    As you can see, packet errors are down to 0. I have also run ping for a long time without any packet loss. The MTU size is the normal 1500. As there is no VLAN for now, this is the MTU used to communicate between servers. The webservers' network environment is similar. One thing I forgot to mention is that the storage servers handle ~200GB of new files each day through the NFS connection, which is a key reason for me to think this is some kind of heavy-load problem with either NFS or GFS2. If you need further configuration details, please tell me. EDIT 3: Earlier today we had a major filesystem crash on the storage server. I couldn't get the details of the crash right away because the server stopped responding. After the reboot, I noticed the filesystem was extremely slow and I was not able to serve a single file through either NFS or httpd, perhaps due to cache warming or so. Nevertheless, I've been monitoring the server closely and the following error came up in dmesg. The source of the problem is clearly GFS, which is waiting for a lock and ends up starving after a while.

    INFO: task nfsd:3029 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. nfsd D 0000000000000000 0 3029 2 0x00000080 ffff8803814f79e0 0000000000000046 0000000000000000 ffffffff8109213f ffff880434c5e148 ffff880624508d88 ffff8803814f7960 ffffffffa037253f ffff8803815c1098 ffff8803814f7fd8 000000000000fb88 ffff8803815c1098 Call Trace: [<ffffffff8109213f>] ? wake_up_bit+0x2f/0x40 [<ffffffffa037253f>] ? gfs2_holder_wake+0x1f/0x30 [gfs2] [<ffffffff814ff42e>] __mutex_lock_slowpath+0x13e/0x180 [<ffffffff814ff2cb>] mutex_lock+0x2b/0x50 [<ffffffffa0379f21>] gfs2_log_reserve+0x51/0x190 [gfs2] [<ffffffffa0390da2>] gfs2_trans_begin+0x112/0x1d0 [gfs2] [<ffffffffa0369b05>] ? gfs2_dir_check+0x35/0xe0 [gfs2] [<ffffffffa0377943>] gfs2_createi+0x1a3/0xaa0 [gfs2] [<ffffffff8121aab1>] ? avc_has_perm+0x71/0x90 [<ffffffffa0383d1e>] gfs2_create+0x7e/0x1a0 [gfs2] [<ffffffffa037783f>] ? gfs2_createi+0x9f/0xaa0 [gfs2] [<ffffffff81188cf4>] vfs_create+0xb4/0xe0 [<ffffffffa04217d6>] nfsd_create_v3+0x366/0x4c0 [nfsd] [<ffffffffa0429703>] nfsd3_proc_create+0x123/0x1b0 [nfsd] [<ffffffffa041a43e>] nfsd_dispatch+0xfe/0x240 [nfsd] [<ffffffffa025a5d4>] svc_process_common+0x344/0x640 [sunrpc] [<ffffffff810602a0>] ? default_wake_function+0x0/0x20 [<ffffffffa025ac10>] svc_process+0x110/0x160 [sunrpc] [<ffffffffa041ab62>] nfsd+0xc2/0x160 [nfsd] [<ffffffffa041aaa0>] ? nfsd+0x0/0x160 [nfsd] [<ffffffff81091de6>] kthread+0x96/0xa0 [<ffffffff8100c14a>] child_rip+0xa/0x20 [<ffffffff81091d50>] ? kthread+0x0/0xa0 [<ffffffff8100c140>] ? child_rip+0x0/0x20

    Read the article

  • Using a more recent kernel for Xen dom0 in production

    - by thelsdj
    Does anyone have experience running Xen dom0 on a more recent kernel than the stock 2.6.18? What host distro are you running? What release of Xen (or hg/git changeset)? What set of patches are you using on the kernel source? (Has anyone got the pvops dom0 stuff working in production, or is it better to stick with something like the SUSE patches?) Any other tips and tricks for running a more recent kernel version as dom0 would be helpful.

    Read the article

  • How to disable 3G USB Modem internal storage from being loaded by linux kernel?

    - by Krystian
    Hi, I've got a problem with my 3G modem [Huawei E122]. It has internal storage and the kernel assigns a device [/dev/sdX] to it. Because of that, every second time my machine will not boot - kernel panic - as my USB HDD gets assigned /dev/sdb instead of /dev/sda. I cannot use LABEL or UUID in the root= kernel parameter, as those are only available when using an initrd, and I can't use one - I am using Debian on my router, a MIPS-architecture machine. I have to prevent this from happening, as my router has to start every day and I have to be sure it works OK. I don't have physical access to restart it when something goes wrong. I don't use my modem's internal storage; there's no SD card inserted. However, the kernel detects the reader and loads it. I cannot prevent loading of USB storage drivers, since my HDD is on USB as well. I will appreciate any ideas.

    Read the article

  • How to force two processes to run on the same CPU?

    - by kovan
    Context: I'm programming a software system that consists of multiple processes. It is programmed in C++ under Linux, and the processes communicate among themselves using Linux shared memory. Usually, in software development, performance optimization is done in the final stage. Here I came to a big problem. The software has high performance requirements, but on machines with 4 or 8 CPU cores (usually with more than one CPU), it was only able to use 3 cores, thus wasting 25% of the CPU power on the first ones, and more than 60% on the second ones. After much research, and having discarded mutex and lock contention, I found out that the time was being wasted on shmdt/shmat calls (detach and attach to shared memory segments). After some more research, I found out that these CPUs, which usually are AMD Opteron and Intel Xeon, use a memory system called NUMA, which basically means that each processor has its own fast "local memory", and accessing memory from other CPUs is expensive. After doing some tests, the problem seems to be that the software is designed so that, basically, any process can pass shared memory segments to any other process, and to any thread in them. This seems to kill performance, as processes are constantly accessing memory from other processes. Question: Now, the question is, is there any way to force pairs of processes to execute on the same CPU? I don't mean to force them to always execute on the same processor, as I don't care which one they are executed on, although that would do the job. Ideally, there would be a way to tell the kernel: if you schedule this process on one processor, you must also schedule this "brother" process (which is the process with which it communicates through shared memory) on that same processor, so that performance is not penalized.
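
    One way to experiment with this (a minimal sketch under assumptions, not the "pair scheduling" hint the question asks the kernel for): force both processes of a communicating pair onto the same CPU with sched_setaffinity(2). The CPU number and the peer PID below are hypothetical placeholders.

        /* Minimal sketch: restrict the calling process and a "brother" process
         * to one CPU so both ends of the shared-memory channel stay on the same
         * NUMA node. cpu and peer_pid are placeholders. */
        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>
        #include <sys/types.h>

        static int pin_to_cpu(pid_t pid, int cpu)
        {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(cpu, &set);               /* allow only this CPU */
            return sched_setaffinity(pid, sizeof(set), &set);
        }

        int main(void)
        {
            int   cpu      = 2;               /* hypothetical CPU shared by the pair */
            pid_t peer_pid = 1234;            /* hypothetical PID of the brother process */

            if (pin_to_cpu(0, cpu) != 0)      /* 0 means the calling process */
                perror("sched_setaffinity(self)");
            if (pin_to_cpu(peer_pid, cpu) != 0)
                perror("sched_setaffinity(peer)");
            return 0;
        }

    Coarser-grained alternatives in the same spirit are numactl and cpusets, which constrain groups of processes to a NUMA node rather than to a single core.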

    Read the article

  • Contributing to a Linux distribution

    - by Big Al
    I'm interested in contributing to a Linux distro, but regarding the various distros' developer communities, I'm having a bit of trouble figuring out which one I'd most like to join. Languages I know: C, C++, Lua, Python, and I'm fairly familiar with Perl (though I wouldn't say I "know" it). In particular, I have very little experience with x86 assembly besides hacking stuff together for performance tweaks, though that will be partially rectified soon. What I'm looking for: a community that provides plenty of opportunities for developers to work on various aspects of the distribution. To be honest, I'm most interested in reading and working on the kernel source (in which case the distro doesn't matter), but it's pretty daunting, and I figure getting into the Linux community and working with experienced Linux developers might give me a better idea of how to jump into the guts (let me know if this is bogus, or if you have any advice regarding that). So... which distro has the "best" developer community in terms of organization, people who are fun to work with, and opportunities to contribute? I've read various "Contributing to XXX" pages and mailing lists for distros like Ubuntu, OpenSuse, Fedora, etc., but I'd rather get a more personal testament from an actual developer.

    Read the article

  • Linux System Programming

    - by AJ
    I want to get into systems programming for Linux and would like to know how to approach that and where to begin. I come from a web development background (Python, PHP) but I also know some C and C++. Essentially, I would like to know: Which language(s) to learn and pursue (I think mainly C and C++)? How/where to learn those languages specifically for systems programming? Books, websites, blogs, tutorials, etc. Any other good places to start from the basics? Any good libraries to begin with? What environment setup (or approximately) do I need? Assuming Linux has to be there, I have a Linux box which I rarely log into using a GUI (I always use SSH). Is a GUI a lot more helpful, or is the vi editor enough? (Please let me know if this part of the question should go to serverfault.com.) PS: Just to clarify, by systems programming I mean things like writing device drivers and system tools, writing native applications that are not present on the Linux platform but are on others, playing with the Linux kernel, etc.
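
    Since playing with the Linux kernel is on the list: the classic first exercise in most Linux systems programming material is a hello-world loadable module. A minimal sketch (built out of tree with a kbuild Makefile containing obj-m += hello.o, loaded with insmod and inspected with dmesg):

        /* hello.c - minimal loadable kernel module sketch */
        #include <linux/init.h>
        #include <linux/kernel.h>
        #include <linux/module.h>

        MODULE_LICENSE("GPL");

        static int __init hello_init(void)
        {
            printk(KERN_INFO "hello: module loaded\n");
            return 0;
        }

        static void __exit hello_exit(void)
        {
            printk(KERN_INFO "hello: module unloaded\n");
        }

        module_init(hello_init);
        module_exit(hello_exit);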

    Read the article

  • Linux 2.6.31 Scheduler and Multithreaded Jobs

    - by dsimcha
    I run massively parallel scientific computing jobs on a shared Linux computer with 24 cores. Most of the time my jobs are capable of scaling to 24 cores when nothing else is running on this computer. However, it seems like when even one single-threaded job that isn't mine is running, my 24-thread jobs (which I set for high nice values) only manage to get ~1800% CPU (using Linux notation). Meanwhile, about 500% of the CPU cycles (again, using Linux notation) are idle. Can anyone explain this behavior and what I can do about it to get all of the 23 cores that aren't being used by someone else? Notes: In case it's relevant, I have observed this on slightly different kernel versions, though I can't remember which off the top of my head. The CPU architecture is x64. Is it at all possible that the fact that my 24-core jobs are 32-bit and the other jobs I'm competing w/ are 64-bit is relevant? Edit: One thing I just noticed is that going up to 30 threads seems to alleviate the problem to some degree. It gets me up to ~2100% CPU.

    Read the article

  • Multithreading and Interrupts

    - by Nicholas Flynt
    I'm doing some work on the input buffers for my kernel, and I had some questions. On dual-core machines, I know that more than one "process" can be running simultaneously. What I don't know is how the OS and the individual programs work to protect against collisions in data. There are two things I'd like to know on this topic: (1) Where do interrupts occur? Are they guaranteed to occur on one core and not the other, and could this be used to make sure that real-time operations on one core were not interrupted by, say, file IO, which could be handled on the other core? (I'd logically assume that the interrupts would happen on the 1st core, but is that always true, and how would you tell? Or perhaps does each core have its own settings for interrupts? Wouldn't that lead to a scenario where each core could react simultaneously to the same interrupt, possibly in different ways?) (2) How does a dual-core processor handle an opcode memory collision? If one core is reading an address in memory at exactly the same time that another core is writing to that same address, what happens? Is an exception thrown, or is a value read? (I'd assume the write would work either way.) If a value is read, is it guaranteed to be either the old or the new value at the time of the collision? I understand that programs should ideally be written to avoid these kinds of complications, but the OS certainly can't expect that, and will need to be able to handle such events without choking on itself.
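
    As a reference point for question (1) on Linux specifically: interrupt routing is per-IRQ and configurable rather than fixed to the first core; /proc/interrupts shows one delivery counter column per CPU, and /proc/irq/<n>/smp_affinity controls which CPUs an IRQ may be delivered to. A trivial sketch that just dumps that table:

        /* Minimal sketch: print /proc/interrupts, whose columns are
         * IRQ number, one delivery count per CPU, controller, and device. */
        #include <stdio.h>

        int main(void)
        {
            FILE *f = fopen("/proc/interrupts", "r");
            if (!f) {
                perror("/proc/interrupts");
                return 1;
            }

            char line[512];
            while (fgets(line, sizeof(line), f))
                fputs(line, stdout);
            fclose(f);
            return 0;
        }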

    Read the article

  • What is the best way to save the environment from before an alarm handler goes off when the alarm do

    - by EpsilonVector
    I'm trying to implement user threads on a 2.4 Linux kernel (homework), and the trick for context switching seems to be using an alarm that goes off every x milliseconds and sends us to an alarm handler from which we can longjmp to the next thread. What I'm having difficulty with is figuring out how to save the environment to return to later. Basically I have an array of jmp_bufs, and every time a "context switch" using the alarm happens, I want to save the previous context to the appropriate entry of the array and longjmp to the next one. However, the fact that I need to do this from the signal handler means that just using setjmp in the handler won't give me exactly the kind of environment I want (as far as the stack and program counter are concerned), because the stack has the handler call on it and the PC is inside the handler. I suppose I can look at the stack and alter it to fit my needs, but that feels a bit cumbersome. Another idea I had is to somehow pass the environment from before the jump to the handler as a parameter to the handler, but I can't figure out if this is possible. So I guess my question is: how do I do this right?
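
    For what it's worth, toy user-thread libraries often accept that the saved context points into the handler: if every thread is suspended and resumed through the same handler path (and each thread has its own stack), jumping back into a context saved inside the handler simply resumes that handler invocation, which then returns into the thread it originally interrupted. A minimal, incomplete sketch of that idea with sigsetjmp/siglongjmp (MAX_THREADS, current and next_thread() are hypothetical placeholders, and per-thread stack setup is omitted):

        /* Sketch of a SIGALRM handler that saves the running thread's context
         * with sigsetjmp and jumps to the next one; savesigs=1 also saves and
         * restores the signal mask. Not a complete library. */
        #include <setjmp.h>
        #include <signal.h>

        #define MAX_THREADS 16

        static sigjmp_buf ctx[MAX_THREADS];
        static volatile int current = 0;

        static int next_thread(int cur) { return (cur + 1) % MAX_THREADS; }

        static void alarm_handler(int sig)
        {
            (void)sig;
            /* sigsetjmp returns 0 when saving, nonzero when resumed later */
            if (sigsetjmp(ctx[current], 1) == 0) {
                current = next_thread(current);
                siglongjmp(ctx[current], 1);   /* resume the next thread */
            }
            /* reached when this thread is resumed; returning from the handler
             * continues the thread where the alarm interrupted it */
        }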

    Read the article

  • How frequently are IP packets fragmented at the source host?

    - by Methos
    I know that if the IP payload > MTU, then routers usually fragment the IP packet. Finally, all the fragments are reassembled at the destination using the IP-ID field, the IP fragment offsets, and the fragmentation flags. The max length of an IP payload is 64K. Thus it's very plausible for L4 to hand over a payload which is 64K. If the L2 protocol is Ethernet, which often is the case, then the MTU will be about 1600 bytes. Hence the IP packet will be fragmented at the source host itself. However, a quick search about the IP implementation in Linux tells me that in recent kernels, L4 protocols are fragmentation-friendly, i.e. they try to save IP the fragmentation work by handing over buffers whose size is close to the MTU. Considering these two facts, I am wondering how frequently an IP packet gets fragmented at the source host itself. Does it occur sometimes/rarely/never? Does anyone know if there are exceptions to the rule of fragmentation in the Linux kernel (i.e. are there situations where L4 protocols are not fragmentation-friendly)? How is this handled in other common OSes like Windows? In general, how frequently are IP packets fragmented?
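
    As a small observation tool on Linux (a sketch, not a statement about any particular kernel version): a connected UDP socket can be told not to fragment locally with IP_MTU_DISCOVER/IP_PMTUDISC_DO, and the kernel's current estimate of the path MTU can be read back with getsockopt(IP_MTU); sends larger than that then fail with EMSGSIZE instead of being fragmented at the source. The destination address below is a placeholder.

        /* Minimal sketch: enable path-MTU discovery on a UDP socket and print
         * the cached path MTU toward a placeholder destination (192.0.2.1:9). */
        #include <arpa/inet.h>
        #include <netinet/in.h>     /* IP_MTU_DISCOVER, IP_PMTUDISC_DO, IP_MTU */
        #include <stdio.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void)
        {
            int s = socket(AF_INET, SOCK_DGRAM, 0);
            int pmtu = IP_PMTUDISC_DO;          /* set DF, never fragment locally */
            setsockopt(s, IPPROTO_IP, IP_MTU_DISCOVER, &pmtu, sizeof(pmtu));

            struct sockaddr_in dst;
            memset(&dst, 0, sizeof(dst));
            dst.sin_family = AF_INET;
            dst.sin_port   = htons(9);          /* discard port, placeholder */
            inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);
            connect(s, (struct sockaddr *)&dst, sizeof(dst));

            int mtu = 0;
            socklen_t len = sizeof(mtu);
            if (getsockopt(s, IPPROTO_IP, IP_MTU, &mtu, &len) == 0)
                printf("path MTU toward destination: %d\n", mtu);
            close(s);
            return 0;
        }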

    Read the article

  • one bidirectional TCP socket or two unidirectional ones? (linux, high volume, low latency)

    - by osgx
    Hello. I need to send (interchange) a high volume of data periodically with the lowest possible latency between 2 machines. The network is rather fast (e.g. 1Gbit or even 2G+). The OS is Linux. Is it faster to use 1 TCP socket (for send and recv) or to use 2 unidirectional TCP sockets? The test for this task is very much like the NetPIPE network benchmark - measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, each size sent and received 3 times at least (in the real task the number of sends is greater; both processes will be sending and receiving, like ping-pong maybe). The benefit of 2 unidirectional connections comes from the Linux source: http://lxr.linux.no/linux+v2.6.18/net/ipv4/tcp_input.c#L3847

        3847 /*
        3848  * TCP receive function for the ESTABLISHED state.
        3849  *
        3850  * It is split into a fast path and a slow path. The fast path is
        3851  * disabled when:
        ...
        3859  * - Data is sent in both directions. Fast path only supports pure senders
        3860  *   or pure receivers (this means either the sequence number or the ack
        3861  *   value must stay constant)
        ...
        3863  *
        3864  * When these conditions are not satisfied it drops into a standard
        3865  * receive procedure patterned after RFC793 to handle all cases.
        3866  * The first three cases are guaranteed by proper pred_flags setting,
        3867  * the rest is checked inline. Fast processing is turned on in
        3868  * tcp_data_queue when everything is OK.

    All the other conditions for disabling the fast path are false; only a socket that is not unidirectional keeps the kernel off the receive fast path.
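
    For reference, a minimal sketch of the kind of ping-pong loop the question describes, over one bidirectional socket; fd is assumed to be an already-connected TCP socket whose peer echoes every message back, and the one-way latency estimate would be total/(2*iters). A two-socket variant would write on one fd and read the echo from the other.

        /* Sketch: time 'iters' round trips of 'size'-byte messages on a
         * connected TCP socket. Returns elapsed seconds, or -1 on error. */
        #include <string.h>
        #include <sys/types.h>
        #include <time.h>
        #include <unistd.h>

        double pingpong_seconds(int fd, size_t size, int iters)
        {
            char buf[8192];                       /* >= 2^13, the largest test size */
            struct timespec t0, t1;
            memset(buf, 0, sizeof(buf));

            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (int i = 0; i < iters; i++) {
                if (write(fd, buf, size) != (ssize_t)size)
                    return -1.0;
                size_t got = 0;                   /* wait for the echoed "pong" */
                while (got < size) {
                    ssize_t n = read(fd, buf + got, size - got);
                    if (n <= 0)
                        return -1.0;
                    got += (size_t)n;
                }
            }
            clock_gettime(CLOCK_MONOTONIC, &t1);
            return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        }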

    Read the article

  • Using sigprocmask to implement locks

    - by EpsilonVector
    I'm implementing user threads in Linux kernel 2.4, and I'm using ualarm to invoke context switches between the threads. We have a requirement that our thread library's functions should be uninterruptible, so I looked into blocking signals and learned that using sigprocmask is the standard way to do this. However, it looks like I need to do quite a lot to implement this:

        sigset_t new_set, old_set;
        sigemptyset(&new_set);
        sigaddset(&new_set, SIGALRM);
        sigprocmask(SIG_BLOCK, &new_set, &old_set);

    This blocks SIGALRM, but it does it with 3 function invocations! A lot can happen in the time it takes for these functions to run, including the signal being sent. The best idea I had to mitigate this was to temporarily disable ualarm, like this:

        sigset_t new_set, old_set;
        time = ualarm(0, 0);
        sigemptyset(&new_set);
        sigaddset(&new_set, SIGALRM);
        sigprocmask(SIG_BLOCK, &new_set, &old_set);
        ualarm(time, 0);

    which is fine, except that it feels verbose. Isn't there a better way to do this?
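
    One way to package this (an assumption about structure, not the poster's library): build the SIGALRM mask once at initialization, after which each critical section costs a single sigprocmask() call on entry and one on exit. Note that sigemptyset/sigaddset only edit a local sigset_t in user memory; the process signal mask changes only at the sigprocmask() call itself, so the multi-call setup does not widen the window in which the alarm can slip in.

        /* Sketch: reusable block/unblock helpers around the thread library's
         * critical sections. */
        #include <signal.h>

        static sigset_t alarm_set;

        void lock_init(void)                        /* call once at library init */
        {
            sigemptyset(&alarm_set);
            sigaddset(&alarm_set, SIGALRM);
        }

        void enter_critical(sigset_t *saved)        /* block SIGALRM, remember old mask */
        {
            sigprocmask(SIG_BLOCK, &alarm_set, saved);
        }

        void leave_critical(const sigset_t *saved)  /* restore the previous mask */
        {
            sigprocmask(SIG_SETMASK, saved, NULL);
        }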

    Read the article

  • Can't open device

    - by yoavstr
    I have a little problem. I installed this module into my kernel and it is registered under /proc. When I try to open() it from user mode I get the following message: "Can't open device file: my_dev"

        static int module_permission(struct inode *inode, int op, struct nameidata *foo)
        {
            // if it's a write
            if ((op == 2) && (writer == DOESNT_EXIST)) {
                writer = EXIST;
                return 0;
            }
            // if it's a read
            if (op == 4) {
                numOfReaders++;
                return 0;
            }
            return -EACCES;
        }

        int procfs_open(struct inode *inode, struct file *file)
        {
            try_module_get(THIS_MODULE);
            return 0;
        }

        static struct file_operations File_Ops_4_Our_Proc_File = {
            .read    = procfs_read,
            .write   = procfs_write,
            .open    = procfs_open,
            .release = procfs_close,
        };

        static struct inode_operations Inode_Ops_4_Our_Proc_File = {
            .permission = module_permission,   /* check for permissions */
        };

        int init_module()
        {
            /* create the /proc file */
            Our_Proc_File = create_proc_entry(PROC_ENTRY_FILENAME, 0644, NULL);

            /* check if the /proc file was created successfully */
            if (Our_Proc_File == NULL) {
                printk(KERN_ALERT "Error: Could not initialize /proc/%s\n",
                       PROC_ENTRY_FILENAME);
                return -ENOMEM;
            }

            Our_Proc_File->owner = THIS_MODULE;
            Our_Proc_File->proc_iops = &Inode_Ops_4_Our_Proc_File;
            Our_Proc_File->proc_fops = &File_Ops_4_Our_Proc_File;
            Our_Proc_File->mode = S_IFREG | S_IRUGO | S_IWUSR;
            Our_Proc_File->uid = 0;
            Our_Proc_File->gid = 0;
            Our_Proc_File->size = 80;

            // I added this to init the writer status
            writer = DOESNT_EXIST;
            numOfReaders = 0;

            printk(KERN_INFO "/proc/%s created\n", PROC_ENTRY_FILENAME);
            return 0;
        }
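
    A quick way to narrow this down from user space (a sketch, not the poster's test program): open the proc entry and print errno, which distinguishes the -EACCES returned by module_permission() from other failures. The file name below is a placeholder for whatever PROC_ENTRY_FILENAME expands to.

        /* Sketch: open the proc entry read-only and report the exact errno. */
        #include <errno.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("/proc/my_dev", O_RDONLY);   /* placeholder name */
            if (fd < 0) {
                fprintf(stderr, "open failed: %s (errno=%d)\n", strerror(errno), errno);
                return 1;
            }
            close(fd);
            return 0;
        }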

    Read the article

  • System Calls in Windows & Native API?

    - by claws
    Recently I've been using a lot of assembly language on *NIX operating systems. I was wondering about the Windows domain. Calling convention in Linux:

        mov $SYS_Call_NUM, %eax
        mov $param1, %ebx
        mov $param2, %ecx
        int $0x80

    That's it. That is how we make a system call in Linux. Reference for all system calls in Linux: regarding which $SYS_Call_NUM and which parameters to use, we can use this reference: http://docs.cs.up.ac.za/programming/asm/derick_tut/syscalls.html Official reference: http://kernel.org/doc/man-pages/online/dir_section_2.html Calling convention in Windows: ??? Reference for all system calls in Windows: ??? Unofficial: http://www.metasploit.com/users/opcode/syscalls.html, but how do I use these in assembly unless I know the calling convention? Official: ??? If you say they didn't document it, then how is one going to write a libc for Windows without knowing the system calls? How is one going to do Windows assembly programming? At least in driver programming one needs to know these, right? Now, what's up with the so-called Native API? Are "Native API" and "system calls for Windows" different terms referring to the same thing? In order to confirm, I compared these from two unofficial sources: System calls: http://www.metasploit.com/users/opcode/syscalls.html Native API: http://undocumented.ntinternals.net/aindex.html My observations: All system calls begin with the letters Nt, whereas the Native API consists of many functions which do not begin with the letters Nt. The system calls of Windows are a subset of the Native API; system calls are just part of the Native API. Can anyone confirm this and explain?
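
    For context on the user-mode side of this: on Windows, applications normally reach system services through the Nt* stubs exported by ntdll.dll rather than by issuing the trap instruction themselves, because the raw system call numbers change between Windows builds. A minimal sketch that resolves one such stub at run time (an illustration of the mechanism, not a recommendation to bypass the Win32 API):

        /* Sketch: call NtClose via its ntdll.dll export instead of CloseHandle. */
        #include <windows.h>
        #include <stdio.h>

        typedef LONG (NTAPI *NtClose_t)(HANDLE Handle);   /* NTSTATUS is a LONG */

        int main(void)
        {
            HMODULE ntdll = GetModuleHandleA("ntdll.dll");     /* always loaded */
            if (!ntdll) return 1;

            NtClose_t pNtClose = (NtClose_t)GetProcAddress(ntdll, "NtClose");
            if (!pNtClose) return 1;

            HANDLE ev = CreateEventA(NULL, FALSE, FALSE, NULL);
            LONG status = pNtClose(ev);                        /* 0 == STATUS_SUCCESS */
            printf("NtClose returned 0x%08lx\n", (unsigned long)status);
            return 0;
        }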

    Read the article

  • Catching the return of main function before it deallocates resources

    - by EpsilonVector
    I'm trying to implement user threads in Linux kernel 2.4, and I ran into something problematic and unexpected. Background: a thread basically executes a single function and dies, except that when I call thread_create for the first time it must turn main() into a thread as well (by default it is not a thread until the first call, which is also when all the related data structures are allocated). Since a thread executes a function and dies, we don't need to "return" anywhere with it, but we do need to save the return value to be reclaimed later with thread_join, so the hack I came up with was this: when I allocate the thread stack, I place a return address that points to a thread_return_handler function, which deallocates the thread, makes it a zombie, and saves its return value for later. This works for "just run a function and die" threads, but is very problematic with the main thread. Since it actually is the main function, if it returns before the other threads finish, the normal return mechanism kicks in and deallocates all the shared resources, thus screwing up all the running threads. I need to keep it from doing that. Any ideas on how it can be done?
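
    One possible direction, sketched under assumptions (hypothetical helpers, not the poster's library): exit() runs atexit() handlers before the C runtime tears the process down, so a handler registered when main() is adopted as a thread can refuse to return until the other user threads have finished, while the SIGALRM-driven scheduler keeps switching away from it. live_threads is a counter the library would have to maintain.

        /* Sketch: park a returning main() until all other user threads exit. */
        #include <stdlib.h>
        #include <unistd.h>

        static volatile int live_threads;        /* maintained by the library (assumption) */

        static void main_thread_exit_hook(void)
        {
            while (live_threads > 0)
                pause();                         /* each SIGALRM may switch to another thread */
        }

        void adopt_main_as_thread(void)          /* called from the first thread_create() */
        {
            atexit(main_thread_exit_hook);
        }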

    Read the article

  • Pass Arguments to Included Module in Ruby?

    - by viatropos
    I'm hoping to implement something like all of the great plugins out there for Ruby, so that you can do this:

        acts_as_commentable
        has_attached_file :avatar

    But I have one constraint: that helper method can only include a module; it can't define any variables or methods. The reason for this is that I want the options hash to define something like type, and that could be converted into one of, say, 20 different 'workhorse' modules, all of which I could sum up in a line like this:

        def dynamic_method(options = {})
          include ("My::Helpers::#{options[:type].to_s.camelize}").constantize(options)
        end

    Then those 'workhorses' would handle the options, doing things like:

        has_many "#{options[:something]}"

    Here's what the structure looks like, and I'm wondering if you know the missing piece in the puzzle:

        # 1 - The workhorse, encapsulating all dynamic variables
        module My::Module
          def self.included(base)
            base.extend ClassMethods
            base.class_eval do
              include InstanceMethods
            end
          end

          module InstanceMethods
            self.instance_eval %Q?
              def #{options[:my_method]}
                "world!"
              end
            ?
          end

          module ClassMethods
          end
        end

        # 2 - all this does is define that helper method
        module HelperModule
          def self.included(base)
            base.extend(ClassMethods)
          end

          module ClassMethods
            def dynamic_method(options = {})
              # don't know how to get options through!
              include My::Module(options)
            end
          end
        end

        # 3 - send it to active_record
        ActiveRecord::Base.send(:include, HelperModule)

        # 4 - what it looks like
        class TestClass < ActiveRecord::Base
          dynamic_method :my_method => "hello"
        end

        puts TestClass.new.hello #=> "world!"

    That %Q? I'm not totally sure how to use, but I'm basically just wanting to somehow be able to pass the options hash from that helper method into the workhorse module. Is that possible? That way, the workhorse module could define all sorts of functionality, but I could name the variables whatever I wanted at runtime.

    Read the article

  • Zend Framework - How do I make a hierarchy without it being a module?

    - by Josh
    Here is my specific issue. I want to make an api level under which you can select which method you will use. For example:

        test.com/api/rest
        test.com/api/xmlprc

    Currently I have api mapping to a module directory. I then set up a route to make it a REST route. test.com/api is a REST route, but I would rather have it be test.com/api/rest. This would allow me to add others later. In Bootstrap.php:

        $front = Zend_Controller_Front::getInstance();
        $router = $front->getRouter();
        $route = new Zend_Controller_Router_Route(
            'api/:module/:controller/:id/*',
            array('module' => 'default')
        );
        $router->addRoute('api', $route);
        $restRoute = new Zend_Rest_Route($front, array(), array(
            'rest'
        ));
        $router->addRoute('rest', $restRoute);
        return $router;

    In application.ini:

        resources.frontController.moduleDirectory = APPLICATION_PATH "/modules"

    I know it will involve routes, but sometimes I find the Zend Framework documentation a little hard to follow. When I go to test.com/rest/controller/ it works as it should, but if I go to test.com/api/rest/ it tells me that my module is api and my controller is rest.

    Read the article

  • Developing Modular Flex Applications

    - by ukdavo
    Hi there, I'd like to understand how to develop a Flex application such that I could provide implementation classes at runtime. In the Java world I'd specify interfaces in a JAR (e.g. myapp-api.jar), the implementation in a separate JAR (e.g. myapp-impl.jar), and package these along with other resources in the application WAR (e.g. myapp.war). Within the application code I would instantiate the implementation classes dynamically. Is this approach possible in Flex? I'm aware that I can instantiate classes dynamically, so that's a good start. I'm a bit confused by modules, RSLs and SWCs, though. I was hoping to create a SWF application that had references to an interface SWC and an implementation SWC. The idea is that if I need to tweak the application for a specific customer, then I could create a new implementation SWC and not have to modify the SWF or the interface SWC. Any ideas?

    Read the article

  • Linux kernel: stable version 2.6.38 is available, and it optimizes the resolution function of the VFS layer

    Linux kernel: stable version 2.6.38 is available. It optimizes the resolution function of the VFS layer. The stable version 2.6.38 of the Linux kernel has just been made available and officially announced by Linus Torvalds. This version brings important performance improvements. The kernel now integrates transparent support for "huge pages" (THP), which yields better performance on workloads that require a lot of memory (think of JVM servers and database servers). THP uses large memory pages (2 MB) as opposed to the traditional 4 KB pages. Another new feature is the...

    Read the article
