Search Results

Search found 4448 results on 178 pages for 'kernel'.

  • TMS320C64x Quick start reference for programmers

    - by osgx
    Hello. Is there any quickstart guide for programmers on writing DSP-accelerated applications for the TMS320C64x? I have a program with a custom algorithm (not FFT or the usual filtering) and I want to accelerate it using a multi-DSP coprocessor. So, how should I modify the source to move computation from the main CPU to the DSPs? What limitations are there for DSP-hosted code? I have some experience with CUDA. In CUDA I have to mark every function as being host, device, or an entry point for the device (kernel). There are also functions to start kernels and to upload/download data to/from the GPU, and some limitations for device code, described in the CUDA reference manual. I hope there is a similar interface and documentation for the DSP.

    Read the article

  • Low level qemu based debugging

    - by Dacav
    I have to test some low-level code on an ARM architecture. Typically, experimentation is quite complicated on the real board, so I was thinking about QEMU. What I'd like to get is some kind of debugging information, like printfs or gdb. I know this is simple with Linux, since it implements both the device driver for the QEMU Integrator and the gdb feature, but I'm not working with Linux. I also suspect that extracting this kind of functionality from the Linux kernel source code would be complicated. I'm searching for some simple operating system that already implements one of those features. Do you have any advice? Thanks in advance.
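
    One option not mentioned in the question: QEMU's ARM system emulation supports semihosting, which gives bare-metal code printf-style output with no OS support at all. A minimal sketch, assuming QEMU is started with -semihosting and the code is compiled for ARM (not Thumb) mode:

        /* SYS_WRITE0 (0x04) prints a NUL-terminated string through the
         * semihosting trap, which QEMU intercepts when run with -semihosting */
        static void semihost_puts(const char *s)
        {
            register int op asm("r0") = 0x04;       /* SYS_WRITE0 */
            register const char *arg asm("r1") = s;
            __asm__ volatile ("svc 0x123456"        /* ARM-mode trap number */
                              : : "r"(op), "r"(arg) : "memory");
        }

    For gdb, qemu-system-arm -s -S opens a gdb stub on TCP port 1234 and waits, which needs no target-side support at all.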

    Read the article

  • Configurable ruby logger setup: Logger.new().level = variable

    - by Daniel
    Hi, I want to change the logging level of an application (Ruby).

        require 'logger'
        config = { :level => 'Logger::WARN' }
        log = Logger.new STDOUT
        log.level = Kernel.const_get config[:level]

    Well, irb wasn't happy with that and threw "NameError: wrong constant name Logger::WARN" in my face. Ugh! I was insulted. I could solve this with a case/when, or do log.level = 1, but there must be a more elegant way! Does anyone have any ideas? -daniel
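
    For what it's worth, the NameError comes up because const_get takes a single constant name, not a namespaced path like 'Logger::WARN'. A minimal sketch of one way around it, storing just the unqualified name and resolving it against Logger:

        require 'logger'

        config = { :level => 'WARN' }                 # just the constant name
        log = Logger.new(STDOUT)
        log.level = Logger.const_get(config[:level])  # resolves Logger::WARN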

    Read the article

  • Boost Thread Specific Storage Question (boost/thread/tss.hpp)

    - by Hassan Syed
    The Boost threading library has an abstraction for thread-specific (local) storage. I have skimmed over the source code, and it seems that the TSS functionality can be used in an application with any existing thread, regardless of whether it was created from boost::thread --i.e., this implies that certain callbacks are registered with the kernel to hook in a callback function that may call the destructor of any TSS objects when the thread or process goes out of scope. I have found these callbacks. I need to cache HMAC_CTXs from OpenSSL inside the worker threads of various web servers (see this detailed question for what I am trying to do), and as such I do not control the lifetime of the thread -- the web server does. Therefore I will use the TSS functionality on threads not created by boost::thread. I just wanted to validate my assumptions before I started implementing the caching logic: are there any flaws in my logic?
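
    For reference, a minimal sketch of this kind of cache, using boost::thread_specific_ptr with a custom cleanup function; the OpenSSL 1.0-era HMAC_CTX API is assumed here:

        #include <boost/thread/tss.hpp>
        #include <openssl/hmac.h>

        static void destroy_ctx(HMAC_CTX *ctx)
        {
            HMAC_CTX_cleanup(ctx);  // runs at thread exit, boost-created or not
            delete ctx;
        }

        static boost::thread_specific_ptr<HMAC_CTX> tls_ctx(destroy_ctx);

        HMAC_CTX *get_thread_ctx()
        {
            if (!tls_ctx.get()) {   // first use on this worker thread
                HMAC_CTX *ctx = new HMAC_CTX;
                HMAC_CTX_init(ctx);
                tls_ctx.reset(ctx);
            }
            return tls_ctx.get();
        }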

    Read the article

  • Technologies used in Remote Administration applications(not RD)

    - by Michael
    I want to know what kinds of technologies are used nowadays as the underlying screen-capture engine in remote administration software like VNC, pcAnywhere, TeamViewer, RAC, Remote Administrator, etc. The programming language is not so important; I just want to know whether a driver needs to be developed that polls video memory 30 times per second, or whether there are COM objects built into the Windows kernel to help with this. I'm not interested in 3rd-party components for doing this. Do I have to use DirectX facilities? I just want a starting point for developing my own screen-stream capture engine, which will be less of a CPU hog.
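
    For a baseline, the simplest polling approach needs no driver at all: GDI can copy the desktop from user mode. A minimal sketch grabbing one frame of the primary display:

        #include <windows.h>

        void capture_frame(void)
        {
            HDC screen = GetDC(NULL);             /* DC for the whole desktop */
            HDC mem = CreateCompatibleDC(screen);
            int w = GetSystemMetrics(SM_CXSCREEN);
            int h = GetSystemMetrics(SM_CYSCREEN);
            HBITMAP bmp = CreateCompatibleBitmap(screen, w, h);
            HGDIOBJ old = SelectObject(mem, bmp);
            BitBlt(mem, 0, 0, w, h, screen, 0, 0, SRCCOPY);
            /* ... diff/encode the bitmap here ... */
            SelectObject(mem, old);
            DeleteObject(bmp);
            DeleteDC(mem);
            ReleaseDC(NULL, screen);
        }

    The lower-CPU alternative used by several VNC derivatives is a mirror display driver, which is notified of changed screen regions instead of polling.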

    Read the article

  • LaTeX at symbol

    - by secondbanana
    What does the @ symbol mean in LaTeX? I'm looking at the source of apa.cls, and there's a declaration \newsavebox\gr@box and, later on, \sbox\gr@box{\includegraphics[width=\linewidth]{#2}}. It seems like @ isn't acting as a normal character, but I can't figure out exactly what it's doing, and couldn't find anything after a bit of googling (how I would love a Google regex feature!). Thanks. EDIT: Thanks for the help; of the links I looked through, I found http://www.tug.org/pipermail/tugindia/2002-January/000178.html to be very helpful and concise. To summarize: the @ character is not normally allowed in the names of macros, so as a hack for scoping, LaTeX packages declare it internally to be a valid name character and use it in their macros. You can use \makeatletter in a document to access these macros, but you must obviously be very careful, since you can now overwrite essential LaTeX kernel macros; use \makeatother to revert.
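
    To make that summary concrete, a minimal sketch (assuming apa.cls is loaded, so \gr@box exists):

        \makeatletter                          % @ now has catcode "letter"
        \newcommand{\grabbox}{\usebox\gr@box}  % so \gr@box is one macro name
        \makeatother                           % @ reverts to a non-letter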

    Read the article

  • PS using Get-WinEvent with FilterXPath and datetime variables?

    - by Jordan W.
    I'm grabbing a handful of events from an event log in chronological order. I don't want to pipe to Where-Object; I want to use Get-WinEvent. After I get Event1, I need to get the first instance of another event that occurs some unknown amount of time after Event1, then grab Event3, which occurs sometime after Event2, etc. Basically, starting with:

        $filterXML = @'
        <QueryList>
          <Query Id="0" Path="System">
            <Select Path="System">*[System[Provider[@Name='Microsoft-Windows-Kernel-General'] and (Level=4 or Level=0) and (EventID=12)]]</Select>
          </Query>
        </QueryList>
        '@
        $event1 = (Get-WinEvent -ComputerName $PCname -MaxEvents 1 -FilterXml $filterXML).TimeCreated

    That gives me the datetime of Event1. Then I want to do something like:

        Get-WinEvent -LogName "System" -MaxEvents 1 -FilterXPath "*[EventData[Data = 'Windows Management Instrumentation' and TimeCreated -gt $event1]]"

    Obviously the TimeCreated comparison there doesn't work, but I hope you get what I'm trying to do. Any help?
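
    In case it helps: event-log XPath can compare TimeCreated through its SystemTime attribute, which takes an ISO 8601 UTC string rather than a .NET DateTime. A hedged sketch of the second query built that way:

        $since = $event1.ToUniversalTime().ToString("yyyy-MM-dd'T'HH:mm:ss.000'Z'")
        # -Oldest returns the earliest match, i.e. the first event after $since
        Get-WinEvent -LogName System -Oldest -MaxEvents 1 -FilterXPath `
            "*[System[TimeCreated[@SystemTime >= '$since']] and EventData[Data = 'Windows Management Instrumentation']]"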

    Read the article

  • Compile rt73 driver (serialmonkey) on Ubuntu

    - by Curro
    Hello. I'm trying to compile some WiFi drivers I downloaded from SerialMonkey: http://rt2x00.serialmonkey.com/wiki/index.php/Downloads It's a driver for a Ralink rt73 chipset. I already have build-essential and the kernel headers on my computer. When I try to compile the driver using the "make" command, I get this error:

        root@curro-ubuntu:/usr/src/rt73-cvs-2009041204/Module# sudo make
        make[1]: Entering directory `/usr/src/linux-headers-2.6.31-20-generic'
          Building modules, stage 2.
          MODPOST 0 modules
        make[1]: Leaving directory `/usr/src/linux-headers-2.6.31-20-generic'
        rt73.ko failed to build!
        make: *** [module] Error 1

    I've looked all over the Internet and haven't found anything yet; I don't know why I'm getting this error. I have Ubuntu 9.10 installed on this computer, and I've seen some other people with the same issue. Any help would be greatly appreciated. Thanks in advance.

    Read the article

  • SVM in OpenCV: Visual Studio 2008 reported error wrongly (or is it right?)

    - by Risa
    I'm using MS Visual Studio 2008, OpenCV, C++, and SVM for an OCR-related project. I was able to run the code until yesterday; when I opened the project to continue working, VS reported this error:

        error C2664: 'bool CvSVM::train(const CvMat *,const CvMat *,const CvMat *,const CvMat *,CvSVMParams)' : cannot convert parameter 1 from 'cv::Mat' to 'const CvMat *'

    It didn't happen before, and I haven't changed any code related to it (I only changed the parameters for the kernel). The code that gets the error is:

        Mat curTrainData, curTrainLabel;
        CvSVM svm;
        ...
        svm.train(curTrainData, curTrainLabel, Mat(), Mat(), params);

    If I hover over the call, I still get the expected tooltip (screenshot link lost), which means my syntax isn't wrong. So why does VS bother to report such an error?
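
    For context (not from the original post): older CvSVM::train signatures take CvMat*, and cv::Mat converts to a CvMat header without copying, so a hedged workaround when only the C-style overload resolves is:

        // cv::Mat -> CvMat builds a header over the same data (no copy)
        CvMat trainData = curTrainData;
        CvMat trainLabel = curTrainLabel;
        svm.train(&trainData, &trainLabel, 0, 0, params);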

    Read the article

  • Options for Linux OS executable archive files - self installers

    - by Matt1776
    I am looking to create a web project that can be installed by a program. The user should be able to download an archive or tar file, run it (an executable), and the setup script would ask for paths and configurable values, then unpack its 'payload' and sort out the contents for deployment. This would be a Linux version of an MSI installer. Is there such a thing for Linux operating systems? This does not involve kernel-level manipulation. All it needs to do is copy directories and files on the filesystem, which should cover about 80%, if not more, of all the *nix distributions.
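
    The closest off-the-shelf fit I know of is makeself, which builds exactly this kind of self-extracting shell archive; payload/ and setup.sh below are hypothetical names:

        # wrap ./payload into a self-extracting installer that runs
        # ./setup.sh (relative to the unpacked files) after extraction
        makeself.sh ./payload myapp-installer.run "MyApp installer" ./setup.sh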

    Read the article

  • PyML 0.7.2 - How to prevent accuracy from dropping after storing/loading a classifier?

    - by Michael Aaron Safyan
    This is a followup from "Save PyML.classifiers.multi.OneAgainstRest(SVM()) object?". The solution to that question was close, but not quite right. (The SparseDataSet is broken, so attempting to save/load with that dataset container type will fail no matter what. Also, PyML is inconsistent about whether labels should be numbers or strings: it turns out that the oneAgainstRest function is not good enough, because the labels need to be strings and simultaneously convertible to floats, since they are assumed to be strings in some places and converted to floats elsewhere.) After a great deal of hacking, I was finally able to figure out a way to save and load my multi-class classifier without it blowing up with an error... however, although it no longer gives an error message, it is still not quite right, as the accuracy of the classifier drops significantly when it is saved and then reloaded (so I'm still missing a piece of the puzzle). I am currently using the following custom multi-class classifier for training, saving, and loading:

        class SVM(object):
            def __init__(self, features_or_filename, labels=None, kernel=None):
                if isinstance(features_or_filename, str):
                    filename = features_or_filename
                    if labels != None:
                        raise ValueError, "Labels must be None if loading from a file."
                    with open(os.path.join(filename, "uniquelabels.list"), "rb") as uniquelabelsfile:
                        self.uniquelabels = sorted(list(set(pickle.load(uniquelabelsfile))))
                    self.labeltoindex = {}
                    for idx, label in enumerate(self.uniquelabels):
                        self.labeltoindex[label] = idx
                    self.classifiers = []
                    for classidx, classname in enumerate(self.uniquelabels):
                        self.classifiers.append(PyML.classifiers.svm.loadSVM(
                            os.path.join(filename, str(classname) + ".pyml.svm"),
                            datasetClass=PyML.VectorDataSet))
                else:
                    features = features_or_filename
                    if labels == None:
                        raise ValueError, "Labels must not be None when training."
                    self.uniquelabels = sorted(list(set(labels)))
                    self.labeltoindex = {}
                    for idx, label in enumerate(self.uniquelabels):
                        self.labeltoindex[label] = idx
                    points = [[float(xij) for xij in xi] for xi in features]
                    self.classifiers = [PyML.SVM(kernel) for label in self.uniquelabels]
                    for i in xrange(len(self.uniquelabels)):
                        currentlabel = self.uniquelabels[i]
                        currentlabels = ['+1' if k == currentlabel else '-1' for k in labels]
                        currentdataset = PyML.VectorDataSet(points, L=currentlabels, positiveClass='+1')
                        self.classifiers[i].train(currentdataset, saveSpace=False)

            def accuracy(self, pts, labels):
                logger = logging.getLogger("ml")
                correct = 0
                total = 0
                classindexes = [self.labeltoindex[label] for label in labels]
                h = self.hypotheses(pts)
                for idx in xrange(len(pts)):
                    if h[idx] == classindexes[idx]:
                        logger.info("RIGHT: Actual \"%s\" == Predicted \"%s\""
                                    % (self.uniquelabels[classindexes[idx]],
                                       self.uniquelabels[h[idx]]))
                        correct += 1
                    else:
                        logger.info("WRONG: Actual \"%s\" != Predicted \"%s\""
                                    % (self.uniquelabels[classindexes[idx]],
                                       self.uniquelabels[h[idx]]))
                    total += 1
                return float(correct) / float(total)

            def prediction(self, pt):
                h = self.hypothesis(pt)
                if h != None:
                    return self.uniquelabels[h]
                return h

            def predictions(self, pts):
                h = self.hypotheses(self, pts)
                return [self.uniquelabels[x] if x != None else None for x in h]

            def hypothesis(self, pt):
                bestvalue = None
                bestclass = None
                dataset = PyML.VectorDataSet([pt])
                for classidx, classifier in enumerate(self.classifiers):
                    val = classifier.decisionFunc(dataset, 0)
                    if (bestvalue == None) or (val > bestvalue):
                        bestvalue = val
                        bestclass = classidx
                return bestclass

            def hypotheses(self, pts):
                bestvalues = [None for pt in pts]
                bestclasses = [None for pt in pts]
                dataset = PyML.VectorDataSet(pts)
                for classidx, classifier in enumerate(self.classifiers):
                    for ptidx in xrange(len(pts)):
                        val = classifier.decisionFunc(dataset, ptidx)
                        if (bestvalues[ptidx] == None) or (val > bestvalues[ptidx]):
                            bestvalues[ptidx] = val
                            bestclasses[ptidx] = classidx
                return bestclasses

            def save(self, filename):
                if not os.path.exists(filename):
                    os.makedirs(filename)
                with open(os.path.join(filename, "uniquelabels.list"), "wb") as uniquelabelsfile:
                    pickle.dump(self.uniquelabels, uniquelabelsfile, pickle.HIGHEST_PROTOCOL)
                for classidx, classname in enumerate(self.uniquelabels):
                    self.classifiers[classidx].save(os.path.join(filename, str(classname) + ".pyml.svm"))

    I am using the latest version of PyML (0.7.2, although PyML.__version__ is 0.7.0). When I construct the classifier with a training dataset, the reported accuracy is ~0.87. When I then save it and reload it, the accuracy is less than 0.001. So there is something here that I am clearly not persisting correctly, although what that may be is completely non-obvious to me. Would you happen to know what it is?

    Read the article

  • Broadcast-style Bluetooth using Sockets on the iPhone?

    - by Kyle
    Is there any way to open a broadcast Bluetooth socket, listen, and send replies? I want a proper peer-to-peer system where I broadcast and listen for broadcasts in an area, so that variable clients can mingle. Is this possible? My theory is this: if GameKit can sit around wasting 25 seconds of the user's time whilst having access to a broadcast socket, can't I? Or must I be in kernel mode for such access? I'm also not really sure where the proper Bluetooth headers are. Thanks for reading!

    Read the article

  • The speed of .NET in numerical computing

    - by Yin Zhu
    In my experience, .NET is 2 to 3 times slower than native code (I implemented L-BFGS for multivariate optimization). I followed the ads on Stack Overflow to http://www.centerspace.net/products/ and the speed is really amazing; it is close to native code. How can they do that? They say:

        Q. Is NMath "pure" .NET?
        A. The answer depends somewhat on your definition of "pure .NET". NMath is written in C#, plus a small Managed C++ layer. For better performance of basic linear algebra operations, however, NMath does rely on the native Intel Math Kernel Library (included with NMath). But there are no COM components, no DLLs--just .NET assemblies. Also, all memory allocated in the Managed C++ layer and used by native code is allocated from the managed heap.

    Can someone explain more to me? Thanks!
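
    To make that concrete: the usual pattern behind such libraries is a thin P/Invoke (or C++/CLI) layer over a native BLAS, so the hot loops run in hand-tuned native code. A hedged sketch; the DLL name mkl_rt.dll is an assumption and varies by MKL version:

        using System.Runtime.InteropServices;

        static class Blas
        {
            // dot product of two double vectors via the native CBLAS entry point
            [DllImport("mkl_rt.dll", CallingConvention = CallingConvention.Cdecl)]
            public static extern double cblas_ddot(int n, double[] x, int incx,
                                                   double[] y, int incy);
        }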

    Read the article

  • How do I program an AVR Raven with Linux or a Mac?

    - by Andrew McGregor
    This tutorial for programming these boards starts by programming the Ravens and the Jackdaw from a Windows box. Can I do those initial steps with avrdude on a Linux or OS X machine instead? If so, how? Is there any risk of bricking the hardware if I just try? I have a USB JTAG ICE mkII clone, which is supposed to work for this. I'm totally new to AVR, but very experienced with C/C++ programming on Linux and OS X, up to and including kernel programming... so any hint at all would be appreciated; I can read man pages, but only if I know what I'm looking for.
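
    Not an authoritative answer, but the avrdude invocation would look roughly like this; the part name (m1284p, for the Raven's ATmega1284P) and programmer name (jtag2, for a JTAG ICE mkII) are my assumptions, so check avrdude's -p and -c listings first:

        # flash an application image over USB with a JTAG ICE mkII
        avrdude -p m1284p -c jtag2 -P usb -U flash:w:raven_app.hex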

    Read the article

  • Ninject with MembershipProvider | RoleProvider

    - by DVark
    I'm using Ninject as my IoC container, and I wrote a role provider as follows:

        public class BasicRoleProvider : RoleProvider
        {
            private IAuthenticationService authenticationService;

            public BasicRoleProvider(IAuthenticationService authenticationService)
            {
                if (authenticationService == null)
                    throw new ArgumentNullException("authenticationService");
                this.authenticationService = authenticationService;
            }

            /* Other methods here */
        }

    I read that provider classes get instantiated before Ninject gets to inject the instance. How do I get around this? I currently have this Ninject code:

        Bind<RoleProvider>().To<BasicRoleProvider>().InRequestScope();

    From this answer here: "If you mark your dependencies with [Inject] for your properties in your provider class, you can call kernel.Inject(Membership.Provider) - this will assign all dependencies to your properties." I do not understand this.
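
    To unpack that suggestion with a hedged sketch: because ASP.NET constructs the provider itself, constructor injection never runs; the workaround is property injection applied after the fact (Roles.Provider being the RoleProvider analogue of Membership.Provider):

        public class BasicRoleProvider : RoleProvider
        {
            [Inject]   // populated later by kernel.Inject(...)
            public IAuthenticationService AuthenticationService { get; set; }

            /* Other methods here */
        }

        // e.g. in Application_Start, once the kernel is built:
        kernel.Inject(Roles.Provider);   // assigns every [Inject] property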

    Read the article

  • Sniffing LPT Traffic

    - by ArcherT
    I need to intercept LPT output traffic. After a couple of hours of research, I've come to understand that the only way to do this is by writing a kernel-mode driver, more precisely a "filter driver"...? I've downloaded the WDK, but the terminology and vast number of driver types are a little overwhelming. I'm basically trying to understand what kind of driver I should be writing; my target environment is Windows XP SP2 and SP3 only. Some background info, if it matters: I have a bunch of legacy DOS apps that print to LPT1. I'd like to be able to capture this output and redirect the data (after GDI calls) to a modern USB (network) printer. Fortunately, the latter part of the problem is easy. I'm hoping someone can point me in the right direction. TIA.

    Read the article

  • OpenMP + SSE gives no speedup

    - by Sayan Ghosh
    Hi, my professor found this interesting experiment on 3D linearly separable kernel convolution using SSE and OpenMP, and gave me the task of benchmarking its statistics on our system. The author claims a crazy 18-fold speedup over the serial approach! That might not hold everywhere, but we were expecting at least a 2-4x speedup running this on a dual-core Intel. http://software.intel.com/en-us/articles/16bit-3d-convolution-sse4openmp-implementation-on-penryn-cpu/#comment-41994 Alas, we found exactly no speedup. The serial code always performs better, with or without OpenMP. I am using Linux, and observed a certain trend... when no other processes are running on the system, after a while the loadavg starts increasing and the %CPU utilization falls. Another probable false positive I ran into accidentally: I started the program, then immediately paused it, then ran it in the background with bg, and saw a speedup of more than 2. This happens all the time! Any advice would be great. Thanks, Sayan

    Read the article

  • python duration of a file object in an argument list

    - by msw
    In the pickle module documentation there is a snippet of example code:

        reader = pickle.load(open('save.p', 'rb'))

    which upon first read looked like it would allocate a system file descriptor, read its contents, and then "leak" the open descriptor, since there isn't any handle accessible to call close() upon. This got me wondering if there was any hidden magic that takes care of this case. Diving into the source, I found in Modules/_fileio.c that file descriptors are closed by the fileio_dealloc() destructor, which led to the real question. What is the duration of the file object returned by the example code above? After that statement executes, does the object indeed become unreferenced, and will the fd therefore be subject to a real close(2) call at some future garbage-collection sweep? If so, is the example line good practice, or should one not count on the fd being released, thus risking kernel per-process descriptor-table exhaustion?
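
    For contrast, a sketch of the explicit-lifetime version, which sidesteps the question by closing the descriptor deterministically instead of relying on CPython's refcounting:

        import pickle

        # f is closed at the end of the block, whatever the GC does
        with open('save.p', 'rb') as f:
            reader = pickle.load(f)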

    Read the article

  • Consuming a USB HID device in Windows CE 6.0 using C#

    - by kersny
    I am working on an embedded Windows CE project and am interested in accessing a USB HID device through one of the device's USB host ports. All I really need to read are the raw HID-spec packets. On a Windows computer, I have a working program using hid.dll, but as far as I have researched, there is no equivalent on CE. I know there is usbhid.dll, but I'm not sure if it is applicable in this situation. I would prefer not to write a kernel-level driver, as I would like to do my coding in C#. Has anyone had experience consuming an HID device on Windows CE?

    Read the article

  • Oracle Application Server 10.1.3.5 Security issue.

    - by Marius Bogdan IONESCU
    Hello! We are trying to port a J2EE app from OAS 9.0.4 (where it works perfectly) to OAS 10.1.3.5. The reason we are doing this is that we need the app compiled with Java 1.5, and OAS 10.1.3.5 is the only major version supporting those binaries that has the oc4j/orion kernel. The issue is that the security constraints concerning user/group/role are not read by the app server; instead of it asking for these sets of users, I have to use oc4jadmin instead of the selected users for auth. All the XML files describing these sets of rules have been checked against the OAS book, and they seem to be filled in correctly... Does anybody have an idea about this?

    Read the article

  • Optimize grep, awk and sed shell stuff

    - by kockiren
    I am trying to sum the traffic on different ports in the logfiles from "IPCop", so I wrote a command for my shell, but I think it's possible to optimize it. First, a line from my logfile:

        01/00:03:16 kernel INPUT IN=eth1 OUT= MAC=xxx SRC=xxx DST=xxx LEN=40 TOS=0x00 PREC=0x00 TTL=98 ID=256 PROTO=TCP SPT=47438 DPT=1433 WINDOW=16384 RES=0x00 SYN URGP=0

    Now, with the following command, I sum the LEN fields of all lines that contain port 1433:

        grep 1433 log.dat|awk '{for(i=1;i<=10;i++)if($i ~ /LEN/)print $i};'|sed 's/LEN=//g;'|awk '{sum+=$1}END{print sum}'

    I need the for loop because the LEN column is not always in the same position. Any suggestions for optimizing this command? Regards, Rene
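
    One hedged rewrite that collapses the four-stage pipeline into a single awk pass (keeping the original's loose match on 1433 anywhere in the line):

        awk '/1433/ {
                 for (i = 1; i <= NF; i++)                  # LEN moves around
                     if ($i ~ /^LEN=/) { sub(/^LEN=/, "", $i); sum += $i }
             }
             END { print sum }' log.dat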

    Read the article

  • Raw socket sendto() failure in OS X

    - by user37278
    When I open a raw socket in OS X, construct my own UDP packet (headers and data), and call sendto(), I get the error "Invalid argument". Here is a sample program, "rawudp.c", from the web site http://www.tenouk.com/Module43a.html that demonstrates this problem. The program (after adding the string and stdlib #includes) runs under Fedora 10 but fails with "Invalid argument" under OS X. Can anyone suggest why this fails in OS X? I have looked and looked and looked at the sendto() call, but all the parameters look good. I'm running the code as root, etc. Is there perhaps a kernel setting that prevents even uid 0 executables from sending packets through raw sockets in OS X Snow Leopard? Thanks.
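
    One known BSD/Linux difference that commonly produces exactly this EINVAL, worth checking against the sample code: with IP_HDRINCL, BSD-derived stacks such as OS X expect ip_len and ip_off in host byte order, whereas Linux wants them in network byte order. A sketch of the adjustment (datagram and payload_len are placeholder names):

        /* on OS X / BSD: do NOT htons() these two fields before sendto() */
        struct ip *iph = (struct ip *) datagram;
        iph->ip_len = sizeof(struct ip) + sizeof(struct udphdr) + payload_len;
        iph->ip_off = 0;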

    Read the article

  • Doubts about the system call mechanism in Linux

    - by bala1486
    We transition from ring 3 to ring 0 using the 'int' or the newer 'syscall'/'sysenter' instructions. Does that mean the page tables and other state the kernel needs are modified automatically by the 'int' instruction, or will the interrupt handler for 'int 0x80' do the required work and then jump to the respective system call? Also, when returning from a system call, we again need to go to user space, and for that we need to know the instruction address in user space at which to continue the user application. Where is that address stored? Does the 'ret' instruction automatically change the ring back from ring 0 to ring 3, or where/how does this ring-changing mechanism take place? Then, I read that changing from ring 3 to ring 0 is not as costly as changing from ring 0 to ring 3. Why is this so? Thanks, Bala
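
    For concreteness, a minimal sketch of what the ring 3 side of the legacy gate looks like on 32-bit Linux; eax carries the syscall number and ebx/ecx/edx the arguments:

        /* write(fd, buf, len) invoked directly through int 0x80 */
        static long sys_write(int fd, const char *buf, unsigned long len)
        {
            long ret;
            __asm__ volatile ("int $0x80"
                              : "=a"(ret)
                              : "a"(4 /* __NR_write on i386 */), "b"(fd),
                                "c"(buf), "d"(len));
            return ret;
        }

    The user-space return address is not supplied by the application: the CPU pushes it (along with cs, eflags, and the user stack pointer) onto the kernel stack during the trap, and iret or sysexit restores it on the way back out.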

    Read the article

  • Ruby Metaprogramming

    - by VP
    I'm trying to write a DSL that allows me to do:

        Policy.name do
          author "Foo"
          reviewed_by "Bar"
        end

    The following code can almost process it:

        class Policy
          include Singleton

          def self.method_missing(name, &block)
            puts name
            puts "#{yield}"
          end

          def self.author(name)
            puts name
          end

          def self.reviewed_by(name)
            puts name
          end
        end

    Defining my methods as class methods (self.method_name), I can access them using the following syntax:

        Policy.name do
          Policy.author "Foo"
          Policy.reviewed_by "Bar"
        end

    If I remove the "self" from the method names and try to use my desired syntax, I receive a "method not found" error in main, since the function cannot be found anywhere up through the Kernel module. That's OK, I understand the error. But how can I fix it? How can I fix my class to make it work with my desired syntax?
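
    A hedged sketch of the usual fix: evaluate the block with instance_eval so bare calls like author resolve against Policy. (Note also that name is already defined on Class, so method_missing never fires for Policy.name; the hypothetical data_retention below avoids that collision.)

        class Policy
          def self.method_missing(policy_name, &block)
            puts policy_name
            instance_eval(&block) if block   # self is Policy inside the block
          end

          def self.author(name)
            puts name
          end

          def self.reviewed_by(name)
            puts name
          end
        end

        Policy.data_retention do
          author "Foo"
          reviewed_by "Bar"
        end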

    Read the article
