Search Results

Search found 4453 results on 179 pages for 'kernel parametes'.


  • Using read() directly into a C++ std::vector

    - by Joe
    I'm wrapping up user space linux socket functionality in some C++ for an embedded system (yes, this is probably reinventing the wheel again). I want to offer a read and write implementation using a vector. Doing the write is pretty easy, I can just pass &myvec[0] and avoid unnecessary copying. I'd like to do the same and read directly into a vector, rather than reading into a char buffer then copying all that into a newly created vector. Now, I know how much data I want to read, and I can allocate appropriately (vec.reserve). I can also read into &myvec[0], though this is probably a VERY BAD IDEA. Obviously doing this doesn't allow myvec.size to return anything sensible. Is there any way of doing this that 1) Doesn't completely feel yucky from a safety/C++ perspective and 2) Doesn't involve two copies of the data block - once from kernel to user space and once from a C char * style buffer into a C++ vector. Any thoughts collective?
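
    A minimal sketch of the single-copy approach, assuming the byte count is known up front: resize the vector (rather than reserve), read straight into its storage, then shrink to what was actually read so that size() stays meaningful. The fd and expected names are placeholders.

        #include <unistd.h>
        #include <cstddef>
        #include <stdexcept>
        #include <vector>

        // Read up to 'expected' bytes directly into the vector's own storage.
        std::vector<char> read_into_vector(int fd, std::size_t expected)
        {
            std::vector<char> buf(expected);   // sized (value-initialized), not just reserved
            if (expected == 0)
                return buf;
            ssize_t n = ::read(fd, &buf[0], buf.size());
            if (n < 0)
                throw std::runtime_error("read failed");
            buf.resize(static_cast<std::size_t>(n));  // size() now matches the bytes read
            return buf;
        }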

    Read the article

  • How do negated patterns work in .gitignore?

    - by chrisperkins
    I am attempting to use a .gitignore file with negated patterns (lines starting with !), but it's not working the way I expect. As a minimal example, I have the following directory structure:

        C:/gittest
        -- .gitignore
        -- aaa/
           -- bbb/
              -- file.txt
           -- ccc/
              -- otherfile.txt

    and in my .gitignore file, I have this:

        aaa/
        !aaa/ccc/

    My understanding (based on this: http://ftp.sunet.se/pub//Linux/kernel.org/software/scm/git/docs/gitignore.html) is that the file aaa/ccc/otherfile.txt should not be ignored, but in fact git is ignoring everything under aaa. Am I misunderstanding this sentence: "An optional prefix ! which negates the pattern; any matching file excluded by a previous pattern will become included again."? BTW, this is on Windows with msysgit 1.7.0.2.
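
    For what it's worth, the behaviour described matches the rule (stated explicitly in later versions of the gitignore documentation, so treat it as an assumption here) that once a directory itself is excluded, git does not descend into it, and a later negation of something inside it has no effect. Excluding the directory's contents instead of the directory leaves room for the negation, as in this sketch:

        # ignore everything directly under aaa/, but keep scanning aaa/ itself
        aaa/*
        # so this re-inclusion of the ccc subdirectory can take effect
        !aaa/ccc/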

    Read the article

  • How do I get property injection working in Ninject for a ValidationAttribute in MVC?

    - by jaltiere
    I have a validation attribute set up where I need to hit the database to accomplish the validation. I tried setting up property injection the same way I do elsewhere in the project, but it's not working. What step am I missing?

        public class ApplicationIDValidAttribute : ValidationAttribute
        {
            [Inject]
            protected IRepository<MyType> MyRepo;

            public override bool IsValid(object value)
            {
                if (value == null)
                    return true;

                int id;
                if (!Int32.TryParse(value.ToString(), out id))
                    return false;

                // MyRepo is null here and is never injected
                var obj = MyRepo.LoadById(id);
                return (obj != null);
            }
        }

    One other thing to point out: I have the Ninject kernel set up to inject non-public properties, so I don't think that is the problem. I'm using Ninject 2, MVC 2, and the MVC 2 version of Ninject.Web.MVC. Thanks!

    Read the article

  • Why do people still use C these days? [closed]

    - by Joshua
    C++ is clearly a far superior language to C, since it has many features that C lacks (although C++'s object model isn't as ideal as, say, C#'s). With the coming of the new C++0x standard, why hasn't C been phased out into obscurity? C++ has been around for so long, since the '80s. The Linux kernel has already been ported to C++ with negligible performance differences. I believe, with no evidence, that larger program structures benefit in performance if written in C++ rather than in C, if only because of object interaction. Don't get me started on "objects-in-C!" libraries, which are all a terrible hack. (Not that C++'s object model is the most ideal, but it is almost up to snuff with C# using common ad-hoc techniques.)

    Read the article

  • Using netlink to obtain arp entries only returns stale entries

    - by Ben
    I'm currently trying to retrieve reachable neighbors from the arp table in a user space program written in C. I've looked through the source code of the "ip neigh" command (ipneigh.c) and it appears that I should use the flag NUD_REACHABLE.

        struct {
            struct nlmsghdr n;
            struct ndmsg r;
        } req;

        memset(&req, 0, sizeof(req));
        req.n.nlmsg_len = NLMSG_LENGTH(sizeof(struct ndmsg));
        req.n.nlmsg_flags = NLM_F_REQUEST | NLM_F_ROOT;
        req.n.nlmsg_type = RTM_GETNEIGH;
        req.r.ndm_state = NUD_REACHABLE;

    However, when I look at the data returned from the kernel, I only have stale entries. In fact, no matter what I put for req.r.ndm_state, it seems to return only entries marked as stale by ip neigh. The remainder of my code is modeled after this example: http://linux-hacks.blogspot.com/2009/01/sample-code-to-learn-netlink.html
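
    A minimal sketch of reply-side filtering, on the assumption (suggested by the behaviour above) that a dump request ignores the ndm_state field and the state check has to happen while parsing the answer. The buf/len names stand for whatever recv() returned on the netlink socket:

        #include <linux/netlink.h>
        #include <linux/rtnetlink.h>

        /* Walk the netlink reply and keep only NUD_REACHABLE neighbours. */
        static void keep_reachable(char *buf, int len)
        {
            struct nlmsghdr *nh;
            for (nh = (struct nlmsghdr *)buf; NLMSG_OK(nh, len); nh = NLMSG_NEXT(nh, len)) {
                if (nh->nlmsg_type == NLMSG_DONE)
                    break;
                if (nh->nlmsg_type != RTM_NEWNEIGH)
                    continue;
                struct ndmsg *ndm = (struct ndmsg *)NLMSG_DATA(nh);
                if (!(ndm->ndm_state & NUD_REACHABLE))
                    continue;   /* skip STALE / FAILED / INCOMPLETE entries */
                /* the rtattrs (NDA_DST, NDA_LLADDR) for this entry start at
                   (struct rtattr *)(ndm + 1) and can be decoded here */
            }
        }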

    Read the article

  • Can someone please explain to me the basic function of Intents in the Android OS?

    - by K-RAN
    I'm new to programming applications for the Android OS. As far as the general architecture of the OS goes, I understand that processes are implemented as Linux processes and that each one is sandboxed. However, I'm utterly confused about the IPCs and syscalls (if any) used. I know that the IBinder is a form of this; parcels are sent back and forth between processes and Bundles are array forms of parcels (?). But even that is still unfamiliar to me. Same with Intents. All in all, I don't understand what kinds of IPCs are implemented and how. Could someone briefly explain the specific methods used by user-level applications in the Android OS to communicate with each other and with the OS? I've done kernel programming and played with various IPCs in Linux (Ubuntu and Debian), so it would help immensely if this was all explained in relation to what I'm familiar with... Thanks in advance!

    Read the article

  • How to write custom data to the TCP packet header options field with Java?

    - by snarkov
    As it is defined (see: http://www.freesoft.org/CIE/Course/Section4/8.htm), the TCP header has an 'Options' field. There are a couple of options already defined (see: www.iana.org/assignments/tcp-parameters/), but I want to come up with my very own. (For experimenting/research.) How can I get Java to write (and then read) some custom data in the options field? Bonus question: if it cannot be done with Java, what kind of application can do this? (No, I don't really feel like messing with some kernel-level TCP/IP stack implementation, I want to keep it at the app level.) Thanks!
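
    A sketch of the non-Java route, assuming a Linux raw socket where the TCP header is built by hand (the standard Java socket API never exposes the TCP header, so something below it has to do the writing). raw_fd is assumed to be a socket(AF_INET, SOCK_RAW, IPPROTO_TCP) descriptor; the checksum and the rest of the header fields are left out:

        #include <netinet/in.h>
        #include <netinet/tcp.h>
        #include <sys/socket.h>
        #include <sys/types.h>
        #include <string.h>

        static ssize_t send_with_custom_option(int raw_fd, struct sockaddr_in *dst)
        {
            unsigned char pkt[sizeof(struct tcphdr) + 4];
            struct tcphdr *tcp = (struct tcphdr *)pkt;
            unsigned char *opt = pkt + sizeof(struct tcphdr);

            memset(pkt, 0, sizeof(pkt));
            /* ... fill in ports, sequence numbers, flags, window ... */
            tcp->doff = sizeof(pkt) / 4;   /* data offset covers the option bytes */

            opt[0] = 253;                  /* experimental option kind (RFC 4727) */
            opt[1] = 4;                    /* option length: kind + length + 2 data bytes */
            opt[2] = 0xAB;                 /* custom payload */
            opt[3] = 0xCD;

            /* the TCP checksum over pseudo-header + header + options still has
               to be computed and stored before the segment will be accepted */
            return sendto(raw_fd, pkt, sizeof(pkt), 0,
                          (struct sockaddr *)dst, sizeof(*dst));
        }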

    Read the article

  • How to redirect Python console output to a QTextBox

    - by krishnanunni
    Hello, I'm working on developing a GUI for recompiling the Linux kernel. For this I need to run 4-5 Linux commands from Python. I use Qt as the GUI designer. I have successfully implemented the commands using os.system() calls, but the output goes to the console. The real problem is that the output of the command is a listing that takes almost 20-25 minutes of continuous printing. How can we transfer this console output to a text box designed in Qt? Can anyone help me implement the setSource() operation in Qt, using the live console output as the source?
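
    One way to stream the output instead of waiting for os.system() to finish is to launch the command through QProcess and append its stdout to the text box as it arrives. A minimal sketch using Qt's C++ API (the corresponding PyQt class, signal and method names are the same); the make command and path are placeholders:

        #include <QApplication>
        #include <QProcess>
        #include <QTextEdit>

        int main(int argc, char **argv)
        {
            QApplication app(argc, argv);
            QTextEdit box;          // stands in for the text box laid out in Designer
            QProcess proc;

            // Append each chunk of output as soon as the process produces it,
            // so the 20-25 minute listing shows up live instead of at the end.
            QObject::connect(&proc, &QProcess::readyReadStandardOutput, [&]() {
                box.append(QString::fromLocal8Bit(proc.readAllStandardOutput()));
            });

            proc.start("make", QStringList() << "-C" << "/usr/src/linux");  // placeholder command
            box.show();
            return app.exec();
        }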

    Read the article

  • How to reliably measure available memory in Linux?

    - by Alex B
    Linux /proc/meminfo shows a number of memory usage statistics.

        MemTotal:     4040732 kB
        MemFree:        23160 kB
        Buffers:       163340 kB
        Cached:       3707080 kB
        SwapCached:         0 kB
        Active:       1129324 kB
        Inactive:     2762912 kB

    There is quite a bit of overlap between them. For example, as far as I understand, there can be active page cache (belongs to "cached" and "active") and inactive page cache ("inactive" + "cached"). What I want to do is to measure "free" memory, but in a way that includes used pages that are likely to be dropped without a significant impact on the overall system's performance. At first, I was inclined to use "free" + "inactive", but Linux's "free" utility uses "free" + "cached" in its "buffer-adjusted" display, so I am curious what a better approach is. When the kernel runs out of memory, what is the priority of pages to drop, and what is the more appropriate metric to measure available memory?
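
    A rough sketch of the "free plus easily reclaimable" figure in the spirit of free(1)'s buffer-adjusted line, summing MemFree + Buffers + Cached from /proc/meminfo; later kernels also export a MemAvailable field that accounts for the cost of reclaim more accurately:

        #include <stdio.h>
        #include <string.h>

        /* Sum MemFree + Buffers + Cached (in kB) as a crude "available" estimate. */
        static long available_kb(void)
        {
            FILE *f = fopen("/proc/meminfo", "r");
            char line[128];
            long val, total = 0;

            if (!f)
                return -1;
            while (fgets(line, sizeof(line), f)) {
                if (sscanf(line, "MemFree: %ld", &val) == 1 ||
                    sscanf(line, "Buffers: %ld", &val) == 1 ||
                    sscanf(line, "Cached: %ld", &val) == 1)
                    total += val;
            }
            fclose(f);
            return total;
        }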

    Read the article

  • Tool to diagonalize large matrices

    - by Xodarap
    I want to compute a diffusion kernel, which involves taking exp(b*A) where A is a large matrix. In order to play with values of b, I'd like to diagonalize A (so that exp(A) runs quickly). My matrix is about 25k x 25k, but is very sparse - only about 60k values are non-zero. Matlab's "eigs" function runs out of memory, as does octave's "eig" and R's "eigen." Is there a tool to find the decomposition of large, sparse matrices? Dunno if this is relevant, but A is an adjacency matrix, so it's symmetric, and it is full rank.

    Read the article

  • When compiling programs to run inside a VM, what should march and mtune be set to?

    - by Russ
    With VMs being slave to whatever the host machine is providing, what compiler flags should be provided to gcc? I would normally think that -march=native would be what you would use when compiling for a dedicated box, but the level of fine detail that -march=native goes to, as indicated in this article, makes me extremely wary of using it. So... what should -march and -mtune be set to inside a VM? For a specific example... My specific case right now is compiling Python (and more) in a Linux guest inside a KVM-based "cloud" host where I have no real control over the host hardware (aside from 'simple' stuff like CPU GHz, CPU count, and available RAM). Currently, cpuinfo tells me I've got an "AMD Opteron(tm) Processor 6176", but I honestly don't know (yet) if that is reliable and whether the guest can get moved around to different architectures on me to meet the host's infrastructure shuffling needs (sounds hairy/unlikely). All I can really guarantee is my OS, which is a 64-bit Linux kernel where uname -m yields x86_64.

    Read the article

  • best method of turning millions of x,y,z particle positions into a visualisation

    - by Griff
    I'm interested in different algorithms people use to visualise millions of particles in a box. I know you can use Cloud-In-Cell, adaptive mesh, kernel smoothing, nearest-grid-point methods etc. to reduce the load in memory, but there is very little documentation on how to do these things online. i.e. I have an array with:

        x, y, z
        1, 2, 3
        4, 5, 6
        6, 7, 8
        xi, yi, zi   for i = 100 million, for example

    I don't want a package like Mayavi/Paraview to do it; I want to code this myself and then load the decomposed matrix into Mayavi (rather than rendering on the fly). My poor 8 GB MacBook explodes if I try to use the particle positions. Any tutorials would be appreciated.
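
    As one concrete illustration (not from the question itself), a nearest-grid-point deposit is the simplest of the schemes mentioned: bin the particles onto a modest n*n*n density grid and hand that grid to Mayavi instead of the hundred million raw positions. A sketch, assuming positions lie in a cubic box of side box:

        #include <algorithm>
        #include <cstddef>
        #include <vector>

        // Deposit particle counts onto an n*n*n grid (nearest grid point).
        std::vector<float> ngp_deposit(const std::vector<float>& x,
                                       const std::vector<float>& y,
                                       const std::vector<float>& z,
                                       std::size_t n, float box)
        {
            std::vector<float> grid(n * n * n, 0.0f);
            for (std::size_t p = 0; p < x.size(); ++p) {
                std::size_t i = std::min(n - 1, static_cast<std::size_t>(x[p] / box * n));
                std::size_t j = std::min(n - 1, static_cast<std::size_t>(y[p] / box * n));
                std::size_t k = std::min(n - 1, static_cast<std::size_t>(z[p] / box * n));
                grid[(i * n + j) * n + k] += 1.0f;   // each particle adds to one cell
            }
            return grid;
        }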

    Read the article

  • Any book that covers internals of recent versions of Unix OS

    - by claws
    This summer I'm getting into UNIX (mostly *BSD) development. I have graduate-level knowledge of operating systems. I can also understand the code and read bits here and there, but the thing is I want to make the most of my time. Reading books is best for this. From my search I found that these two books

        The Design and Implementation of the 4.4 BSD Operating System (1996)
        "Unix Internals: The New Frontiers" by Uresh Vahalia (1996) (see here for the 2nd edition)

    are the established books on UNIX OS internals. But the thing is, these books are pretty much outdated. So, are there any recent books that cover the internals of recent Unix OSes? How about books on other Unix operating systems? They seem to be more recent than the above books, but how close are they to OpenBSD/FreeBSD?

        Solaris 10 and OpenSolaris Kernel Architecture, 2nd edition (July 20, 2006)
        HP-UX 11i Internals (February 1, 2004)

    I really don't prefer HP-UX as it's not open source.

    Read the article

  • mounting without -o loop

    - by jumpinjoe
    Hi, I have written a dummy (ram disk) block device driver for the Linux kernel. When the driver is loaded, I can see it as /dev/mybd. I can successfully transfer data onto it using the dd command and compare the copied data successfully. The problem is that when I create an ext2/3 filesystem on it, I have to use the -o loop option with the mount command. Otherwise mount fails with the following result:

        mount: wrong fs type, bad option, bad superblock on mybd,
               missing codepage or helper program, or other error

    What could be the problem? Please help. Thanks.

    Read the article

  • Is it safe to catch EXCEPTION_GUARD_PAGE

    - by Michael J
    Environment is VC++ 9 on various Win platforms (XP and later). I'm writing an unhandled exception handler. I have a vague recollection from my kernel days that it was bad to catch an EXCEPTION_GUARD_PAGE, as this was generated to tell the OS to enlarge the stack. My question is twofold: Can such an exception occur in user space? If so, is it safe to catch it? I'm not especially interested in doing anything with it. I just want to know if I need to put special code in to not catch it (as I'm catching everything at the moment).
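
    A sketch of one way to handle this, assuming a top-level filter installed with SetUnhandledExceptionFilter: guard-page (and breakpoint) exceptions are passed on rather than swallowed, and everything else is reported. The reporting part is a placeholder:

        #include <windows.h>

        static LONG WINAPI TopLevelFilter(EXCEPTION_POINTERS* info)
        {
            DWORD code = info->ExceptionRecord->ExceptionCode;

            // Let the OS / debugger deal with stack-growth and breakpoint traps.
            if (code == EXCEPTION_GUARD_PAGE || code == EXCEPTION_BREAKPOINT)
                return EXCEPTION_CONTINUE_SEARCH;

            // ... write a minidump / log the crash here ...
            return EXCEPTION_EXECUTE_HANDLER;
        }

        // Installed once at startup:
        //     SetUnhandledExceptionFilter(TopLevelFilter);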

    Read the article

  • Is git svn rebase required before git svn dcommit?

    - by allyourcode
    I'm reading about using git as an svn client here: http://learn.github.com/p/git-svn.html That page suggests that you do git svn rebase before git svn dcommit, which makes perfect sense; it's like doing svn update before doing svn commit. Then, I started looking at the documentation for git svn dcommit (I was wondering what the 'd' is about): http://www.kernel.org/pub/software/scm/git/docs/git-svn.html You have to scroll down a bit to see the documentation on dcommit, which says this: Commit each diff from a specified head directly to the SVN repository, and then rebase or reset (depending on whether or not there is a diff between SVN and head). This confuses me, because if you do as the first page says, there will be no changes to pull down from svn once the first part of dcommit finishes. I'm also confused by the part that talks about reset; isn't git reset for removing changes from the staging area? Why would rebase or reset follow (the first part of) a dcommit?

    Read the article

  • Strange behavior of for loop in scheduler_tick

    - by EpsilonVector
    I'm working on Linux kernel 2.4 (homework) and I inserted the following code into the scheduler_tick function:

        if (unlikely(rt_task(p)) ||
            (p->policy == SCHED_PROD && p->time_ran >= p->process_expected_time)) {
            /*
             * RR tasks need a special form of timeslice management.
             * FIFO tasks have no timeslices.
             */
            if ((p->policy == SCHED_RR || /*change*/ p->policy == SCHED_PROD) && !--p->time_slice) {
                /* changes */
                if (p->policy == SCHED_PROD) {
                    for (i = 0; i < 5000; i++) {
                        printk("I'm leeching off SCHED_RR code! %d\n", i);
                    }
                }
                /* end changes */

    The addition was added for debugging purposes. For some reason this causes very weird behavior: when a SCHED_PROD process triggers this code (and consequently the loop that follows), the loop counts to about 4600 normally, but then goes back to 4600 each time it counts to 4800, and gets stuck in an infinite loop. What's going on?? EDIT: The i variable is my own.

    Read the article

  • Understanding top output in Linux

    - by Rayne
    Hi, I'm trying to determine the CPU usage of a program by looking at the output from top in Linux. I understand that %us means user space and %sy means system/kernel, etc. But say I see 100%us. Does this mean that the CPU is really only doing useful work? What if a CPU is tied up waiting for resources that are not available, or on cache misses; would that also show up in the %us column, or in any other column? Thank you.
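
    For what it's worth, those columns are just deltas of the per-state tick counters on the cpu line of /proc/stat, which a program can read directly; a rough sketch follows. Time the CPU spends stalled on cache misses is still executing instructions and is charged to user (or system) time, whereas time spent blocked waiting for an unavailable resource shows up as idle or iowait instead:

        #include <stdio.h>

        /* Read the aggregate tick counters that top's %us/%sy/%wa/%id
           percentages are derived from (as deltas between two samples). */
        static void read_cpu_ticks(void)
        {
            unsigned long long user, nice_ticks, sys, idle, iowait, irq, softirq;
            FILE *f = fopen("/proc/stat", "r");

            if (!f)
                return;
            if (fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu",
                       &user, &nice_ticks, &sys, &idle, &iowait, &irq, &softirq) == 7)
                printf("user=%llu system=%llu iowait=%llu idle=%llu\n",
                       user, sys, iowait, idle);
            fclose(f);
        }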

    Read the article

  • compile error: The import xxxx cannot be resolved

    - by Zachary
    I am developing a Java project using Eclipse. The project uses another project called engine, which I have added to my project's build path. As I need to call a dabo class called House in one of my project's classes, named Window, I have used the following code as usual:

        import ee.asus.kernel.House;

    However, I got the following error at compile time:

        Exception in thread "main" java.lang.Error: Unresolved compilation problems:
            The import ee cannot be resolved
            House cannot be resolved to a type
            House cannot be resolved to a type
            House cannot be resolved to a type

            at main.ee.asus.GUI.FrameWindow.Window.<init>(Window.java:10)
            at main.ee.asus.GUI.StartApplication.main(StartApplication.java:13)

    It's worth pointing out that my project and the dabo project use the same directory/package names. Does anyone have a clue where the error may be?

    Read the article

  • How to make external Mathematica functions interruptible?

    - by Szabolcs
    I had an earlier question about integrating Mathematica with functions written in C++. This is a follow-up question: if the computation takes too long, I'd like to be able to abort it using Evaluation > Abort Evaluation. Which of the technologies suggested in the answers make it possible to have an interruptible C-based extension function? How can "interruptibility" be implemented on the C side? I need to make my function interruptible in a way that will corrupt neither it nor the Mathematica kernel (i.e. it should be possible to call the function again from Mathematica after it has been interrupted).
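
    A sketch of how this can look with LibraryLink, one of the usual ways of hooking C/C++ code into Mathematica (with MathLink the extern int MLAbort flag plays the same role): the long-running loop polls the kernel's abort flag and returns cleanly, so the function can be called again afterwards. The body of the loop is a placeholder:

        #include "WolframLibrary.h"

        DLLEXPORT int slow_computation(WolframLibraryData libData,
                                       mint argc, MArgument *args, MArgument res)
        {
            mint n = MArgument_getInteger(args[0]);
            mint sum = 0;

            for (mint i = 0; i < n; ++i) {
                if (libData->AbortQ())           /* user chose Abort Evaluation */
                    return LIBRARY_FUNCTION_ERROR;
                sum += i;                        /* stand-in for the real work */
            }
            MArgument_setInteger(res, sum);
            return LIBRARY_NO_ERROR;
        }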

    Read the article

  • Customizing Log4j to filter PatternLayout

    - by JavaScriptDude
    Greetings, I have just started migrating to WLS 10.x and have noticed that the thread name [%t] for WL is quite verbose and more informative than I need for my deployment needs. Ultimately, I only care about the thread ID, but WL gives me this:

        [ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'

    Does anybody know if there is a way in log4j to write a custom filter that will allow me to override PatternLayout so I can parse the WLS thread name and output just the thread ID, which in the case above is 0? I'd rather extend than customize, as it makes upgrading libraries so much easier. Thanks :) - JsD

    Read the article

  • When to choose C over C++?

    - by aaa
    Hi. I have become fond of C++ thanks to this website. Before, I programmed exclusively in C/Fortran, thinking that C++ was too slow (not anymore). Is there a reason to write a new project purely in C? This is besides obvious things like low-level kernel/system components. What about intermediate things, like communication libraries, for example MPI? Is C still more portable than C++? I have messed with pretty exotic systems, like Cray, but have yet to see a non-embedded system without C++. Thanks

    Read the article

  • What do you tell people your profession is? [closed]

    - by user110296
    My technical title is Member of the Technical Staff, and like most of you, I design/write code for a living. I can never decide what to answer when someone asks what I do for a living. Software Developer? Software Engineer? [Kernel] Programmer? Computer Scientist? These all seem to have various bad connotations. I guess I like Software Engineer the best, but unfortunately this term has been co-opted by people who don't actually code. I made the mistake of taking a 'Software Engineering' class, and realized that I definitely don't want to be associated with people who major in this. Probably this is too subjective, so feel free to community-wiki it or whatever, but I think it is a valid question and I would like to hear what others have decided on and their reasoning.

    Read the article

  • Practicing buffer overflow attack in Ubuntu

    - by wakandan
    I am trying to learn how to use a buffer overflow attack in Ubuntu. Unfortunately, I cannot turn off the Address Space Layout Randomization (ASLR) feature in this OS, which is turned on by default. I have tried a workaround found in some Fedora books:

        echo "0" > /proc/sys/kernel/randomize_va_space

    but for some reason the protection is still there. Please give me some suggestions. Thanks. [edit] Actually the above command was not successful; it said "Permission denied", even with sudo. How can I fix that? [adding] I kept getting a segmentation fault error when it shows an address in the stack. Is it related to the non-executable stack in Ubuntu :(?
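
    Regarding the "Permission denied" even with sudo: a common explanation (an assumption here, but consistent with the symptom) is that the shell performs the > redirection with the ordinary user's privileges before sudo ever runs the command. Running the redirection inside a root shell, or going through sysctl, avoids that:

        # run the redirection itself as root
        sudo sh -c 'echo 0 > /proc/sys/kernel/randomize_va_space'

        # or set the same knob via sysctl
        sudo sysctl -w kernel.randomize_va_space=0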

    Read the article

  • Sockets: RAW or STREAM

    - by user1415536
    Maybe the question is a bit stupid, but I'll ask it. I have read a lot about raw sockets in networking and have seen several examples. So, basically, with raw sockets it's possible to build your own stack of headers, like stack = IP + TCP/UDP + OWN_HEADER. My question is: is it possible to get some kind of ready-made frame of the first two (IP + TCP/UDP) from the Linux kernel and then just append my own header to them? The operating system in question is Linux and the language is C. I cannot find any function which can do such a thing, but maybe I'm digging in the wrong direction.

    Read the article
