Search Results

Search found 33182 results on 1328 pages for 'linux port'.


  • Understanding NFS4 (Linux server)

    - by drumfire
    I've been a bit bothered by NFS4 on Linux. Some information 'out there' seems to conflict with other information, and other information appears hard to find. So here are a couple of things that caught my attention; hopefully someone out there can shed some light on this. This question focuses exclusively on NFS4 without Kerberos etc.

    1. Exports
    There is ambiguous information in the exports manpage on the structure of /etc/exports. To quote from exports(5): "Also, each line may have one or more specifications for default options after the path name, in the form of a dash ("-") followed by an option list. The option list is used for all subsequent exports on that line only." What does "subsequent exports on that line only" mean?

    1.2 fsid=0 not required anymore?
    I was searching for fsid when I found a comment on the linux-nfs list stating fsid=0 is not required anymore. Now I'm just confused: do I need it with NFS4 or not?

    2. Non-exported directory still mountable
    Say I have the following tree:

        /exp
        /exp/users
        /exp/distr
        /exp/distr/archlinux
        /exp/distr/debian

    And I have the following entries in fstab:

        /dev/disk/by-label/users  /mnt/users  ext4  defaults  0 0
        /dev/disk/by-label/distr  /mnt/distr  ext4  defaults  0 0
        /mnt/users                /exp/users  none  bind      0 0
        /mnt/distr                /exp/distr  none  bind      0 0

    And my exports file is exactly this:

        /exp        192.168.1.0/24(fsid=0,rw,async,no_subtree_check,no_root_squash)
        /exp/distr  192.168.1.0/24(rw,async,no_subtree_check,no_root_squash)

    And exportfs -arv shows:

        exporting 192.168.1.0/24:/exp/distr
        exporting 192.168.1.0/24:/exp

    Then why am I able to do this on a client and get no error:

        mount -t nfs4 server:/exp/users /tmp/test

    even though /exp/users is not exported? I didn't export this directory, and while I don't see the contents of /dev/disk/by-label/users unless I specify crossmnt, I am still able to write to the directory. Everything I write there goes to the underlying directory of /exp/users, which can be seen when I umount /exp/users; ls /exp/users.

    3. The odd case of showmount -d server
    As stated by rpc.mountd(8), this command should display directories that are either currently mounted by clients or stale entries in /var/lib/nfs/rmtab, as can be read: "The rpc.mountd daemon registers every successful MNT request by adding an entry to the /var/lib/nfs/rmtab file. When receiving a UMNT request from an NFS client, rpc.mountd simply removes the matching entry from /var/lib/nfs/rmtab, as long as the access control list for that export allows that sender to access the export. (...) Note, however, that there is little to guarantee that the contents of /var/lib/nfs/rmtab are accurate. A client may continue accessing an export even after invoking UMNT. If the client reboots without sending a UMNT request, stale entries remain for that client in /var/lib/nfs/rmtab." After reading this I surely wonder:

    - Isn't it terribly insecure to just expose this type of client information?
    - Aren't unaware server admins bound to have an rmtab with a lot of stale clients?
    - Is this the reason that clients that mount NFS4 directories with mount -v get to see output like "nothing was mounted" even though something was mounted?

    I have a lot of other questions regarding NFS4, but I'll keep it at this for the moment. :)
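
    Taking the quoted exports(5) sentence literally, the dash-prefixed options act as per-line defaults for the host entries (the "exports") that follow them on that same line; host-specific options in parentheses still apply on top. A minimal sketch of that form, with client names invented purely for illustration:

        # the -ro,sync defaults apply to clientA and clientB on this line only
        /srv/nfs  -ro,sync  clientA  clientB(rw)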

    Read the article

  • Cygwin socket & thread & other programming issues (some question about Cygwin)

    - by SjB
    I have some questions about Cygwin:

    1. Can I use Cygwin to develop socket-based code?
    2. Does Cygwin have read() and write() functions that work with file descriptors?
    3. Can I use the pthread library in Cygwin?
    4. Does code that compiles in Cygwin also compile in Linux without any change, or with little change?
    5. Will an executable file built by Cygwin run in Linux?
    6. Why does Cygwin not need the linker option -lpthread when I use the pthread library?
    7. Why, after #include <iostream>, do I not need to write "using namespace std;"?
    8. Can I work with Qt in Cygwin? If so, how?
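
    For reference, here is a minimal sketch of the kind of code these questions are about — plain file-descriptor read()/write() plus a pthread — intended to build unchanged under both Cygwin and Linux (gcc demo.c -lpthread; on Cygwin the pthread functions live in the Cygwin DLL, which is why -lpthread is typically not required there). This is an illustrative sketch, not code taken from the question.

        /* A pthread writes into a pipe; main() reads it back with read(). */
        #include <pthread.h>
        #include <unistd.h>
        #include <stdio.h>
        #include <string.h>

        static int fds[2];                           /* fds[0] = read end, fds[1] = write end */

        static void *writer(void *arg)
        {
            const char *msg = "hello from a pthread\n";
            write(fds[1], msg, strlen(msg));         /* plain file-descriptor write() */
            return NULL;
        }

        int main(void)
        {
            char buf[64];
            pthread_t t;
            ssize_t n;

            if (pipe(fds) != 0)
                return 1;
            pthread_create(&t, NULL, writer, NULL);

            n = read(fds[0], buf, sizeof buf - 1);   /* plain file-descriptor read() */
            if (n > 0) {
                buf[n] = '\0';
                fputs(buf, stdout);
            }

            pthread_join(t, NULL);
            return 0;
        }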

    Read the article

  • User management API

    - by Akshey
    Hi, I am developing an application suite where users will need to connect to a server and, depending on their account type, will be given certain services. The server will run Linux. Can you please suggest a user management API which I can use to develop the server program? By user management I mean user authentication and other related functionality. I prefer to work in C++ or Python, but any other language should not be a problem. Please note that this application suite is not web based. Due to security concerns, I do not want to give each user a separate account on the Linux server. Thanks, Akshey
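
    One API commonly used for this kind of authentication on Linux is PAM (Pluggable Authentication Modules), which can be backed by modules other than system accounts depending on configuration. Below is a minimal C sketch of the PAM call sequence only, not a recommendation drawn from the question itself; the service name "login" and the use of misc_conv from libpam_misc are illustrative assumptions.

        /* Build with: gcc pam_demo.c -lpam -lpam_misc   (Linux-PAM assumed) */
        #include <security/pam_appl.h>
        #include <security/pam_misc.h>
        #include <stdio.h>

        int main(int argc, char *argv[])
        {
            static struct pam_conv conv = { misc_conv, NULL };  /* prompts on the terminal */
            pam_handle_t *pamh = NULL;
            const char *user = (argc > 1) ? argv[1] : "nobody";
            int ret;

            ret = pam_start("login", user, &conv, &pamh);       /* "login" = example service name */
            if (ret == PAM_SUCCESS)
                ret = pam_authenticate(pamh, 0);                /* ask for and check credentials */
            if (ret == PAM_SUCCESS)
                ret = pam_acct_mgmt(pamh, 0);                   /* account valid, not expired, etc. */

            printf("auth %s\n", ret == PAM_SUCCESS ? "ok" : pam_strerror(pamh, ret));
            pam_end(pamh, ret);
            return ret == PAM_SUCCESS ? 0 : 1;
        }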

    Read the article

  • Does the combo of PHP5, MySQL, and a Macbook Pro constitute a LAMP stack? If not, what does?

    - by RedEye
    Hello - I mostly code in Visual Studio; I like it, but lately it's making me feel a little claustrophobic. On my MacBook Pro, I've set up PHP5 and MySQL (natively). With the built-in server on the Mac, does this constitute a LAMP stack? Is Mac OS X considered a Linux environment? I have VMware Fusion 3; should I set up a Linux OS virtually in order to implement a LAMP stack? Should I just use CakePHP or Zend? Any guidance would be greatly appreciated.

    Read the article

  • Writing data over RxTx using usbserial?

    - by Jeach
    I'm using the RxTx library over usbserial on a Linux distro. The RxTx lib seems to behave quite differently (in a bad way) from how it works over plain serial. One of my biggest problems is that the RxTx SerialPortEvent.OUTPUT_BUFFER_EMPTY event does not work on Linux over usbserial. How do I know when I should write to the stream? Are there any indicators I might have missed? So far my experience with writing and reading concurrently has not been great. Does anyone know if I should lock the DATA_AVAILABLE handler so it is not invoked while I'm writing to the stream, or does RxTx accept concurrent reads/writes? Thanks in advance.

    Read the article

  • Mercurial hg clone error - "abort: error: Name or service not known"

    - by Bojan Milankovic
    I have installed the latest hg package available for Fedora Linux. However, hg clone reports an error:

        hg clone http://localmachine001:8000/ repository
        abort: error: Name or service not known

    localmachine001 is a computer within the local network. I can ping it from my Linux box without any problems. I can also use the same HTTP address and browse the existing code. However, hg clone does not work. If I execute the same command from my Macintosh machine, I can easily clone the repository. Some Internet resources recommend editing the .hgrc file and adding a proxy to it:

        [http_proxy]
        host = proxy:8080

    I have tried that without any success. Also, I assume that a proxy is not needed in this case, since the hg server machine is on my local network. Can anyone recommend what I should do, or how I could track down the problem? Thank you in advance.

    Read the article

  • .NET SerialPort DataReceived event thread interference with main thread

    - by Kiran
    I am writing a serial communication program using the SerialPort class in C# to interact with a strip machine connected via an RS232 cable. When I send a command to the machine it responds with some bytes, depending on the command. For example, when I send a "\D" command, I expect to download the machine's program data of 180 bytes as a continuous string. The machine's manual suggests, as a best practice, sending an unrecognized character such as the comma (,) character to make sure the machine is initialized before sending the first command in the cycle. My serial communication code is as follows:

        public class SerialHelper
        {
            SerialPort commPort = null;
            string currentReceived = string.Empty;
            string receivedStr = string.Empty;

            private bool CommInitialized()
            {
                try
                {
                    commPort = new SerialPort();
                    commPort.PortName = "COM1";
                    if (!commPort.IsOpen)
                        commPort.Open();
                    commPort.BaudRate = 9600;
                    commPort.Parity = System.IO.Ports.Parity.None;
                    commPort.StopBits = StopBits.One;
                    commPort.DataBits = 8;
                    commPort.RtsEnable = true;
                    commPort.DtrEnable = true;
                    commPort.DataReceived += new SerialDataReceivedEventHandler(commPort_DataReceived);
                    return true;
                }
                catch (Exception ex)
                {
                    return false;
                }
            }

            void commPort_DataReceived(object sender, SerialDataReceivedEventArgs e)
            {
                SerialPort currentPort = (SerialPort)sender;
                currentReceived = currentPort.ReadExisting();
                receivedStr += currentReceived;
            }

            internal int CommIO(string outString, int outLen, ref string inBuffer, int inLen)
            {
                receivedStr = string.Empty;
                inBuffer = string.Empty;
                if (CommInitialized())
                {
                    commPort.Write(outString);
                }
                System.Threading.Thread.Sleep(1500);
                int i = 0;
                while ((receivedStr.Length < inLen) && i < 10)
                {
                    System.Threading.Thread.Sleep(500);
                    i += 1;
                }
                if (!string.IsNullOrEmpty(receivedStr))
                {
                    inBuffer = receivedStr;
                }
                commPort.Close();
                return inBuffer.Length;
            }
        }

    I am calling this code from a Windows form as follows:

        len = SerialHelperObj.CommIO(",", 1, ref inBuffer, 4);
        len = SerialHelperObj.CommIO(",", 1, ref inBuffer, 4);
        if (inBuffer == "!?*O")
        {
            len = SerialHelperObj.CommIO("\D", 2, ref inBuffer, 180);
        }

    A valid return value from the serial port looks like this:

        \D00000010000000000010 550 3250 0000256000 and so on ...

    but I am getting something like this instead:

        \D00000010D,, 000 550 D,, and so on...

    I feel that my comm calls are interfering with one another when I send commands. I try to check the result of the comma command before initiating the actual command, but the receive handler is inserting bytes from the previous communication cycle. Can anyone please shed some light on this? I've lost quite some hair just trying to get this to work, and I am not sure what I am doing wrong.

    Read the article

  • I need help with this ogre dependent header (Qgears)

    - by commodore
    I'm 2 errors away from compiling Qgears (a hacked version of the Final Fantasy VII engine). I've messed with the preprocessor settings to point at the actual location of the Ogre header files. Here are the errors:

        ||=== qgears, Debug ===|
        /home/cj/Desktop/qgears/trunk/project/linux/src/core/TextManager.h|48|error: invalid use of ‘::’|
        /home/cj/Desktop/qgears/trunk/project/linux/src/core/TextManager.h|48|error: expected ‘;’ before ‘m_LanguageRoot’|
        ||=== Build finished: 2 errors, 0 warnings ===|

    Here's the header file:

        // $Id$
        #ifndef TEXT_MANAGER_h
        #define TEXT_MANAGER_h

        #include <OGRE/OgreString.h>
        #include <OGRE/OgreUTFString.h>
        #include <map>

        struct TextData
        {
            TextData():
                text(""),
                width(0),
                height(0)
            {
            }

            Ogre::String name;
            Ogre::UTFString text;
            int width;
            int height;
        };

        typedef std::vector<TextData> TextDataVector;

        class TextManager
        {
        public:
            TextManager(void);
            virtual ~TextManager(void);

            void SetLanguageRoot(const Ogre::String& root);
            void LoadTexts(const Ogre::String& file_name);
            void UnloadTexts(const Ogre::String& file_name);
            const TextData GetText(const Ogre::String& name);

        private:
            struct TextBlock
            {
                Ogre::String block_name;
                std::vector<TextData> text;
            }

            Ogre::String m_LanguageRoot; // Line #48
            std::list<TextBlock> m_Texts;
        };

        extern TextManager* g_TextManager;

        #endif // TEXT_MANAGER_h

    The only header included that is not an Ogre header is <map>. If it helps, I'm using the Code::Blocks IDE and the GCC compiler on GNU/Linux (Arch). Even if I get this header fixed, I suspect I'll have more build errors later, but it's worth a shot.

    Edit: I added the semicolon, and now I have one more error in the header file: error: expected unqualified-id before ‘{’ token

    Read the article

  • Can't get Logback Eclipse plugin to display output

    - by Zombies
    I followed the instructions here: http://logback.qos.ch/consolePlugin.html. My logback.xml is correct and is found, it is set up correctly, and the port is listening. Nothing shows up with logger.error("Test"); it logs to sysout fine when I remove logback.xml, which shows me that logback itself is working fine. I installed the plugin on Linux by moving it to /usr/lib/eclipse/plugins. The window shows up, but no logging events appear in it. I also added a catch-all ACCEPT filter like the one on that page. Perhaps this is a Linux permission issue?

    Read the article

  • undefined reference linker error

    - by klaus-johan
    Hi, I'm stuck on a C++ project under Linux, for which I get an undefined reference error when I try to create an object of a class I just wrote. I believe this is a linker error caused by the fact that somewhere, somehow, I should tell the linker to take the new class into account. I looked at the project properties, and the run command executes a script (cmake.sh). Because the project wasn't created by me, and because I'm a novice at working under Linux, I just don't know how to direct the linker to do what I expect it to do!

    Read the article

  • Why GPRS modem provides embedded TCP/IP stack

    - by Christian Madsen
    My colleague and I are scanning the GPRS modem market for a module suitable for use with embedded Linux. During the market scan, we see that several vendors highlight that their modems include an embedded TCP/IP stack. This makes me wonder: when we are using embedded Linux, which already contains a TCP/IP stack and connects using PPP, will it make use of the stack included in the GPRS modem at all? My current assumption is that the stack is included for use with tiny microcontroller OSes that do not supply their own stack. Also, some of the modems allow running small applications IN the modem's baseband processor, which could explain the embedded stack... So: is the TCP/IP stack supplied by the GPRS modem superfluous when using it with a high-level OS, or did I overlook something?

    Read the article

  • What happens after a packet is captured?

    - by Rayne
    Hi all, I've been reading about what happens after packets are captured by NICs, and the more I read, the more I'm confused. Firstly, I've read that traditionally, after a packet is captured by the NIC, it gets copied to a block of memory in kernel space, then to user space for whatever application then works on the packet data. Then I read about DMA, where the NIC directly copies the packet into memory, bypassing the CPU. So is the NIC -> kernel memory -> user-space memory flow still valid? Also, do most NICs (e.g. Myricom) use DMA to improve packet capture rates? Secondly, does RSS (Receive Side Scaling) work similarly on both Windows and Linux systems? I can only find detailed explanations of how RSS works in MSDN articles, where they talk about how RSS (and MSI-X) works on Windows Server 2008. But the same concepts of RSS and MSI-X should still apply to Linux systems, right? Thank you. Regards, Rayne
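
    The user-space end of the flow described above (NIC -> kernel buffer -> user space) is what capture libraries such as libpcap expose to applications. A minimal sketch follows, with "eth0" as a placeholder device name; it illustrates only the user-space side, not the DMA or RSS mechanics underneath. Build with gcc cap.c -lpcap.

        #include <pcap.h>
        #include <stdio.h>

        static void on_packet(u_char *user, const struct pcap_pkthdr *h, const u_char *bytes)
        {
            (void)user; (void)bytes;
            /* h->caplen bytes of the frame have already been copied into this process */
            printf("captured %u bytes (wire length %u)\n", h->caplen, h->len);
        }

        int main(void)
        {
            char errbuf[PCAP_ERRBUF_SIZE];
            pcap_t *p = pcap_open_live("eth0", 65535, 1, 1000, errbuf);  /* snaplen, promisc, timeout ms */

            if (!p) {
                fprintf(stderr, "pcap_open_live: %s\n", errbuf);
                return 1;
            }

            pcap_loop(p, 10, on_packet, NULL);   /* deliver 10 packets to the callback, then return */
            pcap_close(p);
            return 0;
        }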

    Read the article

  • How to find connected hosts at network (vpn or lan)

    - by Javier Novoa C.
    Hello, I'm looking for possible solutions to the following need: I have a VPN configured (using OpenVPN on Linux, BTW), and I want to know at any moment which hosts are connected to it. I recognize that this is probably the same thing as trying to know which hosts are connected to a LAN, so any of those solutions might do the job... The fact is that I once used a Hamachi VPN on Linux, and with it I had the chance to know which hosts were connected to a particular network I belonged to, so I was wondering if something similar might be possible with OpenVPN (or even any VPN and/or any LAN). Preferably, I'm looking for open-source/free-software solutions, or maybe hints on how to program it myself (in the simplest way possible; not that I don't know how to program, but I'm trying to achieve this in a simple manner). But anyway, if there are no open-source/free-software solutions, any other one might do... Thanks a lot! Javier, Mexico City

    Read the article

  • How can I include platform-specific native libraries in the .JAR file using Eclipse?

    - by Martin Wiboe
    Hello all, I am just starting to learn JNI. I have been following a simple example, and I have created a Java app that calls a Hello World method in a native library. I'd like to target Win32 and Linux x86. My library resides in a DLL, and I can call it just fine using LoadLibrary when the DLL is added to the root of my Eclipse project. However, I can't figure out how to get Eclipse to export a runnable JAR that includes both the DLL and the .so file for Linux. So my question is basically: how would you go about creating a project in Eclipse and including several versions of the same native library? Thank you, Martin

    Read the article

  • How to find which type of system call is used by a program

    - by bala1486
    I am working on an x86_64 machine. My Linux kernel is also a 64-bit kernel. As there are different ways to implement a system call (int 0x80, syscall, sysenter), I wanted to know what type of system call my machine is using. I am a newbie to Linux. I have written a demo program:

        #include <unistd.h>

        int main()
        {
            getpid();
            return 0;
        }

    getpid() makes one system call. Can anybody give me a method to find out which type of system call will be used by my machine for this program? Thank you.
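
    One way to see what is in play: on x86_64 the native mechanism of the 64-bit ABI is the syscall instruction (system call number in rax, return value in rax, rcx and r11 clobbered), while 32-bit binaries go through int 0x80 or the vDSO's sysenter path; disassembling the glibc getpid wrapper, or single-stepping it in gdb, shows which instruction a given binary actually executes. The snippet below is an illustrative sketch that issues getpid directly through the raw x86_64 syscall instruction; it assumes an x86_64 build and uses 39, the x86_64 __NR_getpid number.

        #include <stdio.h>

        int main(void)
        {
            long pid;

            /* Raw x86_64 system call: number in rax, result comes back in rax. */
            __asm__ volatile ("syscall"
                              : "=a"(pid)              /* return value in rax */
                              : "a"(39L)               /* 39 = __NR_getpid on x86_64 */
                              : "rcx", "r11", "memory");

            printf("getpid via the syscall instruction: %ld\n", pid);
            return 0;
        }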

    Read the article

  • Understanding the output of ldd

    - by nebukadnezzar
    I'm having a hard time understanding the output of ldd, especially the processor identifiers. The string in question is this one:

        Shortest.so: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, from ']', not stripped

    I have several questions about it:

    1. What does "ELF" mean? I know that's what Linux binaries are called (Windows binaries are called PE binaries, "Portable Executable" binaries), but isn't ELF an abbreviation for something?
    2. What does LSB mean? I can't even guess...
    3. I see the string "Intel" there, and now I seriously wonder about the portability of Linux binaries, as ldd seems to expect every binary to be compiled on an Intel processor... but what if it wasn't compiled on an Intel processor? Or what if I attempt to run the binary on a computer that doesn't run on top of an Intel processor?
    4. Why the ']'? My guess is it should be some sort of linker identifier, but ']' doesn't look much like an identifier...

    Thanks in advance.

    Read the article

  • what is the difference between "./somescript.sh" and ". ./somescript.sh"

    - by Peter
    This question may sound silly to you. Today I was following some instructions to install software on Linux. There was a script that needed to be run first; it sets some environment variables. The instructions told me to execute . ./setup.sh, but I made a mistake by executing ./setup.sh, so the environment was not set. Finally I noticed this and proceeded. I want to know: what exactly is the difference between the two? I am completely new to Linux, so please be as elaborate as possible.

    Read the article

  • Is there an ftp plugin for gedit that will let me work locally?

    - by RobertWHurst
    I'm trying to switch from a Windows environment to Linux. I'm primarily a PHP developer, but I know quite a bit about other languages such as CSS, XHTML and JavaScript. I need a way of editing my files locally, because I work in a git repository and need to commit my saves. On Windows I used Aptana and PDT: I'd save my files, upload them via Aptana, then commit my work with git. I need to get a workflow going on my Linux machine now. If you know a better way to do this, let me know; however, my real question is: is there a plugin that allows gedit to upload files instead of working remotely?

    Read the article

  • MySql UDF using shared library won't load

    - by Jarrod
    I am attempting to create a MySQL UDF which will match a fingerprint using Digital Persona's free Linux SDK library. I wrote a trivial UDF as a learning exercise, which worked fine. However, when I added a dependency on Digital Persona's shared object, I can no longer get MySQL to load my UDF. I added includes for their headers and compiled my UDF using:

        gcc -fPIC -Wall -I/usr/src/mysql-5.0.45-linux-i686-icc-glibc23/include -shared -o dp_udf.so dp_udf.cc

    I also tried adding the -static argument, but whenever I restart MySQL, I get the error:

        Can't open shared library 'dp_udf.so' (errno: 0 /usr/local/mysql/lib/plugin/dp_udf.so: undefined symbol: MC_verifyFeaturesEx)

    MC_verifyFeaturesEx is a function defined in "dpMatch.h", which I included, and is implemented in libdpfpapi.so, which I have tried placing both in the same location as my dp_udf.so and in /usr/lib. Am I doing something wrong with my call to gcc (my C++ skills are rusty), or does MySQL not allow UDFs to use additional shared objects?
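
    An "undefined symbol" at plugin load time usually means the UDF's .so was never linked against the library that actually defines the symbol, so one thing to try is naming that library on the link line. A hedged sketch, assuming the library really is libdpfpapi.so in /usr/lib (so its -l name would be dpfpapi) and with the extra -I path below being a placeholder for wherever the SDK headers live:

        gcc -fPIC -Wall \
            -I/usr/src/mysql-5.0.45-linux-i686-icc-glibc23/include \
            -I/path/to/digitalpersona/headers \
            -shared -o dp_udf.so dp_udf.cc \
            -L/usr/lib -ldpfpapi

    If the link succeeds, ldd dp_udf.so should list libdpfpapi.so, and the MySQL server will still need to be able to locate that library at runtime (for example via ldconfig).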

    Read the article

  • How to set up linux watchdog daemon with Intel 6300esb

    - by ACiD GRiM
    I've been searching for this on Google for some time now and I have yet to find proper documentation on how to connect the kernel driver for my 6300ESB watchdog timer to /dev/watchdog and ensure that the watchdog daemon is keeping it alive. I am using RHEL-compatible Scientific Linux 6.3 in a KVM virtual machine, by the way. Below is everything I've tried so far.

    dmesg | grep 6300:

        i6300ESB timer: Intel 6300ESB WatchDog Timer Driver v0.04
        i6300ESB timer: initialized (0xffffc900008b8000). heartbeat=30 sec (nowayout=0)

    ll /dev/watchdog:

        crw-rw----. 1 root root 10, 130 Sep 22 22:25 /dev/watchdog

    /etc/watchdog.conf:

        #ping = 172.31.14.1
        #ping = 172.26.1.255
        #interface = eth0
        file = /var/log/messages
        #change = 1407

        # Uncomment to enable test. Setting one of these values to '0' disables it.
        # These values will hopefully never reboot your machine during normal use
        # (if your machine is really hung, the loadavg will go much higher than 25)
        max-load-1 = 24
        max-load-5 = 18
        max-load-15 = 12

        # Note that this is the number of pages!
        # To get the real size, check how large the pagesize is on your machine.
        #min-memory = 1

        #repair-binary = /usr/sbin/repair
        #test-binary =
        #test-timeout =

        watchdog-device = /dev/watchdog

        # Defaults compiled into the binary
        #temperature-device =
        #max-temperature = 120

        # Defaults compiled into the binary
        #admin = root
        interval = 10
        #logtick = 1

        # This greatly decreases the chance that watchdog won't be scheduled before
        # your machine is really loaded
        realtime = yes
        priority = 1

        # Check if syslogd is still running by enabling the following line
        #pidfile = /var/run/syslogd.pid

    Now maybe I'm not testing it correctly, but I would expect that stopping the watchdog service would cause /dev/watchdog to time out after 30 seconds and the host to reboot; however, this does not happen. Also, here is my config for the KVM VM:

        <!--
        WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
        OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
          virsh edit sl6template
        or other application using the libvirt API.
        -->
        <domain type='kvm'>
          <name>sl6template</name>
          <uuid>960d0ac2-2e6a-5efa-87a3-6bb779e15b6a</uuid>
          <memory unit='KiB'>262144</memory>
          <currentMemory unit='KiB'>262144</currentMemory>
          <vcpu placement='static'>1</vcpu>
          <os>
            <type arch='x86_64' machine='rhel6.3.0'>hvm</type>
            <boot dev='hd'/>
          </os>
          <features>
            <acpi/>
            <apic/>
            <pae/>
          </features>
          <cpu mode='custom' match='exact'>
            <model fallback='allow'>Westmere</model>
            <vendor>Intel</vendor>
            <feature policy='require' name='tm2'/>
            <feature policy='require' name='est'/>
            <feature policy='require' name='vmx'/>
            <feature policy='require' name='ds'/>
            <feature policy='require' name='smx'/>
            <feature policy='require' name='ss'/>
            <feature policy='require' name='vme'/>
            <feature policy='require' name='dtes64'/>
            <feature policy='require' name='rdtscp'/>
            <feature policy='require' name='ht'/>
            <feature policy='require' name='dca'/>
            <feature policy='require' name='pbe'/>
            <feature policy='require' name='tm'/>
            <feature policy='require' name='pdcm'/>
            <feature policy='require' name='pdpe1gb'/>
            <feature policy='require' name='ds_cpl'/>
            <feature policy='require' name='pclmuldq'/>
            <feature policy='require' name='xtpr'/>
            <feature policy='require' name='acpi'/>
            <feature policy='require' name='monitor'/>
            <feature policy='require' name='aes'/>
          </cpu>
          <clock offset='utc'/>
          <on_poweroff>destroy</on_poweroff>
          <on_reboot>restart</on_reboot>
          <on_crash>restart</on_crash>
          <devices>
            <emulator>/usr/libexec/qemu-kvm</emulator>
            <disk type='file' device='disk'>
              <driver name='qemu' type='raw'/>
              <source file='/mnt/data/vms/sl6template.img'/>
              <target dev='vda' bus='virtio'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
            </disk>
            <controller type='usb' index='0'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
            </controller>
            <interface type='bridge'>
              <mac address='52:54:00:44:57:f6'/>
              <source bridge='br0.2'/>
              <model type='virtio'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
            </interface>
            <interface type='bridge'>
              <mac address='52:54:00:88:0f:42'/>
              <source bridge='br1'/>
              <model type='virtio'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
            </interface>
            <serial type='pty'>
              <target port='0'/>
            </serial>
            <console type='pty'>
              <target type='serial' port='0'/>
            </console>
            <watchdog model='i6300esb' action='reset'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
            </watchdog>
            <memballoon model='virtio'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
            </memballoon>
          </devices>
        </domain>

    Any help is appreciated, as the most I've found so far are patches to KVM, general softdog documentation, or IPMI watchdog answers.

    Read the article

  • ptrace'ing of parent process

    - by osgx
    Hello. Can a child process use the ptrace system call to trace its parent? The OS is Linux 2.6. Thanks.

    UPD1: I want to trace process1 from "itself". That is impossible, so I fork and try to do ptrace(process1_pid, PTRACE_ATTACH) from the child process. But I can't; there is a strange error, as if the kernel prohibits children from tracing their parent processes.

    UPD2: Such tracing can be prohibited by security policies. Which policies do this? Where is the checking code in the kernel?

    UPD3: On my embedded Linux, PEEKDATA gives no errors, but GETREGS does:

        child: getregs parent: -1
        errno is 1, strerror is Operation not permitted
        errno = EPERM
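
    For reference, here is a minimal sketch of the fork-and-attach pattern the question describes (the child attaches to its parent, waits for it to stop, then tries PTRACE_GETREGS). Whether GETREGS succeeds can depend on the kernel's security modules: SELinux policies can deny ptrace, and on newer kernels Yama's ptrace_scope sysctl restricts tracing to descendants of the tracer, which forbids exactly this parent-tracing case. The sketch assumes an x86/x86_64 Linux build.

        #include <sys/ptrace.h>
        #include <sys/types.h>
        #include <sys/user.h>
        #include <sys/wait.h>
        #include <unistd.h>
        #include <stdio.h>
        #include <string.h>
        #include <errno.h>

        int main(void)
        {
            pid_t parent = getpid();

            if (fork() == 0) {                               /* child: trace the parent */
                struct user_regs_struct regs;

                if (ptrace(PTRACE_ATTACH, parent, NULL, NULL) == -1) {
                    fprintf(stderr, "ATTACH: %s\n", strerror(errno));
                    _exit(1);
                }
                waitpid(parent, NULL, 0);                    /* wait until the parent stops */

                if (ptrace(PTRACE_GETREGS, parent, NULL, &regs) == -1)
                    fprintf(stderr, "GETREGS: %s\n", strerror(errno));
                else
                    fprintf(stderr, "GETREGS: ok\n");

                ptrace(PTRACE_DETACH, parent, NULL, NULL);
                _exit(0);
            }

            sleep(2);                                        /* parent: stay alive while traced */
            return 0;
        }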

    Read the article

  • malloc()/free() behavior differs between Debian and Redhat

    - by StasM
    I have a Linux app (written in C) that allocates a large amount of memory (~60 MB) in small chunks through malloc() and then frees it (the app continues to run afterwards). This memory is not returned to the OS but stays allocated to the process. Now, the interesting thing is that this behavior happens only on Red Hat Linux and its clones (Fedora, CentOS, etc.), while on Debian systems the memory is returned to the OS after all the freeing is done. Any ideas why there could be a difference between the two, or which setting may control it?
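
    Whatever the distribution-level difference turns out to be, the behavior described is governed by glibc's malloc tunables: allocations above M_MMAP_THRESHOLD are served by mmap() and handed back to the OS on free(), while free space at the top of the heap is only trimmed once it exceeds M_TRIM_THRESHOLD (or when malloc_trim() is called explicitly). A minimal sketch of those knobs, with the 128 KiB values chosen purely for illustration:

        #include <malloc.h>
        #include <stdlib.h>

        int main(void)
        {
            /* Serve allocations of 128 KiB or more via mmap(), so free() unmaps them. */
            mallopt(M_MMAP_THRESHOLD, 128 * 1024);

            /* Trim the heap back to the OS once more than 128 KiB sits free at its top. */
            mallopt(M_TRIM_THRESHOLD, 128 * 1024);

            /* Roughly 60 MB in small chunks, as in the question. */
            enum { CHUNK = 4096, COUNT = 15000 };
            static void *p[COUNT];
            for (int i = 0; i < COUNT; i++) p[i] = malloc(CHUNK);
            for (int i = 0; i < COUNT; i++) free(p[i]);

            malloc_trim(0);   /* explicitly release whatever free heap space can be released */
            return 0;
        }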

    Read the article
