Search Results

Search found 27819 results on 1113 pages for 'linux intel'.

  • Manage Upload Permissions, SFTP & Linux

    - by John R
    I'm new to Linux. I am working with a Red Hat 5.5 server and am using a Java-based SFTP script that will allow multiple users to upload text files to a server. I am undecided whether each user will have a separate directory or whether I will use a naming convention that includes their customer ID. The files include some personal information about their LAN settings, so I prefer to use SFTP as opposed to FTP. It is my understanding that SFTP is encrypted (also, I have a Java class configured to upload via SFTP, so I prefer not to switch protocols unless there is a very good reason). The prototype is for a system that will support large numbers of customers, and the thought of continually adding and removing clients through the command line seems highly impractical. (Again, I am new to/learning Linux and Red Hat.) What are the normal conventions for giving multiple users permission to upload files via SFTP, with a unique username and password for each?

    Read the article
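
    A common convention (a minimal sketch, not from the article; the group name, customer ID and the rssh shell are illustrative) is one system account per customer, all in a shared group, with a restricted shell so the accounts can upload over SFTP but never log in interactively:

        groupadd sftpcustomers
        # rssh (or scponly) is an add-on package that allows sftp/scp but blocks interactive logins
        useradd -m -g sftpcustomers -s /usr/bin/rssh cust1001
        passwd cust1001

    Wrapped in a small script, adding or removing a customer becomes a single command, which addresses the worry about managing large numbers of accounts by hand.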

  • Removable vs fixed mount points in Linux

    - by Dave
    What makes a mount point removable in Linux? I am using Gentoo Linux with GNOME 3.2, and I find it annoying that some of my drives (e.g. /dev/sdb) appear as removable but the others (e.g. /dev/sdc, /dev/sdd) do not. They are all in /etc/fstab with the same options, they are all mounted properly at startup, and they all work fine under my own folders /mnt/drive2, /mnt/drive3, /mnt/drive4. But only one of them (the first) appears in Nautilus (and in the GNOME 3 notification tray) as mountable/removable, not the others. Can I add options to my fstab to hide it? Or can I probe it using udevadm or something similar? It looks strange to be able to remove/unmount fixed drives that I never need to unmount or remove. Any pointers would be appreciated, thanks.

    Read the article
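
    One possible approach, sketched here on the assumption that GNOME 3.2 is using udisks 1.x (the device name is illustrative): a udev rule that asks the desktop to hide the drive instead of presenting it as mountable.

        # /etc/udev/rules.d/99-hide-internal-disks.rules
        KERNEL=="sdb", ENV{UDISKS_PRESENTATION_HIDE}="1"

        # re-read the rules and re-apply them to existing devices
        udevadm control --reload-rules
        udevadm trigger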

  • Linux - Imaging backup solution?

    - by xperator
    I want to know whether there is a way to make a snapshot-like backup of a Linux system into a single file and restore it on another system. In Windows there are programs which make a copy of a drive (like C:\) into a single image file, so you can restore this file later in case you are infected or something happens. Every time I want to migrate my VPS to another host, I have to set up the new server from scratch and move the files manually. Can I just make a snapshot backup of the whole system and restore it somewhere else (or on the same server)? I am not familiar with Linux and I have no idea whether this is technically possible. Are the partitions, configs, system files, etc. individual to each system? I heard about rsync, but that's not what I am looking for.

    Read the article
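
    Two hedged sketches of what this usually looks like on Linux (device name and paths are illustrative; a block-level image should be taken from a rescue or live environment so the filesystem is not mounted read-write):

        # raw image of the whole disk, restorable by running dd in the other direction
        dd if=/dev/vda of=/backup/vps.img bs=1M

        # file-level archive of the system, restorable onto a freshly partitioned and formatted disk
        tar -czpf /backup/vps.tar.gz --one-file-system --exclude=/backup /

    The partition table, bootloader and configs are all on the disk, so a block-level image carries everything; a tar archive needs partitions and a bootloader recreated on the target first.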

  • Create NTFS symbolic links from within Linux

    - by rymo
    Is there a Linux utility that can create NTFS symbolic links? That is, a link on an NTFS partition that points to another NTFS folder - one that will work within Windows 7, specifically. I wish to relocate a folder that is normally in-use while Windows is running. This machine can already dual-boot into Ubuntu, so I'd like to leverage that. EDIT: To keep this from potentially turning into "which Windows Live CD is best", I will limit this question to "Is it possible with Linux, yes or no?"

    Read the article

  • Setting up a WGR614v7 behind a Linux box

    - by commodore fancypants
    Here's the setup: I have an openSUSE box with 2 NICs; one goes to my home network router, and the other has DHCP running and is attached to a wireless router. I'm trying to get this setup to work before I switch to the Linux box as my home network router. My DHCP will offer the wireless router (a WGR614v7) an address, but anything that connects to the wireless router ends up with an APIPA address. I have all the firewalls on the wireless network turned off, as well as the wireless router's own DHCP. The Linux box isn't offering addresses to anything past the wireless router. Is this a problem with the router or with my DHCP setup? For testing purposes I have both NICs set in the internal zone, and I've tried both wireless and wired connections to the WGR614v7, to no avail.

    Read the article
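
    Two things usually matter in this layout: the cable from the Linux box normally has to go into one of the WGR614's LAN ports (not its WAN port) so that client broadcasts reach the Linux DHCP server, and dhcpd on the openSUSE box must be listening on that second NIC. A minimal ISC dhcpd stanza for the wireless segment might look like this (the addresses are illustrative, not taken from the question):

        subnet 192.168.10.0 netmask 255.255.255.0 {
            range 192.168.10.100 192.168.10.200;
            option routers 192.168.10.1;             # the Linux box's address on the second NIC
            option domain-name-servers 192.168.10.1;
        }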

  • Sporadic '.Xauthority not writable, changes will be ignored' going from OSX -> Linux

    - by Kamil Kisiel
    Every now and then when users SSH from their OS X (Snow Leopard) workstation to one of our Linux hosts they receive the message: /usr/bin/xauth: ~/.Xauthority not writable, changes will be ignored Of course, their X forwarded applications will not work at this point. However, if they log out and log right back in again they do not get the message and everything works as expected. On their Mac they get their home directory via AFP. The Linux machines get it via NFS. Any ideas on what could be going on here?

    Read the article
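
    One thing worth checking here (a diagnostic sketch, not a confirmed fix): xauth creates .Xauthority-c and .Xauthority-l lock files while it updates the cookie file, and a stale lock left behind on an NFS home directory makes later logins report the file as not writable.

        ls -la ~/.Xauthority*
        rm -f ~/.Xauthority-c ~/.Xauthority-l   # remove stale xauth lock files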

  • How to disable hiddev96 in Linux (or tell it to ignore a specific device)

    - by Miky D
    I'm having problems with a CentOS 5.0 system when using a certain USB device. The problem is that the device advertises itself as a HID device and Linux is happy to try to provide support for it: in /var/log/messages I see a line that reads: hiddev96: USB HID 1.11 Device [KXX USB PRO] on usb-0000:00:1d.0-1 My question comes down to: is there a way to tell Linux not to use hiddev96 for that device in particular? If yes, how? If not, what are my options - can I turn hiddev96 off completely? UPDATE I should probably have been a bit more specific about what is going on. The machine is running CentOS 5.0, and on top of it I'm running VMware Workstation with Windows XP - which is where the USB device is actually supposed to operate. All works fine for other USB devices (i.e. VMware successfully connects the USB device to the guest OS and the OS can use it), but for this particular device VMware connects it to the guest OS and the OS can't read/write to it; every attempt locks up the application that is trying to communicate with the device. I have reason to believe that this is because the device is a HID device and there's some contention between the Linux host and the Windows guest OS in accessing the device. Below is the output from modprobe -l | grep -i hid as requested by @Karolis: # modprobe -l | grep -i hid /lib/modules/2.6.18-53.1.14.el5/kernel/net/bluetooth/hidp/hidp.ko /lib/modules/2.6.18-53.1.14.el5/kernel/drivers/usb/misc/phidgetservo.ko /lib/modules/2.6.18-53.1.14.el5/kernel/drivers/usb/misc/phidgetkit.ko And here is the output of lsmod: # lsmod Module Size Used by udf 76997 1 vboxdrv 65696 0 autofs4 24517 2 hidp 23105 2 rfcomm 42457 0 l2cap 29633 10 hidp,rfcomm tun 14657 0 vmnet 49980 16 vmblock 20512 3 vmmon 945236 0 sunrpc 144253 1 cpufreq_ondemand 10573 1 video 19269 0 sbs 18533 0 backlight 10049 0 i2c_ec 9025 1 sbs button 10705 0 battery 13637 0 asus_acpi 19289 0 ac 9157 0 ipv6 251393 27 lp 15849 0 snd_hda_intel 24025 2 snd_hda_codec 202689 1 snd_hda_intel snd_seq_dummy 7877 0 snd_seq_oss 32577 0 nvidia 7824032 31 snd_seq_midi_event 11073 1 snd_seq_oss snd_seq 49713 5 snd_seq_dummy,snd_seq_oss,snd_seq_midi_event snd_seq_device 11725 3 snd_seq_dummy,snd_seq_oss,snd_seq snd_pcm_oss 42945 0 snd_mixer_oss 19009 1 snd_pcm_oss snd_pcm 72133 3 snd_hda_intel,snd_hda_codec,snd_pcm_oss joydev 13313 0 sg 36061 0 parport_pc 29157 1 snd_timer 24645 2 snd_seq,snd_pcm snd 52421 13 snd_hda_intel,snd_hda_codec,snd_seq_oss,snd_seq,snd_seq_device,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_timer ndiswrapper 170384 0 parport 37513 2 lp,parport_pc hci_usb 20317 2 ide_cd 40033 1 tg3 104389 0 i2c_i801 11469 0 bluetooth 53925 8 hidp,rfcomm,l2cap,hci_usb soundcore 11553 1 snd cdrom 36705 1 ide_cd serio_raw 10693 0 snd_page_alloc 14281 2 snd_hda_intel,snd_pcm i2c_core 23745 3 i2c_ec,nvidia,i2c_i801 pcspkr 7105 0 dm_snapshot 20709 0 dm_zero 6209 0 dm_mirror 28741 0 dm_mod 58201 8 dm_snapshot,dm_zero,dm_mirror ahci 23621 4 libata 115833 1 ahci sd_mod 24897 5 scsi_mod 132685 3 sg,libata,sd_mod ext3 123337 3 jbd 56553 1 ext3 ehci_hcd 32973 0 ohci_hcd 23261 0 uhci_hcd 25421 0

    Read the article
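
    A hedged sketch of one way to take the device away from the host's HID layer without turning HID support off entirely: detach just that USB interface from the usbhid driver through sysfs (the interface name below is illustrative; the real one is listed in the driver's directory).

        # list the interfaces currently claimed by usbhid
        ls /sys/bus/usb/drivers/usbhid/
        # detach the interface belonging to the KXX USB PRO so VMware can claim it
        echo -n "1-1:1.0" > /sys/bus/usb/drivers/usbhid/unbind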

  • Manual Duplex printing for Mac (and/or Linux)

    - by chris_l
    My printers don't support automatic duplex printing. I'm looking for a solution for my Mac and Linux computers like the one I've seen with most Windows printer drivers: check "Manual duplex" in the print dialog, the printer prints one side, a dialog appears asking me to flip the pages, and the printer prints the other side. One thing I can do is print odd pages, then reopen the dialog and print even pages, but this is very inconvenient, especially when I only want to print a certain page range of the document, as the Mac dialog forgets my previous page range every time. It gets even more inconvenient when printing 2-up double-sided, or when changing additional settings for this one printout. Is there maybe some tool that can do this? Or maybe a "virtual printer driver" that can sit somewhere between the dialog and the actual printer driver and manage these steps? (The Windows tool http://en.wikipedia.org/wiki/FinePrint can do something like that, but I don't need all of its features - and I need it on Mac/Linux.) Thanks, Chris

    Read the article
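
    On the CUPS command line (present on both Mac OS X and Linux) the two passes can be driven with standard job options; a sketch, assuming an illustrative document and page range:

        # first pass: odd pages of the selected range only
        lp -o page-ranges=3-10 -o page-set=odd document.pdf
        # re-insert the printed stack, then print the even pages on the back
        lp -o page-ranges=3-10 -o page-set=even -o outputorder=reverse document.pdf

    Whether outputorder=reverse is needed on the second pass depends on how the particular printer stacks its output, so a short test print is worth doing first.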

  • quick folder access on Linux (akin to Launchy)

    - by Eli Bendersky
    Launchy is a great piece of software; I use it on Windows mainly for quickly accessing folders. I love its auto-indexing in the background, and I hardly ever browse through folders manually these days - it saves me lots of time. On Linux (Ubuntu 9.10), however, I usually "live" in the terminal. Therefore, Launchy on Linux (or Gnome Do, or its other replacements) is not what I need, as it opens the file manager, and I don't need the file manager. What I do need is something that indexes my folders and lets me cd into them quickly in the terminal. For example: mycd python_c will cd to ~/dev/scripts/python_code. I hope my intention is understood :-) Are you familiar with such tools?

    Read the article
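
    Tools such as autojump or z keep a learned index of visited directories, but a self-contained sketch of the mycd idea is just a shell function in ~/.bashrc (the ~/dev root is illustrative):

        # jump to the first directory under ~/dev whose name matches the argument
        mycd() {
            local target
            target=$(find ~/dev -type d -name "*$1*" -print -quit 2>/dev/null)
            if [ -n "$target" ]; then cd "$target"; else echo "no match for: $1"; fi
        }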

  • WinSCP equivalent for Linux/Ubuntu

    - by Shashank
    I'm shifting most of my projects to a Linux machine, and one of the things that I miss is WinSCP. I've found other answers saying that nautilus, FileZilla etc. can be used for SFTP, but something that I loved about WinSCP was that it has two panes (FileZilla's got that) and I could start synchronization from any directory. Unison or Rsync could work, but I'd have to create a folder pair every time I want to sync two folders. Is there an SFTP client for Linux that has a two-paned view and allows ad-hoc synchronization? Thanks!

    Read the article
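
    For the ad-hoc synchronization part, rsync over SSH needs no pre-defined folder pair; any two directories can be named on the command line (the paths are illustrative):

        # push local changes to the server; add --delete to make the remote side an exact mirror
        rsync -avz --progress ~/projects/site/ user@server:/srv/site/

    For the two-pane browsing itself, graphical twin-panel file managers such as Krusader can open SFTP locations in one of the panes.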

  • Unable to install new versions of Linux on my PC

    - by iamrohitbanga
    I have an MSI-RC410 motherboard with an onboard ATI Radeon Xpress 200 series graphics card. I have used openSUSE 11.0 on my PC for over a year, but when I tried to upgrade to openSUSE 11.1, I managed to install it only to find that several features were missing; for example, /dev/cdrom was missing. The install DVD of Fedora 11 also did not work. I have also tried the Ubuntu 9.10 live CD, but when I boot from the CD I get the initramfs command prompt. I am still able to install old versions of these OSs. What is the reason for this behavior? Does it have anything to do with the new version of the Linux kernel not being compatible with my hardware? What should I do to install new versions of Linux?

    Read the article

  • GNU/Linux: SAS-disk detected as /dev/sg7 - not as /dev/sdb

    - by Ole Tange
    I have just installed a SAS disk into a Debian server. It was detected correctly and everything was fine. Then I moved the SAS disk to a different Debian server, the same hardware model and running same version of Debian, but here the SAS disk is detected as /dev/sg7 and not /dev/sdb. smartctl -a /dev/sg7 works fine, but fdisk and cat hang. I tried putting the SAS disk in another slot: Same problem. How can I force the SAS disk to be detected as /dev/sdb? # uname -a Linux maxwell 3.2.0-4-amd64 #1 SMP Debian 3.2.41-2+deb7u2 x86_64 GNU/Linux

    Read the article
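
    When a disk shows up only as a /dev/sg* node, the sd driver usually has not attached to it; a hedged first step is to force a rescan of the SCSI hosts and watch what the kernel reports (the host number is illustrative - repeat for each host present):

        echo "- - -" > /sys/class/scsi_host/host0/scan
        dmesg | tail            # look for a new sd device being attached
        lsscsi                  # optional package; shows which devices have only sg nodes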

  • How to disable passive mode in the Linux ftp command

    - by nute
    I am using the "ftp" command of linux to send data to a 3rd party provider. This company states that we need to "Disable passive mode in your FTP client", and I confirm it doesn't work in passive mode. However, when I googled the linux command, I see that the "-p" flag is "the default now for all clients (ftp and pftp) due to security concerns using the PORT transfer mode. The flag is kept for compatibility only and has no effect anymore." How do I disable passive mode then? And, is it that bad?

    Read the article
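
    With the stock Linux ftp client, passive mode can simply be toggled at the ftp> prompt for the session (a sketch; the host and file names are illustrative). As for whether active mode is that bad: it mainly matters when the client sits behind NAT or a strict firewall, because the server opens the data connection back to the client.

        $ ftp ftp.example.com
        ftp> passive            # toggles passive mode off if it was on
        ftp> put report.txt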

  • How to properly shut down a Linux VMware Server host

    - by Mikee
    Hi everybody: In our lab, we have a server running Ubuntu Linux 8.04.4 x64, with VMware Server 2.1 hosting 4 VMs. I have a major concern with regard to shutting down the host server: how do I ensure that the guest VMs are being shut down safely? In the VMware web interface console, I have enabled "Allow virtual machines to start and stop automatically with the system"; I enabled the Default Startup Delay of 15 seconds along with the "Start next VM immediately if the VMware Tools start" option; and I enabled the Default Shutdown Delay with a 60-second shutdown delay and a Shutdown Action of "Shut Down Guest". All VMs have the VMware Tools installed and properly working. All VMs are moved up into the "Specified Order" section of "Startup Order", so when powering the server back on, all those VMs should start up again in that specified order. When I went to shut down the server, I used the shutdown -h now command. Based on the settings I entered above, I was expecting a 4-minute shutdown, as there is an option to delay the shutdown of each VM by 60 seconds. However, that is not what happened. Instead, the server shut down in under a minute. When I powered the server back on, only 2 VMs loaded properly. The other 2 showed the following error: "Power on Virtual Machine" failed to complete. If these problems persist, please contact your system administrator. Details: Cannot open the disk '[location to .vmdk]' or one of the snapshot disks it depends on. Reason: Failed to lock the file. Obviously, if this error occurred, it is clear to me that the VMs were not properly shut down, or the server powered off before the VMs had completely shut down. I have fixed the above error by deleting the .lck files in the respective VM directories. How would I know if the VMs were properly shut down? I checked the VMware Server logs, but they only seem to display the logs of when the vmware-mgmt service is running in the current session. I'm mostly running Linux VMs, so is there an easy way to know whether or not a server was properly shut down in Linux? Thank you all for the help!

    Read the article
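
    One defensive option, rather than relying on the host's shutdown ordering, is to stop the guests explicitly (letting VMware Tools do a clean shutdown inside each guest) before powering off the host. A hedged sketch using vmrun from VMware Server 2.x - the credentials and datastore path are illustrative:

        # "soft" asks VMware Tools inside the guest to shut the OS down cleanly
        vmrun -T server -h https://localhost:8333/sdk -u admin -p 'secret' \
              stop "[standard] web01/web01.vmx" soft
        # ...repeat for the remaining guests, then power off the host
        shutdown -h now

    A guest that was shut down cleanly should not leave .lck files behind, which also makes those files a reasonable after-the-fact indicator.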

  • Git Daemon on Linux?

    - by bwawok
    Trying to set up a simple git-daemon on a Linux server and talk to it from a Windows box. On the Linux server: make a folder /home/foo/bar, cd to /home/foo/bar and do a git --bare init there, touch git-daemon-export-ok, cd to /home/foo, and run the command git-daemon --verbose --reuseaddr --base-path=/home/foo --enable=receive-pack. On the Windows client with Tortoise Git: do git.exe clone --progress -v "git://servername/bar" "C:\source\myFolderName" (works); create file a.txt, add it to git, and commit (works); do a git.exe pull "origin" master and get fatal: Couldn't find remote ref master (makes sense, master isn't there yet); do a git.exe push "origin" master:master and Tortoise hangs forever without doing anything. I realize why I can't pull from master yet on the remote branch, but why can't I push my first commit into the remote repo? Step 4 really should work. I tried it both with Tortoise and the msysgit command line; in both cases it hangs forever. What am I missing? The server has no useful log output.

    Read the article
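
    Two things worth checking, offered as a sketch rather than a confirmed diagnosis: the account running git-daemon must be able to write into the bare repository, and receive-pack can also be enabled per repository in case the daemon's --enable flag is not taking effect (the gitsrv account name is illustrative):

        # the daemon's user needs write access to the repository it receives into
        ls -ld /home/foo/bar /home/foo/bar/objects
        chown -R gitsrv: /home/foo/bar

        # per-repository equivalent of --enable=receive-pack
        git config -f /home/foo/bar/config daemon.receivepack true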

  • Two operating systems sharing their file systems with each other (Windows and Linux)

    - by John Kube
    I have two operating systems installed on my notebook computer, Windows Vista and Ubuntu Linux. When I boot up, I'm presented with a bootloader which allows me to choose which one I want to load. I'm interested in sharing each operating system's file system with the other, such that I could access my Windows files from Linux and vice-versa. Is this possible, and if so how would one go about setting it up? Feel free to just post a link to an existing solution if there is one. I would Google for this myself, but I don't even know what to search for, as I don't know what this is called.

    Read the article
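
    This is usually set up as two one-way mounts rather than true sharing. On the Ubuntu side the Windows partition can be mounted read/write with ntfs-3g via /etc/fstab (the device name and mount point below are illustrative); in the other direction, Windows needs a third-party driver such as Ext2Fsd to read ext2/ext3 partitions.

        # /etc/fstab entry on Ubuntu: mount the Vista partition under /media/windows
        /dev/sda2  /media/windows  ntfs-3g  defaults,uid=1000,gid=1000  0  0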

  • Two network interfaces and two IP addresses on the same subnet in Linux

    - by Scott Duckworth
    I recently ran into a situation where I needed two IP addresses on the same subnet assigned to one Linux host so that we could run two SSL/TLS sites. My first approach was to use IP aliasing, e.g. using eth0:0, eth0:1, etc, but our network admins have some fairly strict settings in place for security that squashed this idea: They use DHCP snooping and normally don't allow static IP addresses. Static addressing is accomplished by using static DHCP entries, so the same MAC address always gets the same IP assignment. This feature can be disabled per switchport if you ask and you have a reason for it (thankfully I have a good relationship with the network guys and this isn't hard to do). With the DHCP snooping disabled on the switchport, they had to put in a rule on the switch that said MAC address X is allowed to have IP address Y. Unfortunately this had the side effect of also saying that MAC address X is ONLY allowed to have IP address Y. IP aliasing required that MAC address X was assigned two IP addresses, so this didn't work. There may have been a way around these issues on the switch configuration, but in an attempt to preserve good relations with the network admins I tried to find another way. Having two network interfaces seemed like the next logical step. Thankfully this Linux system is a virtual machine, so I was able to easily add a second network interface (without rebooting, I might add - pretty cool). A few keystrokes later I had two network interfaces up and running and both pulled IP addresses from DHCP. But then the problem came in: the network admins could see (on the switch) the ARP entry for both interfaces, but only the first network interface that I brought up would respond to pings or any sort of TCP or UDP traffic. After lots of digging and poking, here's what I came up with. It seems to work, but it also seems to be a lot of work for something that seems like it should be simple. Any alternate ideas out there? Step 1: Enable ARP filtering on all interfaces: # sysctl -w net.ipv4.conf.all.arp_filter=1 # echo "net.ipv4.conf.all.arp_filter = 1" >> /etc/sysctl.conf From the file networking/ip-sysctl.txt in the Linux kernel docs: arp_filter - BOOLEAN 1 - Allows you to have multiple network interfaces on the same subnet, and have the ARPs for each interface be answered based on whether or not the kernel would route a packet from the ARP'd IP out that interface (therefore you must use source based routing for this to work). In other words it allows control of which cards (usually 1) will respond to an arp request. 0 - (default) The kernel can respond to arp requests with addresses from other interfaces. This may seem wrong but it usually makes sense, because it increases the chance of successful communication. IP addresses are owned by the complete host on Linux, not by particular interfaces. Only for more complex setups like load- balancing, does this behaviour cause problems. arp_filter for the interface will be enabled if at least one of conf/{all,interface}/arp_filter is set to TRUE, it will be disabled otherwise Step 2: Implement source-based routing I basically just followed directions from http://lartc.org/howto/lartc.rpdb.multiple-links.html, although that page was written with a different goal in mind (dealing with two ISPs). Assume that the subnet is 10.0.0.0/24, the gateway is 10.0.0.1, the IP address for eth0 is 10.0.0.100, and the IP address for eth1 is 10.0.0.101. Define two new routing tables named eth0 and eth1 in /etc/iproute2/rt_tables: ... 
top of file omitted ... 1 eth0 2 eth1 Define the routes for these two tables: # ip route add default via 10.0.0.1 table eth0 # ip route add default via 10.0.0.1 table eth1 # ip route add 10.0.0.0/24 dev eth0 src 10.0.0.100 table eth0 # ip route add 10.0.0.0/24 dev eth1 src 10.0.0.101 table eth1 Define the rules for when to use the new routing tables: # ip rule add from 10.0.0.100 table eth0 # ip rule add from 10.0.0.101 table eth1 The main routing table was already taken care of by DHCP (and it's not even clear that it's strictly necessary in this case), but it basically equates to this: # ip route add default via 10.0.0.1 dev eth0 # ip route add 130.127.48.0/23 dev eth0 src 10.0.0.100 # ip route add 130.127.48.0/23 dev eth1 src 10.0.0.101 And voila! Everything seems to work just fine. Sending pings to both IP addresses works fine. Sending pings from this system to other systems and forcing the ping to use a specific interface works fine (ping -I eth0 10.0.0.1, ping -I eth1 10.0.0.1). And most importantly, all TCP and UDP traffic to/from either IP address works as expected. So again, my question is: is there a better way to do this? This seems like a lot of work for a seemingly simple problem.

    Read the article

  • How does Mac's command line compare to Linux?

    - by Nathan Long
    I love Ubuntu Linux - especially the command line. But I have to admit that, at least for now, Windows is more user-friendly - there's more software for it, more drivers, and more stuff just works. Knowing that Mac is built on Unix makes me wonder if it's the sweet spot between them. But I wonder: how similar is the Mac command line to Linux's bash? Could I pick right up with using vim and bash scripting and git, etc.? Would common commands like changing directories be different? Does anybody know an online "compare and contrast" resource?

    Read the article

  • Printing from Linux on an Acer Aspire netbook

    - by JoelFan
    A friend asked me to check why their Linux Acer Aspire netbook can't print to their HP printer. When I plugged in the USB cable, a "balloon" popped up on the netbook saying it was installing the printer, but printing does not work. I was able to get into a Settings area and click on Print Test Page, but nothing happened. If it was Ubuntu, I would go into the Log File Viewer, but I couldn't find that on whatever Linux flavor the Acer is running. I couldn't even figure out how to get to a terminal (shell) window. I tried searching the HP and Acer sites but nothing seems to apply to this issue.

    Read the article
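
    Whatever the distribution, the printing stack is almost certainly CUPS, so a terminal (or a virtual console via Ctrl+Alt+F1 if no terminal application can be found) gives the same information as Ubuntu's Log File Viewer would; a sketch:

        lpstat -p -d                        # is the HP printer known to CUPS, and is it the default?
        tail -n 50 /var/log/cups/error_log  # the CUPS log usually says why a test page went nowhere

    For HP printers specifically, installing the hplip package and running its hp-setup tool often supplies the missing driver.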

  • Best Linux distro for CUDA development

    - by user18151
    Can someone suggest the best Linux distro for CUDA development? The reason I'm asking is that I tried installing the latest CUDA SDK on Fedora 12, and it was a real pain in the neck. It took me 8 hours to get the nouveau driver removed and the NVIDIA driver installed. After that the OS somehow decided to act up, blowing up the /var/log/messages file to 9 GB and eating up all my remaining space, with strange errors. I don't even understand what happened after that, but my NVIDIA drivers don't work anymore. Please don't flame me, I'm NOT a Windows fanboy or anything. I've been using Linux since 2002 and actually like it; it's just my personal experience. Positive suggestions would be really helpful. Fanboys, please stay aside. Thanks in advance.

    Read the article
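
    The distro matters less than getting nouveau out of the way before the NVIDIA installer runs. On Fedora-style systems that is typically a blacklist file plus a rebuilt initramfs (a sketch of the usual steps, not a guaranteed recipe):

        # /etc/modprobe.d/blacklist-nouveau.conf
        blacklist nouveau
        options nouveau modeset=0

        # rebuild the initramfs so nouveau is not loaded at early boot, then reboot
        dracut --force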
