Search Results

Search found 85897 results on 3436 pages for 'flat file source'.


  • HTML5 Drag n Drop File Upload

    - by Paris
    I'm running a website where I'd like to upload files with drag and drop, using HTML5's File API and FileReader API. I have successfully managed to create a new FileReader, but I don't know how to upload the file. My code (JavaScript) is the following:

        holder = document.getElementById('uploader');
        holder.ondragover = function () { $("#uploader").addClass('dragover'); return false; };
        holder.ondragend = function () { $("#uploader").removeClass('dragover'); return false; };
        holder.ondrop = function (e) {
            $("#uploader").removeClass('dragover');
            e.preventDefault();
            var file = e.dataTransfer.files[0],
                reader = new FileReader();
            reader.onload = function (event) {
                // I should upload the file now...
            };
            reader.readAsDataURL(file);
            return false;
        };

    I also have a form (id: upload-form) and an input file field (id: upload-input). Do you have any ideas? P.S. I use jQuery, that's why there is $("#uploader") and so on.
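
    A minimal sketch of one way to finish the ondrop handler, assuming the server exposes a hypothetical /upload endpoint that accepts multipart form posts; note that the FileReader step is not actually needed for uploading, because the dropped File object can be sent as-is:

        holder.ondrop = function (e) {
            $("#uploader").removeClass('dragover');
            e.preventDefault();
            var file = e.dataTransfer.files[0];     // the dropped File object
            var formData = new FormData();          // multipart/form-data body
            formData.append('upload-input', file);  // field name matches the existing input
            var xhr = new XMLHttpRequest();
            xhr.open('POST', '/upload');            // hypothetical server-side endpoint
            xhr.onload = function () {
                console.log('Upload finished with status ' + xhr.status);
            };
            xhr.send(formData);                     // browser sets the multipart headers
            return false;
        };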

    Read the article

  • What file format can I use to output a formatted text file straight from a program without having the markup be too complicated?

    - by Matt
    Premise: I am parsing a file that is quite nearly XML, but not quite. From this file I would like to extract data and output it in a file that a user could open up in some program and read. To make the data readable, I would almost certainly need to format the text. In case it matters, I will probably be using Java to write the program.

    Problem: I cannot find a file format that supports formatting without having terribly complex rules and encoding problems.

    Attempts: I looked into a basic .txt extension first, but it does not offer enough formatting capability. I then tried the .rtf extension, but the rules for outputting the text seem to be terribly complicated. It was then suggested that I use XML, but I do not understand how such a file would be viewed. This appears to be probably the best solution, but I don't understand much about it. Perhaps somebody could shed some light here.

    In other words: could somebody suggest an easy-to-use file format, and/or shed some light on how to use XML for text formatting and viewing?
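
    If the output only needs to be opened and read by a person, plain HTML is worth considering: any browser can view it, and the markup needed for basic formatting is tiny compared to RTF. A minimal Java sketch, with the file name and content purely illustrative:

        import java.io.IOException;
        import java.io.PrintWriter;

        public class ReportWriter {
            public static void main(String[] args) throws IOException {
                // Write a small HTML file that any browser can open and render.
                try (PrintWriter out = new PrintWriter("report.html", "UTF-8")) {
                    out.println("<html><body>");
                    out.println("<h1>Extracted data</h1>");
                    out.println("<p><b>Some field:</b> some value</p>");
                    out.println("</body></html>");
                }
            }
        }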

    Read the article

  • File Locked by Services (after service code reading the text file)

    - by rvpals
    I have a Windows service written in C# .NET. The service runs on an internal timer; every time the interval elapses, it goes and tries to read a log file into a string. My issue is that every time the log file is read, the service seems to lock the log file. The lock on that log file continues until I stop the Windows service. At the same time the service is checking the log file, the same log file needs to be continuously updated by another program. If the file lock is on, the other program cannot update the log file. Here is the code I use to read the text log file:

        private string ReadtextFile(string filename)
        {
            string res = "";
            try
            {
                System.IO.FileStream fs = new System.IO.FileStream(filename, System.IO.FileMode.Open, System.IO.FileAccess.Read);
                System.IO.StreamReader sr = new System.IO.StreamReader(fs);
                res = sr.ReadToEnd();
                sr.Close();
                fs.Close();
            }
            catch (System.Exception ex)
            {
                HandleEx(ex);
            }
            return res;
        }

    Thank you.
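
    A likely culprit is that the FileStream is opened without an explicit FileShare value, so another process that wants write access can be refused while the service holds the file open. A sketch of the same method with a share mode that tolerates a concurrent writer, and with using blocks so the handles are released even if an exception is thrown (assuming the other program also opens the file with compatible sharing):

        private string ReadtextFile(string filename)
        {
            string res = "";
            try
            {
                // FileShare.ReadWrite lets another process keep writing while we read.
                using (var fs = new System.IO.FileStream(filename,
                           System.IO.FileMode.Open,
                           System.IO.FileAccess.Read,
                           System.IO.FileShare.ReadWrite))
                using (var sr = new System.IO.StreamReader(fs))
                {
                    res = sr.ReadToEnd();
                }
            }
            catch (System.Exception ex)
            {
                HandleEx(ex);
            }
            return res;
        }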

    Read the article

  • Uninstall ruby from source

    - by vise
    I installed Ruby 1.9 on my Fedora 13 machine from source. I want to go back and use the older 1.8.6 (which I'll install with yum). Unfortunately it appears that I can't simply uninstall my current version with "make uninstall" (make: *** No rule to make target `uninstall'. Stop.). Is there any way of doing this other than removing each individual file?
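
    Ruby's Makefile has no uninstall target, but if the original build tree is still around, one common workaround is to repeat the install into a throwaway staging directory and use that as the list of files to remove. A sketch, assuming the source was built in ~/ruby-1.9 with the default prefix:

        cd ~/ruby-1.9
        # Install into a staging area instead of the real filesystem
        make install DESTDIR=/tmp/ruby-staging

        # Every path under the staging area mirrors a file that the original
        # "make install" created; review the list, then delete those paths.
        cd /tmp/ruby-staging
        find . -type f -o -type l | sed 's|^\.||' > /tmp/ruby-files.txt
        less /tmp/ruby-files.txt
        # sudo xargs -d '\n' rm -v -- < /tmp/ruby-files.txt   # only after reviewing the list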

    Read the article

  • What sites track your open source contributions?

    - by Daniel
    I know there's at least one site where you can register the open source projects you are involved with, indicate your committer name, and it will display the lines contributed, and the languages, along a timeline. Unfortunately, I don't recall what site was that. Anyway, can anyone indicate a site that does this?

    Read the article

  • Paint Shop Pro-like open-source software?

    - by overtherainbow
    I need to add shapes that have been available in Paint Shop Pro since version 7 like curved arrows, etc. I gave Paint.Net and IrfanView a try, but they don't seem to support more sophisticated shapes than straight lines, circles, or rectangles. If this is correct, does someone know of a PSP-equivalent open-source solution for Windows? Thank you.

    Read the article

  • Best open source alternative for MS Visual Source Safe?

    - by afsharm
    We are leaving VSS for TFS or some other alternative. I'm the one pushing for an open source alternative like SVN. Now I'm searching for a good open source version control system with the following aspects in mind: We are in love with the open source movement and with being cross-platform. Could it be used with Mono, SharpDevelop and the Express editions of VS instead of Visual Studio itself? What about backup? Does it integrate with VS without serious problems? Is there any API or command prompt access? Please note that I've read the following previous posts about this but still need more help: http://stackoverflow.com/questions/690766/vss-or-svn-for-a-net-project http://stackoverflow.com/questions/61959/tfs-vs-open-source-alternatives http://stackoverflow.com/questions/44588/how-to-convince-a-company-to-switch-their-source-control

    Read the article

  • Batch file to check if a file exists and raise an error if it doesn't

    - by cfdev9
    This batch file releases a build from TEST to LIVE. I want to add a check to this file that ensures there is an accompanying release document in a specific folder.

        "C:\Program Files\Windows Resource Kits\Tools\robocopy.exe" "\\testserver\testapp$" "\\liveserver\liveapp$" *.* /E /XA:H /PURGE /XO /XD ".svn" /NDL /NC /NS /NP
        del "\\liveserver\liveapp$\web.config"
        ren "\\liveserver\liveapp$\web.live.config" web.config

    So I have a couple of questions about how to achieve this. There is a version.txt file in the \\testserver\testapp$ folder, and the only contents of this file is the build number (e.g. 45 for build 45).

    1. How do I read the contents of the version.txt file into a variable in the batch file?
    2. How do I check whether a file \\fileserver\myapp\releasedocs\{build}.doc exists, using the variable from part 1 in place of {build}?
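
    A sketch of both pieces, assuming version.txt holds just the build number on its first line and the release documents live under \\fileserver\myapp\releasedocs as described:

        @echo off
        rem Read the first line of version.txt into a variable
        set /p BUILD=<"\\testserver\testapp$\version.txt"

        rem Abort the release if the matching release document is missing
        if not exist "\\fileserver\myapp\releasedocs\%BUILD%.doc" (
            echo Release document %BUILD%.doc not found - aborting release.
            exit /b 1
        )

        echo Found release document for build %BUILD%, continuing with the release...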

    Read the article

  • C Remove the first line from a text file without rewriting file

    - by Tom Van den Bon
    Hi, I've got a service which runs all the time and also keeps a log file. It basically adds new lines to the log file every few seconds. I've written a small program which reads these lines and then parses them into various actions. The question I have is: how can I delete the lines which I have already parsed from the log file, without disrupting the writing of the log file by the service? Usually when I need to delete a line in a file, I open the original file and a temporary one, and then I just write all the lines to the temp file except the one I want to delete. Obviously that method will not work here. So how do I go about deleting them?
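
    Deleting lines from the front of a file in place isn't something the file system supports, so a common alternative is to leave the log untouched and have the parser remember how far it has read, for example by persisting the byte offset between runs. A minimal sketch of that idea in C (the file names and parse_line() are placeholders):

        #include <stdio.h>

        /* Placeholder for the real parsing logic. */
        static void parse_line(const char *line) { (void)line; }

        int main(void)
        {
            long offset = 0;
            char line[1024];

            /* Restore how far we got last time, if the offset file exists. */
            FILE *state = fopen("parser.offset", "r");
            if (state) {
                fscanf(state, "%ld", &offset);
                fclose(state);
            }

            FILE *log = fopen("service.log", "r");
            if (!log)
                return 1;
            fseek(log, offset, SEEK_SET);        /* skip lines already handled */

            while (fgets(line, sizeof line, log))
                parse_line(line);

            offset = ftell(log);                 /* remember the new position */
            fclose(log);

            state = fopen("parser.offset", "w");
            if (state) {
                fprintf(state, "%ld\n", offset);
                fclose(state);
            }
            return 0;
        }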

    Read the article

  • Udev webcam rule read, but not respected?

    - by user89305
    I have two USB webcams on the machine, but at boot they sometimes switch /dev/video numbers. The solution to this problem seems to be a new udev rule. I have added these rules in /etc/udev/rules.d/jj-video.rules:

        # Fix webcam 1
        KERNEL=="video1", SUBSYSTEM=="video4linux", SUBSYSTEMS=="usb", ATTRS{idVendor}=="1d6b", ATTRS{idProduct}=="0001", SYMLINK+="webcam1"
        # Fix webcam 2
        KERNEL=="video2", SUBSYSTEM=="video4linux", ATTR{name}=="Logitech QuickCam Pro 3000", KERNELS=="0000:00:1d.0", SUBSYSTEMS=="pci", DRIVERS=="uhci_hcd", ATTRS{vendor}=="0x8086", ATTRS{device}=="0x2658", SYMLINK+="webcam2"

    but the symlinks are not created. I have tried many different combinations in this file; the present ones are just my latest attempts. I found the parameters in:

    jjk@eee-old:~$ udevadm info -a -p $(udevadm info -q path -p /class/video4linux/video1) Udevadm info starts with the device specified by the devpath and then walks up the chain of parent devices. It prints for every device found, all possible attributes in the udev rules key format. A rule to match, can be composed by the attributes of the device and the attributes from one single parent device. looking at device '/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/video4linux/video1': KERNEL=="video1" SUBSYSTEM=="video4linux" DRIVER=="" ATTR{name}=="Logitech QuickCam Pro 3000" ATTR{index}=="0" ATTR{button}=="0" looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0': KERNELS=="2-2:1.0" SUBSYSTEMS=="usb" DRIVERS=="Philips webcam" ATTRS{bInterfaceNumber}=="00" ATTRS{bAlternateSetting}==" 9" ATTRS{bNumEndpoints}=="02" ATTRS{bInterfaceClass}=="0a" ATTRS{bInterfaceSubClass}=="ff" ATTRS{bInterfaceProtocol}=="00" ATTRS{supports_autosuspend}=="0" looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-2': KERNELS=="2-2" SUBSYSTEMS=="usb" DRIVERS=="usb" ATTRS{configuration}=="" ATTRS{bNumInterfaces}==" 3" ATTRS{bConfigurationValue}=="1" ATTRS{bmAttributes}=="a0" ATTRS{bMaxPower}=="500mA" ATTRS{urbnum}=="371076" ATTRS{idVendor}=="046d" ATTRS{idProduct}=="08b0" ATTRS{bcdDevice}=="0002" ATTRS{bDeviceClass}=="00" ATTRS{bDeviceSubClass}=="00" ATTRS{bDeviceProtocol}=="00" ATTRS{bNumConfigurations}=="1" ATTRS{bMaxPacketSize0}=="8" ATTRS{speed}=="12" ATTRS{busnum}=="2" ATTRS{devnum}=="2" ATTRS{devpath}=="2" ATTRS{version}==" 1.10" ATTRS{maxchild}=="0" ATTRS{quirks}=="0x0" ATTRS{avoid_reset_quirk}=="0" ATTRS{authorized}=="1" ATTRS{serial}=="01402100A5000000" looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2': KERNELS=="usb2" SUBSYSTEMS=="usb" DRIVERS=="usb" ATTRS{configuration}=="" ATTRS{bNumInterfaces}==" 1" ATTRS{bConfigurationValue}=="1" ATTRS{bmAttributes}=="e0" ATTRS{bMaxPower}==" 0mA" ATTRS{urbnum}=="34" ATTRS{idVendor}=="1d6b" ATTRS{idProduct}=="0001" ATTRS{bcdDevice}=="0302" ATTRS{bDeviceClass}=="09" ATTRS{bDeviceSubClass}=="00" ATTRS{bDeviceProtocol}=="00" ATTRS{bNumConfigurations}=="1" ATTRS{bMaxPacketSize0}=="64" ATTRS{speed}=="12" ATTRS{busnum}=="2" ATTRS{devnum}=="1" ATTRS{devpath}=="0" ATTRS{version}==" 1.10" ATTRS{maxchild}=="2" ATTRS{quirks}=="0x0" ATTRS{avoid_reset_quirk}=="0" ATTRS{authorized}=="1" ATTRS{manufacturer}=="Linux 3.2.0-29-generic uhci_hcd" ATTRS{product}=="UHCI Host Controller" ATTRS{serial}=="0000:00:1d.0" ATTRS{authorized_default}=="1" looking at parent device '/devices/pci0000:00/0000:00:1d.0': KERNELS=="0000:00:1d.0" SUBSYSTEMS=="pci" DRIVERS=="uhci_hcd" ATTRS{vendor}=="0x8086" ATTRS{device}=="0x2658" ATTRS{subsystem_vendor}=="0x1043" ATTRS{subsystem_device}=="0x82d8" ATTRS{class}=="0x0c0300" 
ATTRS{irq}=="23" ATTRS{local_cpus}=="ff" ATTRS{local_cpulist}=="0-7" ATTRS{dma_mask_bits}=="32" ATTRS{consistent_dma_mask_bits}=="32" ATTRS{broken_parity_status}=="0" ATTRS{msi_bus}=="" looking at parent device '/devices/pci0000:00': KERNELS=="pci0000:00" SUBSYSTEMS=="" DRIVERS=="" jjk@eee-old:~$ And tested the setup: sudo udevadm --debug test /sys/class/video4linux/video1 main: runtime dir '/run/udev' run_command: calling: test adm_test: version 175 This program is for debugging only, it does not run any program, specified by a RUN key. It may show incorrect results, because some values may be different, or not available at a simulation run. parse_file: reading '/lib/udev/rules.d/40-crda.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-fuse.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-gnupg.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-hplip.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-ia64.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-inputattach.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-libgphoto2-2.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-libsane.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-ppc.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-usb_modeswitch.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-xserver-xorg-video-intel.rules' as rules file parse_file: reading '/lib/udev/rules.d/42-qemu-usb.rules' as rules file parse_file: reading '/lib/udev/rules.d/50-firmware.rules' as rules file parse_file: reading '/lib/udev/rules.d/50-udev-default.rules' as rules file parse_file: reading '/lib/udev/rules.d/55-dm.rules' as rules file parse_file: reading '/lib/udev/rules.d/56-hpmud_support.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-cdrom_id.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-pcmcia.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-alsa.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-input.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-serial.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-storage-dm.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-storage-tape.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-storage.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-v4l.rules' as rules file parse_file: reading '/lib/udev/rules.d/61-accelerometer.rules' as rules file parse_file: reading '/lib/udev/rules.d/64-xorg-xkb.rules' as rules file parse_file: reading '/lib/udev/rules.d/66-xorg-synaptics-quirks.rules' as rules file parse_file: reading '/lib/udev/rules.d/69-cd-sensors.rules' as rules file add_rule: IMPORT found builtin 'usb_id', replacing /lib/udev/rules.d/69-cd-sensors.rules:76 parse_file: reading '/lib/udev/rules.d/69-libmtp.rules' as rules file parse_file: reading '/lib/udev/rules.d/69-xorg-vmmouse.rules' as rules file parse_file: reading '/lib/udev/rules.d/69-xserver-xorg-input-wacom.rules' as rules file parse_file: reading '/etc/udev/rules.d/70-persistent-cd.rules' as rules file parse_file: reading '/etc/udev/rules.d/70-persistent-net.rules' as rules file parse_file: reading '/lib/udev/rules.d/70-printers.rules' as rules file parse_file: reading '/lib/udev/rules.d/70-udev-acl.rules' as rules file parse_file: reading '/lib/udev/rules.d/75-cd-aliases-generator.rules' as rules file parse_file: 
reading '/lib/udev/rules.d/75-net-description.rules' as rules file parse_file: reading '/lib/udev/rules.d/75-persistent-net-generator.rules' as rules file parse_file: reading '/lib/udev/rules.d/75-probe_mtd.rules' as rules file parse_file: reading '/lib/udev/rules.d/75-tty-description.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-ericsson-mbm.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-longcheer-port-types.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-nokia-port-types.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-pcmcia-device-blacklist.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-platform-serial-whitelist.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-qdl-device-blacklist.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-simtech-port-types.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-usb-device-blacklist.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-x22x-port-types.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-mm-zte-port-types.rules' as rules file parse_file: reading '/lib/udev/rules.d/77-nm-olpc-mesh.rules' as rules file parse_file: reading '/lib/udev/rules.d/78-graphics-card.rules' as rules file parse_file: reading '/lib/udev/rules.d/78-sound-card.rules' as rules file parse_file: reading '/lib/udev/rules.d/80-drivers.rules' as rules file parse_file: reading '/lib/udev/rules.d/80-mm-candidate.rules' as rules file parse_file: reading '/lib/udev/rules.d/80-udisks.rules' as rules file parse_file: reading '/lib/udev/rules.d/85-brltty.rules' as rules file parse_file: reading '/lib/udev/rules.d/85-hdparm.rules' as rules file parse_file: reading '/lib/udev/rules.d/85-hplj10xx.rules' as rules file parse_file: reading '/lib/udev/rules.d/85-keyboard-configuration.rules' as rules file parse_file: reading '/lib/udev/rules.d/85-regulatory.rules' as rules file parse_file: reading '/lib/udev/rules.d/85-usbmuxd.rules' as rules file parse_file: reading '/lib/udev/rules.d/90-alsa-restore.rules' as rules file parse_file: reading '/lib/udev/rules.d/90-alsa-ucm.rules' as rules file parse_file: reading '/lib/udev/rules.d/90-libgpod.rules' as rules file parse_file: reading '/lib/udev/rules.d/90-pulseaudio.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-cd-devices.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-keyboard-force-release.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-keymap.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-udev-late.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-battery-recall-dell.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-battery-recall-fujitsu.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-battery-recall-gateway.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-battery-recall-ibm.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-battery-recall-lenovo.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-battery-recall-toshiba.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-csr.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-hid.rules' as rules file parse_file: reading '/lib/udev/rules.d/95-upower-wup.rules' as rules file parse_file: reading '/lib/udev/rules.d/97-bluetooth-hid2hci.rules' as rules file parse_file: reading '/etc/udev/rules.d/jj-video.rules' as 
rules file udev_rules_new: rules use 259284 bytes tokens (21607 * 12 bytes), 37913 bytes buffer udev_rules_new: temporary index used 67520 bytes (3376 * 20 bytes) udev_device_new_from_syspath: device 0x215103e0 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/video4linux/video1' udev_device_new_from_syspath: device 0x21510758 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/video4linux/video1' udev_device_read_db: device 0x21510758 filled with db file data udev_device_new_from_syspath: device 0x21510e10 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0' udev_device_new_from_syspath: device 0x21511b10 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2/2-2' udev_device_new_from_syspath: device 0x215132f8 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2' udev_device_new_from_syspath: device 0x21513650 has devpath '/devices/pci0000:00/0000:00:1d.0' udev_device_new_from_syspath: device 0x21513980 has devpath '/devices/pci0000:00' udev_rules_apply_to_event: GROUP 44 /lib/udev/rules.d/50-udev-default.rules:29 udev_rules_apply_to_event: IMPORT 'v4l_id /dev/video1' /lib/udev/rules.d/60-persistent-v4l.rules:7 udev_event_spawn: starting 'v4l_id /dev/video1' spawn_read: 'v4l_id /dev/video1'(out) 'ID_V4L_VERSION=2' spawn_read: 'v4l_id /dev/video1'(out) 'ID_V4L_PRODUCT=Logitech QuickCam Pro 3000' spawn_read: 'v4l_id /dev/video1'(out) 'ID_V4L_CAPABILITIES=:capture:' spawn_wait: 'v4l_id /dev/video1' [2609] exit with return code 0 udev_rules_apply_to_event: IMPORT builtin 'usb_id' /lib/udev/rules.d/60-persistent-v4l.rules:9 builtin_usb_id: /sys/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0: if_class 10 protocol 0 udev_builtin_add_property: ID_VENDOR=046d udev_builtin_add_property: ID_VENDOR_ENC=046d udev_builtin_add_property: ID_VENDOR_ID=046d udev_builtin_add_property: ID_MODEL=08b0 udev_builtin_add_property: ID_MODEL_ENC=08b0 udev_builtin_add_property: ID_MODEL_ID=08b0 udev_builtin_add_property: ID_REVISION=0002 udev_builtin_add_property: ID_SERIAL=046d_08b0_01402100A5000000 udev_builtin_add_property: ID_SERIAL_SHORT=01402100A5000000 udev_builtin_add_property: ID_TYPE=generic udev_builtin_add_property: ID_BUS=usb udev_builtin_add_property: ID_USB_INTERFACES=:0aff00:010100:010200: udev_builtin_add_property: ID_USB_INTERFACE_NUM=00 udev_builtin_add_property: ID_USB_DRIVER=Philips webcam udev_rules_apply_to_event: LINK 'v4l/by-id/usb-046d_08b0_01402100A5000000-video-index0' /lib/udev/rules.d/60-persistent-v4l.rules:10 udev_rules_apply_to_event: IMPORT builtin 'path_id' /lib/udev/rules.d/60-persistent-v4l.rules:16 udev_builtin_add_property: ID_PATH=pci-0000:00:1d.0-usb-0:2:1.0 udev_builtin_add_property: ID_PATH_TAG=pci-0000_00_1d_0-usb-0_2_1_0 udev_rules_apply_to_event: LINK 'v4l/by-path/pci-0000:00:1d.0-usb-0:2:1.0-video-index0' /lib/udev/rules.d/60-persistent-v4l.rules:17 udev_rules_apply_to_event: RUN 'udev-acl --action=$env{ACTION} --device=$env{DEVNAME}' /lib/udev/rules.d/70-udev-acl.rules:74 udev_rules_apply_to_event: LINK 'webcam1' /etc/udev/rules.d/jj-video.rules:2 udev_event_execute_rules: no node name set, will use kernel supplied name 'video1' udev_node_add: creating device node '/dev/video1', devnum=81:1, mode=0660, uid=0, gid=44 udev_node_mknod: preserve file '/dev/video1', because it has correct dev_t udev_node_mknod: preserve permissions /dev/video1, 020660, uid=0, gid=44 node_symlink: preserve already existing symlink '/dev/char/81:1' to '../video1' link_find_prioritized: found 'c81:2' claiming 
'/run/udev/links/v4l\x2fby-id\x2fusb-046d_08b0_01402100A5000000-video-index0' udev_device_new_from_syspath: device 0x21516748 has devpath '/devices/pci0000:00/0000:00:1d.1/usb3/3-2/3-2:1.0/video4linux/video2' udev_device_read_db: device 0x21516748 filled with db file data link_find_prioritized: found 'c81:1' claiming '/run/udev/links/v4l\x2fby-id\x2fusb-046d_08b0_01402100A5000000-video-index0' link_update: creating link '/dev/v4l/by-id/usb-046d_08b0_01402100A5000000-video-index0' to '/dev/video1' node_symlink: atomically replace '/dev/v4l/by-id/usb-046d_08b0_01402100A5000000-video-index0' link_find_prioritized: found 'c81:1' claiming '/run/udev/links/v4l\x2fby-path\x2fpci-0000:00:1d.0-usb-0:2:1.0-video-index0' link_update: creating link '/dev/v4l/by-path/pci-0000:00:1d.0-usb-0:2:1.0-video-index0' to '/dev/video1' node_symlink: preserve already existing symlink '/dev/v4l/by-path/pci-0000:00:1d.0-usb-0:2:1.0-video-index0' to '../../video1' link_find_prioritized: found 'c81:1' claiming '/run/udev/links/webcam1' link_update: creating link '/dev/webcam1' to '/dev/video1' node_symlink: preserve already existing symlink '/dev/webcam1' to 'video1' udev_device_update_db: created db file '/run/udev/data/c81:1' for '/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/video4linux/video1' ACTION=add COLORD_DEVICE=1 COLORD_KIND=camera DEVLINKS=/dev/v4l/by-id/usb-046d_08b0_01402100A5000000-video-index0 /dev/v4l/by-path/pci-0000:00:1d.0-usb-0:2:1.0-video-index0 /dev/webcam1 DEVNAME=/dev/video1 DEVPATH=/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/video4linux/video1 ID_BUS=usb ID_MODEL=08b0 ID_MODEL_ENC=08b0 ID_MODEL_ID=08b0 ID_PATH=pci-0000:00:1d.0-usb-0:2:1.0 ID_PATH_TAG=pci-0000_00_1d_0-usb-0_2_1_0 ID_REVISION=0002 ID_SERIAL=046d_08b0_01402100A5000000 ID_SERIAL_SHORT=01402100A5000000 ID_TYPE=generic ID_USB_DRIVER=Philips webcam ID_USB_INTERFACES=:0aff00:010100:010200: ID_USB_INTERFACE_NUM=00 ID_V4L_CAPABILITIES=:capture: ID_V4L_PRODUCT=Logitech QuickCam Pro 3000 ID_V4L_VERSION=2 ID_VENDOR=046d ID_VENDOR_ENC=046d ID_VENDOR_ID=046d MAJOR=81 MINOR=1 SUBSYSTEM=video4linux TAGS=:udev-acl: UDEV_LOG=6 USEC_INITIALIZED=18213768 run: 'udev-acl --action=add --device=/dev/video1' jjk@eee-old:~$ (and correspondingly for video2) It looks to me like my rules are read, but not respected. What am I doing wrong?
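
    One thing worth noting: keying the rules on KERNEL=="video1" and KERNEL=="video2" ties each rule to exactly the device name that changes between boots, which defeats the purpose, and the idVendor/idProduct pair 1d6b/0001 in the first rule belongs to the USB root hub rather than the camera. A sketch of rules that match on attributes of the camera itself, taken from the udevadm output above (the second rule's values are placeholders for whatever the other camera reports):

        # /etc/udev/rules.d/99-webcams.rules (sketch)
        # Logitech QuickCam Pro 3000 -- idVendor/idProduct/serial from the udevadm info output above
        SUBSYSTEM=="video4linux", ATTRS{idVendor}=="046d", ATTRS{idProduct}=="08b0", ATTRS{serial}=="01402100A5000000", SYMLINK+="webcam1"
        # Second camera -- substitute its own idVendor, idProduct and serial here
        SUBSYSTEM=="video4linux", ATTRS{idVendor}=="xxxx", ATTRS{idProduct}=="yyyy", ATTRS{serial}=="zzzz", SYMLINK+="webcam2"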

    Read the article

  • Source Control and SQL Development – Part 3

    - by Ajarn Mark Caldwell
    In parts one and two of this series, I have been specifically focusing on the latest version of SQL Source Control by Red Gate Software.  But I have been doing source-controlled SQL development for years, long before this product was available, and well before Microsoft came out with Database Projects for Visual Studio.  “So, how does that work?” you may wonder.  Well, let me share some of the details of how we do it where I work… The key to this approach is that everything is done via Transact-SQL script files; either natively written T-SQL, or generated.  My preference is to write all my code by hand, which forces you to become better at your SQL syntax.  But if you really prefer to use the Management Studio GUI to make database changes, you can still do that, and then you use the Generate Scripts feature of the GUI to produce T-SQL scripts afterwards, and store those in your source control system.  You can generate scripts for things like stored procedures and views by right-clicking on the database in the Object Explorer, and Choosing Tasks, Generate Scripts (see figure 1 to the left).  You can also do that for the CREATE scripts for tables, but that does not work when you have a table that is already in production, and you need to make just a simple change, such as adding a new column or index.  In this case, you can use the GUI to make the table changes, and then instead of clicking the Save button, click the Generate Change Script button (). Then, once you have saved the change script, go ahead and execute it on your development database to actually make the change.  I believe that it is important to actually execute the script rather than just click the Save button because this is your first test that your change script is working and you didn’t somehow lose a portion of the change. As you can imagine, all this generating of scripts can get tedious and tempting to skip entirely, so again, I would encourage you to just get in the habit of writing your own Transact-SQL code, and then it is just a matter of remembering to save your work, just like you are in the habit of saving changes to a Word or Excel document before you exit the program. So, now that you have all of these script files, what do you do with them?  Well, we organize ours into folders labeled ChangeScripts, Functions, Views, and StoredProcedures, and those folders are loaded into our source control system.  ChangeScripts contains all of the table and index changes, and anything else that is basically a one-time-only execution.  Of course you want to write your scripts with qualifying logic so that if a script were accidentally run more than once in a database, it would not crash nor corrupt anything; but these scripts are really intended to be run only once in a database. Once you have your initial set of scripts loaded into source control, then making changes, such as altering a stored procedure becomes a simple matter of checking out your CREATE PROCEDURE* script, editing it in SSMS, saving the change, executing the script in order to effect the change in your database, and then checking the script back in to source control.  Of course, this is where the lack of integration for source control systems within SSMS becomes an irritation, because this means that in addition to SSMS, I also have my source control client application running to do the check-out and check-in.  
And when you have 800+ procedures like we do, that can be quite tedious to locate the procedure I want to change in source control, check it out, then locate the script file in my working folder, open it in SSMS, do the change, save it, and the go back to source control to check in.  Granted, it is not nearly as burdensome as, say, losing your source code and having to rebuild it from memory, or losing the audit trail that good source control systems provide.  It is worth the effort, and this is how I have been doing development for the last several years. Remember that everything that the SQL Server Management Studio does in modifying your database can also be done in plain Transact-SQL code, and this is what you are storing.  And now I have shown you how you can do it all without spending any extra money.  You already have source control, or can get free, open-source source control systems (almost seems like an oxymoron, doesn’t it) and of course Management Studio is free with your SQL Server database engine software. So, whether you spend the money on tools to make it easier, or not, you now have no excuse for not using source control with your SQL development. * In our current model, the scripts for stored procedures and similar database objects are written with an IF EXISTS…DROP… at the top, followed by the CREATE PROCEDURE… section, and that followed by a section that assigns permissions.  This allows me to run the same script regardless of whether the procedure previously existed in the database.  If the script was only an ALTER PROCEDURE, then it would fail the first time that procedure was deployed to a database, unless you wrote other code to stub it if it did not exist.  There are a few different ways you could organize your scripts for deployment, each with its own trade-offs, but I think it is absolutely critical that whichever way you organize things, you ensure that the same script is run throughout the deployment cycle, and do not allow customizations to creep in between TEST and PROD.  If you do, then you have broken the integrity of your deployment process because what you deployed to PROD was not exactly the same as what was tested in TEST, so you effectively have now released untested code into PROD.
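
    A sketch of the script layout described in the footnote, for a hypothetical procedure dbo.GetCustomer; the object names and the role receiving permissions are placeholders:

        -- Drop-and-recreate pattern so the same script can be run in any environment
        IF EXISTS (SELECT * FROM sys.objects
                   WHERE object_id = OBJECT_ID(N'dbo.GetCustomer') AND type = N'P')
            DROP PROCEDURE dbo.GetCustomer;
        GO

        CREATE PROCEDURE dbo.GetCustomer
            @CustomerID int
        AS
        BEGIN
            SELECT CustomerID, Name
            FROM dbo.Customer
            WHERE CustomerID = @CustomerID;
        END
        GO

        -- Reapply permissions, since the DROP removes them
        GRANT EXECUTE ON dbo.GetCustomer TO AppRole;
        GO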

    Read the article

  • Reasons NOT to open source not-for-profit code?

    - by naught101
    I am a big fan of open source code. I think I understand most of the advantages of going open source. I'm a science student researcher, and I have to work with quite a surprising amount of software and code that is not open source (either it's proprietary, or it's not public). I can't really see a good reason for this, and I can see that the code, and the people using it, would definitely benefit from being more public (if nothing else, in science it's vital that your results can be replicated if necessary, and that's much harder if others don't have access to your code). Before I go out and start proselytising, I want to know: are there any good arguments for not releasing not-for-profit code publicly, and with an OSI-compliant license? (I realise there are a few similar questions on SE, but most focus on situations where the code is primarily used for making money, and I couldn't find much of relevance in the answers.) Clarification: by "not-for-profit" I am counting downstream profit motives, such as parent-company brand recognition and investor profit expectations, as profit motives too. In other words, the question relates only to software with no profit motive tied to it whatsoever.

    Read the article

  • How do you cope with change in open source frameworks that you use for your projects?

    - by Amy
    It may be a personal quirk of mine, but I like keeping code in living projects up to date - including the libraries/frameworks that they use. Part of it is that I believe a web app is more secure if it is fully patched and up to date. Part of it is just a touch of obsessive compulsiveness on my part. Over the past seven months, we have done a major rewrite of our software. We dropped the Xaraya framework, which was slow and essentially dead as a product, and converted to Cake PHP. (We chose Cake because it gave us the chance to do a very rapid rewrite of our software, and enough of a performance boost over Xaraya to make it worth our while.) We implemented unit testing with SimpleTest, and followed all the file and database naming conventions, etc. Cake is now being updated to 2.0. And, there doesn't seem to be a viable migration path for an upgrade. The naming conventions for files have radically changed, and they dropped SimpleTest in favor of PHPUnit. This is pretty much going to force us to stay on the 1.3 branch because, unless there is some sort of conversion tool, it's not going to be possible to update Cake and then gradually improve our legacy code to reap the benefits of the new Cake framework. So, as usual, we are going to end up with an old framework in our Subversion repository and just patch it ourselves as needed. And this is what gets me every time. So many open source products don't make it easy enough to keep projects based on them up to date. When the devs start playing with a new shiny toy, a few critical patches will be done to older branches, but most of their focus is going to be on the new code base. How do you deal with radical changes in the open source projects that you use? And, if you are developing an open source product, do you keep upgrade paths in mind when you develop new versions?

    Read the article

  • How to build the mainline kernel source package?

    - by Maxime R.
    The Ubuntu kernel PPA only provides linux-headers*.deb and linux-image*.deb packages. How can I build the corresponding linux-source*.deb package?

    Context: I'm currently running Ubuntu 11.10 with the mainline kernel (3.2-rc6 at the moment) to get better support for my Sandy Bridge IGP (Dell E6420 laptop with an Intel i5-2520M CPU). As it happens, I'd like to install this touchpad driver, ALPS touchpads being badly supported (see the bug report linked above), while waiting for upstream support in kernel version 3.3. Problem is, DKMS keeps complaining about not finding the full kernel source: "Module build for the currently running kernel was skipped since the kernel source for this kernel does not seem to be installed." It appears I may not need the full source, but I'd still like to try having it installed to see if it solves my problem.

    What I tried:

    1. Uncompressing the kernel.org source archive in /usr/src/. DKMS kept complaining.
    2. Manually updating the kernel source package with uupdate and the mainline source package as explained here. Did not succeed.
    3. Manually building the linux-source package following @roadmr's and @elmicha's instructions. I eventually succeeded in building it, but DKMS still complained about the missing source.

    At last I noticed an error I had not caught in the first place while reinstalling the kernel headers. It appears the .deb I got may have been corrupted; downloading it again did the trick :) Alas, while DKMS then agreed to compile the module, I ran into the following error, which appears to have already been reported. That issue isn't solved yet, but I won't pursue it, because in the end I decided to test the Precise kernel version 3.2-rc6 through the xorg-edgers PPA, which appears to be correctly patched: it works.

    Nevertheless, it might still be of some interest to know how to build the mainline linux-source package, as the Ubuntu Kernel Team doesn't provide it. Not to mention that I learned a lot in the process ^^
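
    For the DKMS complaint specifically, what DKMS actually checks for is the /lib/modules/$(uname -r)/build link that the matching linux-headers package provides, so a quick sanity check along these lines may be worth doing before hunting for the full source (a sketch; the header package names are illustrative):

        # Does the kernel build link that DKMS looks for exist and point somewhere real?
        ls -l /lib/modules/$(uname -r)/build

        # If it is missing or dangling, (re)install the matching mainline header packages
        sudo dpkg -i linux-headers-3.2.0-*rc6*_all.deb linux-headers-3.2.0-*rc6*-generic_*.deb

        # Then ask DKMS to rebuild any registered modules for the running kernel
        sudo dkms autoinstall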

    Read the article

  • "t-sql" tool for csv files?

    - by adolf garlic
    Is there a tool that allows you to query files based on the following criteria, and change them?

    - creation date
    - filename (e.g. filetypeA_YYYYMMDD_data2.blah)
    - file content

    For example, the workflow could be the following one: get all files created between date x and date y, where the filename contains z and the file itself contains value x under column zzz, then update it to be a different value/another column.

    Read the article

  • Making a file with terminal commands?

    - by iSeth
    In Windows I can write a file containing commands for cmd (usually .cmd or .bat files). When I click on those files, it opens cmd.exe and runs the commands the file contains. How would I do this in Ubuntu? I'm sure this is a duplicate, but I can't find my answer. It's similar to these questions, but they don't answer it: Store frequently used terminal commands in a file; CMD.exe Emulator in Ubuntu to run .cmd/.bat file
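
    The Ubuntu equivalent is a shell script: put the commands in a plain text file with a shebang line, make it executable, and run it. A small sketch (the file name and commands are just examples):

        #!/bin/bash
        # Saved as ~/update-and-clean.sh
        sudo apt-get update
        sudo apt-get autoremove
        echo "Done."

    Make it executable once with chmod +x ~/update-and-clean.sh, then run it from a terminal with ~/update-and-clean.sh. Double-clicking it in the file manager also works if executable text files are set to run rather than open in an editor.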

    Read the article

  • Data Source Security Part 2

    - by Steve Felts
    In Part 1, I introduced the default security behavior and listed the various options available to change that behavior.  One of the key topics to understand is the difference between directly using database user and password values versus mapping from WLS user and password to the associated database values.   The direct use of database credentials is relatively new to WLS, based on customer feedback.  Some of the trade-offs are covered in this article. Credential Mapping vs. Database Credentials Each WLS data source has a credential map that is a mechanism used to map a key, in this case a WLS user, to security credentials (user and password).  By default, when a user and password are specified when getting a connection, they are treated as credentials for a WLS user, validated, and are converted to a database user and password using a credential map associated with the data source.  If a matching entry is not found in the credential map for the data source, then the user and password associated with the data source definition are used.  Because of this defaulting mechanism, you should be careful what permissions are granted to the default user.  Alternatively, you can define an invalid default user to ensure that no one can accidentally get through (in this case, you would need to set the initial capacity for the pool to zero so that the pool is populated only by valid users). To create an entry in the credential map: 1) First create a WLS user.  In the administration console, go to Security realms, select your realm (e.g., myrealm), select Users, and select New.  2) Second, create the mapping.  In the administration console, go to Services, select Data sources, select your data source name, select Security, select Credentials, and select New.  See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/ConfigureCredentialMappingForADataSource.html for more information. The advantages of using the credential mapping are that: 1) You don’t hard-code the database user/password into a program or need to prompt for it in addition to the WLS user/password and 2) It provides a layer of abstraction between WLS security and database settings such that many WLS identities can be mapped to a smaller set of DB identities, thereby only requiring middle-tier configuration updates when WLS users are added/removed. You can cut down the number of users that have access to a data source to reduce the user maintenance overhead.  For example, suppose that a servlet has the one pre-defined, special WLS user/password for data source access, hard-wired in its code in a getConnection(user, password) call.  Every WebLogic user can reap the specific DBMS access coded into the servlet, but none has to have general access to the data source.  For instance, there may be a ‘Sales’ DBMS which needs to be protected from unauthorized eyes, but it contains some day-to-day data that everyone needs. The Sales data source is configured with restricted access and a servlet is built that hard-wires the specific data source access credentials in its connection request.  It uses that connection to deliver only the generally needed day-to-day information to any caller. The servlet cannot reveal any other data, and no WebLogic user can get any other access to the data source.  This is the approach that many large applications take and is the reasoning behind the default mapping behavior in WLS. 
The disadvantages of using the credential map are that: 1) It is difficult to manage (create, update, delete) with a large number of users; it is possible to use WLST scripts or a custom JMX client utility to manage credential map entries. 2) You can’t share a credential map between data sources so they must be duplicated. Some applications prefer not to use the credential map.  Instead, the credentials passed to getConnection(user, password) should be treated as database credentials and used to authenticate with the database for the connection, avoiding going through the credential map.  This is enabled by setting the “use-database-credentials” to true.  See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/ConfigureOracleParameters.html "Configure Oracle parameters" in Oracle WebLogic Server Administration Console Help. Use Database Credentials is not currently supported for Multi Data Source configurations.  When enabled, it turns off credential mapping on Generic and Active GridLink data sources for the following attributes: 1. identity-based-connection-pooling-enabled (this interaction is available by patch in 10.3.6.0). 2. oracle-proxy-session (this interaction is first available in 10.3.6.0). 3. set client identifier (this interaction is available by patch in 10.3.6.0).  Note that in the data source schema, the set client identifier feature is poorly named “credential-mapping-enabled”.  The documentation and the console refer to it as Set Client Identifier. To review the behavior of credential mapping and using database credentials: - If using the credential map, there needs to be a mapping for each WLS user to database user for those users that will have access to the database; otherwise the default user for the data source will be used.  If you always specify a user/password when getting a connection, you only need credential map entries for those specific users. - If using database credentials without specifying a user/password, the default user and password in the data source descriptor are always used.  If you specify a user/password when getting a connection, that user will be used for the credentials.  WLS users are not involved at all in the data source connection process.
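
    For reference, the getConnection(user, password) call discussed above is just the standard JDBC DataSource API; whether those values are treated as WLS credentials to be mapped or as database credentials is decided entirely by the data source configuration. A minimal client-side sketch, with the JNDI name and credentials as placeholders:

        import java.sql.Connection;
        import javax.naming.Context;
        import javax.naming.InitialContext;
        import javax.sql.DataSource;

        public class DataSourceClient {
            public static void main(String[] args) throws Exception {
                Context ctx = new InitialContext();
                // JNDI name of the configured WLS data source (placeholder)
                DataSource ds = (DataSource) ctx.lookup("jdbc/SalesDS");

                // With the default behavior these are WLS credentials run through the
                // credential map; with use-database-credentials=true they are passed
                // straight to the database for authentication.
                try (Connection conn = ds.getConnection("someUser", "somePassword")) {
                    System.out.println("Connected as: " + conn.getMetaData().getUserName());
                }
            }
        }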

    Read the article
