Search Results

Search found 69034 results on 2762 pages for 'file locking'.

  • reference to XML file is not a member of the R file

    - by yoavstr
    How can I add a layout class for another XML file to R? It should happen automatically when I add new resources to res, but it isn't - does anyone know what I did wrong? I have one activity open and now I want to start another activity that works with another XML layout. For example, I have a menu and main.xml; now I want to go to another activity called gameScreen using this method: newGameButton.setOnClickListener(new OnClickListener() { public void onClick(View view) { Intent i = new Intent(this, gameScreen.class); startActivity(i); } }); I want to move to another "page", an activity called gameScreen, which should be associated with the XML file called gameScreen.xml. But in its onCreate: public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.gameScreen); } gameScreen is not a member of the R file. Please help - I have been sitting here for the last 4 hours feeling like an idiot...

    Read the article

  • How to prevent illegal file and folder name creation in Windows Server 2003 or Windows Server 2008

    - by Joel Thibeault
    Preventing illegal file and folder name creation on a Windows 2003/2008 file server is the goal. We know from articles like http://stackoverflow.com/questions/62771/how-check-if-given-string-is-legal-allowed-file-name-under-windows that for some reason the file system allows creation of file/folder names with illegal characters, and of paths that exceed the limitations of Windows. I need the following questions answered: How do I remove the capability to create files or folders in NTFS whose names contain invalid characters? Can you remove the POSIX subsystem from Windows to fix this issue? How does disabling 8.3 DOS name creation factor into this issue? Will any of these fixes prevent Linux clients from creating Windows-compliant files?

    Read the article

  • C++ file including C header file

    - by fdeslaur
    I need to include a C header file in my C++ project, but g++ throws "not declared in this scope" errors. I read that I need to use the extern "C" keyword to fix it, but it didn't seem to work for me. Here is a dummy example triggering this error.

    main.cpp:

        #include <iostream>

        extern "C" {
        #include "includedFile.h"
        }

        int main() {
          int a = 2;
          int b = 1212;
          std::cout << "Hello World!\n";
          return 0;
        }

    includedFile.h:

        #include <stdint.h>

        enum TypeOfEnum {
          ONE,
          TWO,
          THREE,
          FOUR = INT32_MAX,
        };

    The error thrown is:

        $> g++ main.cpp
        In file included from main.cpp:4:0:
        includedFile.h:7:9: error: 'INT32_MAX' was not declared in this scope
        FOUR = INT32_MAX,

    I saw in another post that I may need #define __STDC_LIMIT_MACROS, but that was without any success. Any help is welcome!
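
    A minimal sketch of one commonly suggested fix (an assumption about the cause, not a confirmed diagnosis): on pre-C++11 toolchains, <stdint.h> only exposes the limit macros to C++ when __STDC_LIMIT_MACROS is defined before the header is first included, so the define has to appear before every include, including <iostream>, which may pull in <stdint.h> indirectly. The file names mirror the question; nothing else is changed:

        // includedFile.h (unchanged from the question)
        #include <stdint.h>

        enum TypeOfEnum {
          ONE,
          TWO,
          THREE,
          FOUR = INT32_MAX,
        };

        // main.cpp - the only change is the #define, which must come before
        // ANY header that might include <stdint.h>. Passing -D__STDC_LIMIT_MACROS
        // on the g++ command line has the same effect.
        #define __STDC_LIMIT_MACROS

        #include <iostream>

        extern "C" {
        #include "includedFile.h"
        }

        int main() {
          std::cout << "FOUR = " << FOUR << "\n";  // 2147483647 on a typical build
          return 0;
        }

    On a C++11 compiler the limit macros are defined unconditionally, so compiling with -std=c++11 may also make the error go away.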

    Read the article

  • Permission denied error when trying to install with wubi

    - by badri
    I ran into a problem when I tried to install Ubuntu 11.04 on Windows 7 using the Wubi installer. It downloads the image for some time and then ends with an error that says "Permission denied: for more details see the log file". In the log it appears as DownloadError: Problem connecting to tracker - urlopen error (10060, 'Operation timed out'), but my network is good; I checked it. I tried Wubi several times, but it ends with the same problem. Log content:

        10-08 16:56 DEBUG TaskList: ### Finished get_metalink
        10-08 16:56 DEBUG TaskList: New task download
        10-08 16:56 DEBUG TaskList: ### Running download...
        10-08 16:56 DEBUG btdownloader: downloading http://releases.ubuntu.com/11.04/ubuntu-11.04-desktop-amd64.iso.torrent > C:\ubuntu\install\ubuntu-11.04-desktop-amd64.iso
        10-08 18:02 ERROR TaskList:
        Traceback (most recent call last):
          File "\lib\bittorrent\RawServer.py", line 229, in listen_forever
          File "\lib\wubi\backends\common\btdownloader.py", line 70, in error_callback
        DownloadError: Traceback (most recent call last):
          File "\lib\bittorrent\RawServer.py", line 221, in listen_forever
          File "\lib\bittorrent\Rerequester.py", line 96, in fail
          File "\lib\wubi\backends\common\btdownloader.py", line 70, in error_callback
        DownloadError: Problem connecting to tracker - urlopen error (10060, 'Operation timed out')
        Traceback (most recent call last):
          File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__
          File "\lib\wubi\backends\common\btdownloader.py", line 79, in download
          File "\lib\bittorrent\download.py", line 303, in download
          File "\lib\bittorrent\RawServer.py", line 256, in listen_forever
          File "\lib\wubi\backends\common\btdownloader.py", line 70, in error_callback
        DownloadError: Traceback (most recent call last):
          File "\lib\bittorrent\RawServer.py", line 229, in listen_forever
          File "\lib\wubi\backends\common\btdownloader.py", line 70, in error_callback
        DownloadError: Traceback (most recent call last):
          File "\lib\bittorrent\RawServer.py", line 221, in listen_forever
          File "\lib\bittorrent\Rerequester.py", line 96, in fail
          File "\lib\wubi\backends\common\btdownloader.py", line 70, in error_callback
        DownloadError: Problem connecting to tracker - urlopen error (10060, 'Operation timed out')
        10-08 18:02 ERROR TaskList: Non fatal error
        Traceback (most recent call last):
          File "\lib\bittorrent\RawServer.py", line 229, in listen_forever
          File "\lib\wubi\backends\common\btdownloader.py", line 70, in error_callback
        DownloadError: Traceback (most recent call last):
          File "\lib\bittorrent\RawServer.py", line 221, in listen_forever
          File "\lib\bittorrent\Rerequester.py", line 96, in fail
          File "\lib\wubi\backends\common\btdownloader.py", line 70, in error_callback
        DownloadError: Problem connecting to tracker - urlopen error (10060, 'Operation timed out')
        in task download
        10-08 18:02 DEBUG TaskList: ### Finished download
        10-08 18:02 ERROR TaskList: [Errno 13] Permission denied: u'C:\\ubuntu\\install\\ubuntu-11.04-desktop-amd64.iso'
        Traceback (most recent call last):
          File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__
          File "\lib\wubi\backends\common\backend.py", line 492, in get_iso
          File "\lib\wubi\backends\common\backend.py", line 347, in download_iso
        OSError: [Errno 13] Permission denied: u'C:\\ubuntu\\install\\ubuntu-11.04-desktop-amd64.iso'
        10-08 18:02 DEBUG TaskList: # Cancelling tasklist
        10-08 18:02 DEBUG TaskList: # Finished tasklist
        10-08 18:02 ERROR root: [Errno 13] Permission denied: u'C:\\ubuntu\\install\\ubuntu-11.04-desktop-amd64.iso'
        Traceback (most recent call last):
          File "\lib\wubi\application.py", line 57, in run
          File "\lib\wubi\application.py", line 131, in select_task
          File "\lib\wubi\application.py", line 157, in run_installer
          File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__
          File "\lib\wubi\backends\common\backend.py", line 492, in get_iso
          File "\lib\wubi\backends\common\backend.py", line 347, in download_iso
        OSError: [Errno 13] Permission denied: u'C:\\ubuntu\\install\\ubuntu-11.04-desktop-amd64.iso'

    Read the article

  • Wireless cuts out on Toshiba Satellite S7208

    - by alecRN
    I recently got a Toshiba Satellite L875-S7208 with Windows 7 preinstalled. I installed Ubuntu 12.04 LTS dual boot to the same Windows partition. However, usually 15 minutes or less after booting, the wifi connection dies. Here's some hopefully relevant information: lspci -knn 00:00.0 Host bridge [0600]: Intel Corporation 2nd Generation Core Processor Family DRAM Controller [8086:0104] (rev 09) Subsystem: Toshiba America Info Systems Device [1179:fb41] Kernel driver in use: agpgart-intel 00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0116] (rev 09) Subsystem: Toshiba America Info Systems Device [1179:fb40] Kernel driver in use: i915 Kernel modules: i915 00:14.0 USB controller [0c03]: Intel Corporation Panther Point USB xHCI Host Controller [8086:1e31] (rev 04) Subsystem: Toshiba America Info Systems Device [1179:fb41] Kernel driver in use: xhci_hcd 00:16.0 Communication controller [0780]: Intel Corporation Panther Point MEI Controller #1 [8086:1e3a] (rev 04) Subsystem: Toshiba America Info Systems Device [1179:fb41] Kernel driver in use: mei Kernel modules: mei 00:1a.0 USB controller [0c03]: Intel Corporation Panther Point USB Enhanced Host Controller #2 [8086:1e2d] (rev 04) Subsystem: Toshiba America Info Systems Device [1179:fb41] Kernel driver in use: ehci_hcd 00:1b.0 Audio device [0403]: Intel Corporation Panther Point High Definition Audio Controller [8086:1e20] (rev 04) Subsystem: Toshiba America Info Systems Device [1179:fb40] Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel 00:1c.0 PCI bridge [0604]: Intel Corporation Panther Point PCI Express Root Port 1 [8086:1e10] (rev c4) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.1 PCI bridge [0604]: Intel Corporation Panther Point PCI Express Root Port 2 [8086:1e12] (rev c4) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.2 PCI bridge [0604]: Intel Corporation Panther Point PCI Express Root Port 3 [8086:1e14] (rev c4) Kernel driver in use: pcieport Kernel modules: shpchp 00:1d.0 USB controller [0c03]: Intel Corporation Panther Point USB Enhanced Host Controller #1 [8086:1e26] (rev 04) Subsystem: Toshiba America Info Systems Device [1179:fb41] Kernel driver in use: ehci_hcd 00:1f.0 ISA bridge [0601]: Intel Corporation Panther Point LPC Controller [8086:1e59] (rev 04) Subsystem: Toshiba America Info Systems Device [1179:fb41] Kernel modules: iTCO_wdt 00:1f.2 SATA controller [0106]: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] [8086:1e03] (rev 04) Subsystem: Toshiba America Info Systems Device [1179:fb41] Kernel driver in use: ahci 00:1f.3 SMBus [0c05]: Intel Corporation Panther Point SMBus Controller [8086:1e22] (rev 04) Subsystem: Toshiba America Info Systems Device [1179:fb41] Kernel modules: i2c-i801 02:00.0 Network controller [0280]: Realtek Semiconductor Co., Ltd. RTL8188CE 802.11b/g/n WiFi Adapter [10ec:8176] (rev 01) Subsystem: Realtek Semiconductor Co., Ltd. Device [10ec:8211] Kernel driver in use: rtl8192ce Kernel modules: rtl8192ce 03:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. 
RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 05) Subsystem: Toshiba America Info Systems Device [1179:fb37] Kernel driver in use: r8169 Kernel modules: r8169 lsmod Module Size Used by snd_hda_codec_hdmi 32474 1 snd_hda_codec_realtek 224066 1 joydev 17693 0 rfcomm 47604 0 bnep 18281 2 bluetooth 180104 10 rfcomm,bnep parport_pc 32866 0 ppdev 17113 0 arc4 12529 2 snd_hda_intel 33773 3 snd_hda_codec 127706 3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel snd_hwdep 13668 1 snd_hda_codec snd_pcm 97188 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec snd_seq_midi 13324 0 snd_rawmidi 30748 1 snd_seq_midi snd_seq_midi_event 14899 1 snd_seq_midi snd_seq 61896 2 snd_seq_midi,snd_seq_midi_event snd_timer 29990 2 snd_pcm,snd_seq snd_seq_device 14540 3 snd_seq_midi,snd_rawmidi,snd_seq psmouse 87692 0 serio_raw 13211 0 rtl8192ce 84826 0 rtl8192c_common 75767 1 rtl8192ce rtlwifi 111202 1 rtl8192ce mac80211 506816 3 rtl8192ce,rtl8192c_common,rtlwifi snd 78855 16 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device sparse_keymap 13890 0 uvcvideo 72627 0 videodev 98259 1 uvcvideo v4l2_compat_ioctl32 17128 1 videodev mac_hid 13253 0 mei 41616 0 wmi 19256 0 soundcore 15091 1 snd i915 472941 3 snd_page_alloc 18529 2 snd_hda_intel,snd_pcm drm_kms_helper 46978 1 i915 cfg80211 205544 2 rtlwifi,mac80211 drm 242038 4 i915,drm_kms_helper i2c_algo_bit 13423 1 i915 video 19596 1 i915 lp 17799 0 parport 46562 3 parport_pc,ppdev,lp r8169 62099 0 ums_realtek 18248 0 uas 18180 0 usb_storage 49198 1 ums_realtek dmesg | grep firmware [ 15.692951] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 16.240881] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 452.419288] rtl8192c_common:rtl92c_firmware_selfreset(): 8051 reset fail. 
[ 458.572211] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 465.440640] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 472.337617] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 479.175471] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 485.978582] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 492.764893] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 499.579348] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 506.386934] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 513.209545] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 519.991365] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 526.778375] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 533.629695] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 540.426004] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 547.238125] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 554.024434] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 560.854794] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 567.678160] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 574.494666] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 581.336653] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 588.157710] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 595.221122] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 602.047429] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 608.829534] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 615.639079] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 622.454991] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 629.273231] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 636.056613] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 642.858096] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 649.640753] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 657.184094] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 664.008018] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 670.838639] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 677.675418] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 684.507255] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 691.310994] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 698.095325] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 704.914509] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin [ 711.725178] rtl8192c_common: Loading firmware file rtlwifi/rtl8192cfw.bin uname -r 3.2.0-29-generic ifconfig eth0 Link encap:Ethernet HWaddr 4c:72:b9:59:6c:61 inet addr:192.168.0.11 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::4e72:b9ff:fe59:6c61/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:4447 errors:0 dropped:0 overruns:0 frame:0 TX packets:2762 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3671147 (3.6 MB) TX bytes:335133 (335.1 KB) Interrupt:42 Base address:0x2000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:515 errors:0 dropped:0 overruns:0 frame:0 TX 
packets:515 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:83153 (83.1 KB) TX bytes:83153 (83.1 KB) wlan0 Link encap:Ethernet HWaddr 74:e5:43:32:47:95 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:280 errors:0 dropped:0 overruns:0 frame:0 TX packets:51 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:32958 (32.9 KB) TX bytes:10431 (10.4 KB)

    Read the article

  • Windows file sharing with a private LAN when a public VPN is connected?

    - by netvope
    OS: Windows Vista. My LAN interface is configured as a "private network". I want to have all the sharing and discovery features (Network Discovery, File Sharing, Public Folder Sharing, Printer Sharing, Password Protected Sharing, and Media Sharing), so I enabled them all. My VPN interfaces are configured as "public networks", and I do NOT want to have any of the above features. Now the problem is that if I disable these sharing features while a VPN is connected, it affects both interfaces. I guess the Network and Sharing Center is probably an oversimplified tool that may not support multiple interfaces. Where can I tell Windows to enable sharing features for the private networks and not the public networks? For file sharing, I think I can disable "File and Printer Sharing for MS Networks" in each of the VPNs' properties. However, I will need to disable it every time I add a new VPN. Moreover, I can't find how to disable Media Sharing this way. If this can be more easily done in Windows XP or 7, please let me know.

    Read the article

  • How can Standard User change file associations in Windows 2000?

    - by Gary M. Mugford
    One of my clients is still running Win2K server with a host of Win2K workstations. And no net admin, due to the downturn of the economy over the years. I'm sort of helping out. Out of my depth, but I am a loyal foot soldier. A problem I encounter rather too often is a user double-clicks on a file in Explorer and then either gets no action, or the wrong program to run. It's a case of a missing or out-of-date file association. The current cure is to temporarily upgrade the user from Standard to Power, do the FA switch and then change back. As Winnie would whine, 'Oh, bother!' At any rate, I thought I'd ask here. Is there a method/program to run without the rigamarole FROM the Standard Users account on the workstation to edit/add a file association? I assume the program route would involve RunAs. I 'believe' most of the workstations run the RunAs service, but I could be wrong. I understand that's required, if there is to be a solution. Any help accepted with thanks. GM NOTE: Seems wassociate from http://www.xs4all.nl/~wstudios/Associate/index.html can resolve the issue.

    Read the article

  • Include all php files in one file and include that file in every page if we're using hiphop?

    - by Hasan Khan
    I understand that in normal PHP, when we include a file we are merging its source into the script, and it takes longer for that page to be parsed/processed. But if we're using HipHop, shouldn't it be OK to just create one single PHP file that includes every file that contains some class (one class per file), so that every page which needs those classes can just include that one single PHP file? Would this be OK in the presence of HipHop?

    Read the article

  • Can see samba shares but not access them

    - by nitefrog
    For the life of me I cannot figure this one out. I have Samba installed and set up on the Ubuntu box, and on the Win7 box I CAN SEE all the shares I created. I created two users on Ubuntu that map to the users in Windows. On Ubuntu they are both admins; on Windows, user A is an admin and user B is a power user. User A can see both shares and access them, but user B can see everything yet can only access the homes directory; the other directory throws an error. I have two drives in Ubuntu, and this is the smb.conf file (I am new to Samba):

        [global]
        workgroup = WORKGROUP
        server string = %h server (Samba, Ubuntu)
        wins support = no
        dns proxy = yes
        name resolve order = lmhosts host wins bcast
        log file = /var/log/samba/log.%m
        max log size = 1000
        syslog = 0
        panic action = /usr/share/samba/panic-action %d
        security = user
        encrypt passwords = true
        passdb backend = tdbsam
        obey pam restrictions = yes
        unix password sync = yes
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        pam password change = yes
        map to guest = bad user
        ; usershare max shares = 100
        usershare allow guests = yes

    And here is the share section. Both user A and B can access this one from Windows, no problems:

        [homes]
        comment = Home Directories
        browseable = no
        writable = yes

    Both user A and B can see this share, but only user A can access it; user B gets an error thrown:

        [stuff]
        comment = Unixmen File Server
        path = /media/data/appinstall/
        browseable = yes
        ;writable = no
        read only = yes
        hosts allow =

    The permissions for /media/data/appinstall/ are as follows:

        appInstall properties:
        share name: stuff
        "Allow others to create and delete files in this folder" is checked
        "Guest access (for people without a user account)" is checked
        permissions:
        Owner: user A - Folder Access: Create and delete files - File Access: ---
        Group: user A - Folder Access: Create and delete files - File Access: ---
        Others - Folder Access: Create and delete files - File Access: ---

    I am at a loss and need to get this working. Any ideas? The goal is to have a setup like this: 3 users on Windows machines. Each user will have their own personal folder on the data drive that only they can access, plus another folder where 2 of the users have read-only access and one user has full access. I had this setup before on Windows, but after what happened I am NEVER going back to Windows, so Unix here I am, to stay! I am really stuck. I am running Ubuntu 11. I could reformat again and put on version 10 if that would make life easier. I have been dealing with this since Wed. 3pm. Thanks.

    Read the article

  • File Server Resource Manager attempting to access quota.xml on System Reserved partition?

    - by pmellett
    I've got a new install of Server 2008 R2 that is designed to be our quota server for user home directories and shared areas. I installed FSRM and set up a few quotas to try out. They worked fine, but at some point over the weekend it stopped loading the FSRM console quota screen and now gives the following error, with Event ID 8228: File Server Resource Manager was unable to access the following file or volume: '\\?\Volume{73649de6-7f04-11e1-a344-005056b10310}\System Volume Information\SRM\quota.xml'. This file or volume might be locked by another application right now, or you might need to give Local System access to it. I have removed and reinstalled the FSRM Role Service, cleared the \System Volume Information\SRM folder on each volume, and am on the verge of just starting again. I'd rather not, since then I have to go through and set up all my NTFS permissions again. Since it looks like the service is trying to access the System Reserved partition, which I assume won't have any files it could possibly need, how do I remove the System Reserved partition as a volume to be monitored for the quota service? (I am not aware of configuring that to be the case originally, though!)

    Read the article

  • Welcome to ubiquitous file sharing (December 08, 2009)

    - by user12612012
    The core of any file server is its file system, and ZFS provides the foundation on which we have built our ubiquitous file sharing and single access control model. ZFS has a rich, Windows- and NFSv4-compatible ACL implementation (ZFS only uses ACLs), it understands both UNIX IDs and Windows SIDs, and it is integrated with the identity mapping service; it knows when a UNIX/NIS user and a Windows user are equivalent, and similarly for groups. We have a single access control architecture, regardless of whether you are accessing the system via NFS or SMB/CIFS.

    The NFS and SMB protocol services are also integrated with the identity mapping service, and shares are not restricted to UNIX permissions or Windows permissions. All access control is performed by ZFS, the system can always share file systems simultaneously over both protocols, and our model is native access to any share from either protocol.

    Modal architectures have unnecessary restrictions, confusing rules, administrative overhead and weird deployments to try to make them work; they exist as a compromise, not because they offer a benefit. Having some shares that only support UNIX permissions, others that only support ACLs, and some that support both in a quirky way really doesn't seem like the sort of thing you'd want in a multi-protocol file server. Perhaps because the server has been built on a file system that was designed for UNIX permissions, possibly with ACL support bolted on as an add-on afterthought, or because the protocol services are not truly integrated with the operating system, it may not be capable of supporting a single integrated model.

    With a single, integrated sharing and access control model:

    If you connect from Windows or another SMB/CIFS client:
    - The system creates a credential containing both your Windows identity and your UNIX/NIS identity. The credential includes UNIX/NIS IDs and SIDs, and UNIX/NIS groups and Windows groups.
    - If your Windows identity is mapped to an ephemeral ID, files created by you will be owned by your Windows identity (ZFS understands both UNIX IDs and Windows SIDs).
    - If your Windows identity is mapped to a real UNIX/NIS UID, files created by you will be owned by your UNIX/NIS identity.
    - If you access a file that you previously created from UNIX, the system will map your UNIX identity to your Windows identity and recognize that you are the owner. Identity mapping also supports access checking if you are being assessed for access via the ACL.

    If you connect via NFS (typically from a UNIX client):
    - The system creates a credential containing your UNIX/NIS identity (including groups).
    - Files you create will be owned by your UNIX/NIS identity.
    - If you access a file that you previously created from Windows and the file is owned by your UID, no mapping is required. Otherwise the system will map your Windows identity to your UNIX/NIS identity and recognize that you are the owner. Again, mapping is fully supported during ACL processing.

    The NFS, SMB/CIFS and ZFS services all work cooperatively to ensure that your UNIX identity and your Windows identity are equivalent when you access the system. This, along with the single ACL-based access control implementation, results in a system that provides that elusive ubiquitous file sharing experience.

    Read the article

  • Why do we need a format for binary executable files

    - by user3671483
    When binary files (i.e. executables) are saved, they usually have a format (e.g. ELF or a.out) with a header containing pointers to where data or code is stored inside the file. But why don't we store binary files directly as a sequence of machine instructions? Why do we need to store data separately from the code? Secondly, when the assembler creates a binary file, is that file in one of the above formats?
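
    To make the role of the header concrete, here is a deliberately made-up "toy" executable header in C++ (a sketch for illustration only; it is not ELF, a.out, or any real format). A raw stream of instructions carries none of this information, yet a loader needs all of it before the first instruction can run:

        // toy_exe_header.cpp - a made-up example format, for illustration only.
        #include <cstdint>
        #include <iostream>

        struct ToyExeHeader {
            uint32_t magic;        // identifies the file type so the loader can refuse garbage
            uint32_t entry_point;  // where execution starts (not necessarily the first byte of code)
            uint32_t code_offset;  // where the machine code section lives inside the file
            uint32_t code_size;
            uint32_t data_offset;  // initialized data, mapped read/write rather than execute
            uint32_t data_size;
            uint32_t bss_size;     // zero-initialized data: needs memory but takes no file space
        };

        int main() {
            ToyExeHeader h{0x544F5945u, 0x80, 0x40, 0x1000, 0x1040, 0x200, 0x4000};
            std::cout << std::hex
                      << "entry=0x"  << h.entry_point
                      << " code=[0x" << h.code_offset << ", +0x" << h.code_size << "]"
                      << " data=[0x" << h.data_offset << ", +0x" << h.data_size << "]"
                      << " bss=0x"   << h.bss_size << "\n";
            return 0;
        }

    Each field answers a question the loader cannot answer from raw bytes alone: where to start executing, which ranges to map executable versus writable, and how much zeroed memory to reserve, which is broadly the kind of information real formats carry in their headers.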

    Read the article

  • Strange File-Server I/O Spikes - What Is Causing This?

    - by CruftRemover
    I am currently having a problem with a small Linux server that is providing file-sharing services to four Windows 7 32-bit clients. The server is an AMD PhenomX3 with two Western Digital 10EADS (1TB) drives, attached to a Gigabyte GA-MA770T-UD3 mainboard and running Ubuntu Server 10.04.1 LTS. The client machines are taking an extremely long time to access/transfer data on the file server. Applications often become non-responsive while trying to open files located remotely, or one program attempting to open a file but having to wait will prevent other software from accessing network resources at all. Other examples include one image taking 20 seconds or more to open, and in one instance a user waited 110 seconds for Microsoft Word 2007 to save a document. I had initially thought the problem was network-related, but this appears not to be the case. All cables and switches have been tested (one cable was replaced) for verification. This was additionally confirmed when closing down all client machines and rebooting the server resulted in the hard-drive light staying on solid during the startup process. For the first 15 minutes during boot, logon and after logging on (with no client machines attached), the system displayed a load average of 4 or higher. Symptoms included waiting several minutes for the logon prompt to appear, and then several minutes for the password prompt to appear after typing in a user name. After logon, it also took upwards of 45 seconds for the 'smartctl' man page to appear after the command 'man smartctl' was issued. After 15 minutes of this behaviour, the load average dropped to around 0.02 and the machine behaved normally. I have also considered that the problem is hard-drive-related, however diagnostic programs reveal no drive problems. Western Digital DLG, Spinrite and SMARTUDM show no abnormal characteristics - the drives are in perfect health as far as the hardware is concerned. I have thus far been completely unable to track down the cause of this problem, so any help is greatly appreciated. Requested Information: Output of 'free' hxxp://pastebin.com/mfsJS8HS (stupid spam filter) The command 'hdparm -d /dev/sda1' reports: HDIO_GET_DMA failed: Inappropriate ioctl for device (the BIOS is set to AHCI - I probably should have mentioned that).

    Read the article

  • How to grep (or find) on cPanel?

    - by San
    How can I search for a specific string (a function name or a variable name) in my files, which are in various directories, under the cPanel file manager? I have a library directory, and the functions in that directory are used in various apps and pages. Now I need to change something in the library file, and to do that I need to know the impact on the files that use this library file's functions. How do I search / find / grep through the hosted files?

    Read the article

  • How to backup/restore full-disk encryption ubuntu 11.10?

    - by ggc
    How do I back up and restore a full-disk-encryption Ubuntu 11.10 install? I would like to take the raw encrypted file system and restore it on another computer. Encryption details: crypt was set up via the Ubuntu alternate CD installer, and the only thing unencrypted is /boot. File systems setup: boot - j, swap - swap, everything else - ext4. Any suggestions? I have considered backing up the file system stripped of encryption, but I would prefer to keep the OS encrypted while transferring. Thanks for any help!

    Read the article

  • File copying utility like rsync with error handling like ddrescue, for data recovery from a hard drive with bad sectors or hardware failure

    - by purefusion
    I have a hard drive with either bad blocks or sectors that are failing to read due to potential mechanical issues, such as a bad disk head, bad motor, or some other issue that is causing the hard drive to read data excruciatingly slowly and with lots of read errors. I'm seeing an average of 50 KB/sec, with some reads dropping below 10 KB/sec, and frequently it gets stuck on a file or sector altogether, usually for quite a long time—from 2-10 minutes or more (when using rsync, before it times out). Speed seems to vary wildly, and it gets stuck on files a lot, and when it finally gets "unstuck" it only seems to last for a short burst before it gets stuck again. The drive is also very quiet with only an occasional sound of files copying (usually when it gets stuck/unstuck for a brief time, before getting stuck again). Thus, there are none of those evil sounds that are normally associated with HDD death. Someone suggested that the problems sounded like they might be caused by a misaligned disk head, which requires a lot of re-reads before it finally reads data with success. Sounds plausible, but I digress... Anyway, the problem with rsync is that it seems to have no decent error handling support. Obviously, it wasn't meant for use in recovering data from failing hard drives, but all the so-called "data recovery" utilities out there that are meant for such use usually focus on recovery of deleted files or messed up partitions, rather than copying files off dying hard drives. Deleted file recovery is not what I need, obviously, so perhaps you can understand my disappointment in not being able to find what I'm after yet. Naturally, this is where you'd probably say "You should use ddrescue!" Well, that's all fine and dandy, but I've already got most of the data backed up, so I just want to recover certain files. I'm not concerned with trying to recover a full partition block-by-block as ddrescue does. I am only interested in rescuing just specific files and directories. Ideally, what I'd like is some sort of cross between rsync and ddrescue: something that lets me specify source and destination as directories of normal files like rsync (rather than two full partitions as ddrescue requires), with a way to skip files with errors in an initial run, and then allows me to attempt recovery of those files with errors in a later run (with a slightly altered command, of course), perhaps even offering an option to specify the number of retry attempts ...just like how ddrescue works with blocks, only I want a utility that works with specific files/directories like rsync does. So am I daydreaming here, or does something out there exist that can do this? Or, maybe even a way to make rsync or ddrescue work in such a way? I'm really open to whatever solutions might work, so long as they let me choose which files I want to "rescue", and can skip files with errors in the initial run, and try/retry those errors again later. So far I've tried rsync with the following options, but it often gets stuck on a file for longer than the timeout, and ideally I'd just like it to move on to the next file and come back later to the files it gets stuck on. I don't think that's possible though. Anyway, here's what I've been using up till now: rsync -avP --stats --block-size=512 --timeout=600 /path/to/source/* /path/to/destination/
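
    For illustration, here is a minimal C++ sketch of the behaviour described above: copy a single file block by block, retry each failed read a bounded number of times, zero-fill blocks that never succeed, and report the file as needing another pass so a later run can retry it. The names are hypothetical and this is nowhere near a substitute for ddrescue's log-driven recovery; it only sketches the skip-and-retry idea:

        // rescuecp.cpp - hypothetical sketch: per-file copy with bounded retries
        // on read errors, zero-filling blocks that never become readable.
        // Illustration only; not a replacement for ddrescue.
        #include <cstdio>
        #include <cstring>
        #include <string>
        #include <vector>

        bool rescue_copy(const std::string& src, const std::string& dst,
                         long block = 512, int max_retries = 3) {
            std::FILE* in  = std::fopen(src.c_str(), "rb");
            std::FILE* out = std::fopen(dst.c_str(), "wb");
            if (!in || !out) {
                if (in)  std::fclose(in);
                if (out) std::fclose(out);
                return false;
            }
            std::vector<char> buf(static_cast<std::size_t>(block));
            long offset = 0;
            bool clean = true;

            for (;;) {
                std::size_t got = 0;
                for (int attempt = 0; attempt <= max_retries && got == 0; ++attempt) {
                    std::clearerr(in);
                    std::fseek(in, offset, SEEK_SET);      // re-seek before every attempt
                    got = std::fread(buf.data(), 1, buf.size(), in);
                    if (got == 0 && std::feof(in) && !std::ferror(in)) break;  // true end of file
                }
                if (got == 0) {
                    if (std::feof(in) && !std::ferror(in)) break;  // finished
                    // Block stayed unreadable after all retries: record a hole and move on
                    // instead of hanging on it the way a plain copy would.
                    std::memset(buf.data(), 0, buf.size());
                    std::fwrite(buf.data(), 1, buf.size(), out);
                    offset += block;
                    clean = false;
                    continue;
                }
                std::fwrite(buf.data(), 1, got, out);
                offset += static_cast<long>(got);
            }
            std::fclose(in);
            std::fclose(out);
            return clean;  // false => schedule this file for a later retry pass
        }

        int main(int argc, char** argv) {
            if (argc != 3) {
                std::fprintf(stderr, "usage: rescuecp SOURCE DEST\n");
                return 2;
            }
            return rescue_copy(argv[1], argv[2]) ? 0 : 1;
        }

    A wrapper could walk a directory tree with this, keep a list of the files that returned errors, and re-run only those on the next pass, which is roughly the rsync-like file selection plus ddrescue-like error handling the question asks for.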

    Read the article

  • Mysql SELECT FOR UPDATE - strange issue

    - by Michal Fronczyk
    Hi, I have a strange issue (at least for me :)) with MySQL's locking facility. I have a table:

        Create Table: CREATE TABLE test (
          id int(11) NOT NULL AUTO_INCREMENT,
          PRIMARY KEY (id)
        ) ENGINE=InnoDB AUTO_INCREMENT=13 DEFAULT CHARSET=latin1

    With this data:

        +----+
        | id |
        +----+
        | 3  |
        | 4  |
        | 5  |
        | 6  |
        | 7  |
        | 8  |
        | 10 |
        | 11 |
        | 12 |
        +----+

    Now I have 2 clients with these commands executed at the beginning:

        set autocommit=0;
        set session transaction isolation level serializable;
        begin;

    Now the most interesting part. The first client executes this query (with the intent to insert a row with id equal to 9):

        SELECT * from test where id = 9 FOR UPDATE;
        Empty set (0.00 sec)

    Then the second client does the same:

        SELECT * from test where id = 9 FOR UPDATE;
        Empty set (0.00 sec)

    My question is: why does the second client not block? An exclusive gap lock should have been set by the first query because FOR UPDATE was used, and the second client should block. If I am wrong, could somebody tell me how to do it correctly? The MySQL version I use is 5.1.37-1ubuntu5.1. Thanks, Michal

    Read the article

  • Nature of lock on child table during deletion (SQL Server)

    - by Mubashar Ahmad
    Dear Devs, for a couple of days I have been thinking about the following scenario. Consider that I have 2 tables with a parent-child relationship of the one-to-many kind. On removal of a parent row I have to delete the child rows that are related to the parent. Simple, right? I have to use a transaction scope to do the above operation, and I can do it as follows (it's pseudocode, but I am doing this in C# code using an ODBC connection, and the database is SQL Server):

        begin transaction (read committed)
          read all child where child.fk = p1
          foreach (child)
            delete child where child.pk = cx
          delete parent where parent.pk = p1
        commit trans

    OR

        begin transaction (read committed)
          delete all child where child.fk = p1
          delete parent where parent.pk = p1
        commit trans

    Now there are a couple of questions in my mind. Which one of the above is better to use, especially considering a real-time system where thousands of other operations (select/update/delete/insert) are being performed within a span of seconds? Does it ensure that no new child with child.fk = p1 will be added until the transaction completes? If yes for the second question, then how does it ensure that - does it take table-level locks, or what? Is there any kind of index locking supported by SQL Server? If yes, what does it do and how can it be used? Regards, Mubashar

    Read the article

  • SQL Server 2008 Running trigger after Insert, Update locks original table

    - by Polity
    Hi folks, I have a serious performance problem. I have a database with (related to this problem) 2 tables. One table contains strings with some global information. The second table contains the strings stripped down to each individual word; in effect, each string is indexed in the second table word by word. The validity of the data in the second table is less important than the validity of the data in the first table. Since the first table can grow towards 1*10^6 records, and the second table, with an average of about 10 words per string, can grow towards 1*10^7 records, I use NOLOCK in order to read the second table; this leaves me free to insert new records without locking it (expect many reads on both tables). I have a script that keeps adding and updating rows in the first table in a MERGE statement. On average, about 20 strings are merged at a time, and the script runs about once every 5 seconds. On the first table, I have a trigger that is invoked on an insert or update; it takes the newly inserted or updated data and calls a stored procedure on it that makes sure the data is indexed in the second table. (This takes some significant time.) The problem is that with the trigger disabled, reading the first table happens in a few ms. However, when the trigger is enabled and you're unlucky enough to try to read the first table while it is being updated, our web server gives you a timeout after 10 seconds (which is way too long anyway). I can guess from this that while the trigger runs, the first table is kept (partially) locked until the trigger completes. What do you think - if I'm right, is there an easy way around this? Thanks in advance! Cheers, Koen

    Read the article

  • NHibernate flush should save only dirty objects

    - by Emilian
    Why does NHibernate fire an update on firstOrder when saving secondOrder in the code below? I'm using optimistic locking on Order. Is there a way to tell NHibernate to update firstOrder when saving secondOrder only if firstOrder was modified?

        // Configure
        var cfg = new Configuration();
        var configFile = Path.Combine(
            AppDomain.CurrentDomain.BaseDirectory,
            "NHibernate.MySQL.config");
        cfg.Configure(configFile);

        // Create session factory
        var sessionFactory = cfg.BuildSessionFactory();

        // Create session
        var session = sessionFactory.OpenSession();

        // Set session to flush on transaction commit
        session.FlushMode = FlushMode.Commit;

        // Create first order
        var firstOrder = new Order();
        var firstOrder_OrderLine = new OrderLine { ProductName = "Bicycle", ProductPrice = 120.00M, Quantity = 1 };
        firstOrder.Add(firstOrder_OrderLine);

        // Save first order
        using (var tx = session.BeginTransaction())
        {
            try
            {
                session.Save(firstOrder);
                tx.Commit();
            }
            catch
            {
                tx.Rollback();
            }
        }

        // Create second order
        var secondOrder = new Order();
        var secondOrder_OrderLine = new OrderLine { ProductName = "Hat", ProductPrice = 12.00M, Quantity = 1 };
        secondOrder.Add(secondOrder_OrderLine);

        // Save second order
        using (var tx = session.BeginTransaction())
        {
            try
            {
                session.Save(secondOrder);
                tx.Commit();
            }
            catch
            {
                tx.Rollback();
            }
        }

        session.Close();
        sessionFactory.Close();

    Read the article

  • SQL Server lock/hang issue

    - by mattwoberts
    Hi, I'm using SQL Server 2008 on Windows Server 2008 R2, all sp'd up. I'm getting occasional issues with SQL Server hanging with the CPU usage at 100% on our live server. All the wait time on SQL Server when this happens seems to be given to SOS_SCHEDULER_YIELD. Here is the stored proc that causes the hang. I've added the "WITH (NOLOCK)" in an attempt to fix what seems to be a locking issue.

        ALTER PROCEDURE [dbo].[MostPopularRead]
        AS
        BEGIN
            SET NOCOUNT ON;
            SELECT c.ForeignId
                 , ct.ContentSource as ContentSource
                 , sum(ch.HitCount * hw.Weight) as Popularity
                 , (sum(ch.HitCount * hw.Weight) * 100) / @Total as Percent
                 , @Total as TotalHits
            from ContentHit ch WITH (NOLOCK)
            join [Content] c WITH (NOLOCK) on ch.ContentId = c.ContentId
            join HitWeight hw WITH (NOLOCK) on ch.HitWeightId = hw.HitWeightId
            join ContentType ct WITH (NOLOCK) on c.ContentTypeId = ct.ContentTypeId
            where ch.CreatedDate between @Then and @Now
            group by c.ForeignId
                   , ct.ContentSource
            order by sum(ch.HitCount * hw.HitWeightMultiplier) desc
        END

    The stored proc reads from the table "ContentHit", which is a table that tracks when content on the site is clicked (it gets hit quite frequently - anything from 4 to 20 hits a minute). So it's pretty clear that this table is the source of the problem. There is a stored proc that is called to add hit tracks to the ContentHit table; it's pretty trivial, it just builds up a string from the params passed in, which involves a few selects from some lookup tables, followed by the main insert:

        BEGIN TRAN
        insert into [ContentHit] (ContentId, HitCount, HitWeightId, ContentHitComment)
        values (@ContentId, isnull(@HitCount,1), isnull(@HitWeightId,1), @ContentHitComment)
        COMMIT TRAN

    The ContentHit table has a clustered index on its ID column, and I've added another index on CreatedDate since that is used in the select. When I profile the issue, I see the stored proc execute for exactly 30 seconds, then the SQL timeout exception occurs. If it makes a difference, the web application using it is ASP.NET, and I'm using Subsonic (3) to execute these stored procs. Can someone please advise how best I can solve this problem? I don't care about reading dirty data... Thanks

    Read the article

  • Rails running multiple delayed_job - lock tables

    - by pepernik
    Hey. I use delayed_job for background processing. I have an 8-CPU server with MySQL, and I start 7 delayed_job processes: RAILS_ENV=production script/delayed_job -n 7 start

    Q1: I'm wondering whether it is possible that 2 or more delayed_job processes start processing the same job (the same row in the delayed_jobs table). I checked the code of the delayed_job plugin but cannot find the locking done the way I think it should be. I think each process should lock the database table before executing an UPDATE on the locked_by column. They lock the record simply by updating the locked_by field (UPDATE delayed_jobs SET locked_by...). Is that really enough? No locking needed? Why? I know that UPDATE has a higher priority than SELECT, but I don't think that has any effect in this case. My understanding of the multi-threaded situation is:

        Process1: Get waiting job X. [OK]
        Process2: Get waiting job X. [OK]
        Process1: Update locked_by field. [OK]
        Process2: Update locked_by field. [OK]
        Process1: Get waiting job X. [Already processed]
        Process2: Get waiting job X. [Already processed]

    I think that in some cases more than one worker can get the same job and start processing the same work.

    Q2: Is 7 delayed_job processes a good number for an 8-CPU server? Why or why not? Thanks!

    Read the article

  • Can I add a condition to CakePHP's update statement?

    - by Don Kirkby
    Since there doesn't seem to be any support for optimistic locking in CakePHP, I'm taking a stab at building a behaviour that implements it. After a little research into behaviours, I think I could run a query in the beforeSave event to check that the version field hasn't changed. However, I'd rather implement the check by changing the update statement's WHERE clause from

        WHERE id = ?

    to

        WHERE id = ? and version = ?

    This way I don't have to worry about other requests changing the database record between the time I read the version and the time I execute the update. It also means I can do one database call instead of two. I can see that the DboSource.update() method supports conditions, but Model.save() never passes any conditions to it. It seems like I have a couple of options:

    1. Do the check in beforeSave() and live with the fact that it's not bulletproof.
    2. Hack my local copy of CakePHP to check for a conditions key in the options array of Model.save() and pass it along to the DboSource.update() method.

    Right now, I'm leaning in favour of the second option, but that means I can't share my behaviour with other users unless they apply my hack to their framework. Have I missed an easier option?

    Read the article

  • Correct way to generate order numbers in SQL Server

    - by Anton Gogolev
    This question certainly applies to a much broader scope, but here it is. I have a basic ecommerce app where users can, naturally enough, place orders. Said orders need to have a unique number, which I'm trying to generate right now. Each order is vendor-specific. Basically, I have an OrderNumberInfo (VendorID, OrderNumber) table. Now whenever a customer places an order I need to increment OrderNumber for a particular Vendor and return that value. Naturally, I don't want other processes to interfere with me, so I need to exclusively lock this row somehow:

        begin transaction
        declare @n int
        select @n = OrderNumber from OrderNumberInfo where VendorID = @vendorID
        update OrderNumberInfo set OrderNumber = @n + 1 where OrderNumber = @n and VendorID = @vendorID
        commit transaction

    Now, I've read about select ... with (updlock rowlock), pessimistic locking, etc., but just cannot fit all this into a coherent picture:

    - How do these hints play with SQL Server 2008's snapshot isolation?
    - Do they perform row-level, page-level or even table-level locks?
    - How does this tolerate multiple users trying to generate numbers for a single Vendor?
    - What isolation levels are appropriate here?

    And generally - what is the way to do such things?

    Read the article
