Search Results

Search found 30316 results on 1213 pages for 'read the javadoc'.


  • How do I change file permissions under XP on a disk taken from a Windows Server machine?

    - by cdkMoose
    I had a Windows Server 2003 machine running at home, along with my desktop, which I use for development. The server went belly up, but since my desktop is reasonably powerful, I figured I would move the disk from the file server (the disk itself was OK) into my XP machine to keep all of the files. The disk comes up fine and shows all of the files, but I keep getting "access denied" errors when trying to work with some of them.

    When I display attributes in Explorer, none of the files are marked Read-Only. When I view properties on the directories, the Read-Only checkbox is not checked, but it has a green background (which I thought meant mixed usage for files in the directory). When I click the checkbox to clear it and click Apply, the disk does some work and all looks well. However, I continue to get the "access denied" errors, the files still don't show any Read-Only attribute, and the directory properties show the green background on the Read-Only checkbox again. I did check the box that applies the change to the folder and all files/subfolders under it.

    I am assuming that the issue relates to user IDs/permissions carried over from the Server install. So why does it let me think I can change the attribute when I can't, and how can I correct this problem so that the disk correctly recognizes the IDs from XP?


  • Extending an ext4 partition on Debian 7.0 on vSphere

    - by VoidPointer
    I allocated 15 GB of thin-provisioned disk when I found 8 GB insufficient. Now the Debian guest does not recognize the change in size.

        root@debian7-x64:~# lvdisplay
          --- Logical volume ---
          LV Path                /dev/debian7-x64/root
          LV Name                root
          VG Name                debian7-x64
          LV UUID                EU6mg0-XTXC-ci3D-bQJi-7XN6-r8Hp-SYxcj0
          LV Write Access        read/write
          LV Creation host, time debian7-x64, 2013-06-25 12:02:49 +0530
          LV Status              available
          # open                 1
          LV Size                7.39 GiB
          Current LE             1892
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     256
          Block device           254:0

          --- Logical volume ---
          LV Path                /dev/debian7-x64/swap_1
          LV Name                swap_1
          VG Name                debian7-x64
          LV UUID                xDNtoz-tJUq-M5D6-GGCN-gzcD-fwUv-fYYDR1
          LV Write Access        read/write
          LV Creation host, time debian7-x64, 2013-06-25 12:02:49 +0530
          LV Status              available
          # open                 2
          LV Size                376.00 MiB
          Current LE             94
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     256
          Block device           254:1

        root@debian7-x64:~# pvdisplay
          --- Physical volume ---
          PV Name               /dev/sda5
          VG Name               debian7-x64
          PV Size               7.76 GiB / not usable 2.00 MiB
          Allocatable           yes (but full)
          PE Size               4.00 MiB
          Total PE              1986
          Free PE               0
          Allocated PE          1986
          PV UUID               SehkzH-Gq8Y-jI2f-27Tb-uv1Z-tR1R-5OnTxR

        root@debian7-x64:~# sfdisk -s
        /dev/sda: 15728640
        /dev/mapper/debian7--x64-root: 7749632
        /dev/mapper/debian7--x64-swap_1: 385024
        total: 23863296 blocks

    Please help me extend this partition. Rebooting is not a problem, but I don't have a live CD. Environment: Debian 7 with LVM on vSphere, ext4 partition. I can provide more details if needed.


  • High data in the recv-q buffer and a thread locked on java.io.BufferedInputStream on Linux

    - by Sagar Patel
    We have a Java application running on Linux (Ubuntu Server). We have been facing a high recv-q problem for quite some time: every few hours the application hangs and stops reading data from the socket. In a thread dump, we found the stack trace below.

        "Receiver-146" daemon prio=10 tid=0x00007fb3fc010000 nid=0x7642 runnable [0x00007fb5906c5000]
           java.lang.Thread.State: RUNNABLE
            at java.net.SocketInputStream.socketRead0(Native Method)
            at java.net.SocketInputStream.read(SocketInputStream.java:150)
            at java.net.SocketInputStream.read(SocketInputStream.java:121)
            at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
            at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
            at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
            - locked <0x00000007688f1ff0> (a java.io.BufferedInputStream)
            at org.smpp.TCPIPConnection.receive(TCPIPConnection.java:413)
            at org.smpp.ReceiverBase.receivePDUFromConnection(ReceiverBase.java:197)
            at org.smpp.Receiver.receiveAsync(Receiver.java:351)
            at org.smpp.ReceiverBase.process(ReceiverBase.java:96)
            at org.smpp.util.ProcessingThread.run(ProcessingThread.java:199)
            at java.lang.Thread.run(Thread.java:722)

    We are not able to trace the exact reason behind this; kindly help. We are using a 16-core machine and the load on the system is around 30-40 at the time of the issue. We use the command ss dst <ip> to check recv-q. Recently the recv-q size has been getting stuck: the receive buffer fills up at some point, the recv-q size does not decrease, and as a result we lose a lot of hits from the other side because our application is not accepting any data.
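
    A common mitigation for reads that block forever in SocketInputStream.socketRead0 is to put a read timeout on the socket, so a stalled peer surfaces as an exception instead of a hung thread. The sketch below is only illustrative and assumes direct access to the Socket; the host, port, and timeout value are hypothetical, and the SMPP library in the trace may or may not expose its underlying socket.

        import java.io.BufferedInputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.net.Socket;
        import java.net.SocketTimeoutException;

        public class TimedSocketRead {
            public static void main(String[] args) throws IOException {
                // Hypothetical endpoint; replace with the real SMSC host/port.
                Socket socket = new Socket("smsc.example.com", 2775);

                // Without SO_TIMEOUT, SocketInputStream.socketRead0 can block forever
                // if the peer stops sending but keeps the connection open.
                socket.setSoTimeout(30000); // milliseconds

                InputStream in = new BufferedInputStream(socket.getInputStream());
                byte[] buf = new byte[4096];
                try {
                    int n = in.read(buf); // now throws SocketTimeoutException instead of hanging
                    System.out.println("read " + n + " bytes");
                } catch (SocketTimeoutException e) {
                    // Decide here whether to retry, probe the link, or reconnect.
                    System.err.println("no data within 30s: " + e.getMessage());
                } finally {
                    socket.close();
                }
            }
        }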


  • Can ZFS ACL's be used over NFSv3 on host without /etc/group?

    - by Sandra
    Question at the bottom.

    Background: My server setup is shown below. I have an LDAP host with a group called group1 that contains user1 and user2. The NAS is FreeBSD 8.3 with ZFS, with one zpool and one volume. serv1 gets /etc/passwd and /etc/group from the LDAP host. serv2 gets /etc/passwd from the LDAP host, but its /etc/group is local and read-only, so it knows nothing about which groups exist in LDAP. Both servers connect to the NAS with NFSv3.

    What I would like to achieve: I would like to be able to create/modify groups in LDAP to allow/deny users read/write access to NFSv3-shared directories on the NAS. Example: group1 should have read/write access to /zfs/vol1/project1 and nothing more.

    Question: The problem is that serv2 doesn't have an LDAP-controlled /etc/group file. The only way I can think of to solve this is to use ZFS permissions with inheritance, but I can't figure out how, or which permissions I should set. Does someone know if this can be solved at all, and if so, do you have any suggestions?

        +----------------------+
        |         LDAP         |
        | group1: user1, user2 |
        +----------------------+
          |        |         |
          |ldap    |ldap     |ldap
          |        v         |
          |  +-----------+   |
          |  |    NAS    |   |
          |  | /zfs/vol1 |   |
          |  +-----------+   |
          |     ^      ^     |
          |     |nfs3  |nfs3 |
          v     |      |     v
        +-----------------------+      +----------------------------+
        |        serv1          |      |          serv2             |
        | /etc/passwd from LDAP |      | /etc/passwd from LDAP      |
        | /etc/group from LDAP  |      | /etc/group local/read only |
        +-----------------------+      +----------------------------+


  • Server 2012 - transparent SMB failover without shared disks, possible?

    - by TomTom
    Here is the scenario: there is a small set of data (around 200 GB) that I HAVE to keep available. These are basically shared VHD images that serve as master images for a lot of our VMs, which then run off differencing disks based on them. The whole set is "mostly read only". In more detail: a file that IS there and IS used will NEVER change. I may delete files (when absolutely not in use) and add new files, but once a file is there it gets its read-only protection set, and that is how it stays until it is retired. Obviously, I need as much uptime as possible.

    So far we handle this by keeping this directory local on every Hyper-V server. Now I am thinking of moving it into our storage fabric. Because of the "it HAS to be there" requirement, I pretty much want a share-nothing architecture. DFS would be perfect for this: a file never changes, so replication would work nicely. Folders could be replicated to a number of servers, and everything would reference them from there. Now that Hyper-V supports SMB, it could be a good idea to isolate these on a number of servers; we are trying to move to a scenario where storage is more centralized.

    Server 2012 supports continuously available ("always on") shares, but it seems that this only works with a clustered disk behind them. Is there any way around this for read-only file stores? All the documentation points to something like a shared JBOD, but that would leave me open to file system corruption. I really plan to go quite separate here, vertically: 2 servers, both with SSDs dedicated to this, both with their own 2000 W UPS, both with enough bandwidth to handle everything thrown at them (note to everyone thinking this is 10G: that would be SLOW and EXPENSIVE compared to a nice InfiniBand backbone). The real crux is that this is obviously an edge case, as the files are read-only once in use.


  • ZFS pool error: how to determine which drive failed in the past

    - by Kendrick
    I had been copying data from my pool so that I could rebuild it with a different version, to move away from Solaris 11 to something portable between FreeBSD, OpenIndiana, etc. It was copying at 20 MB/s the other day, which is about all my desktop drive can handle writing from the network. Suddenly, last night, it dropped to 1.4 MB/s. I ran zpool status today and got this:

          pool: store
         state: ONLINE
        status: One or more devices has experienced an unrecoverable error. An
                attempt was made to correct the error. Applications are unaffected.
        action: Determine if the device needs to be replaced, and clear the errors
                using 'zpool clear' or replace the device with 'zpool replace'.
           see: http://www.sun.com/msg/ZFS-8000-9P
          scan: none requested
        config:

                NAME         STATE     READ WRITE CKSUM
                store        ONLINE       0     0     0
                  raidz1-0   ONLINE       0     0     0
                    c8t3d0p0 ONLINE       0     0     2
                    c8t4d0p0 ONLINE       0     0    10
                    c8t2d0p0 ONLINE       0     0     0

    It is currently a 3 x 1 TB drive array. What tools are best used to determine what the error was and which drive is failing? Per the admin doc:

        The second section of the configuration output displays error statistics.
        These errors are divided into three categories:
          READ  - I/O errors occurred while issuing a read request.
          WRITE - I/O errors occurred while issuing a write request.
          CKSUM - Checksum errors. The device returned corrupted data as the
                  result of a read request.

    It says low counts could be anything from a power flux to a disk event, but it gives no suggestion as to which tools to use to check and determine the cause.


  • How to use GWT 2.1 Data Presentation Widgets

    - by Caffeine Coma
    At Google I/O 2010 it was announced that GWT 2.1 would include new data presentation widgets. The 2.1 milestone build is available for download, and presumably the widgets are included, but no documentation has surfaced yet. Is there a short tutorial or example of how to use them? I've seen a rumor that CellList and CellTable are the classes in question. The Javadoc for them is riddled with TODOs, so quite a bit is still missing in terms of usage.
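
    For reference, a minimal sketch of how the cell widgets are typically wired up, based on the GWT 2.1 cell widget API (CellList with a TextCell); the module entry point and sample data here are hypothetical, so treat this as illustrative rather than official documentation.

        import java.util.Arrays;
        import java.util.List;

        import com.google.gwt.cell.client.TextCell;
        import com.google.gwt.core.client.EntryPoint;
        import com.google.gwt.user.cellview.client.CellList;
        import com.google.gwt.user.client.ui.RootPanel;

        public class CellListExample implements EntryPoint {
            public void onModuleLoad() {
                // A CellList renders a flat list of values, one Cell per item.
                CellList<String> cellList = new CellList<String>(new TextCell());

                // Hypothetical sample data; in a real app this would come from the server.
                List<String> names = Arrays.asList("Alice", "Bob", "Carol");
                cellList.setRowCount(names.size(), true);
                cellList.setRowData(0, names);

                RootPanel.get().add(cellList);
            }
        }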


  • json-simple API

    - by cp
    Hello. Is there a Javadoc for json-simple? I am trying to make a JSONObject from an existing Map, among other calls, and this trial-and-error casting is getting to be too much (grin). Thanks.
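
    As far as I know, json-simple's JSONObject is itself a java.util.Map, so building one from an existing map is essentially a copy. A small sketch, assuming json-simple 1.1 on the classpath (the map contents are made up):

        import java.util.HashMap;
        import java.util.Map;

        import org.json.simple.JSONObject;
        import org.json.simple.JSONValue;

        public class JsonSimpleFromMap {
            public static void main(String[] args) {
                // Hypothetical data.
                Map<String, Object> data = new HashMap<String, Object>();
                data.put("name", "widget");
                data.put("count", 42);

                // JSONObject extends HashMap, so it can be built directly from a Map...
                JSONObject obj = new JSONObject(data);
                System.out.println(obj.toJSONString()); // e.g. {"count":42,"name":"widget"} (key order not guaranteed)

                // ...or the map can be serialized without creating a JSONObject at all.
                System.out.println(JSONValue.toJSONString(data));
            }
        }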


  • Managing media using the Android MediaStore

    - by hgpc
    I manage the media (images, sound) of my app directly, reading and saving to the SD card. Should I be using the MediaStore instead? I'm not quite sure what the MediaStore is for, and the javadoc is not very helpful. When should an app use the MediaStore?
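
    If it helps, the usual reason to go through the MediaStore is so that media your app writes shows up in the Gallery, music players, and anything else that queries the shared provider. A rough sketch of registering an image you already saved to the SD card yourself (the path and metadata below are made up):

        import android.app.Activity;
        import android.content.ContentResolver;
        import android.content.ContentValues;
        import android.net.Uri;
        import android.os.Bundle;
        import android.provider.MediaStore;
        import android.util.Log;

        public class MediaStoreExample extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);

                // Describe the file we already wrote to the SD card ourselves.
                ContentValues values = new ContentValues();
                values.put(MediaStore.Images.Media.TITLE, "My picture");           // hypothetical metadata
                values.put(MediaStore.Images.Media.MIME_TYPE, "image/jpeg");
                values.put(MediaStore.Images.Media.DATA, "/sdcard/myapp/pic.jpg"); // hypothetical path

                // Registering it with the MediaStore makes it visible to the Gallery
                // and anything else that queries the provider.
                ContentResolver resolver = getContentResolver();
                Uri uri = resolver.insert(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values);
                Log.d("MediaStoreExample", "registered at " + uri);
            }
        }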


  • JFace: difference between ITreeContentProvider and ILazyTreeContentProvider

    - by Alexey Romanov
    After reading the Javadoc for ILazyTreeContentProvider and the "Virtual Tables and Trees" documentation, I am a bit confused. Do they really mean that with a simple ITreeContentProvider all elements have to be loaded when the tree is created? I expected that getChildren() would only be called when expanding an element (and hasChildren() to be called to determine whether the plus sign should be shown). Or are they intended for the case where some elements have many children?
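
    In my experience a plain ITreeContentProvider is already consumed lazily by TreeViewer: getChildren() is invoked when a node is expanded, and hasChildren() decides whether the expansion arrow is shown, while ILazyTreeContentProvider with SWT.VIRTUAL is for trees where even creating the TreeItem widgets up front is too expensive. A rough sketch of an on-demand ITreeContentProvider, using a made-up Node model:

        import java.util.List;

        import org.eclipse.jface.viewers.ITreeContentProvider;
        import org.eclipse.jface.viewers.Viewer;

        // Hypothetical domain model: each Node can load its children on demand.
        class Node {
            String label;
            Node parent;
            List<Node> loadChildren() { /* e.g. query a database here */ return java.util.Collections.emptyList(); }
            boolean mayHaveChildren() { return true; }
        }

        public class NodeContentProvider implements ITreeContentProvider {
            public Object[] getElements(Object input) {
                // Called once for the viewer's input (the root level).
                return ((List<?>) input).toArray();
            }

            public Object[] getChildren(Object parent) {
                // Only invoked when the user actually expands 'parent'.
                return ((Node) parent).loadChildren().toArray();
            }

            public boolean hasChildren(Object element) {
                // Cheap check used to decide whether to show the expansion arrow.
                return ((Node) element).mayHaveChildren();
            }

            public Object getParent(Object element) {
                return ((Node) element).parent;
            }

            public void dispose() { }

            public void inputChanged(Viewer viewer, Object oldInput, Object newInput) { }
        }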


  • Java LockSupport Memory Consistency

    - by Lachlan
    Java 6 API question. Does calling LockSupport.unpark(thread) have a happens-before relationship to the return from LockSupport.park in the just-unparked thread? I strongly suspect the answer is yes, but the Javadoc doesn't seem to mention it explicitly.
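
    Many people reason that the JDK implementation provides that edge (unpark hands off a per-thread permit through internal synchronization), but since the Java 6 LockSupport Javadoc is silent, the defensive pattern is to pair park/unpark with a volatile or atomic field so the visibility is explicit regardless. A sketch of that pattern (class and field names are mine):

        import java.util.concurrent.locks.LockSupport;

        public class ParkHandoff {
            private volatile String message;   // volatile supplies the happens-before edge explicitly
            private volatile Thread waiter;

            public String awaitMessage() {
                waiter = Thread.currentThread();
                while (message == null) {
                    LockSupport.park(this);     // may return spuriously, hence the loop
                    if (Thread.interrupted()) {
                        throw new IllegalStateException("interrupted while waiting");
                    }
                }
                return message;
            }

            public void publish(String msg) {
                message = msg;                  // write before unpark...
                Thread t = waiter;
                if (t != null) {
                    LockSupport.unpark(t);      // ...so the parked thread sees it after waking
                }
            }
        }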


  • Using com.sun.net.httpserver.HttpServer for comet/cometd

    - by Paul
    Hi Wayne :) I want to use HttpServer, which you mentioned and which I think would be a cool way to do comet/cometd. I am wondering how tough it would be to take the waiting connections off their threads and into some waiting queue. Also, am I correct that it appears to use NIO? And are there any better examples? I always get caught up in the terminology the javadoc uses... Thanks :)
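
    A bare-bones sketch of com.sun.net.httpserver.HttpServer (JDK 6+), just to show where an HttpExchange could be parked in a queue for a comet-style long poll; the port, path, and queueing policy are invented, and a real comet setup would need timeouts and error handling.

        import com.sun.net.httpserver.HttpExchange;
        import com.sun.net.httpserver.HttpHandler;
        import com.sun.net.httpserver.HttpServer;

        import java.io.IOException;
        import java.io.OutputStream;
        import java.net.InetSocketAddress;
        import java.util.Queue;
        import java.util.concurrent.ConcurrentLinkedQueue;
        import java.util.concurrent.Executors;

        public class CometSketch {
            // Exchanges waiting for an event; nothing has been written to them yet.
            private static final Queue<HttpExchange> waiting = new ConcurrentLinkedQueue<HttpExchange>();

            public static void main(String[] args) throws IOException {
                HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

                server.createContext("/subscribe", new HttpHandler() {
                    public void handle(HttpExchange exchange) throws IOException {
                        // Do NOT respond here; park the exchange and return,
                        // freeing the handler thread for other requests.
                        waiting.add(exchange);
                    }
                });

                server.setExecutor(Executors.newCachedThreadPool());
                server.start();
            }

            // Called later, by whatever produces events, to complete the parked requests.
            static void publish(String event) throws IOException {
                HttpExchange exchange;
                while ((exchange = waiting.poll()) != null) {
                    byte[] body = event.getBytes("UTF-8");
                    exchange.sendResponseHeaders(200, body.length);
                    OutputStream out = exchange.getResponseBody();
                    out.write(body);
                    out.close();
                }
            }
        }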


  • Pear PHP UML Class Diagrams

    - by bigstylee
    Hi, I am trying to create graphical representations of existing code. I have tried VS.PHP (with Visual Studio 2010) but can't seem to generate class diagrams from it. I have also tried PEAR's PHP_UML package, which produced a lot of JavaDoc-style documentation and an XMI document. From what I have read, this can be used to create class diagrams? If so, how? Are there other, "easier" alternatives? Thanks in advance.


  • SQL Server JDBC Exception

    - by user267581
    When using Ant to build my Java application I keep getting the error below. I have tried multiple times with both sqljdbc.jar and sqljdbc4.jar but continually receive the same message. I am completely stumped as to why this error appears even after upgrading to sqljdbc4.jar.

        [javadoc] java.lang.UnsupportedOperationException: Java Runtime Environment (JRE) version 1.6 is not supported by this driver. Use the sqljdbc4.jar class library, which provides support for JDBC 4.0.


  • HttpOnly cookies on google app engine java

    - by Spines
    Does anyone know how I can use HttpOnly cookies for sessions and cookies on App Engine? In the Javadoc for the Cookie class, http://java.sun.com/javaee/6/docs/api/javax/servlet/http/Cookie.html#setHttpOnly(boolean) , there is a setHttpOnly method, but I get a compiler error when trying to use it while developing for App Engine. The method was introduced in the Servlet 3.0 spec, so it's pretty new.
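
    Since App Engine's Java runtime targeted Servlet 2.5, where Cookie has no setHttpOnly, the usual workaround is to emit the Set-Cookie header by hand. A minimal sketch (the cookie name, value, and attributes are placeholders):

        import java.io.IOException;

        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class HttpOnlyCookieServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                // Servlet 2.5's Cookie class has no setHttpOnly(), so write the header directly.
                // "session-token" and its value are placeholders for whatever you actually store.
                resp.addHeader("Set-Cookie", "session-token=abc123; Path=/; HttpOnly");
                resp.getWriter().println("cookie set");
            }
        }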


  • FindBugs: "may fail to close stream" - is this valid in case of InputStream?

    - by thSoft
    In my Java code, I start a new process, then obtain its input stream to read it:

        BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()));

    FindBugs reports an error here:

        may fail to close stream
        Pattern id: OS_OPEN_STREAM, type: OS, category: BAD_PRACTICE

    Must I close the InputStream of another process? What's more, according to its Javadoc, InputStream#close() does nothing. So is this a false positive, or should I really close the input stream of the process when I'm done?
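
    Whatever the verdict on the base InputStream#close() (concrete subclasses such as the process pipe stream, and the BufferedReader wrapping it, do hold resources), the shape FindBugs is looking for is simply a close in a finally block. A short sketch, where the command being run is arbitrary:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;

        public class ReadProcessOutput {
            public static void main(String[] args) throws IOException {
                Process process = new ProcessBuilder("ls", "-l").start(); // hypothetical command

                BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()));
                try {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        System.out.println(line);
                    }
                } finally {
                    // Closing the reader also closes the underlying process stream,
                    // which is what silences OS_OPEN_STREAM.
                    reader.close();
                }
            }
        }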


  • Naming convention when casually referring to methods in Java

    - by polygenelubricants
    Is there a Java convention for referring to methods, static and otherwise - to a specific overload or the whole overload set, etc.? e.g.

        String.valueOf       - referring to all overloads of static valueOf
        String.valueOf(char) - a specific overload; is the formal parameter name omittable?
        String.split         - looks like a static method, but is actually an instance method;
                               maybe aString.split is the convention?
        String#split         - I've seen this HTML-anchor form too, which I guess is javadoc-influenced

    Is there an authoritative recommendation on how to clearly refer to these things?
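
    The closest thing to a tool-enforced convention is the Javadoc reference syntax, where Class#member(params) is exactly how the tooling expects methods to be named; in running prose many people then flatten the # to a dot for readability. A small illustration of the Javadoc forms (the class itself is made up):

        /**
         * Javadoc's own way of referring to methods, which many people reuse in prose:
         *
         * <ul>
         *   <li>{@link String#valueOf(char)} - one specific overload</li>
         *   <li>{@link String#split(String)} - instance methods use the same Class#method form</li>
         *   <li>{@link #helper()} - a member of the current class</li>
         * </ul>
         */
        public class NamingConventionExample {
            void helper() {
                // no-op; exists only so the {@link #helper()} reference above resolves
            }
        }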


  • Why Joda DateTimeFormatter cannot parse timezone names ('z')

    - by dimitrisli
    From the DateTimeFormatter javadoc:

        Zone names: Time zone names ('z') cannot be parsed.

    Therefore timezone parsing like this:

        System.out.println(new SimpleDateFormat("EEE MMM dd HH:mm:ss z yyyy").parse("Fri Nov 11 12:13:14 JST 2010"));

    cannot be done in Joda:

        DateTimeFormatter dtf = DateTimeFormat.forPattern("EEE MMM dd HH:mm:ss z yyyy");
        System.out.println(dtf.parseDateTime("Fri Nov 11 12:13:14 JST 2010"));
        // Exception in thread "main" java.lang.IllegalArgumentException: Invalid format:
        //   "Fri Nov 11 12:13:14 JST 2010" is malformed at "JST 2010"
        //   at org.joda.time.format.DateTimeFormatter.parseDateTime(DateTimeFormatter.java:673)
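
    Zone abbreviations like JST are ambiguous, which is reportedly why Joda refuses to parse them. One common workaround (a sketch, not the only option) is to let SimpleDateFormat handle the abbreviation and hand the resulting instant to Joda:

        import java.text.ParseException;
        import java.text.SimpleDateFormat;
        import java.util.Date;
        import java.util.Locale;

        import org.joda.time.DateTime;

        public class ParseZoneName {
            public static void main(String[] args) throws ParseException {
                String input = "Fri Nov 11 12:13:14 JST 2010";

                // SimpleDateFormat still understands zone abbreviations such as JST.
                SimpleDateFormat sdf = new SimpleDateFormat("EEE MMM dd HH:mm:ss z yyyy", Locale.ENGLISH);
                Date date = sdf.parse(input);

                // Hand the instant to Joda; it is printed in the default zone,
                // and withZone(...) can re-express it elsewhere if needed.
                DateTime dt = new DateTime(date);
                System.out.println(dt);
            }
        }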


  • Is SecureRandom thread safe?

    - by Yishai
    Is SecureRandom thread safe? That is, after initializing it, can access to the next random number be relied on to be thread safe? Examining the source code seems to show that it is, and this bug report seems to indicate that its lack of documentation as thread safe is a javadoc issue. Has anyone confirmed that it is in fact thread safe?
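
    The usual practical stance, hedged because the pre-Java-7 Javadoc says nothing: the JDK's SecureRandom synchronizes nextBytes internally, so a single shared instance is generally treated as safe, and anyone worried about contention or about a third-party provider can fall back to one instance per thread. A sketch of both options (the class and method names are mine):

        import java.security.SecureRandom;

        public class RandomHolder {
            // Option 1: one shared instance; the JDK implementation synchronizes internally,
            // so concurrent nextBytes()/nextInt() calls are expected to be safe.
            private static final SecureRandom SHARED = new SecureRandom();

            // Option 2: one instance per thread, which sidesteps the question
            // (and any lock contention) entirely.
            private static final ThreadLocal<SecureRandom> PER_THREAD =
                    new ThreadLocal<SecureRandom>() {
                        @Override
                        protected SecureRandom initialValue() {
                            return new SecureRandom();
                        }
                    };

            static byte[] sharedToken(int length) {
                byte[] token = new byte[length];
                SHARED.nextBytes(token);
                return token;
            }

            static byte[] perThreadToken(int length) {
                byte[] token = new byte[length];
                PER_THREAD.get().nextBytes(token);
                return token;
            }
        }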


  • In Java, what is the guaranteed way to get a FileLock from a FileChannel while accessing a RandomAccessFile?

    - by narasimha.bhat
    I am trying to use FileLock lock(long position, long size, boolean shared) on a FileChannel object. As per the javadoc it can throw OverlappingFileLockException. When I create a test program with two threads, the lock method seems to wait to acquire the lock (both exclusive and shared). But when the number of threads increases, as in the actual scenario, OverlappingFileLockException is thrown and processing slows down due to blocking on the file-lock table. What is the best way to acquire the lock while avoiding OverlappingFileLockException?
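
    A FileLock is held on behalf of the whole JVM: lock() only blocks when another process holds the region, while a second overlapping request from the same JVM throws OverlappingFileLockException. So the usual approach is to serialize the JVM's own threads with an ordinary java.util.concurrent lock and keep the FileLock purely for inter-process exclusion. A sketch of that idea (names and the single-lock policy are mine, not a guaranteed recipe):

        import java.io.RandomAccessFile;
        import java.nio.channels.FileChannel;
        import java.nio.channels.FileLock;
        import java.util.concurrent.locks.ReentrantLock;

        public class LockedWriter {
            // Serializes threads inside this JVM; the FileLock below only guards
            // against other processes.
            private static final ReentrantLock IN_JVM_LOCK = new ReentrantLock();

            static void writeExclusively(RandomAccessFile file, long position, byte[] data) throws Exception {
                IN_JVM_LOCK.lock();
                try {
                    FileChannel channel = file.getChannel();
                    // Blocks only if another *process* holds an overlapping lock;
                    // same-JVM overlaps can no longer happen because of IN_JVM_LOCK.
                    FileLock lock = channel.lock(position, data.length, false);
                    try {
                        file.seek(position);
                        file.write(data);
                    } finally {
                        lock.release();
                    }
                } finally {
                    IN_JVM_LOCK.unlock();
                }
            }
        }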


  • MSDN: How can I see what inherits/implements a class/interface?

    - by d03boy
    One thing I really, really miss from Javadoc is the ability to see which classes inherit from the class you're looking at. So if you are looking at an abstract class or interface (such as List), you would be able to see all classes that inherit from or implement it. Is this available on MSDN and I'm just missing it, or is this really a missing feature?


  • Java Deflater strategies - DEFAULT_STRATEGY, FILTERED and HUFFMAN_ONLY

    - by Keyur
    I'm trying to find a balance between performance and degree of compression when gzipping a Java webapp response. Looking at the Deflater class, I can set a level and a strategy. The levels are self-explanatory - BEST_SPEED to BEST_COMPRESSION. I'm not sure about the strategies - DEFAULT_STRATEGY, FILTERED and HUFFMAN_ONLY. I can make some sense of the Javadoc, but I was wondering whether someone has used a specific strategy in their app and whether you saw any difference in terms of performance / degree of compression. Thanks, Keyur
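
    One way to answer this empirically is to compress a representative response body under each strategy and compare sizes and timings. The sketch below does exactly that; the sample payload is made up, and real results will depend heavily on your own data.

        import java.util.zip.Deflater;

        public class DeflaterStrategyBench {
            public static void main(String[] args) {
                // Hypothetical payload; substitute a captured webapp response for real numbers.
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < 2000; i++) {
                    sb.append("<li>row ").append(i).append(" some repetitive markup</li>\n");
                }
                byte[] input = sb.toString().getBytes();

                int[] strategies = {Deflater.DEFAULT_STRATEGY, Deflater.FILTERED, Deflater.HUFFMAN_ONLY};
                String[] names = {"DEFAULT_STRATEGY", "FILTERED", "HUFFMAN_ONLY"};

                byte[] output = new byte[input.length * 2];
                for (int i = 0; i < strategies.length; i++) {
                    Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
                    deflater.setStrategy(strategies[i]);
                    deflater.setInput(input);
                    deflater.finish();

                    long start = System.nanoTime();
                    int compressedSize = 0;
                    while (!deflater.finished()) {
                        compressedSize += deflater.deflate(output); // reuses the buffer; only the count matters here
                    }
                    long micros = (System.nanoTime() - start) / 1000;
                    deflater.end();

                    System.out.println(names[i] + ": " + compressedSize + " bytes in " + micros + " us");
                }
            }
        }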

