Search Results

Search found 30115 results on 1205 pages for 'read uncommitted'.


  • how to check the read write status of storing media in python

    - by mukul sharma
    Hi all, how can I check the read/write permission of the media a file is stored on? For example, I need to write a file inside a directory, and that directory may be on read-only media such as a CD or DVD. How can I check whether the storage medium (CD, hard disk, etc.) allows read-only or read/write access? I am using Windows XP. Thanks.
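    A minimal sketch of one common approach (not from the thread; the drive letter is just an example): os.access gives a quick answer, but read-only media can still report a writable-looking directory, so the only reliable test is attempting a real write.

        import os
        import tempfile

        def media_is_writable(directory):
            """Return True if a file can actually be created on the medium holding `directory`."""
            if not os.access(directory, os.W_OK):            # quick permission check
                return False
            try:
                fd, path = tempfile.mkstemp(dir=directory)   # try a real write
                os.close(fd)
                os.remove(path)
                return True
            except OSError:
                return False

        print(media_is_writable("D:\\"))   # e.g. a CD/DVD drive letter on Windows XP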

    Read the article

  • how to identify process in kernel read func without using current->pid

    - by yoavstr
    My lecturer wants us to build a module in which we need to identify each reading process; when the same reader calls read twice on the same writer message, we should put it on a queue that we wake up once all readers have read. I achieved this with a list of PIDs and a read/not_read flag inside each node, but he decided to be nasty and now requires us to do it with some argument from the file struct instead. Can you please help me?
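    One common idiom, sketched below (an assumption about what the assignment is after, not a confirmed solution): the struct file pointer passed to read() is unique per open file description, so per-reader state can be hung off filp->private_data in open() instead of tracking PIDs.

        #include <linux/fs.h>
        #include <linux/slab.h>

        struct reader_state {                /* hypothetical per-reader bookkeeping */
            int has_read_current_msg;
        };

        static int mydev_open(struct inode *inode, struct file *filp)
        {
            struct reader_state *st = kzalloc(sizeof(*st), GFP_KERNEL);
            if (!st)
                return -ENOMEM;
            filp->private_data = st;         /* identifies this reader on every later read() */
            return 0;
        }

        static ssize_t mydev_read(struct file *filp, char __user *buf,
                                  size_t len, loff_t *off)
        {
            struct reader_state *st = filp->private_data;
            if (st->has_read_current_msg) {
                /* same reader asking again before a new message: queue/block it here */
            }
            st->has_read_current_msg = 1;
            return 0;                        /* copy_to_user of the message omitted */
        }

        static int mydev_release(struct inode *inode, struct file *filp)
        {
            kfree(filp->private_data);
            return 0;
        }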

    Read the article

  • skipping a variable using while read in bash

    - by Aleksandar Ivanisevic
    I'm reading a few variables from a file using while read a b c; do (something); done < filename. Is there an elegant way to skip a variable (read in an empty value)? I.e., if I want a=1, b= (empty), and c=3, what should I write in the file? Right now I'm putting 1 "" 3 and then using b=$(echo $b | tr -d \"), but this is pretty cumbersome, IMHO. Any ideas?
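    A sketch of one way around it: with the default whitespace IFS, consecutive separators collapse, but with a non-whitespace delimiter empty fields survive, so the file itself can mark the gap.

        # file contains:  1::3
        while IFS=: read -r a b c; do
            echo "a=$a b=$b c=$c"      # prints: a=1 b= c=3
        done < filename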

    Read the article

  • Mysql- “FLUSH TABLES WITH READ LOCK” started automatically

    - by mingyeow
    I would like to understand how this happened. I was running a query that would take a long time but should not lock up any table. However, my DBs were practically down - it seems they were being locked up by "FLUSH TABLES WITH READ LOCK":

        03:21:31  select type_id, count(*) from guid_target_infos group by type_id
        02:38:11  select type_id, count(*) from guid_infos group by type_id
        02:24:29  FLUSH TABLES WITH READ LOCK

    But I did not start this command. Can someone tell me why it was started automatically?
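    A sketch of how to track down the culprit (not from the thread): the process list shows which user and host issued the lock, and backup jobs are the usual suspects, since mysqldump with --lock-all-tables or --master-data and LVM-snapshot scripts both issue FLUSH TABLES WITH READ LOCK.

        SHOW FULL PROCESSLIST;                 -- User/Host columns identify the locking connection
        SHOW OPEN TABLES WHERE In_use > 0;     -- which tables are currently held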

    Read the article

  • Centos 6.3 PERL CGI selinux file read access

    - by Steed
    I have a CGI script called index.cgi. It is trying to read a log file called 10.128.0.242.2012.sep.20.downloaded.txt under the path /var/log/trafcount/. It appears to be blocked by SELinux; the audit log shows something like:

        type=AVC msg=audit(1348158321.873:1472116): avc: denied { read } for pid=11620 comm="index.cgi" name="10.128.0.242.2012.sep.20.downloaded.txt" dev=dm-0 ino=395264 scontext=unconfined_u:system_r:httpd_sys_script_t:s0 tcontext=unconfined_u:object_r:var_log_t:s0 tclass=file

    How can I allow this script full access to all files under /var/log/trafcount?
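    A sketch of the two standard fixes (the commands assume these logs are meant to be readable by the web application): either relabel the directory with a type the httpd script domain may read, or turn the denial itself into a local policy module.

        # option 1: relabel the files
        semanage fcontext -a -t httpd_sys_content_t "/var/log/trafcount(/.*)?"
        restorecon -Rv /var/log/trafcount

        # option 2: generate a policy module from the AVC denial
        grep index.cgi /var/log/audit/audit.log | audit2allow -M trafcount_cgi
        semodule -i trafcount_cgi.pp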

    Read the article

  • Very poor read performance compared to write performance on md(raid1) / crypt(luks) / lvm

    - by Android5360
    I'm experiencing very poor read performance over raid1/crypt/lvm. At the same time, write speeds on the same setup are more than twice as fast. On another raid1 setup on the same machine I get normal read speeds (maybe because I'm not using cryptsetup).

    OS-related disks: sda + sdb. I have a raid1 configuration with two disks, both in place, with LVM over the RAID and no encryption. Both disks are WD Green, 5400 rpm. IO test results on this raid1:

        dd if=/dev/zero of=/tmp/output.img3 bs=8k count=256k conv=fsync
        2147483648 bytes (2.1 GB) copied, 22.3392 s, 96.1 MB/s
        sync
        echo 3 > /proc/sys/vm/drop_caches
        dd if=/tmp/output.img3 of=/dev/null bs=8k
        2147483648 bytes (2.1 GB) copied, 15.9 s, 135 MB/s

    And here is the problematic setup (on the same machine). Currently I have only one disk, sdc (WD Green, 5400 rpm), configured as software raid1 + crypt (LUKS, serpent-xts-plain) + LVM. Tomorrow I will attach another disk (sdd) to complete this two-disk raid1 setup. IO test results on this raid1:

        dd if=/dev/zero of=output.img3 bs=8k count=256k conv=fsync
        2147483648 bytes (2.1 GB) copied, 17.7235 s, 121 MB/s
        sync
        echo 3 > /proc/sys/vm/drop_caches
        dd if=output.img3 of=/dev/null bs=8k
        2147483648 bytes (2.1 GB) copied, 36.2454 s, 59.2 MB/s

    We can see that the read performance is very bad (59 MB/s compared to 135 MB/s with no encryption). Nothing is using the disks during the benchmark; I can confirm this because I checked with iostat and dstat.

    Details on the hardware: disks: all WD Green, 5400 rpm, 64 MB cache; cpu: FX-8350 at stock speed; ram: 4x4 GB at 1066 MHz.

    Details on the software: OS: Debian Wheezy 7, amd64; mdadm: v3.2.5 - 18th May 2012; LVM version: 2.02.95(2) (2012-03-06); LVM Library version: 1.02.74 (2012-03-06); LVM Driver version: 4.22.0; cryptsetup: 1.4.3.

    Here is how I configured the slow raid1+crypt+lvm setup. First the partition (parted /dev/sdc mklabel gpt; type: ext4, start: 2048s, end: -1), then the raid, crypt and LVM configuration:

        mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdc
        cryptsetup --cipher serpent-xts-plain luksFormat /dev/md1
        cryptsetup luksOpen /dev/md1 md1_crypt
        vgcreate vg_sql /dev/mapper/md1_crypt
        lvcreate -l 100%VG vg_sql -n lv_sql
        mkfs.ext4 /dev/mapper/vg_sql-lv_sql
        mount /dev/mapper/vg_sql-lv_sql /sql

    So guys, can you help me identify the reason and fix it? It has to be something with cryptsetup, as there is no such read slowdown on the other setup (sda+sdb) where no encryption is present. But I have no idea what to do. Thanks!
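    A sketch of things worth checking before blaming the cipher alone (device names taken from the question): sequential reads are very sensitive to the read-ahead setting, which is applied per layer of the md/dm stack and often ends up much smaller on the crypt and LVM devices than on the raw disk.

        blockdev --report /dev/sdc /dev/md1 /dev/mapper/md1_crypt /dev/mapper/vg_sql-lv_sql
        blockdev --setra 4096 /dev/mapper/vg_sql-lv_sql    # try a larger read-ahead on the top layer

        # serpent is one of the slower ciphers; cryptsetup 1.6+ can compare throughput directly
        cryptsetup benchmark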

    Read the article

  • SQL Server 2012 Read-Only Replica physical requirements

    - by Ddono25
    We are going to be setting up two replicas of our DataMart and related databases, and our plan includes using Hyper-V VMs to handle the load. When creating the VMs, we cannot find specific requirements or recommendations for RAM/CPU capacity for read-only replicas. Our current primary DataMart setup has 64 GB RAM and two quad-core processors, which so far has been adequate for our usage. What should the server setup for the replicas and SQL Server be to adequately support read-only usage?

    Read the article

  • Slow SSD Read on Server 2008 SP2

    - by Ruben L.
    I'm using an Intel X25-M 80 GB with the latest firmware. On XP using IDE and on Windows 7 using AHCI I get read speeds up to 250 MB/s, but when running it with Server 2008 SP1 or SP2 on AHCI, I get read speeds around 180 MB/s. I've updated the drivers for 2008 and tested with the write cache on and off. Any input would be appreciated. Thanks!

    Read the article

  • How to clear Windows disk read cache?

    - by Sebastiaan Megens
    For performance testing I need to clear Windows' disk read cache. I tried googling but I couldn't find anything other than rebooting or other manual stuff. Before I give in and do that, I'd like to know if anyone knows of a way to clear Windows disk read cache. I'm testing on Windows 7, but I'm also interested in Windows XP solutions.

    Read the article

  • Rsync over ssh: "ERROR: module is read only" suddenly appeared

    - by user978548
    I've used rsync over ssh for some time to back up my shared host's contents to my personal Synology NAS (a 212j, for that matter), and it worked quite well. For information, I use a password-less ssh connection. Three days ago I updated my NAS software, and since then (or at least I believe that's the cause) the backup won't work anymore. I get the following error on the host:

        rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
        ERROR: module is read only

    ...which I do not understand. Besides that, nothing I know of changed on either the source or the destination that could be related to rsync or ssh. I did check a few things and all seems to be alright: I can still connect through ssh from the host to my NAS with the right user, so ssh things like keys haven't changed. I also have the correct file permissions on the NAS (I checked, and also tried to create files and directories with the user used by rsync through ssh). I read here and there that the error means I have to ensure my rsyncd.conf has read only = no in it, but as far as I know I never used rsyncd, never configured anything for it, and until now it worked like a charm. I use the following command to do the backup:

        rsync -ab --recursive \
            --files-from="$FILES_FROM" \
            --backup-dir=backup_$SUFFIX \
            --delete \
            --filter='protect backup_*' \
            $WDIRECTORY/ \
            remote_backup:$REMOTE_BACKUP/

    So I'm stuck and really can't figure out what happened.

    Edit: As suggested in the comments, I also tried passing commands to ssh (not from inside an ssh session), which worked as expected, and also tried a single rsync command, which failed just like the complete backup command:

        (sharedHost):hostuser:~ > touch test.txt
        (sharedHost):hostuser:~ > rsync test.txt remote_backup:backups/test.txt
        ERROR: module is read only
        rsync error: syntax or usage error (code 1) at main.c(1034) [Receiver=3.0.8]
        rsync: connection unexpectedly closed (9 bytes received so far) [sender]
        rsync error: error in rsync protocol data stream (code 12) at io.c(601) [sender=3.0.7]

        (sharedHost):hostuser:~ > ssh remote_backup 'touch /abs_path_to_backups/backups/test2.txt && echo "ProoF" > /abs_path_to_backups/backups/test2.txt'
        (sharedHost):hostuser:~ > ssh remote_backup 'cat /abs_path_to_backups/backups/test2.txt'
        ProoF
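    A sketch of where to look (the paths on the NAS side are assumptions): "module is read only" is an rsync daemon message, so after the update something on the NAS is answering the transfer as rsyncd rather than as a plain remote shell; the daemon config and any forced command on the ssh key are the two places that can cause that.

        # on the NAS:
        grep -n "read only" /etc/rsyncd.conf        # the module used for backups should say: read only = no
        grep command ~/.ssh/authorized_keys         # a forced command here can redirect ssh logins into rsyncd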

    Read the article

  • Konsole Read/Write Access to PTY Device

    - by Carmen
    When I run something like:

        $ konsole -e "sleep 30"

    I get this message:

        Konsole is unable to open a PTY (pseudo teletype). It is likely that this is due to an incorrect configuration of the PTY devices. Konsole needs to have read/write access to the PTY devices.

    But if I run this:

        $ gnome-terminal -e "sleep 30"

    then it is fine and does not throw any error. How can I give Konsole read/write access to the PTY devices?
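    A sketch of the usual checks (not from the thread): Konsole allocates its pseudo-terminals through the devpts filesystem, so the mount options and the permissions on /dev/ptmx are the first things to verify.

        mount | grep devpts        # expect something like: devpts on /dev/pts type devpts (rw,gid=5,mode=620)
        ls -l /dev/ptmx            # expect: crw-rw-rw- 1 root tty ... /dev/ptmx
        # if the options are wrong, remounting with sane defaults usually restores access
        sudo mount -o remount,gid=5,mode=620 /dev/pts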

    Read the article

  • question about a sata read/write speed

    - by Joe
    I'm contemplating biting the bullet on an SSD for a server; obviously I know it will be MUCH faster than a regular 7200 rpm SATA II 3 Gbps drive. The nice thing about SSDs is that they often post their read/write speeds, but that info isn't available for SATA drives, so I'm just curious what a typical read/write speed would be for a Seagate 120 GB 7200 rpm drive. I know it fluctuates by manufacturer and model series, but I'm just looking for a guesstimate.
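    A sketch for measuring rather than guessing (Linux): hdparm gives a rough sequential read figure; consumer 7200 rpm SATA drives of that vintage typically land somewhere around 50-100 MB/s sequential, far below any SSD.

        sudo hdparm -tT /dev/sda    # -T: cached reads (memory), -t: buffered sequential reads from the disk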

    Read the article

  • Read DataTable by RowState

    - by RBrattas
    Hi, I am reading my DataTable as follows:

        foreach (DataRow o_DataRow in vco_DataTable.Rows)
        {
            // insert more records here
        }

    It crashes because I insert more records inside the loop. How can I read my DataTable without reading the new records? Can I read by RowState? Thank you for your excellent work, Rune
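    A sketch of two ways around it (names follow the question; this is not the thread's accepted answer): take a snapshot of the current rows before the loop, or skip rows whose RowState marks them as newly added.

        DataRow[] existing = vco_DataTable.Select();        // snapshot array, unaffected by later inserts
        foreach (DataRow o_DataRow in existing)
        {
            if (o_DataRow.RowState == DataRowState.Added)   // ignore rows added since the snapshot
                continue;
            // insert more rows into vco_DataTable here without breaking the enumeration
        }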

    Read the article

  • Linux Device Driver - what's wrong with my device_read()?

    - by Rob
    My device /dev/my_inc is meant to take a positive integer N represented as an ascii string, and store it. Any read from /dev/my_inc will produce the ascii string representation of N + 1. The problem is that when I cat /dev/my_inc, I only get the first byte of myinc_value output to my shell, even though I have bytes_read == 2 at the end of my loop.

        /* Read from the device *********************/
        static ssize_t device_read(struct file *filp, char *buffer, size_t length, loff_t *offset)
        {
            char c;
            int bytes_read = 0;
            int value = myinc_value + 1;

            printk(KERN_INFO "my_inc device_read() called with value %d and msg %s.\n", value, msg);

            /* No bytes read if pointer to 0 */
            if (*msg_ptr == 0)
                return 0;

            /* Put the incremented value in msg */
            snprintf(msg, MAX_LENGTH, "%d", value);

            /* Put bytes from msg into user space buffer */
            while (length && *msg_ptr) {
                c = *(msg_ptr++);
                printk(KERN_INFO "%s device_read() read %c.", DEV_NAME, c);
                if (put_user(c, buffer++))
                    return -EFAULT;
                length--;
                bytes_read++;
            }

            printk("my_inc device_read() returning %d.\n", bytes_read);

            /* Return bytes copied */
            return bytes_read;
        }
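    For comparison, a sketch of the idiomatic pattern (an alternative, not necessarily the expected fix for this assignment): format the string once per call and let simple_read_from_buffer(), declared in linux/fs.h, do the offset bookkeeping, so cat receives the whole string and then a clean EOF instead of repeated partial reads.

        static ssize_t device_read(struct file *filp, char __user *buffer,
                                   size_t length, loff_t *offset)
        {
            char msg[MAX_LENGTH];
            int len = snprintf(msg, sizeof(msg), "%d", myinc_value + 1);

            /* copies at most `length` bytes starting at *offset and advances *offset */
            return simple_read_from_buffer(buffer, length, offset, msg, len);
        }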

    Read the article

  • Getting EOFException while trying to read from SSLSocket

    - by Isac
    Hi, I am developing an SSL client that will do a simple request to an SSL server and wait for the response. The SSL handshake and the writing go OK, but I can't READ data from the socket. I turned on the java.net.ssl debug output and got the following:

        [..]
        main, READ: TLSv1 Change Cipher Spec, length = 1
        [Raw read]: length = 5
        0000: 16 03 01 00 20 ....
        [Raw read]: length = 32
        [..]
        main, READ: TLSv1 Handshake, length = 32
        Padded plaintext after DECRYPTION: len = 32
        [..]
        * Finished verify_data: { 29, 1, 139, 226, 25, 1, 96, 254, 176, 51, 206, 35 }
        %% Didn't cache non-resumable client session: [Session-1, SSL_RSA_WITH_RC4_128_MD5]
        [read] MD5 and SHA1 hashes: len = 16
        0000: 14 00 00 0C 1D 01 8B E2 19 01 60 FE B0 33 CE 23 ..........`..3.#
        Padded plaintext before ENCRYPTION: len = 70
        [..] a.j.y.
        main, WRITE: TLSv1 Application Data, length = 70
        [Raw write]: length = 75
        [..]
        Padded plaintext before ENCRYPTION: len = 70
        [..]
        main, WRITE: TLSv1 Application Data, length = 70
        [Raw write]: length = 75
        [..]
        main, received EOFException: ignored
        main, called closeInternal(false)
        main, SEND TLSv1 ALERT: warning, description = close_notify
        Padded plaintext before ENCRYPTION: len = 18
        [..]
        main, WRITE: TLSv1 Alert, length = 18
        [Raw write]: length = 23
        [..]
        main, called close()
        main, called closeInternal(true)
        main, called close()
        main, called closeInternal(true)

    The [..] are the certificate chain. Here is a code snippet:

        try {
            System.setProperty("javax.net.debug", "all");
            /*
             * Set up a key manager for client authentication
             * if asked by the server. Use the implementation's
             * default TrustStore and secureRandom routines.
             */
            SSLSocketFactory factory = null;
            try {
                SSLContext ctx;
                KeyManagerFactory kmf;
                KeyStore ks;
                char[] passphrase = "importkey".toCharArray();
                ctx = SSLContext.getInstance("TLS");
                kmf = KeyManagerFactory.getInstance("SunX509");
                ks = KeyStore.getInstance("JKS");
                ks.load(new FileInputStream("keystore.jks"), passphrase);
                kmf.init(ks, passphrase);
                ctx.init(kmf.getKeyManagers(), null, null);
                factory = ctx.getSocketFactory();
            } catch (Exception e) {
                throw new IOException(e.getMessage());
            }
            SSLSocket socket = (SSLSocket) factory.createSocket("server ip", 9999);
            /*
             * send http request
             *
             * See SSLSocketClient.java for more information about why
             * there is a forced handshake here when using PrintWriters.
             */
            SSLSession session = socket.getSession();
            [build query]
            byte[] buff = query.toWire();
            out.write(buff);
            out.flush();
            InputStream input = socket.getInputStream();
            int readBytes = -1;
            int randomLength = 1024;
            byte[] buffer = new byte[randomLength];
            while ((readBytes = input.read(buffer, 0, randomLength)) != -1) {
                LOG.debug("Read: " + new String(buffer));
            }
            input.close();
            socket.close();
        } catch (Exception e) {
            e.printStackTrace();
        }

    I can write multiple times and I don't get any error, but the EOFException happens on the first read. Am I doing something wrong with the socket or with the SSL authentication? Thank you.

    Read the article

  • how to read a User uploaded file, without saving it to the database

    - by GoodGets
    I'd like to be able to read an XML file uploaded by the user (less than 100kb), but not have to first save that file to the database. I don't need that file past the current action (its contents get parsed and added to the database; however, parsing the file is not the problem). Since local files can be read with: File.read("export.opml") I thought about just creating a file_field for :uploaded_file, then trying to read it with File.read(params[:uploaded_file]) but all that does is throw a TypeError (can't convert HashWithIndifferentAccess into String). I really have tried a lot of various things (including reading from the /tmp directory as well), but could get none of them to work. I hope the brevity of my question doesn't mask the effort I've given to try to solve this on my own, but I didn't want to pollute this question with a hundred ways of how NOT to get it done. Big thanks to anyone who chimes in.
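    A sketch of the usual Rails approach (parse_opml is a hypothetical helper standing in for the existing parsing code): the uploaded param is not a path string but an object wrapping a Tempfile, so it can be read directly in the controller action without ever touching the database.

        def import
          upload = params[:uploaded_file]
          xml = upload.read                      # or File.read(upload.tempfile.path) on newer Rails
          parse_opml(xml)                        # hypothetical: parse and store the entries, not the file
        end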

    Read the article

  • Need some help understanding IO Statistics

    - by Abe Miessler
    I have a query that has a very costly index seek operation in the execution plan. In order to track down the cause I set STATISTICS IO ON and ran it. For the problem section it gave the following statistics:

        Table '#TempStudents_Enrollment2_____________________________________000000004D5F'. Scan count 0, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#TempRace2______________________________________________000000004D58'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'RefRace'. Scan count 120, logical reads 240, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'RefFedEnctyRaceCatg'. Scan count 18, logical reads 36, physical reads 2, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#43B0BA0F'. Scan count 1, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#42BC95D6'. Scan count 1, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#41C8719D'. Scan count 1, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#40D44D64'. Scan count 1, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#LEA2_________________________________________________000000004D56'. Scan count 1, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#39332B9C'. Scan count 1, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#School2________________________________________________000000004D57'. Scan count 1, logical reads 29164, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#GenderKey______________________________________________000000004D5A'. Scan count 1, logical reads 29164, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#LangAcqKey_____________________________________________000000004D5B'. Scan count 1, logical reads 29164, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#TransferCatKey___________________________________________000000004D5C'. Scan count 1, logical reads 29164, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#ResCatKey______________________________________________000000004D5D'. Scan count 1, logical reads 29164, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'RPT_SnapShot_1_4_StuPgm_Denorm'. Scan count 2344954, logical reads 4992518, physical reads 16, read-ahead reads 8, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#3FE0292B'. Scan count 1, logical reads 2344954, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'RPT_SnapShot_1_4_StuEnrlmt_Denorm'. Scan count 20, logical reads 87679, physical reads 0, read-ahead reads 87425, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#GradeKey_______________________________________________000000004D59'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    What should I look for here when I'm trying to improve the performance? The line with over 2 million for the scan count looked suspicious to me, but I really don't know. Does anyone see anything here that I should look into in more detail?
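    A sketch for interpreting the numbers (an assumption about the likely cause, not a confirmed diagnosis): logical reads are 8 KB buffer-pool pages, and a scan count of 2,344,954 on RPT_SnapShot_1_4_StuPgm_Denorm means that table is being probed once per outer row, which usually points at a nested-loops join driven by the costly index seek and missing a covering index.

        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;
        -- run the problem query, then look for the table with the largest
        -- (scan count x logical reads) product; a covering index on that table's
        -- join and filter columns is the usual first fix, e.g.:
        -- CREATE INDEX IX_StuPgm_Covering ON RPT_SnapShot_1_4_StuPgm_Denorm (<join column>) INCLUDE (<selected columns>);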

    Read the article

  • Squid SSL transparent proxy - SSL_connect:error in SSLv2/v3 read server hello A

    - by larryzhao
    I am trying to set up an SSL proxy for one of my internal servers to visit https://www.googleapis.com using Squid, so that my Rails application on that server reaches googleapis.com via the proxy. I am new to this, so my approach is to set up an SSL transparent proxy with Squid. I built Squid 3.3 on Ubuntu 12.04, generated a pair of SSL key and crt, and configured Squid like this:

        http_port 443 transparent cert=/home/larry/ssl/server.csr key=/home/larry/ssl/server.key

    and left almost all other configuration at the defaults. The permissions of the directory that holds the key/crt are:

        drwxrwxr-x 2 proxy proxy 4096 Oct 17 15:45 ssl

    Back on my dev laptop, I put <proxy-server-ip> www.googleapis.com in my /etc/hosts to make the call go to my proxy server. But when I try it in my Rails application, I get:

        SSL_connect returned=1 errno=0 state=SSLv2/v3 read server hello A: unknown protocol

    And I also tried with openssl on the command line:

        openssl s_client -state -nbio -connect www.googleapis.com:443 2>&1 | grep "^SSL"
        SSL_connect:before/connect initialization
        SSL_connect:SSLv2/v3 write client hello A
        SSL_connect:error in SSLv2/v3 read server hello A
        SSL_connect:error in SSLv2/v3 read server hello A

    Where did I go wrong?
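    A sketch of roughly what an HTTPS-intercepting port definition looks like in Squid 3.3 (the directives here are an assumption, not taken from the thread): "unknown protocol" usually means the client spoke TLS to a port that answered in plain HTTP, which is what http_port 443 transparent does; Squid also needs the certificate itself (.crt/.pem), not the .csr signing request.

        https_port 443 intercept ssl-bump cert=/home/larry/ssl/server.crt key=/home/larry/ssl/server.key
        ssl_bump server-first all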

    Read the article

  • Formula to calculate probability of unrecoverable read error during RAID rebuild

    - by OlafM
    I need to compare the reliability of different RAID systems with either consumer or enterprise drives. The formula for the probability of a rebuild succeeding, ignoring mechanical problems, is simple:

        error_probability = 1 - (1 - per_bit_error_rate)^bits_read

    and with 3 TB drives I get:

        38% probability of experiencing a URE (unrecoverable read error) for a 2+1-disk RAID5 (4.7% for enterprise drives)
        21% for a RAID1 (2.4% for enterprise drives)
        51% probability of error during recovery for the 3+1 RAID5 often used in SOHO products like Synology's

    Most people don't know about this. Calculating the error for single-disk tolerance is easy; my question concerns systems tolerant to multiple disk failures (RAID6/Z2, RAIDZ3, and RAID1 with multiple disks). If only the first disk is used for the rebuild and the second one is read again from the beginning in case of a URE, then the error probability is the one calculated above squared (14.5% for consumer RAID5 2+1, 4.5% for consumer RAID1 1+2). However, I suppose (at least in ZFS, which has full checksums!) that the second parity/available disk is read only where needed, meaning that only a few sectors are needed: how many UREs can possibly happen on the first disk? Not many, otherwise the error probability for single-disk-tolerance systems would skyrocket even more than I calculated. If I'm correct, a second parity disk would practically lower the risk to extremely low values. Am I correct?
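    A quick sketch reproducing the numbers above (consumer drives are specified at one URE per 1e14 bits read, enterprise at one per 1e15; a rebuild has to read every surviving data disk in full):

        def p_ure(data_disks, disk_tb, ber=1e-14):
            bits_read = data_disks * disk_tb * 1e12 * 8
            return 1 - (1 - ber) ** bits_read

        print(p_ure(2, 3))             # RAID5 2+1, consumer   -> ~0.38
        print(p_ure(1, 3))             # RAID1 mirror rebuild  -> ~0.21
        print(p_ure(3, 3))             # RAID5 3+1, consumer   -> ~0.51
        print(p_ure(2, 3, ber=1e-15))  # RAID5 2+1, enterprise -> ~0.047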

    Read the article

  • Have only read access to Samba

    - by Tahir Malik
    Hi, I've been struggling a lot with Samba on CentOS 5.5 lately. I develop on Windows 7 and send files through scp (an ant task), but it's too slow, so I wanted to set up Samba properly. After installing and following some guides I've done the following: disabled the firewall (iptables); disabled SELinux (didn't do that at the start, but it didn't help either); set up my smbusers file to map my Windows user to root (root = "Tahir Malik" -- works); added a current user, mitco, to the Samba password database with smbpasswd -a mitco, because the Windows user had only read access. Still, both users have only read access to my share. Here is a snippet of my smb.conf:

        [global]
        workgroup = MITCO
        server string = Samba Server Version %v
        netbios name = centos
        ; interfaces = lo eth0 192.168.12.2/24 192.168.13.2/24
        ; hosts allow = 127. 192.168.12. 192.168.13.

        [alf4]
        comment = Alfresco 4
        path = /opt
        read only = no
        valid users = mitco, mitco
        force user = root
        force group = root
        admin users = mitco , mitco
        writeable = yes
        ; browseable = yes

    What may also be important is that /opt is only writable by root, but that shouldn't matter because I use force user and group, or admin users. The log file:

        [2012/09/29 07:43:44, 0] smbd/server.c:main(958)
        smbd version 3.0.33-3.39.el5_8 started.
        Copyright Andrew Tridgell and the Samba Team 1992-2008
        [2012/09/29 07:43:59, 1] smbd/service.c:make_connection_snum(1085)
        mitco-tahir (192.168.13.1) connect to service alf4 initially as user root (uid=0, gid=0) (pid 5228)
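    A sketch of two quick checks (not from the thread): testparm shows the share definition smbd actually loaded after applying its defaults, and smbstatus shows which Unix user each session was mapped to, which is usually where a force user setting silently fails.

        testparm -s | sed -n '/\[alf4\]/,/^$/p'    # effective settings for the share
        smbstatus -b                               # connected users and the Unix account they mapped to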

    Read the article

  • Remote Socket Read In Multi-Threaded Application Returns Zero Bytes or EINTR (104)

    - by user39891
    Hi. I've been a C coder for a while now - neither a newbie nor an expert. I have a certain daemonized application in C on a PPC Linux box. I use PHP's socket_connect as a client to connect to this service locally. The server uses epoll for multiplexing connections via a Unix socket. A user-submitted string is parsed for certain characters/words using strstr() and, if found, spawns 4 joinable threads to different websites simultaneously. I use socket, connect, write and read to interact with the said webservers via TCP on port 80 in each thread. All connections and writes seem successful. Reads from the webserver sockets fail, however, with either (A) all 3 other threads seeming to hang, and only one thread returning -1 with errno set to 104 (ECONNRESET, "connection reset by peer"); the responding thread takes something like 10 minutes - an eternity; or (B) 0 bytes from 3 threads, and only 1 of the 4 threads actually returning some data. Isn't socket read/write thread-safe? I use thread-safe (and reentrant) libc functions such as strtok_r, gethostbyname_r, etc. I doubt that the said webhosts are actually resetting the connection, because when I run a single-threaded standalone version (everything else equal) everything works perfectly, though of course in series, not in parallel. There's a second problem too (oops): I can't write back to the client that connects to my epoll-ed Unix socket. My daemon application will hang and hog the CPU at 100% forever, yet nothing is written to the client's end. I'm sure the client (a very typical PHP socket application) hasn't closed the connection whenever this happens - no error(s) detected either. Any ideas? I cannot figure out what is wrong even with Valgrind, GDB or lots of logging. Kindly help where you can.
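    Per-socket reads and writes are thread-safe as long as each thread works on its own file descriptor and buffer; a defensive read loop like the sketch below (not the poster's code) separates the three cases being conflated here: orderly close (0), a signal-interrupted call (EINTR, retry), and a real reset (ECONNRESET).

        #include <errno.h>
        #include <unistd.h>

        /* read until the peer closes or the buffer is full; returns -1 on a real error */
        static ssize_t read_all(int fd, char *buf, size_t cap)
        {
            size_t got = 0;
            while (got < cap) {
                ssize_t n = read(fd, buf + got, cap - got);
                if (n == 0)                 /* orderly close by the server */
                    break;
                if (n < 0) {
                    if (errno == EINTR)     /* interrupted by a signal: just retry */
                        continue;
                    return -1;              /* real error, e.g. ECONNRESET (104) */
                }
                got += (size_t)n;
            }
            return (ssize_t)got;
        }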

    Read the article

  • Sharepoint 2007: Disabling Edit/Read Only mode?

    - by TheGambler
    If I open a doc in read-only mode I'm able to press Save; it then opens a Save As box whose default directory is the directory on the SharePoint server, and if you press Save you save it back to the server. This makes the whole process not really "read-only", since I could actually update the document. Is there a way to prevent this, so that if someone chooses read-only there is no way to upload any changes back to the SharePoint site? It has also been suggested as a solution to get rid of the Edit/Read Only prompt so that people have to check out the document. Is there a way to remove the Edit/Read Only option on documents?

    Read the article

  • Cannot properly read files on the local server

    - by Andrew Bestic
    I'm running a RedHat 6.2 Amazon EC2 instance using stock Apache and IUS PHP53u+MySQL (+mbstring, +mysqli, +mcrypt), and phpMyAdmin from git. All configuration is near-vanilla, assuming the described installation procedure. I've been trying to import SQL files into the database using phpMyAdmin to read them from a directory on my server. phpMyAdmin lists the files fine in the drop down, but returns a "File could not be read" error when actually trying to import. Furthermore, when trying to execute file_get_contents(); on the file, it also returns a "failed to open stream: Permission denied" error. In fact, when my brother was attempting to import the SQL files using MySQL "SOURCE" as an authenticated MySQL user with ALL PRIVILEGES, he was getting an error reading the file. It seems that we are unable to read/import these files with ANY method other than root under SSH (although I can't say I've tried every possible method). I have never had this issue under regular CentOS (5, 6, 6.2) installations with the same LAMP stack configuration.

    Some things I've tried after searching Google and StackExchange:

        CHMOD 0777 both directory and files
        CHOWN root, apache (only two users I can think of that PHP would use)
        Importing SQL files with total size under both upload_max_filesize and post_max_size
        PHP open_basedir commented out, or = "/var/www" (my sites are using Apache VirtualHosts within that directory, and all the SQL files are deep within that directory)
        PHP safe mode is OFF (it was never ON)

    At the moment I have solved this issue with the smaller files by using the FILE UPLOAD method directly to phpMyAdmin, but this will not be suitable for uploading my 200+ MiB SQL files as I don't have a stable Internet connection. Any light you could shed on this situation would be greatly appreciated. I'm fair with Linux, and for the things that do stump me, Google usually has an answer. Not this time, though!
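    A sketch of the usual checks when a file is unreadable to everything except root despite 0777 on the file itself (the path below is an example): every directory component of the path needs the execute bit for the Apache/PHP user, and on RedHat-family systems SELinux can deny the read regardless of the classic permissions.

        namei -l /var/www/site/sql/dump.sql                  # permissions of every component of the path
        sudo -u apache head -c1 /var/www/site/sql/dump.sql   # try the read as the web-server user
        getenforce                                           # if Enforcing, look for AVC denials in /var/log/audit/audit.log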

    Read the article
