Search Results

Search found 30713 results on 1229 pages for 'matthew read'.


  • Apple Mail clones Gmail account folders and gets out of sync when tracking unread emails

    - by Petruza
    The Gmail (fc.mm.mp.lh is Gmail also) accounts that I've set up with Mail automatically created a second folder for each of the accounts, the ones you can see in ALL CAPS at the bottom. I guess these folders represent the webmail accounts, while the folders inside Inbox represent the POP accounts, despite them being the same account. The thing is, as you can see, while the Inbox accounts have no unread mail, their "all caps" counterparts show as if they had some unread mail. This is not the normal behavior; when I mark an email as read, it is "read" in both versions of the account, but from time to time they kind of get "out of sync" and the bottom folders start to show unread emails that were actually read. Have you seen this behavior before? What can I do? I don't use the bottom "folders", but I can't get rid of them anyway. It's just that their unread-message notification annoys me because there aren't actually any unread mails.

    Read the article

  • Continuing permissions issues - ASP.net, IIS 7, Server 2008 - 0x80070005 (http 500.19) error

    - by Re-Pieper
    I created an ASP.NET MVC web application and I am trying to set up IIS. The error: HTTP error 500.19, error code 0x80070005, Cannot read configuration file due to insufficient permissions, config file: C:\inetpub\wwwroot\BudgetManagerMain\BudgetManager\web.config. If I set the AppPool to use 'administrator' I have no problems and can access the site just fine. If I set it to NETWORK SERVICE (or anything else, including self-created admin or non-admin user accounts), I get the above error. Things I have tried: the identity for the application pool named 'test' is 'NetworkService'; set full access privileges for wwwroot and all children files/folders; verified effective permissions and NETWORK SERVICE has full access; authentication on my site is set to anonymous and running under the application pool identity; I do not have any physical path credentials set on the website; confirmed the website is set to run under the application pool named 'test'. Using Process Monitor, here is a summary of what I found on the ACCESS DENIED event. EVENT TAB: Class: File System, Operation: CreateFile, Result: Access Denied, Path: ..\web.config, Desired Access: Generic Read, Disposition: Open, Options: Synchronous IO Non-Alert, Non-Directory File, Attributes: N, ShareMode: Read, AllocationSize: n/a. PROCESS TAB: ...lots of stuff that seems irrelevant; User: NT AUTHORITY\NETWORK SERVICE
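
    One thing worth checking when this error appears is whether the identity the pool actually runs under can read the site folder at the NTFS level. A hedged sketch using icacls, assuming the site lives under C:\inetpub\wwwroot\BudgetManagerMain and the pool is named 'test' (paths and pool name come from the question; the commands are an illustration, not a confirmed fix):

        rem grant read/execute, inherited by subfolders and files, to the built-in account
        icacls "C:\inetpub\wwwroot\BudgetManagerMain" /grant "NT AUTHORITY\NETWORK SERVICE:(OI)(CI)RX" /T

        rem or, if the pool runs under its own virtual identity instead
        icacls "C:\inetpub\wwwroot\BudgetManagerMain" /grant "IIS AppPool\test:(OI)(CI)RX" /T

    After changing the ACL, recycling the application pool and re-running Process Monitor shows whether the CreateFile on web.config still comes back Access Denied.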

    Read the article

  • Anonymous user permission issue in SharePoint Server

    - by George2
    Hello everyone, I am using SharePoint Server 2007 x64 and Windows Server 2008 x64. I have set up a site with the publishing portal template and have granted anonymous access to the whole site. My question is, if I create a new page, how do I grant the anonymous user permission to access (read) the page? My confusion is that the permission settings for a page are defined per user name (e.g. read permission for user "foo" on a page). Since the anonymous user does not have a related "user name", how can I grant read permission to the anonymous user? BTW: I use Windows Forms authentication and Windows NTLM authentication for my sites. Thanks in advance, George

    Read the article

  • Trying to find the life expectancy of an unused flash card like SD

    - by wsams
    I read in the What's the life expectancy of an SD card? post that SD cards are rated to hold data for something like 10 years sitting idle. I recall reading (not sure where) about re-energizing cards by occasionally inserting them into a reader. Everything I read rates cards in read/write cycles and not physical decay. I'm wondering if buying a new SD card for every photo shoot would be beneficial if I could store the cards in a lock box. I was hoping for something much longer. Does anyone else agree with 10 years, or maybe something more?

    Read the article

  • What would happen in a Software Raid 1 of one HDD and one SSD?

    - by Adrian Grigore
    Hi, I'm running my Windows 7 installation and all of my apps from an SSD for performance reasons. Since SSDs can instantly die at any moment, I'm looking for some kind of data backup strategy. Right now I regularly back up the drive image to a hard disk, but that only happens once per day, which is not enough for my taste. So I got an idea: what if I created a software RAID 1 of the SSD and a partition on my hard disk? All data would be mirrored on both drives, making this a lot safer. But what about performance? Will Windows 7 detect that the SSD is faster than the hard drive and always read from the SSD? Or will it randomly read from both, thus reducing read performance? Thanks, Adrian Edit: I just found this article which basically answers my question. Feel free to close this post.

    Read the article

  • RAID Array performance on an HP Proliant ML350 G5 Smart Array E200i

    - by Nate Pinchot
    We have a client who is complaining about the performance of an application which utilizes an MS SQL database. They do not believe the performance issues are the fault of the application itself. The Smart Array E200i RAID controller has 128MB cache and we have the cache set to 75% read / 25% write. The disk array is set to enable write caching. Recently we ran a disk performance test using SQLIO based on this guide. We used a 10 GB file for the test and found that the average sequential read rate was ~60 MB/sec (megabytes/sec) and the average random read rate was ~30 MB/sec. Are these numbers on par with what the server should be delivering? Better than on par? Horrible? Amazing?
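
    For reference, a hedged sketch of the kind of SQLIO invocation such a test typically uses, assuming a parameter file pointing at a 10 GB test file on the array (the path, duration and block sizes here are illustrative, not the exact values from the guide):

        rem param.txt:  E:\testfile.dat 2 0x0 10240   (file, threads, CPU mask, size in MB)
        sqlio -kR -s120 -fsequential -o8 -b64 -LS -Fparam.txt
        sqlio -kR -s120 -frandom -o8 -b8 -LS -Fparam.txt

    Running the same parameter file with -kW gives the corresponding write figures.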

    Read the article

  • setting default permissions for new folders & files in Mac OS X

    - by sagar
    Suppose I have created a new folder. By default, "everyone" is set to "no access", other users get "read only", and only the owner has "read & write" access to the folder. But I want newly created folders & files of mine to get "read & write" access for every user. How is that possible? Thanks in advance for sharing your knowledge. Sagar.
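
    A hedged sketch of one common approach on OS X, assuming the files live under a folder such as ~/Shared (the path is only an example): an inherited ACL entry makes everything created inside the folder pick up the extra access, independent of the POSIX defaults.

        # grant read/write to everyone and have new items inherit the entry
        chmod +a "everyone allow list,add_file,search,add_subdirectory,delete_child,readattr,writeattr,file_inherit,directory_inherit" ~/Shared

        # the entry shows up with ls -le
        ls -le ~/Shared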

    Read the article

  • Create Hidden Partition on USB

    - by Francesco
    I need to split a USB flash disk into two drives, each one with its own drive letter, but one of them has to be hidden. In the non-hidden partition I want to place my software, and in the hidden partition I need to place some files that are required by the software in order to work. Moreover, only the software may read, write, delete or execute the files in this partition. I thought of using a small partition presented as a CD-ROM drive, as many flash drives do, but this solution does not allow writing other data at a later time, and it is visible to the user, who can read the files. Obviously the software must be able to access the partition and read, write, delete or execute the content. Is there a solution to do this, possibly one that also works on Linux?

    Read the article

  • Piping stream into tar on FreeBSD

    - by Casey Jordan
    I am trying to pipe a tar/gzip archive into tar to decompress it. The script I have is part of a self-extracting installer, where my archive is appended to the script. This works fine on Linux, and the script looks like this:

        export TMPDIR=`mktemp -d /tmp/selfextract.XXXXXX`
        echo "TEMP: $TMPDIR"
        ARCHIVE=`awk '/^__ARCHIVE_BELOW__/ {print NR + 1; exit 0; }' $0`
        tail -n+$ARCHIVE $0 | tar xz -C $TMPDIR
        exit 0
        __ARCHIVE_BELOW__

    The tar archive as a string comes after the __ARCHIVE_BELOW__ marker, but I omitted it from here since it's huge. However, when I do this on FreeBSD I get the following error: tar: Failed to open '/dev/sa0' I read that this is because FreeBSD's tar expects to read from that device by default and you can tell it to read from stdin by passing -f -, like so: tail -n+$ARCHIVE $0 | tar zxf - -C $TMPDIR However, when I do this I just get the error: tar: Damaged tar archive tar: Retrying... Can anyone point out what I am doing wrong here? I need to do it this way (via piping) for efficiency reasons. Thanks
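
    One hedged debugging variation (a sketch, not a confirmed fix): separating the gzip step from the tar step shows whether FreeBSD tar's built-in decompression or the appended data itself is at fault.

        TMPDIR=`mktemp -d /tmp/selfextract.XXXXXX`
        ARCHIVE=`awk '/^__ARCHIVE_BELOW__/ {print NR + 1; exit 0; }' "$0"`
        # gunzip does the decompression; tar only reads an uncompressed stream from stdin
        tail -n +$ARCHIVE "$0" | gunzip -c | tar -xf - -C "$TMPDIR"

    If gunzip itself reports a corrupt stream, the problem lies in how the archive was appended to the script rather than in the tar invocation.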

    Read the article

  • cannot move file: "file cannot be found" (Windows XP)

    - by Steve
    I have some CR2 files in a subfolder of My Documents called My Photos on a Windows XP PC. I want to move them across a WIFI network to an external HDD attached to a Windows 7 PC. I have read/write permissions on the external HDD, as I mapped to this HDD using the Windows 7 user account. When I try to move a single CR2 file, I receive "Cannot copy IMG_3317: Cannot find the specified file. Make sure you specify the correct path and file name." If I refresh the source folder, the file is still there. It is not read only, and I have read/write access to the source file. I can view its properties. Why can't I move this file? I have been able to move similar files in the past.

    Read the article

  • Mutt seems to sync to Gmail IMAP only on quit

    - by Sergey
    I am using Mutt 1.5.20 in the Mac OS X Terminal. I have a Google mail account whose mail I fetch via IMAP. I also use a Gmail notifier app to notify me of new e-mail messages. I have only been using Mutt for about a week. The trouble is this: when my Gmail notifier tells me about a new e-mail, I hit the Terminal to open Mutt. I can read the message and Mutt will mark it as read. However, Google's servers are not told that the message is read until Mutt is closed. Thus, my Gmail notifier continues to show a misleading unread count. How can I force Mutt to synchronize with IMAP without having to quit every time I finish reading my e-mail? Preferably the sync will occur instantaneously, but a periodic sync would be satisfactory as well.
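
    A hedged muttrc sketch (assuming the IMAP account is already configured): pressing $ in the index runs <sync-mailbox>, which writes pending flag changes back to the server immediately, and the polling-related variables below control how often Mutt talks to the server while idle.

        # in ~/.muttrc -- poll the open mailbox more often while Mutt sits idle
        set timeout = 60      # seconds to wait for a keypress before checking mail
        set mail_check = 10   # minimum seconds between mailbox checks
        set imap_keepalive = 300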

    Read the article

  • How to diagnose very slow pagefile

    - by svick
    Quite often, one of the applications I use freezes ("does not respond") for a while, in extreme cases for a few minutes. This happens especially when switching apps. During this time, the HDD light flashes constantly and perfmon shows that the HDD is used 100% of the time (OTOH, the CPU isn't) and that the pagefile is being read (which is to be expected when switching apps), but at a very slow rate. When I sort the disk table in perfmon by reads or writes, the file read and written the most is the pagefile, but still at quite a low rate (I don't remember the numbers). How can I diagnose what's causing this? I use Windows Vista, and the computer is a quite ordinary two-year-old laptop.

    Read the article

  • Hyperthreading vs. SQL Server & PostgreSQL

    - by IanC
    I have read that hyperthreading is a "performance killer" when it comes to DBs. However, what I read didn't state which CPUs. Further, it mostly indicated that I/O was "cut to < 10% performance". That logically doesn't make sense since I/O is primarily a function of controllers and disks, not CPUs. But then no one ever said bugs made sense. What I read also stated that SQL Server could put two parallel query ops onto 1 logical core (2 threads), thereby degrading performance. I have a hard time believing SQL Server's architects would have made such an obvious miscalculation. Does anyone have any data on how hyperthreading on current-generation CPUs affects either of the RDBMSs I mentioned?

    Read the article

  • Unable to play bluray movies or see bluray discs

    - by Jason Shultz
    I have an Optiarc BD ROM BC-5500S ATA Device, and when I put Blu-ray discs into the drive the computer doesn't read them. I hear the drive make three noises as if it's trying to read them, but nothing shows up in Windows Explorer. I am running Windows 7 64-bit. I can read DVDs and CDs just fine; Blu-ray seems to be the only problem. I have installed PowerDVD9, HP DVD Play and HP DVD Media Smart and nothing helps. My laptop is an HP dv7-1273cl. I bought it before the free upgrades to Windows 7 were available, so I wasn't able to get the upgrade, and so HP won't assist me beyond telling me to format and reinstall Vista.

    Read the article

  • Mac and Windows 7 file sharing with a specific user

    - by all-R
    Hi guys, I'm trying to share a specific directory with my Windows 7 computer, but I want it to use a specific user that I created on my Mac to connect to it. I saw this tutorial: http://www.trickyways.com/2010/06/how-to-access-mac-files-from-windows-7/ which is exactly what I want to do, but it isn't working. For some reason, it never prompts me for a username/password when I try to connect to my Mac from Windows 7. On top of that, when I set the permission "No Access" for the "Everyone" user on my Mac, my Windows computer simply doesn't see the directory. If I set the permission to "Read/Write" or "Read only" it works. I simply don't want everyone in my workgroup to be able to read my files. I want to create specific users on my Mac and share with only the people I want... Any thoughts?

    Read the article

  • How to enlarge a .PDF document to better show it in a Kindle 6"?

    - by Gus
    I have a Kindle 6". The problem is that I often read PDF files that are technical, and therefore they don't get converted very well to the Kindle's native format (for example, code blocks get messed up, and things like that). When I view the PDF page, it's too small to read easily, so I have to rotate the screen to a horizontal position in order to see it better, but then the page gets divided. Some documents would be easy to read in the vertical position if I had the chance to enlarge the font size a little bit in an external PDF editor, therefore enabling reading in the vertical orientation. Has anybody had the same situation? Is there a solution for that?
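
    One hedged workaround sketch, assuming the pdfcrop utility from a TeX distribution is available: trimming the white margins makes the remaining text occupy more of the Kindle's screen when the page is scaled to fit, which can make vertical reading feasible without touching the fonts.

        # keep a small 5pt border, write a cropped copy alongside the original
        pdfcrop --margins 5 input.pdf input-cropped.pdf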

    Read the article

  • Logstash shipper & server on the same box

    - by keftes
    I'm trying to set up a central logstash configuration. However, I would like to be sending my logs through syslog-ng and not third-party shippers. This means that my logstash server is accepting, via syslog-ng, all the logs from the agents. I then need to install a logstash process that will be reading from /var/log/syslog-clients/* and grabbing all the log files that are sent to the central log server. These logs will then be sent to redis on the same VM. In theory I also need to configure a second logstash process that will read from redis, start indexing the logs and send them to elasticsearch. My question: do I have to use two different logstash processes (shipper & server) even if I am on the same box (I want one log server instance)? Is there any way to just have one logstash configuration and have the process read from syslog-ng --- write to redis and also read from redis --- output to elasticsearch? Diagram of my setup: [client]-------syslog-ng--- [log server] ---syslog-ng <----logstash-shipper --- redis <----logstash-server ---- elastic-search <--- kibana

    Read the article

  • How to edit and enlarge the font in a PDF document? [closed]

    - by Gus
    Possible Duplicate: How to enlarge a PDF document on Kindle? I have a Kindle 6". The problem is that I often read PDF files that are technical, and therefore they don't get converted very well to the Kindle's native format (for example, code blocks get messed up, and things like that). When I view the PDF page, it's too small to read easily, so I have to rotate the screen to a horizontal position in order to see it better, but then the page gets divided. Some documents would be easy to read in the vertical position if I had the chance to enlarge the font size a little bit in an external PDF editor, therefore enabling reading in the vertical orientation. Is there a way to change the font size in a PDF file?

    Read the article

  • SELinux adding new allowed samba type to access httpd_sys_content_t?

    - by Josh
    allow samba_share_t httpd_sys_content_t {read execute getattr setattr write}; allow smbd_t httpd_sys_content_t {read execute getattr setattr write}; Based on resources I've looked at in various places, I am taking a stab in the dark that the above policies are what I want. I basically want to allow Samba to write to my web docs without giving it free access to the operating system. I read a post by an NSA rep saying the best way was defining a new type and allowing both Samba and httpd access. Setting the content to public content (public_content_rw_t) does not work without making use of some unrestrictive booleans. In short, how do I allow Samba to access a new type?
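
    A hedged sketch of how a rule like this is usually packaged as a local policy module, assuming the checkmodule/semodule toolchain is installed (note that allow rules need an object class, which the lines above omit; the permission sets here are illustrative, not audited):

        # samba_httpd_rw.te
        module samba_httpd_rw 1.0;

        require {
            type smbd_t;
            type httpd_sys_content_t;
            class file { read write create unlink append getattr setattr };
            class dir { read write search add_name remove_name getattr setattr };
        }

        allow smbd_t httpd_sys_content_t:file { read write create unlink append getattr setattr };
        allow smbd_t httpd_sys_content_t:dir { read write search add_name remove_name getattr setattr };

    Built and loaded with:

        checkmodule -M -m -o samba_httpd_rw.mod samba_httpd_rw.te
        semodule_package -o samba_httpd_rw.pp -m samba_httpd_rw.mod
        semodule -i samba_httpd_rw.pp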

    Read the article

  • How can I get vim to set an ACL on its swap files?

    - by thsutton
    I use vim on an OS X Snow Leopard Server machine. A number of the directories I work in have ACLs (so that various groups of users can access them over AFP) that are inherited. For some reason, when I'm working in one of these directories, vim cannot read its own swap files. It can create them fine but can't read them, which, for some reason, makes it display the "swap file already exists" message (and no, the swap file does not already exist). vim -r lists the newly created swap file as "[cannot be read]". The owner and group are correct and the permissions are 0600, and the ACLs on the swap file and the file I'm editing are identical (as disclosed by ls -le and compared with diff). groups returns the same thing whether invoked from my login shell or via :! in vim. Has anyone encountered (and hopefully resolved) a problem like this before?
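
    A hedged workaround sketch while the ACL behaviour is being tracked down, assuming it is acceptable for swap files to live outside the edited directories: vim's 'directory' option controls where swap files are created, and the trailing // makes the swap file name encode the full path of the edited file so names do not collide.

        " in ~/.vimrc
        set directory=~/.vim/swap//

    with the directory created once beforehand (mkdir -p ~/.vim/swap).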

    Read the article

  • Is a larger hard drive with the same cache, rpm, and bus type faster?

    - by Joel Coehoorn
    I recently heard that, all else being equal, larger hard drives are faster than smaller ones. It has to do with more bits passing under the read head as the drive spins - since a large drive packs the bits more tightly, the same amount of spin/time presents more data to the read head. I had not heard this before, and was inclined to believe that the read heads expected bits at a specific rate and the drive would instead stagger the data, so that the two drives would be the same speed. I now find myself looking at purchasing one of two computer models for the school where I work. One model has an 80GB drive, the other a 400GB (for ~$13 more). The size of the drive is immaterial, since users will keep their files on a file server where they can be backed up. But if the 400GB drive will really deliver a performance boost to the hard drive, the extra money is probably worth it. Thoughts?
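
    A rough worked example of that reasoning, using assumed numbers: at 7200 RPM a platter turns 7200 / 60 = 120 times per second, so if an outer track holds 0.5 MB the head sees about 120 x 0.5 = 60 MB/s sequentially; a drive that packs twice as many bits per track delivers roughly 120 MB/s from the same rotation. Random access is a different matter, since seek time and rotational latency do not improve with areal density.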

    Read the article

  • Unmounted root partition

    - by Jack
    My server running Debian lenny recently had a power cut and has come back up with the root partition in read-only mode. I tried to remount the filesystem in read-write mode with mount -n -o remount,rw / which then gave the output mount: block device /dev/hda1 is write-protected, mounting read-only. But now the root filesystem isn't mounted at all, so I can't run anything to mount the partition again, or any other command for that matter, such as shutdown, because /bin/ isn't there. Is there anything I can do remotely?

    Read the article

  • OS X Snow Leopard, change file permissions on copy

    - by Francesco K
    I work with OS X, Snow Leopard and need to allow users to make copies of files (templates) located in a read-only repository for subsequent editing. The repository is located on a separate physical drive mounted to the OS X boot volume. As this is a shared computer in a school environment, all users access the machine via a single login ("user_local"). Whether using POSIX permissions or ACLs, the use case requires the file permissions to change from "read" to "read write" as they get copied to the "user_local" home directory. Googling around has not yielded anything that would indicate that this is possible via the Snow Leopard permission system. Question 1: Is this in fact possible via the permission system? If so, how? Question 2: If not possible, how would one go about solving this problem? I imagine this to be a fairly common use case so there must be a workable solution for it out there. Thanks.

    Read the article

  • How to configure mysqldump to avoid max_allowed_packet error

    - by Leopd
    Honestly it baffles me that with a completely default installation of mysql if I run mysqldump with default parameters it generates a SQL file that can't be imported into another completely default installation of mysql. From what I can gather it's got something to do with the max_allowed_packet setting and/or the net_buffer_length setting. I've read a bunch about this, and tried tweaking it a bunch of ways on both the export and import sides, but it still doesn't work. I keep getting the packet too big error on import. From everything I've read, here's my best guess: mysqldump --net_buffer_length=50000 myschema > giant_file.sql Because I read here that mysqldump refers to max_allowed_packet as net_buffer_length because ... uhh ... anyway. Then to import mysql --max_allowed_packet=999999 myschema < giant_file.sql But this still doesn't work. How do I export / import the database???
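
    A hedged sketch of the combination that usually matters here, the key point being that max_allowed_packet is a server variable on the importing side as well as a client option (the 512M figure is only an example):

        # dump with a larger client-side packet limit
        mysqldump --max_allowed_packet=512M myschema > giant_file.sql

        # raise the limit on the importing server (takes effect for new connections) ...
        mysql -e "SET GLOBAL max_allowed_packet = 512*1024*1024;"
        # ... or persistently, in the [mysqld] section of my.cnf:
        #     max_allowed_packet = 512M

        # then import with the client limit raised as well
        mysql --max_allowed_packet=512M myschema < giant_file.sql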

    Read the article

  • Securing php on a shared apache

    - by Jack
    I'm going to install Apache+PHP on a server where two users, A and B, will deploy their websites. I'm trying to achieve isolation of the users' space for security reasons: that is, no scripts from site A should be able to read files in site B. To achieve this I installed suPHP. Website files of user A are owned by A:A with perm=700, and those of user B are owned by B:B with perm=700. suPHP works great, but Apache complains about not having permission to read .htaccess. How can I let Apache read .htaccess in every directory of A and B while keeping the isolation between site A and site B? I played with ownership (group = www-data) and permissions (750) but found no way to keep the isolation guaranteed. Any ideas? Maybe by running Apache as root, but in that case are there any drawbacks?
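
    One hedged sketch of a middle ground, assuming the filesystem is mounted with ACL support, Apache runs as www-data, and the docroots are at /var/www/siteA and /var/www/siteB (placeholder paths): a per-user ACL gives only the web server read access, so the 700 modes still keep A and B out of each other's files while suPHP keeps the scripts running as their owners.

        # let Apache traverse and read the existing trees
        setfacl -R -m u:www-data:rX /var/www/siteA
        setfacl -R -m u:www-data:rX /var/www/siteB

        # default ACLs so newly created files inherit the same access
        setfacl -R -d -m u:www-data:rX /var/www/siteA
        setfacl -R -d -m u:www-data:rX /var/www/siteB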

    Read the article
