Search Results

Search found 4355 results on 175 pages for 'filesystem notification'.


  • How can ICS in Windows 7 be managed via command line, scripts, config files, etc.?

    - by Skya
    I've been using ICS successfully for years, but now I'm looking for a way to control it through something other than the GUI in Control Panel\Network and Internet\Network Connections - Connection Properties: I want to do everything that the encircled checkbox does, without touching the GUI. But what does the checkbox do? Microsoft doesn't provide specific information, and the most helpful forum post I've found is from 2003. Assuming that some of the advice is still valid, I've come to the conclusion that ICS breaks down into six parts that have to be set up individually:
    - the sharedAccess service
    - interface settings
    - firewall rules
    - a static route
    - dnsproxy
    - autodhcp
    I've already learned that the service can be started/stopped with the command net start/stop sharedAccess, and that netsh is a good tool for changing the interface settings and the firewall rules. But I don't understand how ICS handles routing and DNS. All hosts in my network are configured statically, so I don't care much about autodhcp. Thanks for your help!
    EDIT: I've spent the whole day scanning through ProcMon and I've seen reads/writes to both the registry and the filesystem, but it is difficult to determine which of them actually make ICS work. I'm trying to find an API instead. I'm looking into this right now, but I still want to know more about the inner workings.
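    On the API front, Windows exposes ICS through the NetSharingManager COM object (ProgID HNetCfg.HNetShare), which is scriptable from PowerShell. A minimal sketch, assuming the public and private adapters are named "Internet" and "LAN" (adjust to your setup):

        # Toggle the same checkbox programmatically via the ICS COM API
        $share   = New-Object -ComObject HNetCfg.HNetShare
        $conns   = @($share.EnumEveryConnection)
        $public  = $conns | Where-Object { $share.NetConnectionProps.Invoke($_).Name -eq 'Internet' }
        $private = $conns | Where-Object { $share.NetConnectionProps.Invoke($_).Name -eq 'LAN' }
        $share.INetSharingConfigurationForINetConnection.Invoke($public).EnableSharing(0)    # 0 = public (shared) side
        $share.INetSharingConfigurationForINetConnection.Invoke($private).EnableSharing(1)   # 1 = private (home) side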


  • Apache doesn't immediately notice a change in the document root

    - by Tom
    We use Capistrano for website deployments, and our Apache document root is a symlink to a particular code release. The deployment procedure switches the symlink from the old release to the new release as the final step of the deployment. We are migrating our webservers from real servers running RHEL 5.6 to Amazon EC2 virtual machines running Ubuntu 11.10, and the new servers are suffering from a problem where Apache doesn't immediately notice the change to its document root when the symlink is switched. It can take a second or so (and I think I've even seen it take a couple of minutes). It's kind of like Apache has cached the physical path of the symlink for some time. Does anyone know some Apache settings I could look at to get it to "scan" for changes to its served files more quickly?
    Thoughts:
    - I read that the disks on virtual machines are much slower (since they are network attached storage). Perhaps the filesystem cache somehow works differently too? If so, is there anything that can be done?
    - The website runs PHP code. Perhaps there are some PHP config differences between RHEL and Ubuntu? I checked realpath_cache_ttl, but both servers have it commented out:
          ; Duration of time, in seconds for which to cache realpath information for a given
          ; file or directory. For systems with rarely changing files, consider increasing this
          ; value.
          ; http://www.php.net/manual/en/ini.core.php#ini.realpath-cache-ttl
          ;realpath_cache_ttl = 120
    - We do use the APC opcode cache but don't think it's the issue, due to experimentation. The PHP code is in different file paths for each deployment and we ensure stat=1.
    - Here is a similar question that is very interesting: 294107 - but it doesn't provide an answer for me.
    One solution would be to reload Apache every time we modify the document root symlink. I'll do this if we can't find another solution.
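    For what it's worth, the reload workaround mentioned above can be made fairly painless: a graceful restart re-resolves paths without dropping in-flight requests, and shrinking PHP's realpath cache TTL narrows the window in which a stale physical path can be served. A hedged sketch - the exact php.ini path is an assumption:

        # as the last Capistrano deploy step, after the symlink switch
        sudo /usr/sbin/apache2ctl graceful

        ; /etc/php5/apache2/php.ini - keep resolved paths for only a second
        realpath_cache_ttl = 1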


  • DisableCrossAccountCopy not working on some Outlook installs, working on others, both going against Exchange

    - by MikeBaz
    As part of a mail migration project from one Exchange organization to another, we need to be able to prevent users from moving/copying messages between their accounts in each organization. (Yes, users will think this is evil; no, it's not my decision; yes, users will hate us.) Luckily, we thought, Outlook 2010 provides the DisableCrossAccountCopy registry value/policy (cf. http://technet.microsoft.com/en-us/library/ff800883.aspx). (Because you can't do multiple Exchange organizations in a single profile before Outlook 2010, this only matters on Outlook 2010. Yes, I'm ignoring for the sake of this question copy/move to/from the filesystem.) In our test lab, in a test forest with a test Exchange organization, with a second Exchange account added to the profile in either of the "real" Exchange organizations, with the value set to "*", everything works as expected. On a workstation in one of the production domains, however, the setting does not seem to work. We have tried it under HKCU, HKLM, HKCU\Software\Policies, and HKLM\Software\Policies. It simply seems to be ignored. The value was set in the OCT on a test machine, but the OCT (and the ADM/ADMX file) have the wrong type for the value. We have located the value in the registry and removed it everywhere it is found, we think, and put it back in HKCU, but it still isn't taking. At the moment, a clean Outlook install is not an option - even if it was, we at this point would need to know what to do to fix the pushed copy (I didn't push the copy out to thousands of machines, I've just been asked to help clean up the current mess). Thoughts?
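    For reference, a hedged sketch of where this value is expected to live for Outlook 2010 (worth verifying against the TechNet article linked above). Note the type: it has to be REG_MULTI_SZ - which matches the remark above that the ADM/ADMX template carries the wrong type - and a value written with the wrong type may simply be ignored:

        reg add "HKCU\Software\Policies\Microsoft\Office\14.0\Outlook" /v DisableCrossAccountCopy /t REG_MULTI_SZ /d "*" /f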


  • Where is my free space?

    - by Andrey
    A week ago I got a low disk space warning on my Vista x64 Ultimate box - 60 MB free on drive C. I cleaned up some downloaded MSDN images and got 20 GB freed up. Three days ago I got another notification. It looked suspicious, but I didn't have time to deal with it and just moved some heavy stuff to another drive to free up about 17 GB... This morning - 53 MB left on drive C, again! Now it looks really suspicious, so I downloaded TreeSize to see what's taking up the space, only to see it reporting just 121 GB out of 200 GB used; in other words, I'm supposed to have about 79 GB free. Then I went to Folder Options, enabled viewing of system and hidden files, and reran the tool to see another 5 GB added (which is expected). Then I opened drive C in Windows Explorer, selected all, and right-clicked Properties, to see it reporting the same amount of files - 126 GB. But when I look at drive C's properties, it reports that 200 GB of 200 GB are taken. I just scanned the drive with two different antiviruses - Symantec and AVG - and found no viruses... I'm a little confused at this point; any ideas about where my free space went would be highly appreciated! Thank you! Andrey
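    Not something the post mentions, just a hedged guess at one common culprit: per-folder tools like TreeSize and Explorer can't see Volume Shadow Copy storage (System Restore / previous versions), which lives under System Volume Information and can quietly grow to tens of GB. From an elevated command prompt:

        vssadmin list shadowstorage

    That reports the used, allocated and maximum shadow copy space per volume; if it accounts for the missing ~74 GB, the cap can be lowered from the System Protection settings.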


  • Router gets disconnected once I terminate my SIP application

    - by TacB0sS
    Hey, here is an interesting one: I have a SIP VoIP application which is able to register to the PBX server, and I can invite and see the callee end receiving an INVITE and the caller end getting the Ringing response... Now here is the interesting part: if I close my application without any notification to the server, my router disconnects and restarts after a short while (30 - 150 sec). I could fix that if I completed the ACK/BYE process, but I'm just wondering why my router hangs up. Any ideas? My router is a TNN-Siemens SL2-141, thought this might matter.
    Update: this is what I found: "SIP ALG allows two or more simultaneous VoIP phone calls made by VoIP clients through this router." Which means that if I disable it I would not be able to do the testing I'm trying so badly to do, and since I don't have access to another router, I'll have to live with the bug... I can say that this never happened to me with one user connecting, but then again I didn't have anyone to invite then; I received a 503 from the SIP UAS when I tried to invite an imaginary user. This bug only occurs after I connect the second SIP UAC, invite it, and close the application. Adam.


  • On a failing hard drive, I am able to view data but unable to copy it - why?

    - by Tom
    I have a 2.5" external hard drive that is failing. It's not making the 'clicking' noise that most failing hard drives make, and I am able to view the data, but I am unable to actually retrieve it. I attempted to use SpinRite in order to access the data on the drive, but it didn't like the external drive. When I view the drive's property page, the drive shows that its used space is at 100% and that it has 0 bytes available; however, the progress indicator under the drive icon in Windows Explorer shows that it's roughly 50% full (which is correct). When I run Windows' "Error Checking" tool and attempt to "scan for and attempt recovery of bad sectors," the tool begins to run, then immediately closes with no error message. I am able to browse the contents of the drive using Windows Explorer. When I try copying any given single file, the copy process begins, an indicator starts, and then the copy fails with no real error message. The Disk Management page in Computer Management under Control Panel also shows this drive as being 'Healthy.' I dropped the drive off at a data recovery store and they said that "the data seems to be intact, but an internal failure is preventing any information from being retrieved." They offered to provide me references to a data recovery specialist. I've also attempted to run CHKDSK on the drive (with and without arguments) but it returns the following error:
        The type of the filesystem is RAW. CHKDSK is not available for RAW drives.
    Before going the route of more expensive data recovery, I'm wondering if these symptoms sound familiar to anyone? Other questions: I'm willing to continue trying tools such as TestDisk and/or PhotoRec (the majority of the data that I'd like to salvage is photos), but how long should I expect either tool to run given approximately 400GB of data? I'm also comfortable using Linux, so I welcome any suggestions for utilities, tools, and strategies with which you've had success.
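    Since the drive is actively failing, the usual advice (not from the original post) is to image it once with GNU ddrescue and then run TestDisk/PhotoRec against the image rather than the dying disk. A hedged sketch - the /dev/sdb device name and destination paths are assumptions, and the destination needs 400GB+ free:

        sudo apt-get install gddrescue testdisk
        sudo ddrescue -n /dev/sdb /mnt/backup/drive.img /mnt/backup/drive.log     # fast pass, skip bad areas
        sudo ddrescue -d -r3 /dev/sdb /mnt/backup/drive.img /mnt/backup/drive.log # retry the bad sectors directly
        photorec /mnt/backup/drive.img                                            # carve photos out of the image

    As a rough expectation, the first pass runs at about the disk's sequential read speed over the healthy areas, so a few hours for 400GB; it's the retry passes over bad sectors that can stretch into days.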


  • postgresql No space left on device

    - by pstanton
    Postgres is reporting that it is out of disk space while performing a rather large aggregation query:
        Caused by: org.postgresql.util.PSQLException: ERROR: could not write block 31840050 of temporary file: No space left on device
            at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)
            at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)
            at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:192)
            at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:451)
            at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:350)
            at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:304)
            at org.hibernate.engine.query.NativeSQLQueryPlan.performExecuteUpdate(NativeSQLQueryPlan.java:189)
            ... 8 more
    However, the disk has quite a lot of space:
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             386G  123G  243G  34% /
        udev                  5.9G  172K  5.9G   1% /dev
        none                  5.9G     0  5.9G   0% /dev/shm
        none                  5.9G  628K  5.9G   1% /var/run
        none                  5.9G     0  5.9G   0% /var/lock
        none                  5.9G     0  5.9G   0% /lib/init/rw
    The query is doing the following:
        INSERT INTO summary_table
        SELECT t.a, t.b, SUM(t.c) AS c, COUNT(t.*) AS count, t.d, t.e,
               DATE_TRUNC('month', t.start) AS month, tt.type AS type, FALSE, tt.duration
        FROM detail_table_1 t, detail_table_2 tt
        WHERE t.trid=tt.id
          AND tt.type='a'
          AND DATE_PART('hour', t.start AT TIME ZONE 'Australia/Sydney' AT TIME ZONE 'America/New_York')>=23
           OR DATE_PART('hour', t.start AT TIME ZONE 'Australia/Sydney' AT TIME ZONE 'America/New_York')<13
        GROUP BY month, type, t.a, t.b, t.d, t.e, FALSE, tt.duration
    Any tips?
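    A hedged note, not from the original post: the failing write is to a temporary file, and temp files don't necessarily land on the root filesystem shown above - by default they go under the cluster's data directory (base/pgsql_tmp) - and the space is released again as soon as the query fails, so a df taken afterwards can look perfectly healthy. A quick way to check where they actually go:

        psql -c "SHOW data_directory;"
        psql -c "SHOW temp_tablespaces;"   # empty means temp files live under <data_directory>/base/pgsql_tmp
        df -h <data_directory>             # placeholder: substitute the path the first command printed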


  • Using dropbox / symbolic link combo successfully

    - by wim
    In the past I have kept some files on Dropbox by copying them into my ~/Dropbox folder on Ubuntu. I don't want to move the original files into the Dropbox sync folder or muck around with my directory structure. Then I found I was using Dropbox more and more, and wasting a lot of space this way through duplication of data. I use a small SSD locally for the OS; any other data is kept on mounted shares from my NAS. I found I could successfully get files up to the cloud by using symbolic links like:
        ln -s /some/mounted/share/dir ~/Dropbox/dir
    And Dropbox would carry on and sync those files remotely whilst only using up the space of the symbolic link locally. This worked well for me for a few weeks, until I turned on my laptop one day and saw a '421 files have been removed from your dropbox' notification. They were still there in the original mounted share, but the symbolic links I'd made were completely gone for some reason. What did I do wrong? It is possible the share could have become unmounted, but I didn't expect this would cause all my files to be deleted from the cloud - could it? How can I 'share' files on my Dropbox in this way without the danger of the originals being modified remotely?
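    The usual explanation (an inference, not something confirmed in the post) is exactly the unmount scenario: when the NAS share goes away, the symlink dangles, the Dropbox client treats the content as deleted, and it propagates the deletion to the cloud. A hedged alternative is a bind mount plus only starting the client once the share is genuinely there, e.g.:

        # in place of the symlink (paths as in the example above)
        sudo mount --bind /some/mounted/share/dir ~/Dropbox/dir
        # only start the client if the NAS share is actually mounted
        mountpoint -q /some/mounted/share && dropbox start

    A bind mount that isn't established just leaves an empty directory, which Dropbox would still read as "files gone", so gating the client on the mount check is the part doing the real work here.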


  • NTFS: Deny all permissions for all files, except where explicitly added

    - by Simon
    I'm running a sandboxed application as a local user. I now want to deny almost all file system permissions for this user to secure the system, except for a few working folders and some system DLLs (I'll call this set of files & directories X below). The sandbox user is not in any group. So it shouldn't have any permissions, right? Wrong, because all "Authenticated Users" are a member of the local "Users" group, and that group has access to almost everything. I thought about recursively adding deny ACL-entries to all files and directories and remove them manually from X. But this seems excessive. I also thought about removing "Authenticated Users" from the "Users" group. But I'm afraid of unintended side-effects. It's likely that other things rely on this. Is this correct? Are there better ways to do this? How would you limit the filesystem permissions of a (very) non-trustworthy account?
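    One pattern that avoids per-file deny entries (a sketch, not a tested recipe - the account name "sandbox" and the paths are assumptions) is a single inheritable deny at the top plus explicit grants on X, relying on the fact that NTFS evaluates explicit ACEs before inherited ones, so an explicit allow on a working folder wins over the deny inherited from the root:

        icacls C:\ /deny sandbox:(OI)(CI)F
        icacls C:\Sandbox\work /grant sandbox:(OI)(CI)M
        icacls C:\Windows\System32\some.dll /grant sandbox:RX

    That said, a blanket deny at C:\ can break things like profile loading for that account, so it is worth rehearsing on a throwaway folder tree first.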


  • Fedora 9 not recognizing hard drive

    - by Andrew Jones
    I am installing Fedora 9 on a PC (specifications at the bottom) and have had a lot of trouble getting it to recognise the hard drive. To get the Fedora installer to recognise it in the first place I had to pass "ata_generic.all_generic_ide=1 pci=nomsi" to the kernel, after which it installed OK. However, now when I boot the installed OS, I get a "could not find filesystem '/dev/root'" error. I tried passing the same arguments to the kernel at boot as I did when installing, but to no avail. I have tried using the default LVM layout and defining manual ones, but it made no difference. There is no option in the BIOS to enable AHCI or anything like that; in fact the BIOS is very limited in most respects. I can get into the system by using the installation CD in rescue mode (with those extra kernel parameters), but I'm not sure what to do once in there... Unfortunately, using a more recent version of Fedora or even another Linux distribution altogether isn't an option because of outside constraints - which is annoying, since I know for a fact Ubuntu works fine on this setup. I have not been using Linux that long, so treat me like an idiot - I am one. Any help would be greatly appreciated, thanks!
    System spec:
    - Intel Atom Z530 CPU @ 1.6 GHz
    - Intel US15W chipset
    - 1 GB DDR2
    - 160 GB SATA hard disk (Samsung HM16HI)
    - 1000 Mbit/s Ethernet port
    - Phoenix BIOS
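    A "could not find filesystem '/dev/root'" at that stage often means the installed initrd is missing the driver the installer only had thanks to the boot options, so passing them interactively isn't enough on its own. A hedged sketch of what is commonly done from the rescue environment (the kernel version string is an assumption - use whatever the files in /boot of the installed system actually show):

        chroot /mnt/sysimage
        mkinitrd -f --with=ata_generic /boot/initrd-2.6.25-14.fc9.i686.img 2.6.25-14.fc9.i686
        # and make the boot arguments permanent on the kernel line in /boot/grub/grub.conf:
        #   kernel /vmlinuz-... ro root=... ata_generic.all_generic_ide=1 pci=nomsi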


  • EC2 Configuration

    - by user123683
    I am trying to create a server structure for my EC2 account. The design I have chosen consists of 2 instances running in different availability zones, an elastic load balancer, an auto-scaling group with CloudWatch monitoring configured, and a security group defining rules for access to the instances. This setup is to support an online web application written in PHP.
    I am trying to decide which is the better policy:
    - store the MySQL DB on a separate instance, or
    - store the MySQL DB on an attached EBS volume (from what I know, auto-scaling will not replicate the attached EBS volume but will generate new instances from a chosen AMI - is this view correct?).
    Regarding the AMI, I plan to use a basic Amazon Linux 64-bit AMI and install Bastille (maybe OSSEC), but I am looking to also use an encrypted file system. Are there any issues with an encrypted file system and communication between the DB and webapp that I need to be aware of? Are there any comms issues using the encrypted filesystem on the instance housing the webapp?
    I was going to launch a second instance, or attach a second volume, in the second availability zone to act as a standby for the database - I'm just looking for some suggestions about how to get the two DBs to talk - will this be a big task?
    Regarding updates for security: is it best to create a recent snapshot and just relaunch, allowing Amazon to install updates on launch, or is the yum update mechanism a suitable alternative - is it better practice to relaunch instead of installing updates that force a restart?
    I plan to create two AMI snapshots, one for the app server and one for the DB, each with the same security measures in place - is this reasonable? I just figure it is a better policy than having additional, unnecessary applications included in an AMI that I intend to use.
    My plan for backup is to create periodic snapshots of the webapp and DB instances (if I use an additional EBS volume instead of separate instances, my understanding is that the EBS volume will persist in S3 storage in the event of an unexpected termination, and I can create snapshots of the volume for backup purposes).
    Thanks in advance for suggestions and advice. I am new to EC2 and I may have described unnecessary overkill, but I want to try to implement what can be considered a best-practice solution, so all advice is appreciated.


  • Nginx's speed, and how to replicate it [migrated]

    - by Mediocre Gopher
    I'm interested in this more from an academic standpoint than a practical one; I don't plan on creating a production webserver to compete with nginx. What I'm wondering is how exactly nginx is so fast. The top Google response for this is this thread, but it merely links to a cryptic slideshow and a general overview of different I/O strategies. All other results seem to simply describe how fast nginx is, rather than the reason. I tried building a simple Erlang server to try to compete with nginx, but to no avail; nginx won out. All my server does is spawn a new process for each request, use that process to read the file out to a socket, then close the file and kill the process. It's not complicated, but given Erlang's lightweight processes and underlying AIO structure I thought it would compete, yet nginx still wins out by a consistent 300 ms average under a heavy stress test. What is nginx doing that my simple server isn't? My first thought was keeping files in main memory instead of tossing them between requests, but the filesystem cache does this already, so I didn't think it would make that great a difference. Am I wrong? Or is there something else that I'm missing?
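    For context (a summary, not from the original post): the usual answer is that nginx runs a small, fixed set of worker processes, each servicing thousands of connections from a single epoll/kqueue event loop, and hands static files to the kernel with sendfile() so the bytes never pass through userspace. The relevant directives look roughly like this:

        worker_processes  4;              # a handful of workers, not one per request
        events {
            worker_connections  1024;     # each worker multiplexes many sockets on one event loop
        }
        http {
            sendfile   on;                # kernel copies file -> socket directly
            tcp_nopush on;
        }

    Against that, a spawn-per-request server pays process setup, scheduling, and userspace read/write copies on every request, which is where a consistent per-request gap tends to come from.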


  • btrfs: can i create a btrfs file system with data as jbod and metadata mirrored

    - by Yogi
    I am trying to build a home server that will be my NAS/media server as well as the XBMC front end. I am planning on using Ubuntu with btrfs for the NAS part of it. The current setup consists of a 1TB HDD for the OS etc. and two 2TB HDDs for data. I plan to have the 2TB HDDs used as a JBOD btrfs system to which I can add HDDs as needed later, basically growing the filesystem online. The way I had set up the file system for testing was to have only one of the HDDs connected while installing the OS, with btrfs on it mounted as /data, and later add another HDD to this file system. When the second disk was added, btrfs made the data RAID 0, with metadata being RAID 1. However, this presents a problem: if either disk fails, I lose all my data (mostly media). Also, most of the time the server will be running without doing any disk access, i.e. the HDDs can be spun down; when an access request comes in with the current RAID 0 setup, both disks will spin up, whereas with a JBOD only the disk that has the file needs to spin up. This should hopefully extend the life of each disk. So, is there a way in which I can have btrfs set up such that metadata is mirrored but data stays in a JBOD arrangement? Another question: I understand that a full drive failure in a JBOD will lose the data on that drive, but with metadata mirrored across all drives, will this help the filesystem correct errors that might creep in (e.g. bit rot), and is btrfs capable of doing this?
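    For what it's worth, btrfs expresses exactly this split through its profile flags: "single" for data (each file lives whole on one device, JBOD-style) and "raid1" for metadata. A hedged sketch - device names are assumptions, and the convert form needs a reasonably recent kernel:

        # new filesystem: data unreplicated, metadata mirrored across the two drives
        mkfs.btrfs -d single -m raid1 /dev/sdb /dev/sdc

        # or convert an existing filesystem that came up as data RAID 0
        btrfs balance start -dconvert=single -mconvert=raid1 /data

    On the bit-rot question: mirrored metadata only lets btrfs repair metadata blocks from the good copy; data blocks are still checksummed (so corruption is detected on read), but with a "single" data profile there is no second copy to repair them from.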


  • Data recovery on a corrupted 3TB disk

    - by Mark K Cowan
    Short version: I probably need software to run a deep-scan recovery (ideally on Linux) to find files on an NTFS filesystem. The file data is intact, but the references are no longer present - analogous to recovering data from a "quick-formatted" partition. Hopefully there is a smarter way available than a deep scan, one which would recover filenames and possibly paths.
    Long version: I have a 3TB disk containing a load of backups. Windows 7 SP1 refused to detect the disk when plugged in directly via SATA, so I put it on a USB/SATA adaptor, which seemed to work at first. The SATA/USB adaptor probably does not support disks over 2.2TB though. Windows first asked me if I wanted to 'format' the disk, then later showed me most of the contents, but some folders were inaccessible. I stupidly decided to run a CHKDSK on my backup disk, which made the folders accessible but also left them empty. I connected this disk via SATA to my main PC (Arch Linux). I tried:
    - testdisk
    - ntfsundelete
    - ntfsfix --no-action (to look for diagnostically relevant faults; the disk was "OK" though)
    to no avail, as the file references in the tables had presumably been zeroed out by CHKDSK rather than removed with a typical journal'd deletion. If it is useful at all, the majority of the files that I want to recover are JPEG, Photoshop PSD, and MPEG-3/MPEG-4/AVI/MKV files. If worst comes to worst, I'll just design my own sector scanner and use some simple heuristic-driven analysis to recover raw binary blocks of data from the disk which appear to match the structures of the above file types. I am unfamiliar with the exact workings of NTFS but used to be proficient at recovering FAT32 systems with just a hex editor, so I can provide any useful diagnostic information if you let me know how to find it!
    My priorities, in ascending order of importance, for choosing the accepted answer:
    - restores directory structure
    - recovers many filenames in addition to the file data
    - is free / very cheap
    - runs on Linux
    - recovers a majority of file data
    The last point is the most important, but the more of the higher points you match, the more rep you'll probably get :)
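    One middle ground between ntfsundelete and writing a raw carver (a hedged suggestion, not something the poster has tried; device and output paths are placeholders) is The Sleuth Kit, which walks whatever MFT entries survive and can therefore bring back names and paths where they still exist, falling back to signature carving only for the rest:

        fls -r -p /dev/sdX1 > listing.txt          # list surviving file/dir entries with full paths
        tsk_recover -e /dev/sdX1 /mnt/rescue/out/   # extract everything it can, keeping directory structure
        # for whatever neither tool can name, photorec on the raw device still carves JPEG/PSD/video by signature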


  • Are relative-path symlinks reliable on Rackspace Cloud Sites?

    - by Jakobud
    Rackspace's Cloud Sites have a lot of stupid limitations. For example, no SSH (in or out), no shell, no rsync, etc... (even through cron). Recently I learned that you can't reliably use symlinks in Cloud Sites. Apparently this is because the absolute path of your sites could change at any moment, since it's a shared host environment split up between many disks/servers. I guess different accounts' sites get moved from disk to disk whenever Rackspace decides to, supposedly to increase efficiency across the board. So after talking with a Rackspace tech, he said they cannot guarantee that symlinks would always work. Obviously this is because if you have a symlink that uses an absolute path like this:
        //mnt/disk-34566/home/user34566/files/sites/www.mysite.com/mydir
    then if your files get moved to a different disk (or whatever they do), the absolute path would be different and the link would now be broken. That makes sense. So next, I asked the Rackspace tech if relative-path symlinks were reliable. So if I have the following link:
        files/sites/www.mysite.com/mylink --> ../www.myothersite.com/anotherdir
    you can see that the symlink simply points to a nearby directory's sub-directory. He said they cannot guarantee that even those would always work either. Since it uses a relative path to another nearby directory, I'm not sure how it could ever break from something Rackspace would do. Do relative symlinks somehow rely on absolute paths underneath? Or is Rackspace using some weird custom filesystem where they will break from absolute path changes? It seems like a relative-path symlink would be fine and would only break if the user did something to mess up the directories involved. But when the techs say that they "don't officially support symlinks of any kind", that makes me hesitant to use them for large commercial websites in Cloud Sites. Can anyone with Rackspace experience give input on this topic?
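    On the "do relative symlinks rely on absolute paths underneath" point: no - a symlink stores its target string verbatim, and it is resolved at access time relative to the directory containing the link, which is easy to see locally (a generic illustration, nothing Rackspace-specific):

        $ ln -s ../www.myothersite.com/anotherdir mylink
        $ readlink mylink
        ../www.myothersite.com/anotherdir

    So a relative link survives the whole tree being moved, as long as both sides move together; the remaining risk is whatever copy/migration tooling the host uses not recreating symlinks at all, which may be what the tech was hedging about.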


  • Exchange Out of Office Reply reset

    - by Richard West
    I have a question. We have an employee who is going to be on maternity leave for the next 8 weeks. I think that Outlook/Exchange is designed to send one out-of-office message to each person that sends an email to my user for the duration of the out-of-office reply, meaning that if someone sends an email to my user each week, they are only going to receive one out-of-office message - the first time they send her an e-mail. My concern is that over time people might forget that she is out of the office; since they are not receiving any type of reply when they send an email, this seems possible. Does anyone know if Exchange ever resets the out-of-office notification after a certain amount of time? Like a week or so? I'm not looking for every message to get an out-of-office reply, but I think more than one over the course of 8 weeks would be appropriate. I know that I can turn the Out of Office Assistant off and back on to "reset" the replies, but I'm curious whether Exchange performs a reset after a certain period of time automatically.
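    If manually toggling it turns out to be the only option, that toggle can at least be scripted and scheduled. A hedged sketch, assuming this is Exchange 2010 or later (the mailbox identity is a placeholder), run from the Exchange Management Shell:

        Set-MailboxAutoReplyConfiguration -Identity "jdoe" -AutoReplyState Disabled
        Set-MailboxAutoReplyConfiguration -Identity "jdoe" -AutoReplyState Enabled

    Scheduling that pair, say, weekly would re-send the auto-reply to senders who already received one, which is the "more than one over 8 weeks" behaviour described above.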


  • Burn bootable iso image to USB stick using dd: Won't boot (despite USB first in boot sequence)

    - by Nicolas Raoul
    I have installed Ubuntu on a Lenovo ThinkPad R500 2732, and I must update the BIOS. On the Lenovo website, I am offered this: "BIOS Update Bootable CD for Windows 7 (32-bit, 64-bit), Vista (32-bit, 64-bit), XP - ThinkPad R500". I guess a bootable CD that would do a BIOS update is indeed what I need (still wondering why it says "Windows", though... if it is bootable, shouldn't it be OS-agnostic?). Not wanting to waste a CD, I copied the image to my USB stick:
        sudo dd if=/home/nico/7yuj40uc.iso of=/dev/sdb1 bs=1M
    and rebooted, after making sure USB is first in the boot sequence. PROBLEM: it does not boot. Did I forget one step? Details about the ISO image:
        ls -lh 7yuj40uc.iso   -> 25M
        file 7yuj40uc.iso     -> /home/nico/7yuj40uc.iso: ISO 9660 CD-ROM filesystem data '7YUJ40US ' (bootable)
    UNetbootin does not work because it is not a Linux image. Some people on the Internet advise copying the contents of the ISO and doing other steps, but this ISO has no visible content, so that would not work: if I mount the ISO, I can see it contains zero files.
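    Two things commonly trip this up (hedged, not a confirmed diagnosis): dd'ing to a partition (/dev/sdb1) rather than the whole device, and the fact that a plain El Torito boot CD image doesn't become USB-bootable just by raw-copying it. For Lenovo's BIOS-update ISOs the frequently cited route is to extract the El Torito boot image and write that instead - a sketch, with /dev/sdb assumed to be the stick (double-check before overwriting it):

        sudo apt-get install genisoimage           # provides the geteltorito script
        geteltorito -o bios.img 7yuj40uc.iso       # pull the embedded boot image out of the ISO
        sudo dd if=bios.img of=/dev/sdb bs=1M      # write it to the whole device, not a partition

    This would also explain the "zero files" observation above: the ISO's payload lives in the El Torito boot record rather than in the visible ISO 9660 filesystem.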


  • Matlab computations done over Apple Filing Protocol (AFP) depend on POSIX permissions, ignores ACLs

    - by flumignan
    I'm a system administrator and have never used Matlab, so forgive my general ignorance of the program. My users have encountered problems when executing scripted Matlab actions over AFP against a Mac OS X Server 10.6.7: the access control list (ACL) should allow the actions, but the POSIX-style permissions disallow them. It seems as if Matlab, run locally on the Mac workstations against datasets on the remote server, ignores the ACLs entirely. This is the only application I've ever seen behave this way. The server's filesystem is HFS+J and all other activity is performing as expected. These users cannot use CIFS because of our integration with external directory systems. In this example, for the directory bxdata, the members of the group cibturner should be able to modify the files. Indeed, they can using any other method except via Matlab scripts. When the Matlab script hits these files, the POSIX permissions of 644 disallow modification. It's as if the ACLs are irrelevant.
        [root@cib 16:00:24 /14181.2_5sM]# ls -leh@ bxdata/
        total 128
        -rw-r--r--+ 1 kel32  staff  18K Feb 15 09:31 TS-5sMath030708-21073-1.edat
         0: group:cibturner inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown
         1: group:cibsrlocaladmins inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown
         2: group:crcservergroup inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown
        -rw-r--r--+ 1 kel32  staff  25K Feb 15 09:31 TS-5sMath030708-21073-1.txt
         0: group:cibturner inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown
         1: group:cibsrlocaladmins inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown
         2: group:crcservergroup inherited allow read,write,execute,append,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown
    Because this server has HIPAA data, security is critical. We are not using networked home directories or SAN technology. The Matlab program is run on the user's hard drive; access is granted via Kerberized AFP.


  • OS X Client & Ubuntu Server - Best way for client to access files on server?

    - by Camsoft
    I've got a local development web server running Ubuntu. I also have an iMac running OS X 10.6 which I use as a client and which is my development machine. I currently have a Samba server installed on my Ubuntu server, with shares set up for all the website directories. I then use my Mac and Coda to edit the files via those shares. This generally works really well, but I noticed that my Mac was writing loads of resource fork ._filename files everywhere. I found out the following about the files: "These files are created on volumes that don't natively support full HFS file characteristics (e.g. UFS volumes, Windows fileshares, etc). When a Mac file is copied to such a volume, its data fork is stored under the file's regular name, and the additional HFS information (resource fork, type & creator codes, etc) is stored in a second file (in AppleDouble format), with a name that starts with '._'. (These files are, of course, invisible as far as OS X is concerned, but not to other OSes; this can sometimes be annoying...)" Does anyone know the most compatible way of sharing files between a Mac client and a Linux server? Ideally it needs to handle the HFS characteristics so that the ._ resource fork files are not created, and it also needs to map permissions sensibly between server and client.
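    One commonly suggested route (a sketch, not the only answer): serve the same directories over AFP with netatalk. AFP understands the HFS metadata natively, so the client doesn't need to scatter AppleDouble ._ files next to your web code; netatalk keeps that metadata in its own housekeeping directories on the server side. The package name and config syntax below are for netatalk 2.x on Ubuntu and should be treated as assumptions:

        sudo apt-get install netatalk
        # /etc/netatalk/AppleVolumes.default - export the web root as an AFP volume
        /var/www "websites" options:usedots,upriv

    Alternatively, staying on Samba, the noise can be reduced with 'veto files' / 'delete veto files' entries in smb.conf to hide and clean up ._* and .DS_Store, though that suppresses the Mac metadata rather than preserving it.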


  • bind9 DLZ/MySQL on Ubuntu segfaults in libmysqlclient.so

    - by Theos
    I have a big problem. I installed the bind9 nameserver on three different computers: two Ubuntu 10.04.4 LTS, and one Ubuntu 11.10. I compiled it (9.7.0, 9.7.3, 9.9.0) with this method:
        ./configure --prefix=/usr --sysconfdir=/etc/bind --localstatedir=/var \
          --mandir=/usr/share/man --infodir=/usr/share/info \
          --enable-threads --enable-largefile --with-libtool --enable-shared --enable-static \
          --with-openssl=/usr --with-gssapi=/usr --with-gnu-ld \
          --with-dlz-mysql=yes --with-dlz-bdb=no \
          --with-dlz-filesystem=yes --with-geoip=/usr
        make
        make install
    After setting up DLZ/MySQL, the BIND server works perfectly for 5-30 minutes, after which I get a segfault. I temporarily worked around the problem with a simple process watchdog that restarts named if it has stopped, but this is not a good idea in the long term. My log output (messages) is:
        Apr 13 19:33:51 dnsvm kernel: [    8.088696] eth0: link up
        Apr 13 19:33:58 WATCHDOG: named not running. Restarting
        Apr 13 19:35:08 dnsvm kernel: [   87.082572] named[1027]: segfault at 88 ip b71c4291 sp b5adfe30 error 4 in libmysqlclient.so.16.0.0[b714e000+1aa000]
        Apr 13 19:35:08 WATCHDOG: named not running. Restarting
        Apr 13 19:35:08 dnsvm kernel: [   87.457510] named[1423]: segfault at 68 ip b71d6122 sp b52f0a40 error 4 in libmysqlclient.so.16.0.0[b7160000+1aa000]
        Apr 13 19:35:09 WATCHDOG: named not running. Restarting
        Apr 13 19:41:56 dnsvm kernel: [  494.838206] named[1448]: segfault at 88 ip b731c291 sp b5436e30 error 4 in libmysqlclient.so.16.0.0[b72a6000+1aa000]
        Apr 13 19:41:57 WATCHDOG: named not running. Restarting
        Apr 13 19:57:26 dnsvm kernel: [ 1424.023409] named[2976]: segfault at 88 ip b72d1291 sp b6beee30 error 4 in libmysqlclient.so.16.0.0[b725b000+1aa000]
        Apr 13 19:57:26 WATCHDOG: named not running. Restarting
        Apr 13 20:11:56 dnsvm kernel: [ 2294.324663] named[6441]: segfault at 88 ip b7357291 sp b6473e30 error 4 in libmysqlclient.so.16.0.0[b72e1000+1aa000]
        Apr 13 20:11:57 WATCHDOG: named not running. Restarting
    syslog: http://pastebin.com/hjUyt8gN
    The first server is a native, normal x64 server (10.04 LTS); the second is a virtualised server (11.10); the third is also virtualised (10.04 LTS). These servers only provide DNS, backed by a MySQL database, but the problem occurs with every server and every BIND version.
    named.conf: http://pastebin.com/zwm1yP7V
    Can anybody help me, or offer any ideas?
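    A hedged observation rather than a confirmed fix: BIND's stock DLZ MySQL driver is documented as not thread-safe, while the build above uses --enable-threads, and libmysqlclient being the crash site is consistent with concurrent queries sharing one MySQL connection. Two things commonly tried:

        # rebuild without threads
        ./configure ... --disable-threads --with-dlz-mysql=yes ...

        # or, without rebuilding, force a single worker thread at startup
        named -u bind -n 1

    If single-threaded operation makes the segfaults stop, that points at the driver's threading limitation rather than at the distribution or the BIND version.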


  • Howto: SaaS / PHP Application / Tenants / Security

    - by Ben Fransen
    Hi all, being completely new to the web-hosting corner, I have a few questions on how to implement/set up a webserver for a SaaS application. I'm about to rent my own server for a new product (CMS) I'm launching in two months. Developing the system wasn't that much of a wild ride to me, but implementing it correctly is. So let's say this is my situation: I want to host 10 websites for 8 clients. There are 6 single sites, and two clients have two websites they can manage with my software.
    - The CMS must be placed on the server too; all clients connect to one system.
    - The database must be placed
    - Depending on the contract a client makes, the client gets some storage. How do I measure the used storage across the DB, filesystem and email?
    - Clients may not, in any case, be able to somehow get outside their directory, but from the CMS directory the CMS must be able to create files and dirs in a client's directory (for templates, image galleries, widgets, etc., etc.).
    I was thinking about a directory structure something like this:
        ./CMS/         [all CMS files]
        ./Websites/*/  [all websites]
    My hosting provider will install updates to the OS (CentOS, latest) and the admin panel (DirectAdmin). Is there anybody with experience on this topic? Or do you have some thoughts about it? Please join the conversation, since I'm completely new to this. Ben
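    On the "clients may not get outside their directory" requirement: with PHP under Apache, the usual lever is a per-vhost open_basedir jail that includes both the client's own tree and the shared CMS directory. A hedged sketch matching the layout above (vhost names and absolute paths are assumptions):

        <VirtualHost *:80>
            ServerName client1.example.com
            DocumentRoot /home/saas/Websites/client1.example.com
            php_admin_value open_basedir "/home/saas/Websites/client1.example.com:/home/saas/CMS:/tmp"
        </VirtualHost>

    The CMS code running inside each request can then create template/gallery directories under that client's tree but can't wander into a neighbour's, and per-client storage can be measured with a scheduled du over each client directory plus the corresponding database and mail quotas.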


  • Nginx proxy with Redmine SVN authentication.

    - by Omegaice
    I am attempting to set up a system where I have an nginx server running as a reverse proxy for multiple websites that I want to run. To separate the websites I have created a Linux container for each site, to reduce conflicts in database usage etc. I am currently trying to get my main site working: I have nginx with Passenger set up and connecting to Redmine, and I have an Apache install specifically set up for serving the SVN over HTTP, where I am attempting to use the Redmine authentication. I have set everything up as described in the Redmine howtos, but when I check a project out of the SVN it always works, even if the project is private, and whenever I try to commit to the repositories it fails saying "Could not open the requested SVN filesystem"; the Apache error log entry related to that event is "(20014)Internal error: Can't open file '/srv/rcs/svn/error/format': No such file or directory". If I take out the Redmine authentication I can check out and check in repositories fine, but then there is no authentication. Does anyone have any ideas?
    Edit: I tried to solve this problem another way by attempting to have the authentication work via LDAP. I managed to get it so that my user could log into the Redmine website, but as soon as I tried to check anything out it said that access to the repository was forbidden.


  • Archive software for big files and fast index

    - by AkiRoss
    I'm currently using tar for archiving some files. The problem is that the archives are pretty big, contain a lot of data, and tar is very slow when listing and extracting. I often need to extract single files or folders from an archive, but I don't currently have an external index of files. So, is there an alternative for Linux that allows me to build uncompressed archive files, preserving the file attributes AND having a fast access list/index? I'm talking about archives of 10 to 100 GB, and it's pretty impractical to wait several minutes to access a single file. Anyway, any trick to solve this problem is welcome (but single archive files are a requirement, so no rsync or similar). Thanks in advance!
    EDIT: I'm not compressing the archives, and even so I think tar is too slow. To be precise about "slow", I'd like:
    - listing archive content to take time linear in the file count inside the archive, but with a very small constant (e.g. if a list of all the files is included at the head of the archive, it could be very fast);
    - extraction of a target file/directory to take (filesystem permitting) time linear in the target size (e.g. if I'm extracting a 2MB PDF file that sits in a 40GB directory, I'd really like it to take less than a few minutes... if not seconds).
    Of course, this is just my idea and not a requirement. I guess such performance would be achievable if the archive contained an index of all the files with their respective offsets, and that index were well organized (e.g. a tree structure).
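    One commonly suggested workaround (not from the original post): an uncompressed SquashFS image behaves much like this kind of archive - it preserves ownership/permissions/timestamps, carries an internal index, and can be listed or loop-mounted so that pulling one file touches only that file's blocks. A sketch, with paths as placeholders:

        mksquashfs /data/to/archive archive.sqsh -noI -noD -noF -noX   # build with compression disabled
        unsquashfs -l archive.sqsh                                     # fast listing from the embedded index
        sudo mount -o loop -t squashfs archive.sqsh /mnt/archive       # or mount it and copy out single files

    The trade-off is that a SquashFS image is read-only once built, which fits an archival use case but not one where the archive is appended to later.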


  • Error in Bind9 named.conf file. Bind won't start.

    - by tj111
    I'm trying to set up a DNS server on an Ubuntu Server machine (10.04). I configured an entry in named.conf.local to test it, but when trying to restart bind9 I get the following error:
        * Starting domain name service... bind9    [fail]
    So I checked the output of syslog, and this is what I get:
        May 20 18:11:13 empression-server1 named[4700]: starting BIND 9.7.0-P1 -u bind
        May 20 18:11:13 empression-server1 named[4700]: built with '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc/bind' '--localstatedir=/var' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-gnu-ld' '--with-dlz-postgres=no' '--with-dlz-mysql=no' '--with-dlz-bdb=yes' '--with-dlz-filesystem=yes' '--with-dlz-ldap=yes' '--with-dlz-stub=yes' '--with-geoip=/usr' '--enable-ipv6' 'CFLAGS=-fno-strict-aliasing -DDIG_SIGCHASE -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='
        May 20 18:11:13 empression-server1 named[4700]: adjusted limit on open files from 1024 to 1048576
        May 20 18:11:13 empression-server1 named[4700]: found 4 CPUs, using 4 worker threads
        May 20 18:11:13 empression-server1 named[4700]: using up to 4096 sockets
        May 20 18:11:13 empression-server1 named[4700]: loading configuration from '/etc/bind/named.conf'
        May 20 18:11:13 empression-server1 named[4700]: /etc/bind/named.conf:10: missing ';' before 'include'
        May 20 18:11:13 empression-server1 named[4700]: loading configuration: failure
        May 20 18:11:13 empression-server1 named[4700]: exiting (due to fatal error)
    So it thinks I have an error in the default named.conf file, which is pretty ridiculous. I went through it and deleted a blank line just for the hell of it, but I can't see how it figures there's an error in there. Note that before this I did have an error in named.conf.local, but it showed up properly in syslog and I fixed it, so it is reporting the correct file. Here are the contents of named.conf:
        // This is the primary configuration file for the BIND DNS server named.
        //
        // Please read /usr/share/doc/bind9/README.Debian.gz for information on the
        // structure of BIND configuration files in Debian, *BEFORE* you customize
        // this configuration file.
        //
        // If you are just adding zones, please do that in /etc/bind/named.conf.local
        include "/etc/bind/named.conf.options";
        include "/etc/bind/named.conf.local";
        include "/etc/bind/named.conf.default-zones";
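    A hedged pointer, not a diagnosis: named reports the error at the point where the parser notices it, so "named.conf:10: missing ';' before 'include'" often means the statement parsed just before reaching that line - typically the last block of the file pulled in by the preceding include (named.conf.options here) - is missing its terminating ';' or '};'. The bundled checker usually localises this faster than restarting the service:

        sudo named-checkconf /etc/bind/named.conf      # parses named.conf plus everything it includes
        sudo named-checkconf -p /etc/bind/named.conf   # if it parses, prints the fully merged configuration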


  • IIS 7.5 error 500 in fastcgi module after upgrading wordpress to 3.0.2

    - by Maniac13
    I am running multiple WordPress blogs on the following setup: Server 2008 R2; IIS 7.5; PHP 5.3.3; MySQL 5.0.7. I upgraded my WordPress install from 2.9.2 to 3.0.2 (on 2 different sites) today and the upgrade went fine. I can serve .php pages without errors, log into the admin system, etc. I can browse my blog by going directly to mywebsite.com/index.php, but when I try to go to mywebsite.com (without the index.php) I get the 500 error below. I reset IIS and removed and re-attached the default document, but I am running out of ideas. If anyone has a solution for this, that would be great. This is the 500 error I am getting:
        Error Summary
            HTTP Error 500.0 - Internal Server Error
            The page cannot be displayed because an internal server error has occurred.
        Detailed Error Information
            Module:        FastCgiModule
            Notification:  ExecuteRequestHandler
            Handler:       PHP FastCGI
            Error Code:    0x00000000
            Requested URL: http://mywebsite.com:80/index.php
            Physical Path: D:\mywebsite.com\index.php
            Logon Method:  Anonymous
            Logon User:    Anonymous
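    Not a confirmed fix, just the diagnostics usually suggested for a 500 that only appears via the default document: make detailed errors visible and confirm how index.php is registered as a default document for that site (appcmd lives in %windir%\system32\inetsrv; the site name is a placeholder):

        appcmd list config "mywebsite.com" /section:defaultDocument
        appcmd set config "mywebsite.com" /section:httpErrors /errorMode:Detailed

    With errorMode set to Detailed, the FastCGI module generally surfaces the underlying PHP error text instead of the generic 500.0 page, which narrows down whether the failure is in PHP itself, in the mapping of "/" onto index.php, or elsewhere.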

