Search Results

Search found 3168 results on 127 pages for 'directories'.

Page 8/127 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • Directories will list, but are not recognized by cd

    - by mdimond
    New to the terminal and having problems out of the gate. I'm using Terminal 2.1.2 on a Mac running 10.6.8. "ls Documents" lists the contents, but when I try to change directories, which I have tried several different ways, I get the following results: new-host-2:~ MDimond$ cd. -bash: cd.: command not found new-host-2:~ MDimond$ cd./Users/MDimond/Documents -bash: cd./Users/MDimond/Documents: No such file or directory new-host-2:~ MDimond$ cd. /Documents -bash: cd.: command not found /usr/bin has the cd command listed; /bin does not. Any assistance would be greatly appreciated. Thanks, md
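
    A minimal bash sketch of the usual fix, reusing the paths quoted above: cd is a shell builtin, not a program in /bin, and it must be separated from its argument by a space.

        # cd needs a space before its argument; "cd." is parsed as a single command name
        cd Documents                     # relative to the current directory
        cd /Users/MDimond/Documents      # absolute path
        cd ~/Documents                   # ~ expands to the home directory
        type cd                          # reports: cd is a shell builtin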

    Read the article

  • How to compare differences between directories (linux)

    - by Phil
    I have two directories - one from an earlier backup and a second from the newest backup. How do I compare what changes were made to the files in the newest backup on Linux? Also, how do I display changes in, for example, text and PHP files? I'm thinking of something like the revision history on Wikipedia, where you see the old version on one side of the screen and the newest version on the other, with changes highlighted. How do I achieve something like that? Edit: how do I also compare a remote directory with a local one?
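
    A short shell sketch of the usual approach (the backup paths are placeholders, not taken from the question): diff -qr summarizes which files changed, diff -y gives a side-by-side view similar to a wiki revision history, and an rsync dry run compares a remote directory against a local one without copying anything.

        # Summarize which files differ between the two backups
        diff -qr /backups/old /backups/new

        # Side-by-side view of one text/PHP file: old version on the left, new on the right
        diff -y /backups/old/config.php /backups/new/config.php | less

        # Dry-run comparison of a remote directory against a local copy (nothing is transferred)
        rsync -avcn --delete user@remote:/backups/new/ /backups/old/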

    Read the article

  • Changing Chrome Working Directories on Mac clients

    - by Jason
    We're using Mac OS X 10.6.8 with NFS for home directories. Chrome seems not to like this for a user who is logged in on more than one machine. AFP was OK, but for other reasons we need NFS. Chrome only allows you to start one instance: if userA is logged in to Mac1 and Mac2, Chrome can only be open on Mac1 OR Mac2, not both. I have read in a thread that changing the working directory and using a symlink can fix this, but I can't find the details. Can someone help?
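
    A hedged sketch of the symlink idea mentioned above: keep each machine's Chrome profile (which holds the singleton lock) on local storage instead of the NFS home directory. The profile location is Chrome's usual one on Mac OS X, but the local target path and the overall approach are assumptions, not a verified fix for this setup.

        # Paths are illustrative; the mv/ln only needs to run once, since $HOME is shared
        LOCAL="/Users/Shared/ChromeProfiles/$USER"     # machine-local, world-writable
        mkdir -p "$LOCAL"                              # run this line on every Mac
        mv "$HOME/Library/Application Support/Google/Chrome" "$LOCAL/Chrome"
        ln -s "$LOCAL/Chrome" "$HOME/Library/Application Support/Google/Chrome"

        # Alternatively, skip the symlink and point Chrome at a local profile at launch time
        open -a "Google Chrome" --args --user-data-dir="$LOCAL/Chrome"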

    Read the article

  • Using Ant to add directories to CVS

    - by ANooBee
    How do I add a new directory to my CVS repository using Ant? From all that I've read, it appears that I have to cd to the parent directory and call the cvs command. How do I do that in Ant? I've seen approaches where an exec task is used to cd in Ant; is that the best approach? Example of what I am trying to do: let's say I have a module Test_Module with directories "A", "B" and "C". Under each of these directories, there are directories for "Jan", "June", "Sept", and I want to create an "Alpha" directory under Test_Module/C/Sept. So, I create an "Alpha" directory on my local system and run the cvs add command from the root, and I get the following error: cvs add: in directory .: cvs [add aborted]: there is no version here; do 'cvs checkout' first I get the same error whether I run this using Ant or from the command line. Now, if I cd to the Test_Module/C/Sept directory and run "cvs add Alpha", it creates the directory and everything is fine. So, how do I do the same in Ant? Are there any ant-contrib tasks out there that I could use, or even a built-in Ant task that I am missing? Thanks in advance!

    Read the article

  • Dealing with large directories in a checkout

    - by Eric
    I am trying to come up with a version control process for a web app that I work on. Currently, my major stumbling blocks are two directories that are huge (both over 4GB). Only a few people need to work on things within the huge directories; most people don't even need to see what's in them. Our directory structure looks something like: / --file.aspx --anotherFile.aspx --/coolThings ----coolThing.aspx --/bigFolder ----someHugeMovie.mov ----someHugeSound.mp3 --/anotherBigFolder ----... I'm sure you get the picture. It's hard to justify a checkout that has to pull down 8GB of data that's likely useless to a developer. I know, it's only once, but even once could be really frustrating for someone (and will make it harder for me to convince everyone to use source control). (Plus, clean checkouts will be painfully slow.) These folders do have to be available in the web application. What can I do? I've thought about separate repositories for the big folders. That way, you only download if you need it; but then how do I manage checking these out onto our development server? I've also thought about not trying to version control those folders: just update them directly on the web server... but I am not enamored of this idea. Is there some magic way to simply exclude directories from a checkout that I haven't found? (Pretty sure there is not.) Of course, there's always the option to just give up, bite the bullet, and accept downloading 8 useless GB. What say you? Have you encountered this problem before? How did you solve it?
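
    If the repository happens to be Subversion (an assumption; the post doesn't name the tool), sparse directories give roughly the "checkout without the big folders" asked about above. A sketch, with a placeholder repository URL:

        # Check out only the top level, then deepen the directories people actually work on
        svn checkout --depth immediates http://svn.example.com/repo/trunk webapp
        cd webapp
        svn update --set-depth infinity coolThings
        # The big folders stay excluded on later updates until someone opts back in
        svn update --set-depth exclude bigFolder anotherBigFolder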

    Read the article

  • How do I stop Sophos anti-virus from scanning directories that are under source control

    - by user26453
    From googling, it seems it's well known that SophosAV, as well as other AV programs, has issues with how it interacts with source control utilities like TortoiseHG or TortoiseSVN and can inhibit them. One solution is to exclude directories under source control from on-access scanning, as detailed on Sophos's support site. There is a corollary article that mentions some issues related to this, namely the need to place multiple entries for exclusions based on the possibility of the location being accessed through the short vs. long name (e.g., Progra~1 vs. "Program Files"). One other twist is that I am using a junction to relocate my user directory, C:\Users\Username, to a second hard drive, E:. Since I am not sure how this interacts, I have excluded the source control directory from both locations, as it is nested in both. As a result, I have added the two exclusions to the on-access scanning exclusions (and, to be on the safe side, to the on-demand exclusions as well, although those should only come into play when I select a parent directory of the exclusion to be scanned on demand). You'll notice I have no need to add extra exclusions for those locations based on short vs. long name distinctions. The two exclusions I have, then, for both on-access and on-demand scanning, are: C:\Users\Username\source-control-directory E:\source-control-directory However, this does not seem to work: TortoiseHG still lags terribly in response to any request, as the AV software starts scanning when the directory is accessed via TortoiseHG. I can verify without a doubt that Sophos is causing the problems: if I completely disable on-access scanning, TortoiseHG responds very quickly to all operations. Obviously I cannot leave this disabled, but since the exclusions don't seem to be working, what next?

    Read the article

  • VSFTP Users and Directories

    - by Mathew
    I'm stuck. I've been working all day trying to figure out what I'm doing wrong, and I've hit wall after wall. What I'm trying to do: set up FTP in such a way that certain users have access only to their own directory, but higher-level users have access to all directories. What I've Googled so far: I started with this, but that didn't do what I needed it to. I then used this, but once I created one user, it wouldn't let me create another one. Finally, I decided to follow this, but it wouldn't even let me create one user. I'm using Ubuntu 10. I can log in to FTP as a root user and it takes me to the home directory. If I try to log in using the user I created in the tutorial it says: Status: Connection established, waiting for welcome message... Response: 220 (vsFTPd 2.2.2) Command: USER mathew Response: 331 Please specify the password. Command: PASS **** Response: 530 Login incorrect. Error: Critical error Error: Could not connect to server
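
    A heavily hedged sketch of the usual vsftpd arrangement for this (the option names are real vsftpd settings, but the account names, the exception-list approach, and the suggested cause of the 530 are assumptions): jail ordinary users to their home directories and list the higher-level accounts in a chroot exception file. A 530 immediately after PASS is often a PAM or login-shell problem rather than a wrong password.

        # /etc/vsftpd.conf settings this sketch assumes:
        #   local_enable=YES
        #   write_enable=YES
        #   chroot_local_user=YES            # ordinary users see only their own home
        #   chroot_list_enable=YES           # ...except the accounts listed below
        #   chroot_list_file=/etc/vsftpd.chroot_list

        sudo adduser mathew                                  # ordinary, jailed user
        echo admin | sudo tee -a /etc/vsftpd.chroot_list     # higher-level account, not jailed
        # 530s often mean the user's shell is missing from /etc/shells
        grep -x "$(getent passwd mathew | cut -d: -f7)" /etc/shells
        sudo service vsftpd restart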

    Read the article

  • Wiping Deleted Directory Entries and Defragmenting Directories

    - by Synetech inc.
    Hi, I have seen plenty of apps that wipe free space on a disk (usually by creating a file that is as big as the remaining space) or defragment a file (usually by using the MoveFile API to copy it to a new contiguous area). What I have not seen however is a program that wipes the deleted directory entries. That is, when a file is deleted, its information (name, dates, etc.) remain in the directory, but are simply marked as empty. That leaves all kinds of information in a directory entry, and also wastes space since (at least on FAT drives), the directory may be using several clusters. For example, if a directory once had a lot of files, it will be expanded to use another cluster which could be anywhere on the disk. This means that the directory is fragmented, and may be using more clusters than needed, possibly with 100’s of unused (ie, “deleted file”) entries between active files. Does anyone know of a program that can defragment/consolidate directories (ie, wipe unused entries, and move active entries together)? (I would really rather not have to resort to writing my own yet again.) Thanks a lot. EDIT Sorry, I should have said, Windows and/or DOS, for FAT*/NTFS.

    Read the article

  • IIS 6 ASP.NET default handler-mappings and virtual directories

    - by mlauter
    I'm having a problem with setting a default mapping in IIS 6. I want to secure *.HTML files with ASP.NET forms authentication. The problem seems to have something to do with using virtual directories to hold the html files. Here's how it's setup: sample directory tree c:/inetpub/ (nothing in here) d:/web_files/my_web_apps d:/web_files/my_web_apps/app1/ d:/web_files/my_web_apps/app2/ d:/web_files/my_web_apps/html_files/ app1 and app2 both access the same html_files directory, so html_files is set as a virtual directory in the web apps in IIS... sample web directory tree //app1/html_files/ (points to physical directory: d:/web_files/my_web_apps/html_files/) //app2/html_files/ (points to physical directory: d:/web_files/my_web_apps/html_files/) If I put a file called test.html in the root of //app1/ and then add the default mapping to the asp.net dll and setup my security on the root folder with deny="?", then accessing test.html works exactly as expected. If I'm not authenticated, it takes me to the login.aspx page, and if I am authenticated then it displays test.html. If I put the test.html file in the html_files directory I get a totally different behavior. Now the login.aspx page loads and I stuck some code in to check if I was still authenticated: <p>autheticated: <%=User.Identity.IsAuthenticated%></p> I figured it would say false because why else would it bother to load the login page? Nope, it says true - so it knows i'm authenticated, but it won't give me access to the test.html file. I've spent several hours on this and haven't been able to solve it. I'm going to spend some more time on google to see if I've missed something. Fingers crossed.

    Read the article

  • IIS7 - Virtual Directories' Parent Paths behaving differently than previous versions

    - by MisterZimbu
    I'm doing a migration of a web server running on IIS 5 to IIS 7. I'm noticing that the virtual directories are behaving differently between the two. I have a site located at c:\inetpub\SiteName. This site contains a virtual directory "bob" that points at c:\virtualdirs\bob. There's a script in the bob folder (script.asp) that contains just: <!--#include virtual="../index.asp"--> I'm noticing different behaviors between IIS5 and IIS7 when I attempt to run the script by going to http://SiteName/bob/script.asp: IIS5 references the parent path of the site, and imports c:\inetpub\SiteName\index.asp. IIS7 references the parent folder of the virtual directory, and looks for a c:\virtualdirs\index.asp (that doesn't exist). Doing a Response.Write of a Server.MapPath confirms this. Is there a way to get IIS7 to behave like IIS5 in this regard? Unfortunately, moving index.asp and its logic into the virtualdirs folder isn't an option as the virtual directory will be shared across many sites (with differing index.asps). Thanks.

    Read the article

  • Synchronize two directories on linux pc

    - by Gab
    I need a distributed filesystem (or a synchronization tool) that is capable of keeping a directory synchronized across 4 PCs. My requirements are: offline access (data must be available offline on each PC); preserve execution rights: some files are marked executable on a Linux partition, and this flag should be replicated; efficient sync strategy: some of my files are 20GB and are changed quite often, but only in very small parts (VirtualBox images), so delta transmissions are welcome; efficient handling of space: no history for files, and files shouldn't be copied to temp directories "just in case you break it"; it must propagate deletions of files; modification can happen on any of the 4 PCs and should be propagated when the other PCs are connected. Other specs of my situation are: sync is over a LAN; the total amount of data to be synced is around 180GB, in some ten thousand files; changes are small, but can happen in big files. At the moment I'm interested in a Linux-only solution. Conflicts either don't happen or are solved with "last one wins". I haven't found any good solution. I've been trying: unison: it is the only one working at the moment, but during the hashing phase it hangs my PC for some minutes, disk light steady on. SparkleShare doesn't handle large files nicely; it keeps a history of all your changes that grows indefinitely. They promise it will be fixed in future releases, but at the moment it still doesn't fit my needs. ownCloud (keeps a history of each file I change). Coda? (help! I couldn't set it up correctly!) git-annex assistant turns all your files into symlinks and marks the original file as read-only ("just in case you make a mistake while you modify it"!). Before you edit a file you have to issue a special command, "git-annex unlock", which creates a local copy of the file, and you have to remember to lock it again if you want it synchronized. What to try next?

    Read the article

  • Apache config: Permissions, Directories and Locations

    - by James Murphy
    I'm trying to get my head around Apache configuration to fix a problem I'm having, but after a few hours I've decided to ask here. This is what I've got at the moment: DocumentRoot "/var/www/html" <Directory /> Options None AllowOverride None Deny from all </Directory> <Directory /var/svn> Options FollowSymLinks AllowOverride None Allow from all </Directory> <Directory /opt/hg> Options FollowSymLinks AllowOverride None Allow from all </Directory> <Location /hg> AuthType Digest AuthName "Engage HG" AuthDigestProvider file AuthUserFile /opt/hg/hgweb.users Require valid-user </Location> WSGISocketPrefix /var/run/wsgi WSGIDaemonProcess hg processes=3 threads=15 WSGIProcessGroup hg WSGIScriptAlias /hg "/opt/hg/hgweb.wsgi" <Location /svn> DAV svn SVNPath /var/svn/repos AuthType Basic AuthName "Subversion" AuthUserFile /etc/httpd/conf/users require valid-user </Location> I'm trying to get my head around how it's all laid out and how directories relate to locations, etc. For /hg I get asked for a password, but for /svn I get a 403 Forbidden; the error I get is: [client 10.80.10.169] client denied by server configuration: /var/www/html/svn When I remove the entry it works fine. I can't figure out how to get it linking to the /var/svn directory.

    Read the article

  • disk space keeps filling up on EC2 instance with no apperent files/directories

    - by sasher
    How come the OS shows 6.5G used but I see only 3.6G in files/directories? Running as root on an Amazon Linux AMI (seems like CentOS); lots of free memory available, no swapping going on, no apparent file descriptor issue. The only thing I can think of is a log file that was deleted while applications were still appending to it. Disk space usage is slowly but continuously rising towards full capacity (~1k/min, with very small decreases from time to time). Any explanation? Solution? du --max-depth=1 -h / 1.2G /usr 4.0K /cgroup 22M /lib64 11M /sbin 19M /etc 52K /dev 2.1G /var 4.0K /media 0 /sys 4.0K /selinux du: cannot access `/proc/14024/task/14024/fd/4': No such file or directory du: cannot access `/proc/14024/task/14024/fdinfo/4': No such file or directory du: cannot access `/proc/14024/fd/4': No such file or directory du: cannot access `/proc/14024/fdinfo/4': No such file or directory 0 /proc 18M /home 4.0K /logs 8.1M /bin 16K /lost+found 12M /tmp 4.0K /srv 35M /boot 79M /lib 56K /root 67M /opt 4.0K /local 4.0K /mnt 3.6G / df -h Filesystem Size Used Avail Use% Mounted on /dev/xvda1 7.9G 6.5G 1.4G 84% / tmpfs 3.7G 0 3.7G 0% /dev/shm sysctl fs.file-nr fs.file-nr = 864 0 761182
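
    A short sketch of how to confirm the "deleted log file still being written" theory from the question: lsof can list files whose directory entry is gone but whose blocks are still held by a running process, which is exactly the space du cannot see but df can.

        # Files that are unlinked on disk but still open (and possibly still growing)
        sudo lsof +L1                      # link count < 1: deleted but held open
        sudo lsof / | grep -i deleted      # alternative view of the same thing

        # Restarting the offending process releases the space; truncating the
        # still-open file through /proc also works:  : > /proc/<PID>/fd/<FD>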

    Read the article

  • Configuring nginx to check for files on disk in only a few directories

    - by Evan Carroll
    For a node.js project I'm doing, I have a tree like this. +-- public │   +-- components │   +-- css │   +-- img +-- routes +-- views Essentially, I have the root set to public. I want all requests destined to /components/, /css/, or /img/ to check whether their destinations exist on disk. However, I don't want requests to other directories to even run an IO operation: /foo/asdf /bar /baz/index.html None of those should result in the disk being touched. I have a stanza that does the proxy to node.js: location @proxy { internal; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-NginX-Proxy true; proxy_pass http://localhost:3030; proxy_redirect off; } I just would like to know how to arrange this. My problem would be easily solved if try_files took a single argument, but it always wants a file first. location /components/ { try_files $uri, @proxy } location /css/ { try_files $uri, @proxy } location /img/ { try_files $uri, @proxy } However, there is nothing that I can find that will give me location / { try_files @proxy } How do I get the effect I want?

    Read the article

  • Efficient mirroring of directories using hardlinks

    - by zoqaeski
    I'm backing up my music collection on to a number of NTFS-formatted external hard-drives; however, as I store my main collection in FLAC and have my library on my laptop as MP3s to save space, I want to be able to back up both sets, because mass conversion between formats is time-consuming. The "music" directory can contain any format; the "mp3s" directory contains only MP3s converted from files in the "music" directory. The music collection on the laptop contains only MP3s, but they come from both sources. When I backup my laptop's library to the "mp3s" directory, I want to only copy across MP3 files that don't exist in the "music" directory; those that do should be hard-linked to the "music" directory. All directories have an identical hierarchy, sorted by artist, album, date, discnumber if applicable, etc, and I use a tagging editor to ensure consistency across all these locations. I'm also using a Linux computer, but keeping the music collections on NTFS-formatted partitions so that they are readable by both Linux and Windows. At the moment, I use the following command to perform the backups, but this is time-consuming due to the expensive nature of finding hard links. rsync -avu --progress --relative --ignore-existing --link-dest=../music/ **/*.mp3 /media/ntfspocket/mp3s Is there a way to perform this backup more efficiently, taking advantage of the directory hierarchy?
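
    One possible speed-up, sketched below with the same flags as the command above (untested for this layout): let rsync walk the tree itself via include/exclude filters instead of expanding **/*.mp3 in the shell, so the whole hierarchy is scanned in one pass while --link-dest still turns files already present in ../music/ into hard links.

        # Run from the laptop's music directory; destination matches the original command
        rsync -avu --progress \
              --include='*/' --include='*.mp3' --exclude='*' \
              --prune-empty-dirs \
              --link-dest=../music/ \
              ./ /media/ntfspocket/mp3s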

    Read the article

  • Indirect Postfix bounces create new user directories

    - by hheimbuerger
    I'm running Postfix on my personal server in a data centre. I am not a professional mail hoster and not a Postfix expert, it is just used for a few domains served from that server. IIRC, I mostly followed this howto when setting up Postfix. Mails addressed to one of the domains the server manages are delivered locally (/srv/mail) to be fetched with Dovecot. Mails to other domains require usage of SMTPS. The mailbox configuration is stored in MySQL. The problem I have is that I suddenly found new mailboxes being created on the disk. Let's say I have the domain 'example.com'. Then I would have lots of new directories, e.g. /srv/mail/example.com/abenaackart /srv/mail/example.com/abenaacton etc. There are no entries for these addresses in my database, neither as a mailbox nor as an alias. It's clearly spam from auto-generated names. Most of them start with 'a', a few with 'b' and a couple of random ones with other letters. At first I was afraid of an attack, but all security restrictions seem to work. If I try to send mail to these addresses, I get an "Recipient address rejected: User unknown in virtual mailbox table" during the 'RCPT TO' stage. So I looked into the mails stored in these mailboxes. Turns out that all of them are bounces. It seems like all of them were sent from a randomly generated name to an alias that really exists on my system, but pointed to an invalid destination address on another host. So Postfix accepted it, then tried to redirect it to another mail server, which rejected it. This bounced back to my Postfix server, which now took the bounce and stored it locally -- because it seemed to be originating from one of the addresses it manages. Example: My Postfix server handles the example.com domain. [email protected] is configured to redirect to [email protected]. [email protected] has since been deleted from the Hotmail servers. Spammer sends mail with FROM:[email protected] and TO:[email protected]. My Postfix server accepts the mail and tries to hand it off to hotmail.com. hotmail.com sends a bounce back. My Postfix server accepts the bounce and delivers it to /srv/mail/example.com/bob. The last step is what I don't want. I'm not quite sure what it should do instead, but creating hundreds of new mailboxes on my disk is not what I want... Any ideas how to get rid of this behaviour? I'll happily post parts of my configuration, but I'm not really sure where to start debugging the problem at this point.

    Read the article

  • segfault when cd-ing into certain directories in bash

    - by user84207
    I have noticed this very strange behavior recently. After cd into certain directories, I get a segfault on the terminal. --- SIGSEGV (Segmentation fault) @ 0 (0) --- --- SIGSEGV (Segmentation fault) @ 0 (0) --- +++ killed by SIGSEGV (core dumped) +++ segmentation fault (core dumped) I proceeded to strace a bash session in which I cd into the target directory, and was able to reproduce the problem. I attached the log to this pastebin: I paste below the few lines from the read of "cd stumpwm", which is the directory in question, until the segfault. I included a few of the repetitions of calls to "rt_sigprocmask" and "brk" to give a glimpse of the pattern, which occurs for most of the strace, read(0, cd stumpwm "c", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "d", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, " ", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "s", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "t", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "u", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "m", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "p", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "w", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "m", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "\n", 1) = 1 rt_sigprocmask(SIG_BLOCK, [INT], [], 8) = 0 ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig -icanon -echo ...}) = 0 ioctl(0, SNDCTL_TMR_STOP or TCSETSW, {B38400 opost isig icanon -echo ...}) = 0 ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon -echo ...}) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 rt_sigaction(SIGINT, {0x457d50, [], SA_RESTORER, 0x7ffff76254a0}, {0x49edc0, [], SA_RESTORER, 0x7ffff76254a0}, 8) = 0 rt_sigaction(SIGTERM, {SIG_IGN, [], SA_RESTORER, 0x7ffff76254a0}, {SIG_IGN, [], SA_RESTORER, 0x7ffff76254a0}, 8) = 0 rt_sigaction(SIGQUIT, {SIG_IGN, [], SA_RESTORER, 0x7ffff76254a0}, {SIG_IGN, [], SA_RESTORER, 0x7ffff76254a0}, 8) = 0 rt_sigaction(SIGALRM, {0x457f50, [HUP INT ILL TRAP ABRT BUS FPE USR1 SEGV USR2 PIPE ALRM TERM XCPU XFSZ VTALRM SYS], SA_RESTORER, 0x7ffff76254a0}, {0x49edc0, [], SA_RESTORER, 0x7ffff76254a0}, 8) = 0 rt_sigaction(SIGTSTP, {SIG_IGN, [], SA_RESTORER, 0x7ffff76254a0}, {SIG_IGN, [], SA_RESTORER, 0x7ffff76254a0}, 8) = 0 rt_sigaction(SIGTTOU, {SIG_IGN, [], SA_RESTORER, 0x7ffff76254a0}, {SIG_IGN, [], SA_RESTORER, 0x7ffff76254a0}, 8) = 0 rt_sigaction(SIGTTIN, {SIG_IGN, [], SA_RESTORER, 0x7ffff76254a0}, {SIG_IGN, [], SA_RESTORER, 0x7ffff76254a0}, 8) = 0 rt_sigaction(SIGWINCH, {0x457920, [], SA_RESTORER, 0x7ffff76254a0}, {0x49e6e0, [], SA_RESTORER|SA_RESTART, 0x7ffff76254a0}, 8) = 0 rt_sigaction(SIGINT, {0x457d50, [], SA_RESTORER, 0x7ffff76254a0}, {0x457d50, [], SA_RESTORER, 0x7ffff76254a0}, 8) = 0 brk(0xa9a000) = 0xa9a000 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 
8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 brk(0xa9b000) = 0xa9b000 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 brk(0xa9c000) = 0xa9c000 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 brk(0xa9d000) = 0xa9d000 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 brk(0xa9e000) = 0xa9e000 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 brk(0xa9f000) = 0xa9f000 brk(0xaa0000) = 0xaa0000 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 brk(0xaa1000) = 0xaa1000 brk(0xaa2000) = 0xaa2000 (pattern of rt_sigprocmask, brk continues ...) rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 brk(0x1d5b000) = 0x1d5b000 brk(0x1d5c000) = 0x1d5c000 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 brk(0x1d5d000) = 0x1d5d000 brk(0x1d5e000) = 0x1d5e000 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 --- SIGSEGV (Segmentation fault) @ 0 (0) --- --- SIGSEGV (Segmentation fault) @ 0 (0) --- +++ killed by SIGSEGV (core dumped) +++ segmentation fault (core dumped) How can I debug this? Is this likely to be a bash problem? The error does not occur with another shell, such as eshell. I have also run an fschk, although I haven't been able to see the output because of this bug.

    Read the article

  • How to set a filter for directories with Qt QFileDialog

    - by sneha
    Hello everyone, I would like to know whether there is any way to select only some directories and some files at a time using the QFileDialog class. Here I set a file filter, but I also need to set a folder filter; I want only certain folders to be selectable by name (I need a filter for folders too, because I have a common format for my application folders, for example: name.abc, flight.abc, etc. - these are all my folder names, and I want to enable selection only for folders whose names end in .abc). I am using QStringList files = QFileDialog::getOpenFileNames(this, tr("Files & Directories"), QDir::currentPath(), tr(".doc.txt") ); tr(".doc.txt") is my file filter; in the same way I need a folder filter matching only names ending in .abc

    Read the article

  • Batch scripting for directories or better method

    - by baron
    Hi Everyone, Looking at creating a simple batch file for my app. My app needs some directories to be in place as it runs. The first method I thought was just make a batch script: @ECHO OFF IF NOT EXIST C:\App GOTO :CREATE ELSE GOTO :DONTCREATE :CREATE MKDIR C:\App\Code ECHO DIRECTORY CREATED :DONTCREATE ECHO IT WAS ALREADY THERE 1) This doesn't run as I would expect. Both :CREATE and :DONTCREATE seem to run regardless? How do I do an If properly then? Output: A subdirectory or file C:\App\Code already exists. DIRECTORY CREATED IT WAS ALREADY THERE So it enters both true and false statements? 2) The app is a C# WPF app. For what I am trying to do here (create a couple of directories if they don't already exist) - should I do it some other way? Perhaps in the application as it runs?

    Read the article

  • How to create nested directories in PhoneGap

    - by Stallman
    I have tried on this, but this didn't satisfy my request at all. I write a new one: var file_system; var fs_root; window.requestFileSystem(LocalFileSystem.PERSISTENT, 1024*1024, onInitFs, request_FS_fail); function onInitFs(fs) { file_system= fs; fs_root= file_system.root; alert("ini fs"); create_Directory(); alert("ini fs done."); } var string_array; var main_dir= "story_repository/"+ User_Editime; string_array= new Array("story_repository/",main_dir, main_dir+"/rec", main_dir+"/img","story_repository/"+ User_Name ); function create_Directory(){ var start= 0; var path=""; while(start < string_array.length) { path = string_array[start]; alert(start+" th created directory " +" is "+ path); fs_root.getDirectory ( path , {create: true, exclusive: false}, function(entry) { alert(path +"is created."); }, create_dir_err() ); start++; }//while loop }//create_Directory function create_dir_err() { alert("Recursively create directories error."); } function request_FS_fail() { alert("Failed to request File System "); } Although the directories are created, the it sends me ErrorCallback:"alert("Recursively create directories error.");" Firstly, I don't think this code will work since I have tried on this: This one failed: window.requestFileSystem( LocalFileSystem.PERSISTENT, 0, //request file system success callback. function(fileSys) { fileSys.root.getDirectory( "story_repository/"+ dir_name, {create: true, exclusive: false}, //Create directory story_repository/Stallman_time. function(directory) { alert("Create directory: "+ "story_repository/"+ dir_name); //create dir_name/img/ fileSys.root.getDirectory { "story_repository/"+ dir_name + "/img/", {create: true, exclusive: false}, function(directory) { alert("Create a directory: "+ "story_repository/"+ dir_name + "/img/"); //check. //create dir_name/rec/ fileSys.root.getDirectory { "story_repository/"+ dir_name + "/rec/", {create: true, exclusive: false}, function(directory) { alert("Create a directory: "+ "story_repository/"+ dir_name + "/rec/"); //check. //Go ahead. }, createError } //create dir_name/rec/ }, createError } //create dir_name/img }, createError); }, //Create directory story_repository/Stallman_time. createError()); } I just repeatedly call fs.root.getDirectory only but it failed. But the first one is almost the same... 1. What is the problem at all? Why does the first one always gives me the ErrorCallback? 2. Why can't the second one work? 3. Does anyone has a better solution?(no ErrorcallBack msg) ps: I work on Android and PhoneGap 1.7.0.

    Read the article

  • Excluding directories in Exuberant CTags

    - by DeepYellow
    I'm working with a very large code base and I find it useful to be selective about which directories are included for use with Exuberant Ctags. The --exclude option works well to eliminate individual file and directory names (with globbing wildcards), but I can't figure out how to get it to exclude path patterns containing more than one directory. For example, I may want to exclude a directory tests, but only when processing thirdparty\tests (under Windows). The problem is that if I just use --exclude=tests I exclude too many directories, including a test directory in the code I'm actively working on. Here are some things I've tried: --exclude=thirdparty\tests --exclude=thirdparty\\tests --exclude=*\thirdparty\tests --exclude=*\\thirdparty\\tests --exclude=thirdparty/tests Ctags silently ignores all of these, as evidenced by an examination of the tags file. How can I exclude a directory only when it is preceded by a given parent directory?
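
    One workaround, sketched here on the assumption that a Unix-style shell (Cygwin, MSYS, or the Linux side) is available: build the file list with find, pruning only the one thirdparty/tests path, and hand it to ctags with -L instead of relying on --exclude patterns.

        # Prune only ./thirdparty/tests, keep every other tests directory
        find . -type d -path './thirdparty/tests' -prune -o -type f -print > tagfiles.txt
        ctags -L tagfiles.txt              # read the list of files to tag from tagfiles.txt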

    Read the article

  • Junctions or Virtual Directories for Web Applications?

    - by Kevin
    I see that junctions are a common way of referencing shared code in many projects. However, I have not seen them used in web applications before. Our team is exploring the possibility of abandoning virtual directories in favor of junctions to simplify our build process. My goal is to compile a list of pros and cons in order to make an informed decision regarding this change. Is it more appropriate to use junctions or virtual directories on web application projects? Environment is ASP.NET, IIS6/IIS7, VS.NET.

    Read the article

< Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >