Search Results

Search found 40168 results on 1607 pages for 'enumerate files folders'.


  • Why am I getting 'File in use by Another User' and 'Application Sharing Violation' errors when trying to open and save files?

    - by GollyJer
    We're getting this a lot lately. Environment: Windows Server 2008, Windows 7 and Vista client PCs, Microsoft Office 2007. When a user tries opening a file on our network drive (Word doc, Excel spreadsheet, etc.) the software reports the file is locked by 'another user' even when it's not. They're also seeing random 'Sharing Violation' errors when trying to save files to the network. Possibly the same manifestation of the problem: when a user tries saving a local, non-network file on their own drive, they get 'Cannot save due to a Sharing Violation'.

    Read the article

  • Amazon S3 Tips: Quickly Add/Modify HTTP Headers To All Files Recursively

    - by Gopinath
    Amazon S3 is a dirt-cheap cloud storage service that offers unlimited storage on a pay-as-you-use model. Recently we moved all the images and other static files (scripts & CSS) of Tech Dreams to Amazon S3 to reduce load on our VPS server. Amazon S3 is cheap, but the monthly bill will shoot up if the images/static files of the blog are not cached properly (more details). By adding the caching HTTP headers Cache-Control or Expires to all the files hosted on Amazon S3, we reduced the monthly bill and also the load time of blog pages. Let's see how to add custom headers to files stored on the Amazon S3 service. Updating HTTP headers of one file at a time: the web interface of the Amazon S3 Management Console allows adding custom HTTP headers to one file at a time through the "Properties" window (to access properties, right-click on a file and select the Properties menu). So if you have to add headers to hundreds of files, this is not the way to go! Updating HTTP headers of multiple files in a folder recursively: for this we can use CloudBerry Explorer (freeware) or Bucket Explorer (trialware). CloudBerry is my favourite, as it's freeware and it has an excellent interface for accessing Amazon S3 from the desktop. Adding HTTP headers with the CloudBerry application is straightforward: right-click on the required folders and choose the option "Set HTTP Headers". Download CloudBerry Explorer.
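
    If a scriptable route is preferred over a GUI tool, the same bulk update can be done with the AWS SDK for .NET. Below is a minimal sketch, with an assumed bucket name and max-age value; the exact SDK property names should be checked against the SDK version in use. Since S3 has no "edit headers" operation, the standard trick is a copy-in-place with replaced metadata:

      using Amazon.S3;
      using Amazon.S3.Model;

      class SetCacheHeaders
      {
          static void Main()
          {
              var client = new AmazonS3Client();   // credentials from the usual SDK config
              var bucket = "my-bucket";            // placeholder bucket name

              // NOTE: truncated listings (more than 1000 keys) are not paged in this sketch.
              var listing = client.ListObjects(new ListObjectsRequest { BucketName = bucket });

              foreach (var obj in listing.S3Objects)
              {
                  // Copy each object onto itself, replacing its metadata so the
                  // Cache-Control header is attached.
                  client.CopyObject(new CopyObjectRequest
                  {
                      SourceBucket = bucket,
                      SourceKey = obj.Key,
                      DestinationBucket = bucket,
                      DestinationKey = obj.Key,
                      MetadataDirective = S3MetadataDirective.REPLACE,
                      Headers = { CacheControl = "max-age=31536000, public" }
                  });
                  // Caution: REPLACE discards existing metadata such as Content-Type,
                  // so a real script would read it first and re-apply it here.
              }
          }
      }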

    Read the article

  • SFTP permission denied on files owned by www-data

    - by Charles Roper
    I have a pretty standard server set up running Apache and PHP. An app I am running creates files and these are owned by the Apache user www-data. Files that I upload via SFTP are owned by my own user charlesr. All files are part of the www-data group. My problem is that I cannot modify or overwrite via SFTP any of the files owned by www-data, even though charlesr is part of the www-data group. I can modify the files no problem via an SSH session. So I'm not sure what to do. How do I give my SFTP session permission to modify www-data-owned files? For a bit of background, these are the notes I wrote for myself when setting up the server:

    Now set up permissions on `/var/www` where your files are served from by default:

      $ sudo adduser $USER www-data
      $ sudo chgrp -R www-data /var/www
      $ sudo chmod -R g+rw /var/www
      $ sudo chmod -R g+s /var/www

    Now log out and log in again to make the changes take hold. The previous set of commands does the following:
    1. adds the current user ($USER) to the `www-data` group;
    2. changes `/var/www` to belong to the `www-data` group;
    3. adds read/write permissions to the group that `/var/www` belongs to;
    4. sets the SGID bit on `/var/www`; this final point bears some explaining.
    And then I go on to explain to myself what setting the SGID bit means (i.e. all files created in /var/www become part of the www-data group automatically). Btw, nothing feels sweeter than going back and reading your own detailed notes on the what, how and why of your own server set up when trying to troubleshoot like this - I recommend it highly to all beginners like myself :-)

    Read the article

  • Cannot play Windows WMA lossless files on Rhythmbox

    - by sr71
    I have installed Ubuntu 10.10 with all its updates (without Windows) on its own drive and everything is working fine. I want to play WMA audio files and also mp3 files. The mp3 files play fine; the WMA files do not. I used Rhythmbox Music Player with and without ubuntu-restricted-extras, and it still does not play the lossless Windows audio files. I am frustrated with searching for how to play a WMA file ("download this converter", but you cannot use it until you "delete this"). I have done everything, but it still does not play the Windows lossless files that I made from all my CDs. I am looking for a music player that I can use to play mp3 and WMA lossless music files, and that automatically puts the album cover on and updates the info if one exists. Installation should be as simple as possible. Right now I am back to the original virgin Ubuntu 10.10 with all the recent updates. This computer will do nothing but play music (mp3 and WMA) through a stereo system. I also use the Internet to update album info for the music. I do not care what bells and whistles the music player program has, as long as it is an easy install and just plays my mp3 and WMA lossless music files. Any help would be appreciated.

    Read the article

  • How to distinguish doc, ppt and xls files without looking at the file extension

    - by Shelby. S
    So I was wondering how you would differentiate ppt, xls and doc files from each other in Linux, regardless of extension. I tried 'file', but from the looks of it, all MS Office files are categorized under the same file type. Similarly, I'm having trouble with docx, xlsx and pptx files, since they're essentially all zip files containing a bunch of XML. I also tried a Python script importing the magic module, but no go. I'm trying to identify the actual file for a sandbox analysis, and for this specific purpose I need to find the actual file type in order to run it in the sandbox VM (the Windows VM runs everything by extension). Let's say my sample file is labeled as try.exe, but in reality it's just a doc file. My script will rename it as try.exe.doc, which would work fine for doc files. But since Linux identifies all MS Office files as simple DOC files, there's no way to identify ppt or xls files, and as a result the sandbox won't analyze the sample correctly.
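
    One way to sniff these without trusting the extension: legacy doc/xls/ppt files are OLE2 compound documents (magic bytes D0 CF 11 E0) that can be told apart by their internal stream names ("WordDocument", "Workbook", "PowerPoint Document"), while the 2007+ formats are ZIP packages whose top-level folders (word/, xl/, ppt/) identify the type. A rough C# sketch, assuming .NET 4.5's System.IO.Compression for the ZIP half; narrowing the OLE2 case down needs a compound-file reader, so only the magic check is shown there:

      using System.IO;
      using System.IO.Compression;   // reference System.IO.Compression.FileSystem
      using System.Linq;

      static class OfficeSniffer
      {
          public static string GuessOfficeType(string path)
          {
              var magic = new byte[4];
              using (var fs = File.OpenRead(path))
                  fs.Read(magic, 0, 4);

              // D0 CF 11 E0: OLE2 compound document, i.e. legacy doc/xls/ppt.
              // Telling those apart needs a compound-file reader to look for the
              // "WordDocument", "Workbook" or "PowerPoint Document" stream.
              if (magic.SequenceEqual(new byte[] { 0xD0, 0xCF, 0x11, 0xE0 }))
                  return "legacy-office";

              // "PK": a ZIP archive, possibly an OOXML package.
              if (magic[0] == 0x50 && magic[1] == 0x4B)
                  using (var zip = ZipFile.OpenRead(path))
                  {
                      if (zip.Entries.Any(e => e.FullName.StartsWith("word/"))) return "docx";
                      if (zip.Entries.Any(e => e.FullName.StartsWith("xl/")))   return "xlsx";
                      if (zip.Entries.Any(e => e.FullName.StartsWith("ppt/")))  return "pptx";
                  }

              return "unknown";
          }
      }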

    Read the article

  • Hosting files with support for file tagging / keywords

    - by Zev Chonoles
    I have a large (approx. 25GB) collection of files I would like to host online for people to view or download. I have a spare computer I can use as a dedicated server for these files. I'm looking for a method of, or piece of software for, hosting my files where I can assign tags or keywords to the files, and people viewing my files online can search the collection via the tags. By way of approximate solutions I've found so far, I see that there is software such as Collectorz.com or Readerware for creating databases of one's books / music / movies, and these databases can be searched by tags or keywords, and the databases can be made available and searchable online; this would suit my purposes except that my files are not necessarily books, music, or movies, and I want the files themselves accessible online, not a database describing my files. A commercially-available solution like the ones above would be acceptable, but I'd prefer to have the whole setup under my control (i.e. I'd like to either implement it by hand, or use commercial software that doesn't rely on using the company's servers, paying them a continued fee, etc.). The current extent of my internet experience is designing a few Google Sites, so I know there's a fair chance I won't understand the answers I receive, but I'm always happy to have a summer project :)

    Read the article

  • IPS Facets and Info files

    - by mkupfer
    One of the unusual things about IPS is its "facet" feature. For example, if you're a developer using the foo library, you don't install a libfoo-dev package to get the header files. Instead, you install the libfoo package, and your facet.devel setting controls whether you get header files. I was reminded of this recently when I tried to look at some documentation for Emacs Org mode. I was surprised when Emacs's Info browser said it couldn't find the top-level Info directory. I poked around in /usr/share but couldn't find any info files.

      $ ls -l /usr/share/info
      ls: cannot access /usr/share/info: No such file or directory

    Was I missing a package?

      $ pkg list -a | egrep "info|emacs"
      editor/gnu-emacs                    23.1-0.175.0.0.0.2.537    i--
      editor/gnu-emacs/gnu-emacs-gtk      23.1-0.175.0.0.0.2.537    i--
      editor/gnu-emacs/gnu-emacs-lisp     23.1-0.175.0.0.0.2.537    ---
      editor/gnu-emacs/gnu-emacs-no-x11   23.1-0.175.0.0.0.2.537    ---
      editor/gnu-emacs/gnu-emacs-x11      23.1-0.175.0.0.0.2.537    i--
      system/data/terminfo                0.5.11-0.175.0.0.0.2.1    i--
      system/data/terminfo/terminfo-core  0.5.11-0.175.0.0.0.2.1    i--
      text/texinfo                        4.7-0.175.0.0.0.2.537     i--
      x11/diagnostic/x11-info-clients     7.6-0.175.0.0.0.0.1215    i--
      $

    Hmm. I didn't have the gnu-emacs-lisp package. That seemed an unlikely place to stick the Info files, and pkg(1) confirmed that the info files were not there:

      $ pkg contents -r gnu-emacs-lisp | grep info
      usr/share/emacs/23.1/lisp/info-look.el.gz
      usr/share/emacs/23.1/lisp/info-xref.el.gz
      usr/share/emacs/23.1/lisp/info.el.gz
      usr/share/emacs/23.1/lisp/informat.el.gz
      usr/share/emacs/23.1/lisp/org/org-info.el.gz
      usr/share/emacs/23.1/lisp/org/org-jsinfo.el.gz
      usr/share/emacs/23.1/lisp/pcvs-info.el.gz
      usr/share/emacs/23.1/lisp/textmodes/makeinfo.el.gz
      usr/share/emacs/23.1/lisp/textmodes/texinfo.el.gz
      $

    Well, if I have what look like the right packages but don't have the right files, the next thing to check is the facets. The first check is whether there is a facet associated with the Info files:

      $ pkg contents -m gnu-emacs | grep usr/share/info
      dir facet.doc.info=true group=bin mode=0755 owner=root path=usr/share/info
      file [...] chash=[...] facet.doc.info=true group=bin mode=0444 owner=root path=usr/share/info/mh-e-1 [...]
      file [...] chash=[...] facet.doc.info=true group=bin mode=0444 owner=root path=usr/share/info/mh-e-2 [...]
      [...]

    Yes, they're associated with facet.doc.info. Now let's look at the facet settings on my desktop:

      $ pkg facet
      FACETS            VALUE
      facet.locale.en*  True
      facet.locale*     False
      facet.doc.man     True
      facet.doc*        False
      $

    Oops. I've got man pages and various English documentation files, but not the Info files. Let's fix that:

      # pkg change-facet facet.doc.info=True
                  Packages to update: 970
           Variants/Facets to change: 1
             Create boot environment: No
      Create backup boot environment: Yes
                  Services to change: 1

      DOWNLOAD          PKGS     FILES    XFER (MB)
      Completed      970/970   181/181      9.2/9.2

      PHASE                        ACTIONS
      Install Phase                226/226

      PHASE                          ITEMS
      Image State Update Phase         2/2

      PHASE                          ITEMS
      Reading Existing Index           8/8
      Indexing Packages            970/970
      #

    Now we have the info files:

      $ ls -F /usr/share/info
      a2ps.info    dir@     flex.info    groff-2   regex.info
      aalib.info   dired-x  flex.info-1  groff-3   remember
      ...

    Read the article

  • C# XmlSchemaSet - Resolving included schemas to enumerate attribute groups

    - by satixx
    Hi, I currently have a compiled XmlSchemaSet from which I get the possible elements/attributes for each specific "parent" element defined in the schema. I have a single "master" xsd schema which includes another schema and uses attributeGroup references for some "common" elements. Here is a sample:

      (MasterSchema.xsd)
      <?xml version="1.0" encoding="utf-8"?>
      <xs:schema id="MasterSchema" xmlns:xs="http://www.w3.org/2001/XMLSchema"
                 targetNamespace="CommonNamespace.xsd" xmlns="CommonNamespace.xsd"
                 xmlns:mstns="CommonNamespace.xsd" elementFormDefault="qualified">
        <xs:include schemaLocation="SourceAttributeGroups.xsd"/>
        <xs:element name="Binding" minOccurs="0" maxOccurs="2">
          <xs:complexType>
            <xs:attribute name="Name" type="xs:string"/>
            <xs:attributeGroup ref="BindingSourceAttributeGroup"/>
          </xs:complexType>
        </xs:element>
      </xs:schema>

      (SourceAttributeGroups.xsd)
      <?xml version="1.0" encoding="utf-8"?>
      <xs:schema id="SourceAttributeGroups" xmlns:xs="http://www.w3.org/2001/XMLSchema"
                 targetNamespace="CommonNamespace.xsd" xmlns="CommonNamespace.xsd"
                 xmlns:mstns="CommonNamespace.xsd" elementFormDefault="qualified">
        <xs:attributeGroup id="BindingSourceAttributeGroup" name="BindingSourceAttributeGroup">
          <xs:attribute name="Source">
            <xs:simpleType>
              <xs:restriction base="xs:string">
                <xs:enumeration value="Data"/>
              </xs:restriction>
            </xs:simpleType>
          </xs:attribute>
          <!-- When Source is None -->
          <xs:attribute name="Value" type="xs:string"/>
          <!-- Label -->
          <xs:attribute name="Label" type="xs:string"/>
        </xs:attributeGroup>
      </xs:schema>

    I would like to create an XmlSchemaSet in C# which would resolve, compile and "merge" every reference of the MasterSchema so it would finally look like this:

      (MasterSchema.xsd)
      <?xml version="1.0" encoding="utf-8"?>
      <xs:schema id="MasterSchema" xmlns:xs="http://www.w3.org/2001/XMLSchema"
                 targetNamespace="CommonNamespace.xsd" xmlns="CommonNamespace.xsd"
                 xmlns:mstns="CommonNamespace.xsd" elementFormDefault="qualified">
        <xs:include schemaLocation="SourceAttributeGroups.xsd"/>
        <xs:element name="Binding" minOccurs="0" maxOccurs="2">
          <xs:complexType>
            <xs:attribute name="Name" type="xs:string"/>
            <xs:attribute name="Source">
              <xs:simpleType>
                <xs:restriction base="xs:string">
                  <xs:enumeration value="Data"/>
                </xs:restriction>
              </xs:simpleType>
            </xs:attribute>
            <!-- When Source is None -->
            <xs:attribute name="Value" type="xs:string"/>
            <!-- Label -->
            <xs:attribute name="Label" type="xs:string"/>
          </xs:complexType>
        </xs:element>
      </xs:schema>

    This way, I could traverse each XmlSchemaParticle of the compiled schema to get every single attribute for a specific element, even when its attributes are defined in an external schema. At the moment, when I get the possible attributes for a "Binding" element, I only get the "Name" attribute, since it is originally defined in the "master" schema. What would be the possible solutions to this problem? Thanks! Satixx
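
    A minimal C# sketch of one likely solution, assuming the schemas compile as posted (strictly, the minOccurs/maxOccurs on the global Binding element belongs on an element reference, not a global declaration): rather than merging the files textually, XmlSchemaSet's post-schema-compilation infoset already does the merge. After Compile(), each XmlSchemaComplexType exposes an AttributeUses table that contains every attribute, including those pulled in through attributeGroup references from included schemas.

      using System;
      using System.Xml;
      using System.Xml.Schema;

      class SchemaDump
      {
          static void Main()
          {
              var set = new XmlSchemaSet { XmlResolver = new XmlUrlResolver() };
              set.Add("CommonNamespace.xsd", "MasterSchema.xsd"); // the resolver follows the xs:include
              set.Compile();

              foreach (XmlSchemaElement element in set.GlobalElements.Values)
              {
                  var complexType = element.ElementSchemaType as XmlSchemaComplexType;
                  if (complexType == null) continue;

                  Console.WriteLine(element.Name);
                  // AttributeUses is a post-compilation property: for Binding it
                  // should list Name, Source, Value and Label, not just Name.
                  foreach (XmlSchemaAttribute attribute in complexType.AttributeUses.Values)
                      Console.WriteLine("  @" + attribute.Name);
              }
          }
      }

    Reading AttributeUses (instead of walking XmlSchemaComplexType.Attributes, which only holds the locally declared attributes) is what makes the difference here.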

    Read the article

  • Clean conflicting class files from Temporary ASP.NET Files

    - by Deepfreezed
    Class file conflicts in C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\ are preventing me from building the solution. Even though I try emptying out the folder, each time Visual Studio starts the build process it brings the class file into the temp folder under the same folder name. If I restart the machine or leave it overnight, the project builds without error. Is there any way to tell Visual Studio to delete/ignore/clean any lingering class files that could be in the temp folder? The Clean Solution option in VS doesn't work either. The class files in conflict are from the App_Code folder.

    Read the article

  • Have PowerShell zip the contents of a bunch of folders, individual zip for each folder

    - by WebDevHobo
    Recently, I asked how to do this with a .bat file and an answer was provided:

      for /D %%d in (*.*) do "C:\Program Files\7-Zip\7z\7za.exe" a -tzip %%d.zip %%d

    However, this proved useful only for folders that have no spaces in their name. The reason being that batch will do the following: if the folder name is "jef's vacation pics", the variables will be:

      %%d = jef's
      %%e = vacation
      %%f = pics

    And then it tries to pass only %%d to the 7-Zip program, which will not find such a folder and therefore will not create a zip file. I've tried looking up some tutorials and documentation sites, but I haven't been able to come up with an answer. There may be an answer, but I want to take this opportunity to try my hand at PowerShell. I was thinking that a function with one argument, that being the parent folder of the sub-folders that need to be zipped, would be the best approach. So here's what I have, which doesn't work, probably due to my general inexperience with PowerShell:

      function zipFolders($parent) {
          $zip = "C:\Program Files\7-Zip\7z\7za.exe";
          $parents | ForEach-Object $zip a -tzip
      }

    Read the article

  • I get a 502 bad gateway ONLY with a specific combination of domain/root folders - NGINX

    - by Patrick De Amorim
    I have a VPS running NGINX and virtual hosts, with a configuration such as this:

    Domains directing to it:
      lolpics.no
      smscloud.no
      idmag.no

    Root folders:
      /home/vds/www/lolpics
      /home/vds/www/smscloud
      /home/vds/www/idmag

    SMSCloud.no is the site that keeps getting 502 errors, but if I make the domain direct to any of the other folders, the site works, and if I make any other domain name direct to the /home/vds/www/smscloud folder, it works. Only smscloud.no with /home/vds/www/smscloud breaks. I tried putting this between the http{} in my nginx.conf, and it didn't help:

      proxy_buffer_size 128k;
      proxy_buffers 4 256k;
      proxy_busy_buffers_size 256k;

    EDIT: Well, that was slightly silly. If anyone from Google stumbles on this, here's how I fixed it: I just added this to the http{}:

      fastcgi_buffer_size 16k;
      fastcgi_buffers 16 16k;

    So that the start of my http block is:

      http {
          include /etc/nginx/mime.types;
          proxy_buffer_size 128k;
          proxy_buffers 4 256k;
          proxy_busy_buffers_size 256k;
          fastcgi_buffer_size 16k;
          fastcgi_buffers 16 16k;

    Read the article

  • Why are folders disappearing in Windows XP?

    - by XenoFoxx
    I am researching a problem for a friend, and unfortunately do not have direct access to his computer. I've tried to gather as much information as possible and I have researched it on various websites. I've not found anyone having the same problem my friend is having. So here goes: he has a media server in his home running Microsoft Windows XP. It has 3 drives, 1 for the OS and 2 for mass storage. Not long ago he went to access one of the mass storage media drives and it was empty, except for a single folder. His first assumption was that his roommate had deleted everything on the drive (excluding the remaining folder). He then checked the properties of the drive and it was still saying that the hard drive was nearly full. I told him to check the recycling bin, thinking that whoever deleted them didn't clear them from recycling and that they were still taking up space on the drive. My friend said the recycling bin was empty. So we have a drive that the Windows file management system says is empty (again, except for the remaining folder), but the properties of the drive say it's mostly full. Now it gets weirder: my friend tried to create a new folder on this drive and it auto-named itself "New Folder(1)", which means that it recognizes there is already a "New Folder" in that directory. He tried to rename it to a name that he KNEW was there previously, and Windows wouldn't allow it because it was a duplicate folder name. So now it seems the folders are there, but not displaying in Windows Explorer. Both of us have no idea why this is occurring, why the folders vanished, why the one remaining folder didn't vanish, or how to make them visible again. Anyone else ever experience this? I can get more details if needed.

    Read the article

  • Outlook error-'Can't open e-mail folders.You must connect to Exchange w/ current profile before you can sync folders w/ your Outlook data file'

    - by Emilio
    Note that the error message I put in the title of this question is abbreviated. The actual error message is below. I have an Exchange account and am using Outlook 2010 as the client. I run in Cached Exchange Mode and have an .OST file locally. Recently I uninstalled and reinstalled Office. I set up a new mail profile when prompted by Outlook 2010 upon first execution of the program. In my initial attempt, I pointed the data file at my existing OST file. In my second attempt, I had Outlook create a fresh empty file. In both cases I'm getting the error 'Cannot open your default e-mail folders. You must connect to Microsoft Exchange with the current profile before you can synchronize your folders with your Outlook data file (.ost).', also shown in this screenshot -- http://drop.io/4rc9v9o/asset/outlook-error-png. I don't know how to connect with the current profile - isn't that what I did when I created a new .OST file? I've had this problem for several days, so my OST file is now out of date. Once I get things running, I obviously want my active mailbox to update the OST, not the other way around.

    Read the article

  • Enumerate Class Properties in C#

    - by user275561
    I have a class similar to this:

      public class Model
      {
          public TimeSpan Time1 { get; set; }
          public TimeSpan Time2 { get; set; }
          public TimeSpan Time3 { get; set; }
          public TimeSpan Time4 { get; set; }
      }

    Now let's imagine I have to populate the times during runtime and then figure out the time remaining between Time1 and Time2; then, when that passes, find the time remaining between Time2 and Time3, and so on. However, I need to take into account what the time is right now. For example:

      Now it is 1:00 PM
      Time1 = 5:00 AM
      Time2 = 12:00 PM
      Time3 = 4:00 PM
      Time4 = 6:00 PM

    So since the time is 1:00 PM, I need to find the difference between Time2 and Time3. Is there a smarter way than reflection to determine this? Should I add something to my class?
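
    For what it's worth, here is how the reflection route could look against the Model class above, as a minimal sketch assuming the four values are times of day populated in ascending order. (A List<TimeSpan> on the class would avoid the reflection entirely; the ordering and search below would work on such a list unchanged.)

      using System;
      using System.Linq;

      class Program
      {
          static void Main()
          {
              var model = new Model
              {
                  Time1 = new TimeSpan(5, 0, 0),    // 5:00 AM
                  Time2 = new TimeSpan(12, 0, 0),   // 12:00 PM
                  Time3 = new TimeSpan(16, 0, 0),   // 4:00 PM
                  Time4 = new TimeSpan(18, 0, 0)    // 6:00 PM
              };

              // Enumerate every TimeSpan property via reflection, sorted ascending.
              var times = typeof(Model).GetProperties()
                  .Where(p => p.PropertyType == typeof(TimeSpan))
                  .Select(p => (TimeSpan)p.GetValue(model, null))
                  .OrderBy(t => t)
                  .ToList();

              TimeSpan now = DateTime.Now.TimeOfDay;        // say, 13:00
              TimeSpan next = times.First(t => t > now);    // 16:00, i.e. Time3
              Console.WriteLine("Time remaining: " + (next - now));
              // Note: First() throws once 'now' is past Time4; FirstOrDefault plus
              // a wrap-around rule would be needed for times after 6:00 PM.
          }
      }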

    Read the article

  • Making one of the folders default in Apache

    - by OmerO
    Hello, the file & directory structure of my website is as follows:

      /Library/WebServer/mysite/joomla ..
      /Library/WebServer/mysite/wiki ..
      /Library/WebServer/mysite/forum ..
      /Library/WebServer/mysite/index.php

    As you see, there are various applications, each residing in a separate folder. Now, in order to define this structure, I have made this entry in the Apache http-vhosts.config file:

      ServerName mysite.com
      DocumentRoot "/Library/WebServer/mysite"

    And I already have the DirectoryIndex defined: DirectoryIndex index.html index.php, and so on. So far so good, but I want this specific functionality: when someone visits mysite, he/she should automatically be directed to /Library/WebServer/mysite/joomla (and therefore /Library/WebServer/mysite/joomla/index.php). I don't want to achieve that functionality by putting redirection code inside /Library/WebServer/mysite/index.php or /Library/WebServer/mysite/index.htm, because that causes time delays (because of the redirection, of course). But in this case, the only proper way of achieving it seems to be to set DocumentRoot this way:

      DocumentRoot "/Library/WebServer/mysite/joomla"

    But when I set it that way, the other folders (/wiki, /forum, etc.) are simply not served by Apache. To work around it, I put directives like:

      Alias /wiki /Library/WebServer/mysite/wiki ..
      Alias /forum /library/WebServer/mysite/forum

    and it did actually work the way I wanted. But... I still cannot use it that way, because in this case I just couldn't manage to make the wiki use Short URLs (as described in link text). So, I have to set the DocumentRoot back to /Library/WebServer/mysite and should be able to assign /Library/WebServer/mysite/joomla as the "default directory" (my own terminology :) Can I do it in Apache? Is there any other way you might suggest? Thanks.

    Read the article

  • Trying to limit IMAP folders/mailboxes my iPhone/iPad sees

    - by QuantumMechanic
    (Note: I am using dovecot 1.0.10 on Ubuntu 8.04.4 LTS. Yes, I know I need to upgrade before next year :) (Note: The SMTP/IMAP server in question only serves my family, so there are only a very few users. Certainly what I propose below, even if it works, would be a logistical nightmare with any significant number of users.) I have noticed (and have confirmed via Google) that the iOS mail app is terrible in its handling of IMAP subscriptions, namespaces, etc. For example, my iPhone and iPad will see EVERYTHING (all mailboxes, folders, etc.), whereas clients like Thunderbird, alpine, etc. only see what I tell them to see. This makes it an incredible pain to move mail between mailboxes because I have to scroll through a gazillion things. The mail_location in dovecot.conf is:

      mail_location = mbox:%h/Mail/:INBOX=/var/mail/%u

    To get around this, I've been considering doing the following for user foo:

    1. Create a dovecot userdb with a foo-ios virtual user in it, whose UID is identical to that of the real (in /etc/passwd) foo user and with a homedir of /home/foo-ios.
    2. ln -s /var/mail/foo /var/mail/foo-ios
    3. mkdir -p /home/foo-ios/Mail
    4. cd /home/foo-ios/Mail
    5. ln -s /home/foo/Mail/mailbox-i-want-visible mailbox-i-want-visible
    6. Make symlinks for the rest of the limited set of mailboxes/folders I want visible to the iOS mail app.
    7. chown -R foo:foo /home/foo-ios
    8. Change iOS mail app settings to log in as user foo-ios instead of user foo.

    Will this work, or will there be some index/file corruption hell because there will be two sets of indexes (one set living in /home/foo/Mail/.imap and the other set living in /home/foo-ios/Mail/.imap) indexing the same underlying mbox files? And I'd be more than happy to hear of a better way to do this with dovecot! (Or to hear that dovecot 2.x works better with iOS devices.)

    Read the article

  • Is it possible to exclude some files from check-in (TFS)?

    - by Thomas Wanner
    We use configuration files within various projects under source control (TFS), where each developer has to make some adjustments in his local copy to configure his environment. The build process takes care of replacing the config files with the server configuration as part of the deployment, so it doesn't actually matter what is in the repository. However, we would anyway like to keep some kind of default, non-breaking version of the config files in the repository, so that e.g. people not involved in the particular project won't run into trouble because of local misconfiguration. We tried to resolve this by introducing a check-in policy that simply forbids checking in the config files. This works fine, but because we're too lazy to always uncheck those checkboxes in the pending changes window, the question is: is it possible to transparently disable the check-in of particular files without keeping them out of source control (e.g. locking their current version)?

    Read the article

  • Objective C Enumerate Sentences in a paragraph

    - by Faz Ya
    I would like to write an enumerator that would go through a paragraph of text and give me one sentence at a time. I tried using enumerateSubstringsInRange: with the NSStringEnumerationBySentences option, but that simply looks at the periods and fails. For example, let's say I have the following text block:

      "Senator John A. Boehner decided not to move forward. He also decided not to call the congress. The news reporter said though...."

    I would like my function to break down the above paragraph into the following sentences:

      Senator John A. Boehner decided not to move forward
      He also decided not to call the congress
      (no third sentence, because it's only half a sentence)

    The string enumerator with the sentence option just looks at the periods and breaks it down this way, which is wrong:

      Senator John A. Boehner decided not to move forward
      He also decided not to call the congress
      The news reporter said though....

    Is there any library or function that I can call that does a better job at this? Thanks.

      - (NSMutableString *)getOnlyFullSentencesFromTextBlock:(NSMutableString *)textBlock {
          [textBlock enumerateSubstringsInRange:NSMakeRange(0, [textBlock length])
                                        options:NSStringEnumerationBySentences | NSStringEnumerationLocalized
                                     usingBlock:^(NSString *substring, NSRange substringRange, NSRange enclosingRange, BOOL *stop) {
              NSLog(@"Sentence Frag:%@", substring);
          }];
          return textBlock;
      }

    Read the article

  • Formatting a query to enumerate through 2 different datatables

    - by boiler1974
    I have 2 DataTables, sendTable and recvTable. They both have identical column names and numbers of columns:

      "NODE" "DSP Name" "BUS" "IDENT" "STATION" "REF1" "REF2" "REF3" "REF4" "REF5" "REF6" "REF7" "REF8"

    I need to compare these 2 tables and separate out the mismatches. I only need to check columns 3-11 and ignore columns 1 and 2. I tried at first removing the 2 columns and then looping through row by row to return matches and mismatches, but the problem with this approach is that I no longer have the "NODE" and "DSP Name" associated with the row when I finalize my results. So I need help with a query. Here is my attempt:

      var samerecordQuery = from r1 in sendTable.AsEnumerable()
                            where r1.Field<int>("BUS").Equals(from r2 in recvTable.AsEnumerable()
                                                              where r2.Field<int>("BUS"))

    This obviously doesn't work, so how do I format the query to say "from r1, cols[3-11] equals r2, cols[3-11]"? And once I have this, I can use Except to pull out the mismatches.
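
    One way to say "columns 3-11 are equal" without throwing away NODE and "DSP Name" is to build a composite key from column 3 onward and compare the keys while keeping whole rows. A minimal sketch, assuming the "|" separator never occurs in the data; it needs a reference to System.Data.DataSetExtensions for AsEnumerable():

      using System;
      using System.Collections.Generic;
      using System.Data;
      using System.Linq;

      class TableDiff
      {
          static List<DataRow> Mismatches(DataTable sendTable, DataTable recvTable)
          {
              // Composite key over columns 3-11: skip "NODE" and "DSP Name"
              // at positions 0 and 1.
              Func<DataRow, string> rowKey = row =>
                  string.Join("|", row.ItemArray.Skip(2).Select(v => Convert.ToString(v)));

              var sendKeys = new HashSet<string>(sendTable.AsEnumerable().Select(rowKey));

              // Rows whose columns 3-11 match nothing in sendTable; each DataRow
              // still carries its NODE and DSP Name values for the final report.
              return recvTable.AsEnumerable()
                              .Where(r => !sendKeys.Contains(rowKey(r)))
                              .ToList();
          }
      }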

    Read the article

  • How to convert Xml files to Text Files 2

    - by John
    Hi all, I have around 8000 XML files that need to be converted into text files. Each text file must contain the title, description and keywords of the XML file, without the tags and with the other elements and attributes removed. In other words, I need to create 8000 text files containing the title, description and keywords of the XML files. I need code that does this systematically. Any help would be greatly appreciated. Thanks in advance. Hey all, thank you so much for your replies. Here's a sample of what my XML looks like:

      <?xml version="1.0"?>
      <metadata>
        <identifier>43productionsNightatthegraveyard</identifier>
        <title>Night at the graveyard</title>
        <collection>opensource_movies</collection>
        <mediatype>movies</mediatype>
        <resource>movies</resource>
        <upload_application appid="ccPublisher" version="2.2.1"/>
        <uploader>[email protected]</uploader>
        <description>una noche en el cementerio (terror)</description>
        <license>http://creativecommons.org/licenses/by-nc/3.0/</license>
        <title>Night at the graveyard</title>
        <format>Video</format>
        <adder>[email protected]</adder>
        <licenseurl>http://creativecommons.org/licenses/by-nc/3.0/</licenseurl>
        <year>2007</year>
        <keywords>Night,at,the,graveyard,43,productions</keywords>
        <holder>43 productions</holder>
        <publicdate>2007-04-11 19:52:28</publicdate>
      </metadata>

    And this would be the output:

      una noche en el cementerio (terror)
      Night at the graveyard
      Night,at,the,graveyard,43,productions

    This needs to be saved with the same name but in text format. Thanks all so much; any more suggestions would be much appreciated.
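
    Since all three wanted fields are direct children of <metadata>, a short LINQ to XML loop can handle the whole batch. A minimal sketch in C#; the folder path is a placeholder, and it takes the first <title> when duplicates exist, matching the sample output order (description, title, keywords):

      using System.IO;
      using System.Linq;
      using System.Xml.Linq;

      class XmlToText
      {
          static void Main()
          {
              // Placeholder folder; point this at the directory holding the 8000 files.
              foreach (var path in Directory.GetFiles(@"C:\xmlfiles", "*.xml"))
              {
                  var doc = XDocument.Load(path);
                  // Element() returns the first match, so the duplicate <title> is ignored.
                  var lines = new[] { "description", "title", "keywords" }
                      .Select(name => (string)doc.Root.Element(name) ?? "");
                  // Same file name, .txt extension: foo.xml becomes foo.txt
                  File.WriteAllLines(Path.ChangeExtension(path, ".txt"), lines);
              }
          }
      }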

    Read the article

  • to write log files in two different files

    - by Sun
    My application runs on a customized client framework; the client framework uses log4net to log its own log files. We (our application) have to use the same log4net to log our log files in our own path (say, our customized path). Currently our log files are created, but the log statements are not written to that file: they are written to the client framework's log file. I searched a lot of sites, and the link http://stackoverflow.com/questions/308436/log4net-programmatcially-specify-multiple-loggers-with-multiple-file-appenders helped me to configure log4net programmatically, but still my log statements are not written to my log file. The code I use is as below:

      using System.Diagnostics;
      using log4net;
      using log4net.Appender;
      using log4net.Layout;

      public class TraceLog
      {
          private string message = string.Empty;
          private static ILog ILogger = null;
          private static TraceLog instance = new TraceLog();

          private TraceLog()
          {
              SetLevel("Log4net.MainForm", "ALL");
              AddAppender("Log4net.MainForm", CreateFileAppender("FileAppender", "C:\\mylog.log"));
          }

          public static TraceLog Instance
          {
              get { return instance; }
          }

          public void Debug(string logMessage)
          {
              message = PrepareLog(logMessage);
              ILogger.Debug(message);
          }

          protected string PrepareLog(string logMessage)
          {
              string message = GetFileMethodLineNumberInfo();
              message += logMessage;
              return message;
          }

          protected string GetFileMethodLineNumberInfo()
          {
              StackTrace stackTrace = new StackTrace(true);
              // The position 3 is relative to the index of the specified method
              StackFrame stackFrame = stackTrace.GetFrame(3);
              return (stackFrame.GetMethod().DeclaringType.Name + "/" +
                      stackFrame.GetMethod().Name + "/" +
                      stackFrame.GetFileLineNumber() + ":");
          }

          private static void SetLevel(string loggerName, string levelName)
          {
              ILogger = LogManager.GetLogger(loggerName);
              log4net.Repository.Hierarchy.Logger l = (log4net.Repository.Hierarchy.Logger)ILogger.Logger;
              l.Level = l.Hierarchy.LevelMap[levelName];
          }

          private static void AddAppender(string loggerName, IAppender appender)
          {
              ILogger = LogManager.GetLogger(loggerName);
              log4net.Repository.Hierarchy.Logger l = (log4net.Repository.Hierarchy.Logger)ILogger.Logger;
              l.AddAppender(appender);
          }

          private static IAppender CreateFileAppender(string name, string fileName)
          {
              FileAppender appender = new FileAppender();
              appender.Name = name;
              appender.File = fileName;
              appender.AppendToFile = true;
              //PatternLayout layout = new PatternLayout();
              //layout.ConversionPattern = "%d [%t] %-5p %c [%x] - %m%n";
              //layout.ActivateOptions();
              //appender.Layout = layout;
              appender.ActivateOptions();
              return appender;
          }
      }

    Can anyone please help me solve this?
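
    Two details in the code above are worth checking first (an observation from the posted code, not a confirmed diagnosis): the PatternLayout is commented out, leaving the FileAppender with no layout to format events, and the logger's additivity is left on, so every event also propagates up to the client framework's root appenders. A sketch of the corresponding changes, using only standard log4net APIs:

      // In CreateFileAppender: restore the commented-out layout so the
      // appender can actually format events.
      PatternLayout layout = new PatternLayout();
      layout.ConversionPattern = "%d [%t] %-5p %c [%x] - %m%n";
      layout.ActivateOptions();
      appender.Layout = layout;

      // In AddAppender: keep events from also bubbling up to the client
      // framework's root logger and its appenders.
      l.Additivity = false;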

    Read the article
