Search Results

Search found 45987 results on 1840 pages for 'copy files'.

Page 601 of 1840

  • Importing Data From Excel Using SSIS - Part 1

    Recently, while working on a project to import data from an Excel worksheet using SSIS, I realized that the SSIS package sometimes failed even though there were no changes in the structure/schema of the Excel worksheet. I investigated and noticed that the package succeeded for some sets of files but failed for others. The worksheet schema in both sets of Excel files was the same; the data was the only difference. How can changing just the data make an SSIS package fail? What actually causes this failure? What can we do to fix it?
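
    A likely culprit, based on well-known behavior of the Jet/ACE OLE DB provider that SSIS uses for Excel sources (the article itself gives the full diagnosis), is that the provider guesses each column's data type by sampling the first few rows, so two files with identical schemas but different data can yield different inferred types. Below is a hedged sketch of reading a worksheet with IMEX=1, which tells the provider to treat mixed-type columns as text; the file path and sheet name are placeholders:

        using System;
        using System.Data.OleDb;

        class ExcelReadSketch
        {
            static void Main()
            {
                // IMEX=1 makes the provider read columns with mixed types as text
                // instead of committing to a type guessed from the sampled rows.
                string connectionString =
                    @"Provider=Microsoft.ACE.OLEDB.12.0;" +
                    @"Data Source=C:\data\workbook.xlsx;" +  // placeholder path
                    @"Extended Properties=""Excel 12.0 Xml;HDR=YES;IMEX=1"";";

                using (var connection = new OleDbConnection(connectionString))
                using (var command = new OleDbCommand("SELECT * FROM [Sheet1$]", connection))
                {
                    connection.Open();
                    using (OleDbDataReader reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine(reader[0]); // first column, read as text
                        }
                    }
                }
            }
        }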

    Read the article

  • how to remove extra whitespace or a paragraph tag generated by ckeditor?

    - by user66862
    I am using CK editor to post the content. When I just type a simple text manually, It just appears as it was.But the problem is when I copy the test from other websites like wikipedia, an empty paragraph is rendered in the beginning of the string, which is making my page look odd. I have tried using trim,and preg_replace. But I did not find any solution. Please suggest some method to overcome this issue.. Thanks

    Read the article

  • Terminal echo issue

    - by user107602
    I've been using Ubuntu 10.04 LTS for a while and am quite new to the terminal. I made a script to open a project of mine, containing multiple files, with gedit. After executing the script - gedit [filename1] [filename2] ... - gedit opens the passed files and the terminal is ready for another line. Well, today I came across a strange issue: after executing the above-mentioned script, gedit starts successfully, but the terminal refuses to execute commands and echoes all keyboard events, even specific Ctrl+... combinations - all until gedit is closed. I can't figure out what caused this, as my recent activity was focused on a C project and didn't touch the terminal in any way. I recall being able to execute another line after launching gedit, e.g. open gedit and compile a project within a single tab and session of a terminal window. Any help would be appreciated! Regards!

    Read the article

  • Find hosting through Bizspark program

    - by user636525
    I am a BizSpark owner and I am all set to launch my website, but I am really confused about where to start the process of finding a hosting partner with my BizSpark membership. Since I have downloaded Microsoft SQL Server from the BizSpark website, I know I don't need a license for SQL Server. So do I need to email some hosting companies and ask whether I can get their VPS and install my own copy of SQL Server? Does that mean I would end up paying only for the VPS?

    Read the article

  • Illustrated C# 2010 - ISBN 978-1-4302-3282-7 Further comments

    - by TATWORTH
    I am now halfway through reading this book and I continue to be favourably impressed by it. It covers many of the new features of C# 4.0, including one that was new to me. This is a very good C# language manual and I have no hesitation in recommending it both to individuals and to C# development teams. It is good for those learning C# and for those with years of experience of C#. You can buy a copy online at http://www.apress.com/book/view/143023282X

    Read the article

  • What is a good scripting language for FTP downloading

    - by WxPilot
    I need to develop some scripts for downloading files from a remote server to our file system. We need to grab files that have various timestamps and place them into appropriate folders on our system. I'm probably opening quite the can of worms here, but which scripting language would be the best for getting this done? Right now I am using C shell, which is working fine, but it requires multiple scripts because of the way the ftp command is used. I wanted to know if there is a better option before I go too far into this project. There are no security concerns within the scripts as far as FTP accounts go (it is a public-access FTP server).
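
    For comparison with the shell-script route, here is a hedged sketch of the same download step in C# using .NET's FtpWebRequest, in case a small compiled helper is acceptable instead of a script; the host, credentials, and paths are placeholders:

        using System.IO;
        using System.Net;

        class FtpFetchSketch
        {
            static void Download(string host, string remoteFile, string localFolder)
            {
                var request = (FtpWebRequest)WebRequest.Create("ftp://" + host + "/" + remoteFile);
                request.Method = WebRequestMethods.Ftp.DownloadFile;
                // Public-access FTP, per the question; anonymous credentials.
                request.Credentials = new NetworkCredential("anonymous", "anonymous@example.com");

                using (var response = (FtpWebResponse)request.GetResponse())
                using (Stream ftpStream = response.GetResponseStream())
                using (FileStream local = File.Create(
                    Path.Combine(localFolder, Path.GetFileName(remoteFile))))
                {
                    ftpStream.CopyTo(local); // .NET 4+; copy via a buffer loop on older runtimes
                }
            }
        }

    Timestamp-based sorting into folders could then key off FtpWebResponse.LastModified or the file names, depending on where the timestamps live.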

    Read the article

  • Market Yourself As an Online Copywriter

    Advertising and copywriting are closely linked, since both serve much the same purpose. Most of us know what advertising is: promoting or creating awareness of something among a certain community or the public. Advertising can be practised either online or offline.

    Read the article

  • Partition Hard Drive For Data

    - by user211779
    Greetings ~ I am a new Linux/Ubuntu user. For various reasons (mostly my own ignorance) I am on my third install of Ubuntu 12.04. I want to partition the hard drive to create a drive for data and personal files in case I ever have to install again. I have been struggling all afternoon to make a gparted live USB. Tuxboot looked like the answer but I get an error message when using it. So, I am asking for help. Ultimately, I want to partition the hard drive for data and personal files. What do you recommend?

    Read the article

  • PDF Merge

    This Windows application lets you merge image and PDF files in a given folder into one PDF file.

    Read the article

  • send multiple files over TCP with C# using TcpClient

    - by xnoor
    I'm trying to send multiple files over TCP using C#'s TcpClient. For a single file it works great, but when I have multiple files it sends only the first one. Here is my code.

    Sending files:

        try
        {
            TcpClient tcpClient = new TcpClient();
            NetworkStream networkStream;
            FileStream fileStream = null;
            tcpClient.Connect(appUpdateMessage.receiverIpAddress, 12000);
            networkStream = tcpClient.GetStream();
            byte[] byteSend = new byte[tcpClient.ReceiveBufferSize];
            string startupPath = System.IO.Path.GetDirectoryName(
                System.Reflection.Assembly.GetExecutingAssembly().GetName().CodeBase).Substring(6);
            DirectoryInfo directoriesInfo = new DirectoryInfo(startupPath);
            DirectoryInfo[] directories = directoriesInfo.GetDirectories();
            FileInfo[] files = directoriesInfo.GetFiles();

            for (int iLoop = 0; iLoop < directories.Length; iLoop++)
            {
                FileInfo[] subdirectoryFiles = directories[iLoop].GetFiles();
                foreach (FileInfo fi in subdirectoryFiles)
                {
                    fileStream = new FileStream(fi.FullName, FileMode.Open, FileAccess.Read);
                    BinaryReader binFile = new BinaryReader(fileStream);

                    FileUpdateMessage fileUpdateMessage = new FileUpdateMessage();
                    fileUpdateMessage.fileName = fi.Name;
                    fileUpdateMessage.fileSize = fi.Length;
                    fileUpdateMessage.targetDirectory = fi.Directory.Name;

                    MessageContainer messageContainer = new MessageContainer();
                    messageContainer.messageType = MessageType.FileProperties;
                    messageContainer.messageContnet =
                        SerializationManager.XmlFormatterObjectToByteArray(fileUpdateMessage);

                    byte[] messageByte = SerializationManager.XmlFormatterObjectToByteArray(messageContainer);
                    networkStream.Write(messageByte, 0, messageByte.Length);

                    int bytesSize = 0;
                    byte[] downBuffer = new byte[2048];
                    while ((bytesSize = fileStream.Read(downBuffer, 0, downBuffer.Length)) > 0)
                    {
                        networkStream.Write(downBuffer, 0, bytesSize);
                    }
                    fileStream.Close();
                }
            }
            tcpClient.Close();
            networkStream.Close();
            return true;
        }
        catch (Exception ex)
        {
            //logger.Info(ex.Message);
            return false;
        }
        finally
        {
        }

    Receiving files:

        try
        {
            TcpClient tcpClient = c as TcpClient;
            NetworkStream networkstream = tcpClient.GetStream();
            FileStream fileStream = null;
            byte[] _data = new byte[1024];
            int _bytesRead = 0;
            _bytesRead = networkstream.Read(_data, 0, _data.Length);

            MessageContainer messageContainer = new MessageContainer();
            messageContainer = SerializationManager.XmlFormatterByteArrayToObject(_data, messageContainer)
                as MessageContainer;

            switch (messageContainer.messageType)
            {
                case MessageType.FileProperties:
                    FileUpdateMessage fileUpdateMessage = new FileUpdateMessage();
                    fileUpdateMessage = SerializationManager.XmlFormatterByteArrayToObject(
                        messageContainer.messageContnet, fileUpdateMessage) as FileUpdateMessage;

                    string startupPath = @"d:updatefolder"; //System.IO.Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().GetName().CodeBase).Substring(6);
                    DirectoryInfo mainDirectory = new DirectoryInfo(startupPath);
                    DirectoryInfo targetDirecotry = new DirectoryInfo(startupPath + "\\" + fileUpdateMessage.targetDirectory);
                    if (!targetDirecotry.Exists)
                    {
                        mainDirectory.CreateSubdirectory(fileUpdateMessage.targetDirectory);
                    }

                    fileStream = new FileStream(
                        startupPath + "\\" + fileUpdateMessage.targetDirectory + "\\" + fileUpdateMessage.fileName,
                        FileMode.Create, FileAccess.Write, FileShare.ReadWrite);

                    long filezie = fileUpdateMessage.fileSize;
                    int byteSize = 0;
                    byte[] downBuffer = new byte[2048];
                    while ((byteSize = networkstream.Read(downBuffer, 0, downBuffer.Length)) > 0)
                    {
                        fileStream.Write(downBuffer, 0, byteSize);
                        if (this.InvokeRequired)
                        {
                            this.Invoke((MethodInvoker)delegate
                            {
                                //progressBar1.Value = Convert.ToInt32((byteSize * 100) / fileUpdateMessage.fileSize);
                                progressBar1.Value = Convert.ToInt32((fileStream.Length * 100) / fileUpdateMessage.fileSize);
                                lblFileName.Text = fileUpdateMessage.fileName;
                            });
                        }
                        else
                        {
                            //progressBar1.Value = Convert.ToInt32((byteSize * 100) / fileUpdateMessage.fileSize);
                            lblFileName.Text = fileUpdateMessage.fileName;
                        }
                    }
                    fileStream.Close();
                    networkstream.Close();
                    break;
            }
        }
        catch (Exception ex)
        {
            //logger.Error(ex.Message);
        }

    Any idea what I am doing wrong?
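
    The receiver reads a single FileProperties header and then drains the socket until it closes, so the bytes of every later file, headers included, are appended to the first file. Here is a minimal sketch of length-prefixed framing that avoids this; the SendFile/ReceiveFile names are illustrative, not part of the original code, and both ends are assumed to run on same-endian machines:

        using System;
        using System.IO;
        using System.Net.Sockets;

        static class FramingSketch
        {
            // Write a 4-byte length prefix, then exactly that many payload bytes.
            public static void SendFile(NetworkStream stream, string path)
            {
                byte[] payload = File.ReadAllBytes(path);
                byte[] prefix = BitConverter.GetBytes(payload.Length);
                stream.Write(prefix, 0, prefix.Length);
                stream.Write(payload, 0, payload.Length);
            }

            // Read the prefix, then loop until exactly that many bytes arrive,
            // so the next file's header is never consumed by accident.
            public static byte[] ReceiveFile(NetworkStream stream)
            {
                int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
                return ReadExactly(stream, length);
            }

            static byte[] ReadExactly(NetworkStream stream, int count)
            {
                byte[] buffer = new byte[count];
                int offset = 0;
                while (offset < count)
                {
                    int read = stream.Read(buffer, offset, count - offset);
                    if (read == 0) throw new EndOfStreamException("Connection closed early.");
                    offset += read;
                }
                return buffer;
            }
        }

    The same prefix-then-read-exactly pattern applied to the serialized MessageContainer would let the existing header/payload design carry any number of files per connection.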

    Read the article

  • wpf window refresh works at first, then stops

    - by mcl
    I've got a routine that grabs a list of all images in a directory, then runs an MD5 digest on all of them. Since this takes a while, I pop up a window with a progress bar. The progress bar is updated by a lambda that I pass in to the long-running routine. The first problem was that the progress window was never updated (which is normal in WPF, I guess). Since WPF lacks a Refresh() command, I fixed this with a call to Dispatcher.Invoke(). Now the progress bar is updated for a while, then the window stops being updated. The long-running work does eventually finish, and the windows go back to normal. I have already tried a BackgroundWorker and quickly became frustrated by a threading issue related to an event triggered by the long-running process. So if that's really the best solution and I just need to learn the paradigm better, please say so. But I'd be much happier with the approach I've got here, except that it stops updating after a bit (for example, in a folder with 1000 files, it might update for 50-100 files, then "hang"). The UI does not need to be responsive during this activity, except to report on progress. Anyway, here's the code.

    First the progress window itself:

        public partial class ProgressWindow : Window
        {
            public ProgressWindow(string title, string supertext, string subtext)
            {
                InitializeComponent();
                this.Title = title;
                this.SuperText.Text = supertext;
                this.SubText.Text = subtext;
            }

            internal void UpdateProgress(int count, int total)
            {
                this.ProgressBar.Maximum = Convert.ToDouble(total);
                this.ProgressBar.Value = Convert.ToDouble(count);
                this.SubText.Text = String.Format("{0} of {1} finished", count, total);
                this.Dispatcher.Invoke(DispatcherPriority.Render, EmptyDelegate);
            }

            private static Action EmptyDelegate = delegate() { };
        }

        <Window x:Class="Pixort.ProgressWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="Pixort Progress" Height="128" Width="256"
                WindowStartupLocation="CenterOwner" WindowStyle="SingleBorderWindow" ResizeMode="NoResize">
            <DockPanel>
                <TextBlock DockPanel.Dock="Top" x:Name="SuperText" TextAlignment="Left" Padding="6"></TextBlock>
                <TextBlock DockPanel.Dock="Bottom" x:Name="SubText" TextAlignment="Right" Padding="6"></TextBlock>
                <ProgressBar x:Name="ProgressBar" Height="24" Margin="6"/>
            </DockPanel>
        </Window>

    The long-running method (in Gallery.cs):

        public void ImportFolder(string folderPath, Action<int, int> progressUpdate)
        {
            string[] files = this.FileIO.GetFiles(folderPath);
            for (int i = 0; i < files.Length; i++)
            {
                // do stuff with the file
                if (null != progressUpdate)
                {
                    progressUpdate.Invoke(i + 1, files.Length);
                }
            }
        }

    Which is called like this:

        ProgressWindow progress = new ProgressWindow("Import Folder Progress",
            String.Format("Importing {0}", folder), String.Empty);
        progress.Show();
        this.Gallery.ImportFolder(folder, ((c, t) => progress.UpdateProgress(c, t)));
        progress.Close();
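
    Since the question asks whether BackgroundWorker is the paradigm worth learning: here is a hedged sketch of how the same import could report progress from a worker thread, reusing the ImportFolder method and ProgressWindow from the question; the wiring is illustrative and assumes .NET 3.5+ with System.ComponentModel available:

        using System;
        using System.ComponentModel;

        private void StartImport(string folder)
        {
            var progress = new ProgressWindow("Import Folder Progress",
                String.Format("Importing {0}", folder), String.Empty);
            progress.Show();

            var worker = new BackgroundWorker { WorkerReportsProgress = true };

            // Runs on a thread-pool thread; it never touches UI controls directly.
            worker.DoWork += (s, e) =>
                this.Gallery.ImportFolder(folder,
                    (count, total) => worker.ReportProgress(0, new int[] { count, total }));

            // Raised on the UI thread, so updating the window here is safe
            // and no Dispatcher.Invoke(Render) refresh hack is needed.
            worker.ProgressChanged += (s, e) =>
            {
                var state = (int[])e.UserState;
                progress.UpdateProgress(state[0], state[1]);
            };

            worker.RunWorkerCompleted += (s, e) => progress.Close();
            worker.RunWorkerAsync();
        }

    UpdateProgress then runs only on the UI thread, which keeps the window painting for the whole import instead of stalling partway through.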

    Read the article

  • Xcode newb -- #include can't find my file

    - by morgancodes
    I'm trying to get a third-party audio library (STK) working inside Xcode. Along with the standard .h files, many of the implementation files include a file called SKINI.msg. SKINI.msg is in the same directory as all of the header files. The header files are getting included fine, but the compiler complains that it can't find SKINI.msg. What do I need to do to get Xcode to happily include SKINI.msg?

    Edit: Here's the contents of SKINI.msg:

        /*********************************************************/
        /* Definition of SKINI Message Types and Special Symbols
           Synthesis toolKit Instrument Network Interface
           These symbols should have the form:
             \c __SK_<name>_
           where <name> is the string used in the SKINI stream.
           by Perry R. Cook, 1995 - 2010.
        */
        /*********************************************************/
        namespace stk {
        #define NOPE -32767
        #define YEP 1
        #define SK_DBL -32766
        #define SK_INT -32765
        #define SK_STR -32764
        #define __SK_Exit_ 999
        /***** MIDI COMPATIBLE MESSAGES *****/
        /*** (Status bytes for channel=0) ***/
        #define __SK_NoteOff_ 128
        #define __SK_NoteOn_ 144
        #define __SK_PolyPressure_ 160
        #define __SK_ControlChange_ 176
        #define __SK_ProgramChange_ 192
        #define __SK_AfterTouch_ 208
        #define __SK_ChannelPressure_ __SK_AfterTouch_
        #define __SK_PitchWheel_ 224
        #define __SK_PitchBend_ __SK_PitchWheel_
        #define __SK_PitchChange_ 49
        #define __SK_Clock_ 248
        #define __SK_SongStart_ 250
        #define __SK_Continue_ 251
        #define __SK_SongStop_ 252
        #define __SK_ActiveSensing_ 254
        #define __SK_SystemReset_ 255
        #define __SK_Volume_ 7
        #define __SK_ModWheel_ 1
        #define __SK_Modulation_ __SK_ModWheel_
        #define __SK_Breath_ 2
        #define __SK_FootControl_ 4
        #define __SK_Portamento_ 65
        #define __SK_Balance_ 8
        #define __SK_Pan_ 10
        #define __SK_Sustain_ 64
        #define __SK_Damper_ __SK_Sustain_
        #define __SK_Expression_ 11
        #define __SK_AfterTouch_Cont_ 128
        #define __SK_ModFrequency_ __SK_Expression_
        #define __SK_ProphesyRibbon_ 16
        #define __SK_ProphesyWheelUp_ 2
        #define __SK_ProphesyWheelDown_ 3
        #define __SK_ProphesyPedal_ 18
        #define __SK_ProphesyKnob1_ 21
        #define __SK_ProphesyKnob2_ 22
        /*** Instrument Family Specific ***/
        #define __SK_NoiseLevel_ __SK_FootControl_
        #define __SK_PickPosition_ __SK_FootControl_
        #define __SK_StringDamping_ __SK_Expression_
        #define __SK_StringDetune_ __SK_ModWheel_
        #define __SK_BodySize_ __SK_Breath_
        #define __SK_BowPressure_ __SK_Breath_
        #define __SK_BowPosition_ __SK_PickPosition_
        #define __SK_BowBeta_ __SK_BowPosition_
        #define __SK_ReedStiffness_ __SK_Breath_
        #define __SK_ReedRestPos_ __SK_FootControl_
        #define __SK_FluteEmbouchure_ __SK_Breath_
        #define __SK_JetDelay_ __SK_FluteEmbouchure_
        #define __SK_LipTension_ __SK_Breath_
        #define __SK_SlideLength_ __SK_FootControl_
        #define __SK_StrikePosition_ __SK_PickPosition_
        #define __SK_StickHardness_ __SK_Breath_
        #define __SK_TrillDepth_ 1051
        #define __SK_TrillSpeed_ 1052
        #define __SK_StrumSpeed_ __SK_TrillSpeed_
        #define __SK_RollSpeed_ __SK_TrillSpeed_
        #define __SK_FilterQ_ __SK_Breath_
        #define __SK_FilterFreq_ 1062
        #define __SK_FilterSweepRate_ __SK_FootControl_
        #define __SK_ShakerInst_ 1071
        #define __SK_ShakerEnergy_ __SK_Breath_
        #define __SK_ShakerDamping_ __SK_ModFrequency_
        #define __SK_ShakerNumObjects_ __SK_FootControl_
        #define __SK_Strumming_ 1090
        #define __SK_NotStrumming_ 1091
        #define __SK_Trilling_ 1092
        #define __SK_NotTrilling_ 1093
        #define __SK_Rolling_ __SK_Strumming_
        #define __SK_NotRolling_ __SK_NotStrumming_
        #define __SK_PlayerSkill_ 2001
        #define __SK_Chord_ 2002
        #define __SK_ChordOff_ 2003
        #define __SK_SINGER_FilePath_ 3000
        #define __SK_SINGER_Frequency_ 3001
        #define __SK_SINGER_NoteName_ 3002
        #define __SK_SINGER_Shape_ 3003
        #define __SK_SINGER_Glot_ 3004
        #define __SK_SINGER_VoicedUnVoiced_ 3005
        #define __SK_SINGER_Synthesize_ 3006
        #define __SK_SINGER_Silence_ 3007
        #define __SK_SINGER_VibratoAmt_ __SK_ModWheel_
        #define __SK_SINGER_RndVibAmt_ 3008
        #define __SK_SINGER_VibFreq_ __SK_Expression_
        } // stk namespace

    And here's what the compiler said:

        CompileC build/StkCompile.build/Debug-iphonesimulator/StkCompile.build/Objects-normal/i386/BandedWG.o "../../../Data/study/iPhone class/stk-4.4.2/src/BandedWG.cpp" normal i386 c++ com.apple.compilers.gcc.4_2
        cd /Users/morganpackard/Desktop/trashme/StkCompile
        setenv LANG en_US.US-ASCII
        setenv PATH "/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
        /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin/gcc-4.2 -x c++ -arch i386 -fmessage-length=0 -pipe -Wno-trigraphs -fpascal-strings -fasm-blocks -O0 -Wreturn-type -Wunused-variable -D__IPHONE_OS_VERSION_MIN_REQUIRED=30000 -isysroot /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.2.sdk -fvisibility=hidden -fvisibility-inlines-hidden -mmacosx-version-min=10.5 -gdwarf-2 -iquote /Users/morganpackard/Desktop/trashme/StkCompile/build/StkCompile.build/Debug-iphonesimulator/StkCompile.build/StkCompile-generated-files.hmap -I/Users/morganpackard/Desktop/trashme/StkCompile/build/StkCompile.build/Debug-iphonesimulator/StkCompile.build/StkCompile-own-target-headers.hmap -I/Users/morganpackard/Desktop/trashme/StkCompile/build/StkCompile.build/Debug-iphonesimulator/StkCompile.build/StkCompile-all-target-headers.hmap -iquote /Users/morganpackard/Desktop/trashme/StkCompile/build/StkCompile.build/Debug-iphonesimulator/StkCompile.build/StkCompile-project-headers.hmap -F/Users/morganpackard/Desktop/trashme/StkCompile/build/Debug-iphonesimulator -I/Users/morganpackard/Desktop/trashme/StkCompile/build/Debug-iphonesimulator/include -I/Users/morganpackard/Desktop/trashme/StkCompile/build/StkCompile.build/Debug-iphonesimulator/StkCompile.build/DerivedSources/i386 -I/Users/morganpackard/Desktop/trashme/StkCompile/build/StkCompile.build/Debug-iphonesimulator/StkCompile.build/DerivedSources -include /var/folders/dx/dxSUSyOJFv0MBEh9qC1oJ++++TI/-Caches-/com.apple.Xcode.501/SharedPrecompiledHeaders/StkCompile_Prefix-bopqzvwpuyqltrdumgtjtfrjvtzb/StkCompile_Prefix.pch -c "/Users/morganpackard/Desktop/trashme/StkCompile/../../../Data/study/iPhone class/stk-4.4.2/src/BandedWG.cpp" -o /Users/morganpackard/Desktop/trashme/StkCompile/build/StkCompile.build/Debug-iphonesimulator/StkCompile.build/Objects-normal/i386/BandedWG.o

        /Users/morganpackard/Desktop/trashme/StkCompile/../../../Data/study/iPhone class/stk-4.4.2/src/BandedWG.cpp:33:21: error: SKINI.msg: No such file or directory
        /Users/morganpackard/Desktop/trashme/StkCompile/../../../Data/study/iPhone class/stk-4.4.2/src/BandedWG.cpp: In member function 'virtual void stk::BandedWG::controlChange(int, stk::StkFloat)':
        /Users/morganpackard/Desktop/trashme/StkCompile/../../../Data/study/iPhone class/stk-4.4.2/src/BandedWG.cpp:326: error: '__SK_BowPressure_' was not declared in this scope
        /Users/morganpackard/Desktop/trashme/StkCompile/../../../Data/study/iPhone class/stk-4.4.2/src/BandedWG.cpp:342: error: '__SK_AfterTouch_Cont_' was not declared in this scope
        /Users/morganpackard/Desktop/trashme/StkCompile/../../../Data/study/iPhone class/stk-4.4.2/src/BandedWG.cpp:349: error: '__SK_ModWheel_' was not declared in this scope
        /Users/morganpackard/Desktop/trashme/StkCompile/../../../Data/study/iPhone class/stk-4.4.2/src/BandedWG.cpp:357: error: '__SK_ModFrequency_' was not declared in this scope
        /Users/morganpackard/Desktop/trashme/StkCompile/../../../Data/study/iPhone class/stk-4.4.2/src/BandedWG.cpp:359: error: '__SK_Sustain_' was not declared in this scope
        /Users/morganpackard/Desktop/trashme/StkCompile/../../../Data/study/iPhone class/stk-4.4.2/src/BandedWG.cpp:363: error: '__SK_Portamento_' was not declared in this scope
        /Users/morganpackard/Desktop/trashme/StkCompile/../../../Data/study/iPhone class/stk-4.4.2/src/BandedWG.cpp:367: error: '__SK_ProphesyRibbon_' was not declared in this scope

    Read the article

  • Python dictionary key missing

    - by Greg K
    I thought I'd put together a quick script to consolidate the CSS rules I have distributed across multiple CSS files, so that I can then minify the result. I'm new to Python but figured this would be a good exercise for trying a new language. My main loop isn't parsing the CSS as I thought it would. I populate a list with selectors parsed from the CSS files so I can return the CSS rules in order. Later in the script, the list contains an element that is not found in the dictionary.

        for line in self.file.readlines():
            if self.hasSelector(line):
                selector = self.getSelector(line)
                if selector not in self.order:
                    self.order.append(selector)
            elif selector and self.hasProperty(line):
                # rules.setdefault(selector,[]).append(self.getProperty(line))
                property = self.getProperty(line)
                properties = [] if selector not in rules else rules[selector]
                if property not in properties:
                    properties.append(property)
                rules[selector] = properties
                # print "%s :: %s" % (selector, "".join(rules[selector]))
        return rules

    Error encountered:

        $ css-combine combined.css test1.css test2.css
        Traceback (most recent call last):
          File "css-combine", line 108, in <module>
            c.run(outfile, stylesheets)
          File "css-combine", line 64, in run
            [(selector, rules[selector]) for selector in parser.order],
        KeyError: 'p'

    Swap the inputs:

        $ css-combine combined.css test2.css test1.css
        Traceback (most recent call last):
          File "css-combine", line 108, in <module>
            c.run(outfile, stylesheets)
          File "css-combine", line 64, in run
            [(selector, rules[selector]) for selector in parser.order],
        KeyError: '#header_.title'

    I've done some quirky things in the code, like substituting underscores for spaces in dictionary key names in case that was an issue - maybe this is a benign precaution? Depending on the order of the inputs, a different key cannot be found in the dictionary.

    The script:

        #!/usr/bin/env python
        import optparse
        import re

        class CssParser:
            def __init__(self):
                self.file = False
                self.order = []  # store rules assignment order

            def parse(self, rules = {}):
                if self.file == False:
                    raise IOError("No file to parse")
                selector = False
                for line in self.file.readlines():
                    if self.hasSelector(line):
                        selector = self.getSelector(line)
                        if selector not in self.order:
                            self.order.append(selector)
                    elif selector and self.hasProperty(line):
                        # rules.setdefault(selector,[]).append(self.getProperty(line))
                        property = self.getProperty(line)
                        properties = [] if selector not in rules else rules[selector]
                        if property not in properties:
                            properties.append(property)
                        rules[selector] = properties
                        # print "%s :: %s" % (selector, "".join(rules[selector]))
                return rules

            def hasSelector(self, line):
                return True if re.search("^([#a-z,\.:\s]+){", line) else False

            def getSelector(self, line):
                s = re.search("^([#a-z,:\.\s]+){", line).group(1)
                return "_".join(s.strip().split())

            def hasProperty(self, line):
                return True if re.search("^\s?[a-z-]+:[^;]+;", line) else False

            def getProperty(self, line):
                return re.search("([a-z-]+:[^;]+;)", line).group(1)

        class Consolidator:
            """Class to consolidate CSS rule attributes"""

            def run(self, outfile, files):
                parser = CssParser()
                rules = {}
                for file in files:
                    try:
                        parser.file = open(file)
                        rules = parser.parse(rules)
                    except IOError:
                        print "Cannot read file: " + file
                    finally:
                        parser.file.close()
                self.serialize(
                    [(selector, rules[selector]) for selector in parser.order],
                    outfile
                )

            def serialize(self, rules, outfile):
                try:
                    f = open(outfile, "w")
                    for rule in rules:
                        f.write(
                            "%s {\n\t%s\n}\n\n" % (
                                " ".join(rule[0].split("_")),
                                "\n\t".join(rule[1])
                            )
                        )
                except IOError:
                    print "Cannot write output to: " + outfile
                finally:
                    f.close()

        def init():
            op = optparse.OptionParser(
                usage="Usage: %prog [options] <output file> <stylesheet1> " +
                      "<stylesheet2> ... <stylesheetN>",
                description="Combine CSS rules spread across multiple " +
                            "stylesheets into a single file"
            )
            opts, args = op.parse_args()
            if len(args) < 3:
                if len(args) == 1:
                    print "Error: No input files specified.\n"
                elif len(args) == 2:
                    print "Error: One input file specified, nothing to combine.\n"
                op.print_help();
                exit(-1)
            return [opts, args]

        if __name__ == '__main__':
            opts, args = init()
            outfile, stylesheets = [args[0], args[1:]]
            c = Consolidator()
            c.run(outfile, stylesheets)

    Test CSS file 1:

        body {
            background-color: #e7e7e7;
        }

        p {
            margin: 1em 0em;
        }

    File 2:

        body {
            font-size: 16px;
        }

        #header .title {
            font-family: Tahoma, Geneva, sans-serif;
            font-size: 1.9em;
        }

        #header .title a, #header .title a:hover {
            color: #f5f5f5;
            border-bottom: none;
            text-shadow: 2px 2px 3px rgba(0, 0, 0, 1);
        }

    Thanks in advance.

    Read the article

  • How do I get MSDeploy to skip specific folders and file types in folders as a CCNet task

    - by Simon Martin
    I want MSDeploy to skip specific folders, and file types within other folders, when using sync. Currently I'm using CCNet to call MSDeploy with the sync verb to take websites from a build to a staging server. Because there are files on the destination that are created by the application (user-uploaded files etc.), I need to exclude specific folders from being deleted on the destination. There are also manifest files created by the site that need to remain on the destination. At the moment I've used -enableRule:DoNotDeleteRule, but that leaves stale files on the destination.

        <exec>
          <executable>$(MsDeploy)</executable>
          <baseDirectory>$(ProjectsDirectory)$(projectName)$(ProjectsWorkingDirectory)\Website\</baseDirectory>
          <buildArgs>-verb:sync -source:iisApp="$(ProjectsDirectory)$(projectName)$(ProjectsWorkingDirectory)\Website\" -dest:iisApp="$(website)/$(websiteFolder)" -enableRule:DoNotDeleteRule</buildArgs>
          <buildTimeoutSeconds>600</buildTimeoutSeconds>
          <successExitCodes>0,1,2</successExitCodes>
        </exec>

    I have tried to use the skip operation but run into problems. Initially I dropped the DoNotDeleteRule and replaced it with (multiple) skips:

        <exec>
          <executable>$(MsDeploy)</executable>
          <baseDirectory>$(ProjectsDirectory)$(projectName)$(ProjectsWorkingDirectory)\Website\</baseDirectory>
          <buildArgs>-verb:sync -source:iisApp="$(ProjectsDirectory)$(projectName)$(ProjectsWorkingDirectory)\Website\" -dest:iisApp="$(website)/$(websiteFolder)" -skip:objectName=dirPath,absolutePath="assets" -skip:objectName=dirPath,absolutePath="survey" -skip:objectName=dirPath,absolutePath="completion/custom/complete*.aspx" -skip:objectName=dirPath,absolutePath="completion/custom/surveylist*.manifest" -skip:objectName=dirPath,absolutePath="content/scorecardsupport" -skip:objectName=dirPath,absolutePath="Desktop/docs" -skip:objectName=dirPath,absolutePath="_TempImageFiles"</buildArgs>
          <buildTimeoutSeconds>600</buildTimeoutSeconds>
          <successExitCodes>0,1,2</successExitCodes>
        </exec>

    But this results in the following:

        Error: Source (iisApp) and destination (contentPath) are not compatible for the given operation.
        Error count: 1.

    So I changed from iisApp to contentPath, and from dirPath,absolutePath to just Directory, like this:

        <exec>
          <executable>$(MsDeploy)</executable>
          <baseDirectory>$(ProjectsDirectory)$(projectName)$(ProjectsWorkingDirectory)\Website\</baseDirectory>
          <buildArgs>-verb:sync -source:contentPath="$(ProjectsDirectory)$(projectName)$(ProjectsWorkingDirectory)\Website\" -dest:contentPath="$(website)/$(websiteFolder)" -skip:Directory="assets" -skip:Directory="survey" -skip:Directory="content/scorecardsupport" -skip:Directory="Desktop/docs" -skip:Directory="_TempImageFiles"</buildArgs>
          <buildTimeoutSeconds>600</buildTimeoutSeconds>
          <successExitCodes>0,1,2</successExitCodes>
        </exec>

    and this gives me an error: Illegal characters in path:

        <buildresults>
          Info: Adding MSDeploy.contentPath (MSDeploy.contentPath).
          Info: Adding contentPath (C:\WWWRoot\MySite -skip:Directory=assets -skip:Directory=survey -skip:Directory=content/scorecardsupport -skip:Directory=Desktop/docs -skip:Directory=_TempImageFiles).
          Info: Adding dirPath (C:\WWWRoot\MySite -skip:Directory=assets -skip:Directory=survey -skip:Directory=content/scorecardsupport -skip:Directory=Desktop/docs -skip:Directory=_TempImageFiles).
        </buildresults>
        <buildresults>
          Error: Illegal characters in path.
          Error count: 1.
        </buildresults>

    So I need to know how to configure this task so that the referenced folders do not have their contents deleted in a sync, and so that the *.manifest and *.aspx files in the completion/custom folders are also skipped.

    Read the article

  • PHP: How do I loop through every XML file in a directory?

    - by celebritarian
    Hi! I'm building a simple application. It's a user interface to an online order system. Basically, the system is going to work like this: Other companies upload their purchase orders to our FTP server. These orders are simple XML files (containing things like customer data, address information, ordered products and the quantities…). I've built a simple user interface in HTML5, jQuery and CSS — all powered by PHP. PHP reads the content of an order (using the built-in features of SimpleXML) and displays it on the web page. So, it's a web app, supposed to always be running in a browser at the office. The PHP app will display the content of all orders. Every fifteen minutes or so, the app will check for new orders. How do I loop through all XML files in a directory? Right now, my app is able to read the content of a single XML file and display it in a nice way on the page. My current code looks like this:

        // pick a random order that I know exists in the Order directory:
        $xml_file = file_get_contents("Order/6366246.xml", FILE_TEXT);
        $xml = new SimpleXMLElement($xml_file);

        // start echo basic order information, like order number:
        echo $xml->OrderHead->ShopPO;

        // more information about the order and the customer goes here…

        echo "<ul>";

        // loop through each order line, and echo all quantities and products:
        foreach ($xml->OrderLines->OrderLine as $orderline) {
            echo "<tr>\n" .
                 "<li>" . $orderline->Quantity . " st.</li>\n" .
                 "<li>" . $orderline->SKU . "</li>\n";
        }
        echo "</ul>";

        // more information about delivery options, address information etc. goes here…

    So, that's my code. Pretty simple. It only needs to do one thing — print out the content of all order files on the screen — so me and my colleagues can see each order, confirm it and deliver it. That's it. But right now — as you can see — I'm selecting one single order at a time, located in the Order directory. How do I loop through the entire Order directory, and read and display the content of each order (like above)? I'm stuck. I don't really know how to get all the (XML) files in a directory and then do something with each of them (like reading them and echoing out the data, as I want to). I'd really appreciate some help. I'm not very experienced with PHP/server-side programming, so if you could help me out here I'd be very grateful. Thanks a lot in advance! // Björn (celebritarian at me dot com)

    Read the article

  • C# SQLite file import prevent duplicates

    - by jakesankey
    Hi, I am attempting to get a directory (which is ever-growing) full of .txt comma-delimited files to import into my SQLite db. I now have all of the files importing OK; however, I need some way of excluding the files that have previously been added to the db. I have a column in the db called FileName where the name and extension are stored next to each record from each file. Now I need to say "if the code finds XXX.txt and XXX.txt is already in the db, then skip this file". Can I somehow add this logic to the GetFiles call, or is there another easy way?

        using (SQLiteCommand insertCommand = con.CreateCommand())
        {
            SQLiteCommand cmdd = con.CreateCommand();
            string[] files = Directory.GetFiles(@"C:\Documents and Settings\js91162\Desktop\",
                "R303717*.txt*", SearchOption.AllDirectories);
            foreach (string file in files)
            {
                string FileNameExt1 = Path.GetFileName(file);
                cmdd.CommandText = @"SELECT COUNT(*) FROM Import WHERE FileName = @FileExt;";
                cmdd.Parameters.Add(new SQLiteParameter("@FileExt", FileNameExt1));
                int count = Convert.ToInt32(cmdd.ExecuteScalar());
                //int count = ((IConvertible)insertCommand.ExecuteScalar().ToInt32(null));
                if (count == 0)
                {
                    Console.WriteLine("Parsing CMM data for SQL database... Please wait.");
                    insertCommand.CommandText = @"
                        INSERT INTO Import (FeatType, FeatName, Value, Actual, Nominal, Dev,
                            TolMin, TolPlus, OutOfTol, PartNumber, CMMNumber, Date, FileName)
                        VALUES (@FeatType, @FeatName, @Value, @Actual, @Nominal, @Dev,
                            @TolMin, @TolPlus, @OutOfTol, @PartNumber, @CMMNumber, @Date, @FileName);";
                    insertCommand.Parameters.Add(new SQLiteParameter("@FeatType", DbType.String));
                    insertCommand.Parameters.Add(new SQLiteParameter("@FeatName", DbType.String));
                    insertCommand.Parameters.Add(new SQLiteParameter("@Value", DbType.String));
                    insertCommand.Parameters.Add(new SQLiteParameter("@Actual", DbType.Decimal));
                    insertCommand.Parameters.Add(new SQLiteParameter("@Nominal", DbType.Decimal));
                    insertCommand.Parameters.Add(new SQLiteParameter("@Dev", DbType.Decimal));
                    insertCommand.Parameters.Add(new SQLiteParameter("@TolMin", DbType.Decimal));
                    insertCommand.Parameters.Add(new SQLiteParameter("@TolPlus", DbType.Decimal));
                    insertCommand.Parameters.Add(new SQLiteParameter("@OutOfTol", DbType.Decimal));
                    insertCommand.Parameters.Add(new SQLiteParameter("@Comment", DbType.String));

                    string FileNameExt = Path.GetFileName(file);
                    string RNumber = Path.GetFileNameWithoutExtension(file);
                    string RNumberE = RNumber.Split('_')[0];
                    string RNumberD = RNumber.Split('_')[1];
                    string RNumberDate = RNumber.Split('_')[2];
                    DateTime dateTime = DateTime.ParseExact(RNumberDate, "yyyyMMdd",
                        Thread.CurrentThread.CurrentCulture);
                    string cmmDate = dateTime.ToString("dd-MMM-yyyy");

                    string[] lines = File.ReadAllLines(file);
                    bool parse = false;
                    foreach (string tmpLine in lines)
                    {
                        string line = tmpLine.Trim();
                        if (!parse && line.StartsWith("Feat. Type,"))
                        {
                            parse = true;
                            continue;
                        }
                        if (!parse || string.IsNullOrEmpty(line))
                        {
                            continue;
                        }
                        Console.WriteLine(tmpLine);
                        foreach (SQLiteParameter parameter in insertCommand.Parameters)
                        {
                            parameter.Value = null;
                        }
                        string[] values = line.Split(new[] { ',' });
                        for (int i = 0; i < values.Length - 1; i++)
                        {
                            SQLiteParameter param = insertCommand.Parameters[i];
                            if (param.DbType == DbType.Decimal)
                            {
                                decimal value;
                                param.Value = decimal.TryParse(values[i], out value) ? value : 0;
                            }
                            else
                            {
                                param.Value = values[i];
                            }
                        }
                        insertCommand.Parameters.Add(new SQLiteParameter("@PartNumber", RNumberE));
                        insertCommand.Parameters.Add(new SQLiteParameter("@CMMNumber", RNumberD));
                        insertCommand.Parameters.Add(new SQLiteParameter("@Date", cmmDate));
                        insertCommand.Parameters.Add(new SQLiteParameter("@FileName", FileNameExt));
                        // insertCommand.ExecuteNonQuery();
                    }
                }
            }
            Console.WriteLine("CMM data successfully imported to SQL database...");
        }
        con.Close();
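
    One way to push the "already imported?" check out of the per-file round trip: a hedged sketch that loads the FileName column once into a HashSet and filters the directory listing against it before any parsing starts. The table and column names come from the question; the helper name and query shape are assumptions, and the connection is assumed to be open:

        using System;
        using System.Collections.Generic;
        using System.Data.SQLite;
        using System.IO;
        using System.Linq;

        static string[] GetNewFiles(SQLiteConnection con, string folder)
        {
            // Load every file name already recorded in the Import table, once.
            var imported = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
            using (var cmd = new SQLiteCommand("SELECT DISTINCT FileName FROM Import;", con))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    imported.Add(reader.GetString(0));
            }

            // Keep only files whose names are not in the table yet.
            return Directory
                .GetFiles(folder, "R303717*.txt*", SearchOption.AllDirectories)
                .Where(f => !imported.Contains(Path.GetFileName(f)))
                .ToArray();
        }

    The foreach loop can then iterate over GetNewFiles(...) and drop the COUNT(*) query entirely.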

    Read the article

  • NetBeans 7.2 MinGW installing for OpenCV

    - by Gligorijevic
    I have installed MinGW on my PC according to http://netbeans.org/community/releases/72/cpp-setup-instructions.html, and I have "restored defaults" in NetBeans 7.2, which found all the necessary files. But when I built a sample C++ test app, I got the following error:

        c:/mingw/bin/../lib/gcc/mingw32/4.6.2/../../../../mingw32/bin/ld.exe: cannot find -ladvapi32
        c:/mingw/bin/../lib/gcc/mingw32/4.6.2/../../../../mingw32/bin/ld.exe: cannot find -lshell32
        c:/mingw/bin/../lib/gcc/mingw32/4.6.2/../../../../mingw32/bin/ld.exe: cannot find -luser32
        c:/mingw/bin/../lib/gcc/mingw32/4.6.2/../../../../mingw32/bin/ld.exe: cannot find -lkernel32
        collect2: ld returned 1 exit status
        make[2]: *** [dist/Debug/MinGW-Windows/welcome_1.exe] Error 1
        make[1]: *** [.build-conf] Error 2
        make: *** [.build-impl] Error 2

    Can anyone give me a hand with installing OpenCV and MinGW for NetBeans? The generated Makefile goes like this:

        # CMAKE generated file: DO NOT EDIT!
        # Generated by "MinGW Makefiles" Generator, CMake Version 2.8

        # Default target executed when no arguments are given to make.
        default_target: all
        .PHONY : default_target

        #=============================================================================
        # Special targets provided by cmake.

        # Disable implicit rules so canonical targets will work.
        .SUFFIXES:

        # Remove some rules from gmake that .SUFFIXES does not remove.
        SUFFIXES =

        .SUFFIXES: .hpux_make_needs_suffix_list

        # Suppress display of executed commands.
        $(VERBOSE).SILENT:

        # A target that is always out of date.
        cmake_force:
        .PHONY : cmake_force

        #=============================================================================
        # Set environment variables for the build.

        SHELL = cmd.exe

        # The CMake executable.
        CMAKE_COMMAND = "C:\Program Files (x86)\cmake-2.8.9-win32-x86\bin\cmake.exe"

        # The command to remove a file.
        RM = "C:\Program Files (x86)\cmake-2.8.9-win32-x86\bin\cmake.exe" -E remove -f

        # Escaping for special characters.
        EQUALS = =

        # The program to use to edit the cache.
        CMAKE_EDIT_COMMAND = "C:\Program Files (x86)\cmake-2.8.9-win32-x86\bin\cmake-gui.exe"

        # The top-level source directory on which CMake was run.
        CMAKE_SOURCE_DIR = C:\msys\1.0\src\opencv

        # The top-level build directory on which CMake was run.
        CMAKE_BINARY_DIR = C:\msys\1.0\src\opencv\build\mingw

        #=============================================================================
        # Targets provided globally by CMake.

        # Special rule for the target edit_cache
        edit_cache:
            @$(CMAKE_COMMAND) -E cmake_echo_color --switch=$(COLOR) --cyan "Running CMake cache editor..."
            "C:\Program Files (x86)\cmake-2.8.9-win32-x86\bin\cmake-gui.exe" -H$(CMAKE_SOURCE_DIR) -B$(CMAKE_BINARY_DIR)
        .PHONY : edit_cache

        # Special rule for the target edit_cache
        edit_cache/fast: edit_cache
        .PHONY : edit_cache/fast

    Read the article

  • Recursing data into a 2 dimensional array in PHP 5

    - by user315699
    I'm getting bamboozled by foreach loops and two-dimensional arrays, and I'm a PHP newb, so please bear with me (and ignore any variables with the word "image" in them - it's all about the mp3s, I just didn't change the names from the XML tutorial).

    I found a PHP function on the net that lists the files in a directory, the output of which is:

        Array ( [0] => audio/1.mp3 [1] => audio/2.mp3 [2] => audio/3.mp3 [3] => audio/4.mp3 [4] => audio/5.mp3 )

    As expected. And another that lists some info about mp3 files:

        $mp3datafile = 'audio/1.mp3';
        $m = new mp3file($mp3datafile);
        $mp3dataArray = $m->get_metadata();
        print_r($mp3dataArray);
        unset($mp3dataArray);

    The output of which is:

        Array ( [Filesize] => 31972 [Encoding] => CBR [etc] )

    In order to automatically build RSS for a podcast, I need to generate XML for each item. So far so good. This is how I'm making the XML:

        foreach ($imagearray as $key => $val) {
            $tnsoundfile = $xml_generator->addChild('item');
            $tnsoundfile->addChild('title', $podcasttitle);
            $enclosure = $tnsoundfile->addChild('enclosure');
            $enclosure->addAttribute('url', $val); // that's the filename
            $enclosure->addAttribute('length', $mp3dataArray[Filesize]); // << length is file length, not time. But later I also need $mp3dataArray[Length mm:ss] for the duration tag.
            $enclosure->addAttribute('type', 'audio/mpeg');
            $tnsoundfile->addChild('guid', $path_to_image_dir.'/'.$val);
        }

    (The above has been truncated; I realise it's not proper XML right now, but it was just to show what was going on.) Perfect. But I need to do it for as many files as there are in the directory.

    So, I have an array of the names of the files in the directory in $mp3data, and I have an array of mp3 data in $mp3dataArray from one iteration of the get_metadata() function. If I do the following, then I get a nice list of the mp3 data of the 5 files in the directory:

        foreach ($mp3data as $key => $val) {
            $mp3datafile = $val;
            $m = new mp3file($mp3datafile);
            $mp3dataArray = $m->get_metadata();
            print_r($mp3dataArray);
            unset($mp3dataArray);
        }

    As expected. Where I'm struggling, and have been for most of the day in spite of reading many forums and tutorials, is how to populate the "second dimension" of the array, so that it goes through 1.mp3, 2.mp3, 3.mp3, 4.mp3 and 5.mp3 (or however many there are), extracts the metadata, then allows me to use it in the XML section above. Here's what I have:

        foreach ($mp3data as $key => $val) {
            $mp3datafile = $val;
            $m = new mp3file($mp3datafile);
            $mp3dataArray = $m->get_metadata();
            $mp3testarray = array($mp3dataArray);
        }
        print_r($mp3dataArray);

    Shouldn't that print_r($mp3dataArray) line give me a nice list of 5 lots of mp3 data, the way it did when I looped through before? Cos this is driving me nuts! It must be something so simple, and any help would be greatly appreciated. Thank you in advance.

    Read the article

  • XDocument + IEnumerable is causing out of memory exception in System.Xml.Linq.dll

    - by Manatherin
    Basically I have a program which, when it starts, loads a list of files (as FileInfo) and for each file in the list loads an XML document (as XDocument). The program then reads data out of it into a container class (storing it as IEnumerables), at which point the XDocument goes out of scope. The program then exports the data from the container class to a database. After the export, the container class goes out of scope; however, the garbage collector isn't clearing up the container class, which, because it stores IEnumerables, seems to lead to the XDocument staying in memory. (I'm not sure this is the reason, but Task Manager shows that the memory from the XDocument isn't being freed.) As the program loops through multiple files, it eventually throws an out-of-memory exception. To mitigate this I've ended up using System.GC.Collect() to force the garbage collector to run after the container goes out of scope. This is working, but my questions are: Is this the right thing to do? (Forcing the garbage collector to run seems a bit odd.) Is there a better way to make sure the XDocument memory is being disposed? Could there be a different reason, other than the IEnumerable, that the document memory isn't being freed? Thanks.

    Edit: code samples. The container class:

        public IEnumerable<CustomClassOne> CustomClassOne { get; set; }
        public IEnumerable<CustomClassTwo> CustomClassTwo { get; set; }
        public IEnumerable<CustomClassThree> CustomClassThree { get; set; }
        ...
        public IEnumerable<CustomClassNine> CustomClassNine { get; set; }

    A custom class:

        public long VariableOne { get; set; }
        public int VariableTwo { get; set; }
        public DateTime VariableThree { get; set; }
        ...

    Anyway, those are the basic structures. The custom classes are populated through the container class from the XML document. The filled structures themselves use very little memory. A container class is filled from one XML document, goes out of scope, and the next document is then loaded, e.g.:

        public static void ExportAll(IEnumerable<FileInfo> files)
        {
            foreach (FileInfo file in files)
            {
                ExportFile(file);
                //Temporary to clear memory
                System.GC.Collect();
            }
        }

        private static void ExportFile(FileInfo file)
        {
            ContainerClass containerClass = Reader.ReadXMLDocument(file);
            ExportContainerClass(containerClass);
            //Export simply dumps the data from the container class into a database
            //Container class (and any passed container classes) goes out of scope at end of export
        }

        public static ContainerClass ReadXMLDocument(FileInfo fileToRead)
        {
            XDocument document = GetXDocument(fileToRead);
            var containerClass = new ContainerClass();
            //ForEach customClass in containerClass
            //Read all data for customClass from XDocument
            return containerClass;
        }

    Forgot to mention this bit (not sure if it's relevant): the files can be compressed as .gz, so I have the GetXDocument() method to load them:

        private static XDocument GetXDocument(FileInfo fileToRead)
        {
            XDocument document;
            using (FileStream fileStream = new FileStream(fileToRead.FullName,
                FileMode.Open, FileAccess.Read, FileShare.Read))
            {
                if (String.Compare(fileToRead.Extension, ".gz", true) == 0)
                {
                    using (GZipStream zipStream = new GZipStream(fileStream, CompressionMode.Decompress))
                    {
                        document = XDocument.Load(zipStream);
                    }
                }
                else
                {
                    document = XDocument.Load(fileStream);
                }
                return document;
            }
        }

    Hope this is enough information. Thanks.

    Edit: The System.GC.Collect() is not working 100% of the time; sometimes the program seems to retain the XDocument. Does anyone have any idea why this might be?
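
    If the container's IEnumerable properties are deferred LINQ-to-XML queries, each one keeps the XDocument reachable for as long as the property is alive, which matches the symptoms described. Here is a hedged sketch of the usual fix: materialize the query with ToList() inside the read method so nothing still references the document afterwards. The element names and the single populated property are hypothetical; ContainerClass and CustomClassOne are the question's types:

        using System.Linq;
        using System.Xml.Linq;

        static ContainerClass ReadXmlDocumentEager(System.IO.FileInfo fileToRead)
        {
            XDocument document = XDocument.Load(fileToRead.FullName);
            var container = new ContainerClass();

            // ToList() copies the projected objects out of the document immediately.
            // Without it, the deferred query would hold a reference to `document`
            // for as long as the IEnumerable property is alive.
            container.CustomClassOne = document
                .Descendants("CustomClassOne")          // hypothetical element name
                .Select(e => new CustomClassOne
                {
                    VariableOne = (long)e.Element("VariableOne"),
                    VariableTwo = (int)e.Element("VariableTwo"),
                })
                .ToList();

            return container;
            // `document` is unreachable once this method returns, so the GC can
            // reclaim it without an explicit GC.Collect().
        }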

    Read the article

  • Loading multiple copies of a group of DLLs in the same process

    - by george
    Background

    I'm maintaining a plugin for an application, using Visual C++ 2003. The plugin is composed of several DLLs: there's the main DLL, the one the application loads using LoadLibrary, and there are several utility DLLs that are used by the main DLL and by each other. Dependencies generally look like this:

        plugin.dll -> utilA.dll, utilB.dll
        utilA.dll  -> utilB.dll
        utilB.dll  -> utilA.dll, utilC.dll

    You get the picture. Some of the dependencies between the DLLs are load-time and some run-time. All the DLL files are stored in the executable's directory (not a requirement, just how it works now).

    The problem

    There's a new requirement: running multiple instances of the plugin within the application. The application runs each instance of the plugin in its own thread, i.e. each thread calls into the plugin's code. The plugin's code, however, is anything but thread-safe: lots of global variables etc. Unfortunately, fixing the whole thing isn't currently an option, so I need a way to load multiple (at most 3) copies of the plugin's DLLs in the same process.

    Option 1: The distinct-names approach

    Create 3 copies of each DLL file, so that each file has a distinct name, e.g. plugin1.dll, plugin2.dll, plugin3.dll, utilA1.dll, utilA2.dll, utilA3.dll, utilB1.dll, etc. The application will load plugin1.dll, plugin2.dll and plugin3.dll. The files will be in the executable's directory. For each group of DLLs to know each other by name (so the inter-dependencies work), the names need to be known at compilation time, meaning the DLLs need to be compiled multiple times, each time with different output file names. Not very complicated, but I'd hate having 3 copies of the VS project files, and I don't like having to compile the same files over and over.

    Option 2: The side-by-side assemblies approach

    Create 3 copies of the DLL files, each group in its own directory, and define each group as an assembly by putting an assembly manifest file in the directory, listing the plugin's DLLs. Each DLL will have an application manifest pointing to the assembly, so that the loader finds the copies of the utility DLLs that reside in the same directory. The manifest needs to be embedded for it to be found when a DLL is loaded using LoadLibrary. I'll use mt.exe from a later VS version for the job, since VS2003 has no built-in manifest embedding support. I've tried this approach with partial success: dependencies are found at load time of the DLLs, but not when a DLL function is called that loads another DLL. This seems to be the expected behavior according to this article: a DLL's activation context is only used at the DLL's load time; afterwards it's deactivated and the process's activation context is used. I haven't yet tried working around this using ISOLATION_AWARE_ENABLED.

    Questions

    Got any other options? Any quick & dirty solution will do. :-) Will ISOLATION_AWARE_ENABLED even work with VS2003? Comments will be greatly appreciated. Thanks!

    Read the article

  • How can I get more than one .jpg or .txt file from any folder?

    - by Phsika
    Dear Sirs: I have two applications. Server.cs listens to the network stream; Client.cs sends files. But I want to send multiple files over the stream from a folder. For example, I have C:\folder, which contains 3 jpg files. My client must send them all, and my server must receive the files from the stream.

    Client.cs:

        private void btn_send2_Click(object sender, EventArgs e)
        {
            string[] paths = null;
            paths = System.IO.Directory.GetFiles(@"C:\folder" + @"\", "*.jpg",
                System.IO.SearchOption.AllDirectories);
            byte[] Dizi;
            TcpClient Gonder = new TcpClient("127.0.0.1", 51124);
            FileStream Dosya;
            FileInfo Dos;
            NetworkStream Akis;
            foreach (string path in paths)
            {
                Dosya = new FileStream(path, FileMode.OpenOrCreate);
                Dos = new FileInfo(path);
                Dizi = new byte[(int)Dos.Length];
                Dosya.Read(Dizi, 0, (int)Dos.Length);
                Akis = Gonder.GetStream();
                Akis.Write(Dizi, 0, (int)Dosya.Length);
                Gonder.Close();
                Akis.Flush();
                Dosya.Close();
            }
        }

    And I have Server.cs:

        void Dinle()
        {
            TcpListener server = null;
            try
            {
                Int32 port = 51124;
                IPAddress localAddr = IPAddress.Parse("127.0.0.1");
                server = new TcpListener(localAddr, port);
                server.Start();
                Byte[] bytes = new Byte[1024 * 250000];
                // string ReceivedPath = "C:/recieved";
                while (true)
                {
                    MessageBox.Show("Waiting for a connection... ");
                    TcpClient client = server.AcceptTcpClient();
                    MessageBox.Show("Connected!");
                    NetworkStream stream = client.GetStream();
                    if (stream.CanRead)
                    {
                        saveFileDialog1.ShowDialog(); // this part will change
                        string pathfolder = saveFileDialog1.FileName;
                        StreamWriter yaz = new StreamWriter(pathfolder);
                        string satir;
                        StreamReader oku = new StreamReader(stream);
                        while ((satir = oku.ReadLine()) != null)
                        {
                            satir = satir + (char)13 + (char)10;
                            yaz.WriteLine(satir);
                        }
                        oku.Close();
                        yaz.Close();
                        client.Close();
                    }
                }
            }
            catch (SocketException e)
            {
                Console.WriteLine("SocketException: {0}", e);
            }
            finally
            {
                // Stop listening for new clients.
                server.Stop();
            }
            Console.WriteLine("\nHit enter to continue...");
            Console.Read();
        }

    Please look at Client.cs: I collected all the files from C:\folder with

        paths = System.IO.Directory.GetFiles(@"C:\folder" + @"\", "*.jpg",
            System.IO.SearchOption.AllDirectories);

    How can my Server.cs receive all the files from the stream?
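
    The client above closes its single connection (Gonder.Close()) inside the loop, so at most one file ever goes out. One simple hedged fix, keeping the server's "one connection equals one file" view: open a fresh connection per file and copy raw bytes. The address, port, and folder are the question's values; the method name is illustrative:

        using System.IO;
        using System.Net.Sockets;

        static void SendFolder(string folder)
        {
            foreach (string path in Directory.GetFiles(folder, "*.jpg", SearchOption.AllDirectories))
            {
                // One connection per file: the server can treat "connection closed"
                // as "file complete" without any extra framing protocol.
                using (var client = new TcpClient("127.0.0.1", 51124))
                using (NetworkStream net = client.GetStream())
                using (FileStream file = File.OpenRead(path))
                {
                    byte[] buffer = new byte[8192];
                    int read;
                    while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
                        net.Write(buffer, 0, read);
                }
            }
        }

    On the server side, binary data such as JPEGs should be copied byte-for-byte to a FileStream rather than line-by-line through a StreamReader, which will corrupt non-text content.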

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicating from it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G of RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G of RAM but similar hard drives, RAID, etc. It also ran Apache, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application.

    The problem is that the new db server has mysqld.exe consuming 95-100% of CPU almost all the time, and it is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is about setting up config files for small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down!

    FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because I can change some settings (like the server-id) and it will kill the server at startup.

    Here is the my.ini file:

        #MySQL Server Instance Configuration File
        # ----------------------------------------------------------------------
        # Generated by the MySQL Server Instance Configuration Wizard
        #
        # Installation Instructions
        # ----------------------------------------------------------------------
        # On Linux you can copy this file to /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options
        # (@localstatedir@ for this installation) or to ~/.my.cnf to set
        # user-specific options.
        #
        # On Windows you should keep this file in the installation directory
        # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
        # make sure the server reads the config file use the startup option
        # "--defaults-file".
        #
        # To run the server from the command line, execute this in a
        # command line shell, e.g.
        #   mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # To install the server as a Windows service manually, execute this in a
        # command line shell, e.g.
        #   mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # And then execute this in a command line shell to start the server, e.g.
        #   net start MySQLXY
        #
        # Guidelines for editing this file
        # ----------------------------------------------------------------------
        # In this file, you can use all long options that the program supports.
        # If you want to know the options a program supports, start the program
        # with the "--help" option.
        #
        # More detailed information about the individual options can also be
        # found in the manual.
        #
        # CLIENT SECTION
        # ----------------------------------------------------------------------
        # The following options will be read by MySQL client applications.
        # Note that only client applications shipped by MySQL are guaranteed
        # to read this section. If you want your own MySQL client program to
        # honor these values, you need to specify it as an option during the
        # MySQL client library initialization.
        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        # SERVER SECTION
        # ----------------------------------------------------------------------
        # The following options will be read by the MySQL Server. Make sure that
        # you have installed the server correctly (see above) so it reads this
        # file.
        [mysqld]

        # The TCP/IP Port the MySQL Server will listen on
        port=3306

        # Path to installation directory. All paths are usually resolved
        # relative to this.
        basedir="D:/MySQL/"

        # Path to the database root
        datadir="D:/MySQL/data"

        # The default character set that will be used when a new schema or
        # table is created and no character set is defined
        default-character-set=latin1

        # The default storage engine that will be used when create new tables when
        default-storage-engine=MYISAM

        # Set the SQL mode to strict
        #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        # we changed this because there are a couple of queries that can get
        # blocked otherwise
        sql-mode=""

        # performance configs
        skip-locking
        max_allowed_packet = 1M
        table_open_cache = 512

        # The maximum amount of concurrent sessions the MySQL server will
        # allow. One of these connections will be reserved for a user with
        # SUPER privileges to allow the administrator to login even if the
        # connection limit has been reached.
        max_connections=1510

        # Query cache is used to cache SELECT results and later return them
        # without actual executing the same query once again. Having the query
        # cache enabled may result in significant speed improvements, if your
        # have a lot of identical queries and rarely changing tables. See the
        # "Qcache_lowmem_prunes" status variable to check if the current value
        # is high enough for your load.
        # Note: In case your tables change very often or if your queries are
        # textually different every time, the query cache may result in a
        # slowdown instead of a performance improvement.
        query_cache_size=168M

        # The number of open tables for all threads. Increasing this value
        # increases the number of file descriptors that mysqld requires.
        # Therefore you have to make sure to set the amount of open files
        # allowed to at least 4096 in the variable "open-files-limit" in
        # section [mysqld_safe]
        table_cache=3020

        # Maximum size for internal (in-memory) temporary tables. If a table
        # grows larger than this value, it is automatically converted to disk
        # based table This limitation is for a single table. There can be many
        # of them.
        tmp_table_size=30M

        # How many threads we should keep in a cache for reuse. When a client
        # disconnects, the client's threads are put in the cache if there aren't
        # more than thread_cache_size threads from before. This greatly reduces
        # the amount of thread creations needed if you have a lot of new
        # connections. (Normally this doesn't give a notable performance
        # improvement if you have a good thread implementation.)
        thread_cache_size=64

        #*** MyISAM Specific options

        # The maximum size of the temporary file MySQL is allowed to use while
        # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE.
        # If the file-size would be bigger than this, the index will be created
        # through the key cache (which is slower).
        myisam_max_sort_file_size=100G

        # If the temporary file used for fast index creation would be bigger
        # than using the key cache by the amount specified here, then prefer the
        # key cache method. This is mainly used to force long character keys in
        # large tables to use the slower key cache method to create the index.
        myisam_sort_buffer_size=64M

        # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
        # Do not set it larger than 30% of your available memory, as some memory
        # is also required by the OS to cache rows. Even if you're not using
        # MyISAM tables, you should still set it to 8-64M as it will also be
        # used for internal temporary disk tables.
        key_buffer_size=3072M

        # Size of the buffer used for doing full table scans of MyISAM tables.
        # Allocated per thread, if a full scan is needed.
        read_buffer_size=2M
        read_rnd_buffer_size=8M

        # This buffer is allocated when MySQL needs to rebuild the index in
        # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE
        # into an empty table. It is allocated per thread so be careful with
        # large settings.
        sort_buffer_size=2M

        #*** INNODB Specific options ***
        innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

        # Use this option if you have a MySQL server with InnoDB support enabled
        # but you do not plan to use it. This will save memory and disk space
        # and speed up some things.
        skip-innodb

        # Additional memory pool that is used by InnoDB to store metadata
        # information. If InnoDB requires more memory for this purpose it will
        # start to allocate it from the OS. As this is fast enough on most
        # recent operating systems, you normally do not need to change this
        # value. SHOW INNODB STATUS will display the current amount used.
        innodb_additional_mem_pool_size=11M

        # If set to 1, InnoDB will flush (fsync) the transaction logs to the
        # disk at each commit, which offers full ACID behavior. If you are
        # willing to compromise this safety, and you are running small
        # transactions, you may set this to 0 or 2 to reduce disk I/O to the
        # logs. Value 0 means that the log is only written to the log file and
        # the log file flushed to disk approximately once per second. Value 2
        # means the log is written to the log file at each commit, but the log
        # file is only flushed to disk approximately once per second.
        innodb_flush_log_at_trx_commit=1

        # The size of the buffer InnoDB uses for buffering log data. As soon as
        # it is full, InnoDB will have to flush it to disk. As it is flushed
        # once per second anyway, it does not make sense to have it very large
        # (even with long transactions).
        innodb_log_buffer_size=6M

        # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
        # row data. The bigger you set this the less disk I/O is needed to
        # access data in tables. On a dedicated database server you may set this
        # parameter up to 80% of the machine physical memory size. Do not set it
        # too large, though, because competition of the physical memory may
        # cause paging in the operating system. Note that on 32bit systems you
        # might be limited to 2-3.5G of user level memory per process, so do not
        # set it too high.
        innodb_buffer_pool_size=500M

        # Size of each log file in a log group. You should set the combined size
        # of log files to about 25%-100% of your buffer pool size to avoid
        # unneeded buffer pool flush activity on log file overwrite. However,
        # note that a larger logfile size will increase the time needed for the
        # recovery process.
        innodb_log_file_size=100M

        # Number of threads allowed inside the InnoDB kernel. The optimal value
        # depends highly on the application, hardware as well as the OS
        # scheduler properties. A too high value may lead to thread thrashing.
        innodb_thread_concurrency=10

        # replication settings (this is the master)
        log-bin=log
        server-id = 1

    Thanks for all the help. It is greatly appreciated.

    Read the article
