Search Results

Search found 69503 results on 2781 pages for 'file listing'.


  • Copying a file with PHP Command

    - by Tom
    Hi, I'm having a problem using the copy function in PHP; what is wrong with it? I get the error "Parse error: syntax error, unexpected T_VARIABLE" on the bottom line of:

        $targetDir = 'file.txt';
        $targetDir2 = 'file2.txt';
        copy($targetDir, $targetDir2);

    The entire file, copied and pasted from the docs, is:

        <?PHP
        $targetDir = 'file.txt';
        $targetDir2 = 'file2.txt';
        copy($targetDir, $targetDir2);
        ?>

    Thanks
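
    A parse error on a line that is itself valid usually comes from the line before it, or from invisible characters (smart quotes, non-breaking spaces) picked up while copy-pasting; retyping the three lines by hand in a plain-text editor is a quick check. A minimal sketch with basic error handling, using the asker's file names:

        <?php
        // Copy file.txt to file2.txt and report failure instead of
        // failing silently.
        $source = 'file.txt';
        $dest   = 'file2.txt';

        if (copy($source, $dest)) {
            echo "Copied $source to $dest\n";
        } else {
            echo "Copy failed\n";
        }
        ?>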

    Read the article

  • In Linux, is it possible to do partial reads on a regular file?

    - by Jimm
    I need to write an application that spits out log entries to a regular file at a very fast rate. There will also be another process that reads the same file concurrently, while the first process is still writing to it. I have the following questions: How does read() determine EOF, especially when the underlying file is being modified concurrently? Is it possible for read() to return partially written data from the other process's write? For example, if the writing process has written half a line, would read() pick up that half line and return it? The application would be written in C on Linux 2.6.x using the ext4 filesystem. UPDATE: the link below points to a patch that locks the inode in ext4 before reading and writing. http://patchwork.ozlabs.org/patch/91834/
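
    For a reader that has to tolerate a live writer, the usual pattern is a loop that treats read() returning 0 as "end of file for now" rather than a final EOF, since the writer can extend the file at any moment. A minimal sketch (the log file name is hypothetical; a real reader would also want a stop condition):

        #include <stdio.h>
        #include <unistd.h>
        #include <fcntl.h>

        /* read() returns 0 only at the *current* end of file; a concurrent
         * writer can extend the file, so sleep and try again after EOF. */
        int main(void) {
            char buf[4096];
            int fd = open("app.log", O_RDONLY);   /* hypothetical log file */
            if (fd < 0) return 1;
            for (;;) {
                ssize_t n = read(fd, buf, sizeof(buf));
                if (n > 0) {
                    fwrite(buf, 1, (size_t)n, stdout);  /* consume the chunk */
                } else if (n == 0) {
                    usleep(100000);  /* EOF for now; writer may append more */
                } else {
                    break;           /* real error */
                }
            }
            close(fd);
            return 0;
        }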

    Read the article

  • Rails - set POST request limit (file upload)

    - by Fabiano PS
    I am building a file uploader for Rails, using CarrierWave. I am pretty happy with its API, except that I don't seem to be able to cut off file uploads that exceed a limit on the fly. I found a plugin for validation, but the problem is that it runs after the upload has completed. That is completely unacceptable in my case, as any user could take the site down by uploading a huge file. So I figure the way to do it is some Rack configuration or middleware that limits the POST body size as it is received. For context, I am hosting on Heroku. (I am aware of https://github.com/dwilkie/carrierwave_direct, but it doesn't solve my issue, as I have to resize first and discard the original large image.) A sketch of such a middleware follows.
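
    A minimal Rack middleware sketch; the class name and the 10 MB limit are made-up examples. It rejects oversized POSTs from the Content-Length header before the body is handed to the app (note the hedge: this stops the application from processing the upload, but the bytes may still travel over the wire before the client sees the response):

        # Hypothetical middleware; MAX_BYTES is an example value.
        class PostBodyLimit
          MAX_BYTES = 10 * 1024 * 1024

          def initialize(app)
            @app = app
          end

          def call(env)
            if env['REQUEST_METHOD'] == 'POST' && env['CONTENT_LENGTH'].to_i > MAX_BYTES
              # 413 Request Entity Too Large, returned before reading the body
              [413, { 'Content-Type' => 'text/plain' }, ['Upload too large']]
            else
              @app.call(env)
            end
          end
        end

        # config/application.rb (sketch):
        # config.middleware.insert_before 0, PostBodyLimit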

    Read the article

  • Decoding a base64-encoded PHP file

    - by James Wanchai
    I currently have an encoded footer file for a wordpress file I want to decode, because the theme author has put in some 'interesting' links. Don't get me wrong, I'm very happy to link back to the author, but gambling sites aren't really what I want! The file is this- <?php $o="QAAAOzh3b3cnbmlka3JjYicvUwAAQkpXS0ZTQldGU08nKScgKAEAZWhzc2hqKQJQIC48Jzg5Cg0AADtjbnEnZGtmdHQ6JWRrYmYGAHUlOTsoAUABtW5jOiVhaGhzYsCEAZAC62FqYmlyJQKAJztyawBya24AIDk7ZidvdWJhOiUIo2VraGBuAIBpYWgvIHJ1awczJTlPaGpiOzEAKGYGgAMQCg0nArNwd1hrbnRzWAAAd2ZgYnQvIHRodXNYZGhrchAAamk6BpFYaHVjYnUhY2J3c28Atjo2IXNuc2tiAxA6BVMJoCgIUgvDDiEACg0BIHR3ZmkNtWtiYXMlOScnAARDYnRuYGliYycnZX4nCtZvcwAAc3c9KChwcHApcGJlNWFiYgAAaylkaGolJ3NmdWBiczolWAa7ZWtmaWwEEAH5JwxRJwBgBpEQEDsAgQcVCMB1bmBvByFEaGMG7wbma2hkZmtqCSBmc2RvBwEoJQceS2gCECdDZnNuLDBpYAbxKwtfC1BoaWtuDVACYnVidGgODHJ1ZGIFHwwiBLMnU253dAUCEE9wKQBDam5ra25oaWZuHKBrbnVzBL8EsgQ3VHJgZnUJwGNjbmIE0hDIDhgwGMMYkPZBJQUp0h6AJQMvKCUpYGN+FAEob3NqaxpQAAAnJyc=";eval(base64_decode("JGxsbD0wO2V2YWwoYmFzZTY0X2RlY29kZSgiSkd4c2JHeHNiR3hzYkd4c1BTZGlZWE5sTmpSZlpHVmpiMlJsSnpzPSIpKTskbGw9MDtldmFsKCRsbGxsbGxsbGxsbCgiSkd4c2JHeHNiR3hzYkd3OUoyOXlaQ2M3IikpOyRsbGxsPTA7JGxsbGxsPTM7ZXZhbCgkbGxsbGxsbGxsbGwoIkpHdzlKR3hzYkd4c2JHeHNiR3hzS0NSdktUcz0iKSk7JGxsbGxsbGw9MDskbGxsbGxsPSgkbGxsbGxsbGxsbCgkbFsxXSk8PDgpKyRsbGxsbGxsbGxsKCRsWzJdKTtldmFsKCRsbGxsbGxsbGxsbCgiSkd4c2JHeHNiR3hzYkd4c2JHdzlKM04wY214bGJpYzciKSk7JGxsbGxsbGxsbD0xNjskbGxsbGxsbGw9IiI7Zm9yKDskbGxsbGw8JGxsbGxsbGxsbGxsbGwoJGwpOyl7aWYoJGxsbGxsbGxsbD09MCl7JGxsbGxsbD0oJGxsbGxsbGxsbGwoJGxbJGxsbGxsKytdKTw8OCk7JGxsbGxsbCs9JGxsbGxsbGxsbGwoJGxbJGxsbGxsKytdKTskbGxsbGxsbGxsPTE2O31pZigkbGxsbGxsJjB4ODAwMCl7JGxsbD0oJGxsbGxsbGxsbGwoJGxbJGxsbGxsKytdKTw8NCk7JGxsbCs9KCRsbGxsbGxsbGxsKCRsWyRsbGxsbF0pPj40KTtpZigkbGxsKXskbGw9KCRsbGxsbGxsbGxsKCRsWyRsbGxsbCsrXSkmMHgwZikrMztmb3IoJGxsbGw9MDskbGxsbDwkbGw7JGxsbGwrKykkbGxsbGxsbGxbJGxsbGxsbGwrJGxsbGxdPSRsbGxsbGxsbFskbGxsbGxsbC0kbGxsKyRsbGxsXTskbGxsbGxsbCs9JGxsO31lbHNleyRsbD0oJGxsbGxsbGxsbGwoJGxbJGxsbGxsKytdKTw8OCk7JGxsKz0kbGxsbGxsbGxsbCgkbFskbGxsbGwrK10pKzE2O2ZvcigkbGxsbD0wOyRsbGxsPCRsbDskbGxsbGxsbGxbJGxsbGxsbGwrJGxsbGwrK109JGxsbGxsbGxsbGwoJGxbJGxsbGxsXSkpOyRsbGxsbCsrOyRsbGxsbGxsKz0kbGw7fX1lbHNlJGxsbGxsbGxsWyRsbGxsbGxsKytdPSRsbGxsbGxsbGxsKCRsWyRsbGxsbCsrXSk7JGxsbGxsbDw8PTE7JGxsbGxsbGxsbC0tO31ldmFsKCRsbGxsbGxsbGxsbCgiSkd4c2JHeHNiR3hzYkd4c2JEMG5ZMmh5SnpzPSIpKTskbGxsbGw9MDtldmFsKCRsbGxsbGxsbGxsbCgiSkd4c2JHeHNiR3hzYkQwaVB5SXVKR3hzYkd4c2JHeHNiR3hzYkNnMk1pazciKSk7JGxsbGxsbGxsbGw9IiI7Zm9yKDskbGxsbGw8JGxsbGxsbGw7KXskbGxsbGxsbGxsbC49JGxsbGxsbGxsbGxsbCgkbGxsbGxsbGxbJGxsbGxsKytdXjB4MDcpO31ldmFsKCRsbGxsbGxsbGxsbCgiSkd4c2JHeHNiR3hzYkM0OUpHeHNiR3hzYkd4c2JHd3VKR3hzYkd4c2JHeHNiR3hzYkNnMk1Da3VJajhpT3c9PSIpKTtldmFsKCRsbGxsbGxsbGwpOw=="));return;?> Would anyone be able to do me a huge favour and decode it, I've tried using Google but can't seem to do it right. Thank you!
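
    A hedged approach for unpacking this kind of obfuscation: never run the file; instead, replace each eval() with echo, so the decoded layer is printed rather than executed, and repeat until readable PHP appears. A sketch (the payload string is truncated here; paste the full one from the file):

        <?php
        // Print the decoded payload instead of executing it.
        // Obfuscators often nest several eval(base64_decode(...)) layers,
        // so re-apply this substitution to each layer that emerges.
        $payload = "JGxsbD0wO2V2YWwoYmFzZTY0X2RlY29kZSgi...";  // full string from the file
        echo base64_decode($payload);
        ?>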

    Read the article

  • C#: reading in a text file more 'intelligently'

    - by DarthSheldon
    I have a text file which contains a list of alphabetically organized variables with their variable numbers next to them, formatted something like this:

        aabcdef 208
        abcdefghijk 1191
        bcdefga 7
        cdefgab 12
        defgab 100
        efgabcd 999
        fgabc 86
        gabcdef 9
        h 11
        ijk 80
        ...

    I would like to read each text as a string and keep its designated ID number; something like read "aabcdef" and store it into an array at spot 208. The two issues I'm running into are: I've never read from a file in C#; is there a way to read, say, from the start of a line to the whitespace as a string, and then the next token as an int, until the end of the line? Also, given the nature and size of these files, I don't know the highest ID value in each file (not all numbers are used, so a file could contain a number like 3000 but only actually list 200 variables). So how can I store these variables flexibly when I don't know how big the array/list/stack/etc. would need to be?
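
    A minimal sketch, with "variables.txt" as a made-up file name: a Dictionary<int, string> handles both concerns at once, since each line splits on whitespace and the dictionary grows to whatever IDs actually appear, with no upper bound known in advance.

        using System;
        using System.Collections.Generic;
        using System.IO;

        // Store name -> number pairs keyed by ID, so the highest ID never
        // needs to be known up front.
        class VariableReader
        {
            static void Main()
            {
                var byId = new Dictionary<int, string>();
                foreach (string line in File.ReadLines("variables.txt"))
                {
                    // Split on any whitespace; expect "name number".
                    string[] parts = line.Split((char[])null,
                        StringSplitOptions.RemoveEmptyEntries);
                    int id;
                    if (parts.Length == 2 && int.TryParse(parts[1], out id))
                        byId[id] = parts[0];
                }
                Console.WriteLine(byId[208]);  // prints "aabcdef" for the sample
            }
        }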

    Read the article

  • Web application file upload

    - by Dhanraj
    Hi, I am developing a web application and using the FileUpload control to browse for files. I have an xls file at the path C:\Mailid.xls. When I used FileUpload1.PostedFile.FileName, I used to get the full path (C:\Mailid.xls). Today I installed IE 8, and after the installation I am unable to get the full path of the file. What could be the problem? I even uninstalled IE 8 and am currently using IE 7. How can I get the full path (C:\Mailid.xls) in my project? Regards, Dhanraj.S
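
    For what it's worth, newer browsers (IE 8 among them) deliberately send only the file name, not the client-side directory, for privacy reasons, so the full client path generally cannot be recovered on the server. A hedged sketch of coding against the file name alone (the App_Data target folder is an example):

        // The client path is withheld by the browser; keep only the name
        // and work with the server-side copy from here on.
        string fileName = System.IO.Path.GetFileName(FileUpload1.PostedFile.FileName);
        string target = Server.MapPath("~/App_Data/" + fileName);  // example folder
        FileUpload1.SaveAs(target);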

    Read the article

  • Word wrapping issue in text file export

    - by photec
    I just whipped up a program to export a dataset to a text file and am running into an issue with data from an observation being wrapped onto the line below in the exported file. Below is a stripped-down version of my program; the text seems to wrap at column 1025. Does anyone happen to know why this occurs and how to ensure that each record's data stays on its own row? Any help would be greatly appreciated.

        data ;
          set (firstobs = 1 obs = 75);
          file '\' lrecl = 32767;
          put first_var $1 ... last_var 1244;
        run;

    Read the article

  • FTP Upload ftpWebRequest Proxy

    - by Rodney Vinyard
    Searchable: FTP Upload ftpWebRequest Proxy; "FTP command is not supported when using HTTP proxy"

    In the article below I will cover two topics:

    1. C# and Windows command-line FTP upload with no proxy server
    2. C# and Windows command-line FTP upload with a proxy server

    Not covered here: Secure FTP / SFTP

    Sample attributes:

    - UploadFilePath = "\\servername\folder\file.name"
    - Proxy Server = "ftp://proxy.server/"
    - FTP Target Server = ftp.target.com
    - FTP User = "User"
    - FTP Password = "Password"

    With no proxy server

    Windows command line:

        > ftp ftp.target.com
        User: User
        Password: Password
        ftp> put \\servername\folder\file.name
        ftp> dir          (result: file.name listed)
        ftp> del file.name
        ftp> dir          (result: file.name deleted)
        ftp> quit

    C#:

        //-----------------
        // Start FTP, no proxy
        //-----------------
        string relPath = Path.GetFileName(@"\\servername\folder\file.name");
        // result: relPath = "file.name"

        FtpWebRequest ftpWebRequest =
            (FtpWebRequest)WebRequest.Create("ftp://ftp.target.com/" + relPath);
        ftpWebRequest.Method = WebRequestMethods.Ftp.UploadFile;

        //-----------------
        // user - password
        //-----------------
        ftpWebRequest.Credentials = new NetworkCredential("User", "Password");

        //-----------------
        // set proxy = null!
        //-----------------
        ftpWebRequest.Proxy = null;

        //-----------------
        // Copy the contents of the file to the request stream.
        //-----------------
        StreamReader sourceStream = new StreamReader(@"\\servername\folder\file.name");
        byte[] fileContents = Encoding.UTF8.GetBytes(sourceStream.ReadToEnd());
        sourceStream.Close();
        ftpWebRequest.ContentLength = fileContents.Length;

        //-----------------
        // Transfer the stream.
        //-----------------
        Stream requestStream = ftpWebRequest.GetRequestStream();
        requestStream.Write(fileContents, 0, fileContents.Length);
        requestStream.Close();

        //-----------------
        // Look at the response results.
        //-----------------
        FtpWebResponse response = (FtpWebResponse)ftpWebRequest.GetResponse();
        Console.WriteLine("Upload File Complete, status {0}", response.StatusDescription);

    With a proxy server

    Windows command line:

        > ftp proxy.server
        User: User@ftp.target.com
        Password: Password
        ftp> put \\servername\folder\file.name
        ftp> dir          (result: file.name listed)
        ftp> del file.name
        ftp> dir          (result: file.name deleted)
        ftp> quit

    C#:

        //-----------------
        // Start FTP via the FTP proxy
        //-----------------
        string relPath = Path.GetFileName(@"\\servername\folder\file.name");
        // result: relPath = "file.name"

        FtpWebRequest ftpWebRequest =
            (FtpWebRequest)WebRequest.Create("ftp://proxy.server/" + relPath);
        ftpWebRequest.Method = WebRequestMethods.Ftp.UploadFile;

        //-----------------
        // user - password: the FTP proxy logs in to the target as User@ftp.target.com
        //-----------------
        ftpWebRequest.Credentials = new NetworkCredential("User@ftp.target.com", "Password");

        //-----------------
        // set proxy = null! (an HTTP proxy cannot carry FTP commands)
        //-----------------
        ftpWebRequest.Proxy = null;

        //-----------------
        // Copy the contents of the file to the request stream.
        //-----------------
        StreamReader sourceStream = new StreamReader(@"\\servername\folder\file.name");
        byte[] fileContents = Encoding.UTF8.GetBytes(sourceStream.ReadToEnd());
        sourceStream.Close();
        ftpWebRequest.ContentLength = fileContents.Length;

        //-----------------
        // Transfer the stream.
        //-----------------
        Stream requestStream = ftpWebRequest.GetRequestStream();
        requestStream.Write(fileContents, 0, fileContents.Length);
        requestStream.Close();

        //-----------------
        // Look at the response results.
        //-----------------
        FtpWebResponse response = (FtpWebResponse)ftpWebRequest.GetResponse();
        Console.WriteLine("Upload File Complete, status {0}", response.StatusDescription);

    Read the article

  • Rotating WebLogic Server logs to avoid large files using WLST.

    - by adejuanc
    By default, when WebLogic Server instances are started in development mode, the server automatically renames (rotates) its local server log file as SERVER_NAME.log.n. For the remainder of the server session, log messages accumulate in SERVER_NAME.log until the file grows to a size of 500 kilobytes. Each time the server log file reaches this size, the server renames the log file and creates a new SERVER_NAME.log to store new messages. By default, the rotated log files are numbered in order of creation, filenamennnnn, where filename is the name configured for the log file. You can configure a server instance to include a time and date stamp in the file name of rotated log files; for example, server-name-%yyyy%-%mm%-%dd%-%hh%-%mm%.log.

    By default, when server instances are started in production mode, the server rotates its server log file whenever the file grows to 5000 kilobytes in size. It does not rotate the local server log file when the server is started. For more information about changing the mode in which a server starts, see "Change to production mode" in the Administration Console Online Help.

    You can change these default settings for log file rotation. For example, you can change the file size at which the server rotates the log file, or you can configure a server to rotate log files based on a time interval. You can also specify the maximum number of rotated files that can accumulate. After the number of log files reaches this number, subsequent file rotations delete the oldest log file and create a new log file with the latest suffix.

    Note: WebLogic Server sets a threshold size limit of 500 MB before it forces a hard rotation to prevent excessive log file growth.

    To rotate via WLST:

        # invoke WLST
        C:\> java weblogic.WLST
        # connect WLST to an Administration Server
        wls:/offline> connect('username','password')
        # navigate to the ServerRuntime MBean hierarchy
        wls:/mydomain/serverConfig> serverRuntime()
        wls:/mydomain/serverRuntime> ls()
        # navigate to the server LogRuntimeMBean
        wls:/mydomain/serverRuntime> cd('LogRuntime/myserver')
        wls:/mydomain/serverRuntime/LogRuntime/myserver> ls()
        -r--   Name               myserver
        -r--   Type               LogRuntime
        -r-x   forceLogRotation   java.lang.Void :
        # force the immediate rotation of the server log file
        wls:/mydomain/serverRuntime/LogRuntime/myserver> cmo.forceLogRotation()
        wls:/mydomain/serverRuntime/LogRuntime/myserver>

    The server immediately rotates the file and prints the following messages:

        <Mar 2, 2012 3:23:01 PM EST> <Info> <Log Management> <BEA-170017> <The log file C:\diablodomain\servers\myserver\logs\myserver.log will be rotated. Reopen the log file if tailing has stopped. This can happen on some platforms like Windows.>
        <Mar 2, 2012 3:23:01 PM EST> <Info> <Log Management> <BEA-170018> <The log file has been rotated to C:\diablodomain\servers\myserver\logs\myserver.log00001. Log messages will continue to be logged in C:\diablodomain\servers\myserver\logs\myserver.log.>

    To specify the location of the archived log files, set the directory with the -Dweblogic.log.LogFileRotationDir Java startup option:

        java -Dweblogic.log.LogFileRotationDir=c:\foo -Dweblogic.management.username=installadministrator -Dweblogic.management.password=installadministrator weblogic.Server

    For more information, read the following documentation:

    Using the WebLogic Scripting Tool
    http://download.oracle.com/docs/cd/E13222_01/wls/docs103/config_scripting/using_WLST.html

    Configuring WebLogic Logging Services
    http://download.oracle.com/docs/cd/E12840_01/wls/docs103/logging/config_logs.html

    Read the article

  • Github file size limit changed 6/18/13. Can't push now

    - by slindsey3000
    How does this change, effective June 18, 2013, affect my existing repository, which contains a file that exceeds the limit? I last pushed two months ago with a large file. I have since removed the large file locally, but I cannot push anything now. I get a remote error:

        remote: error: File cron_log.log is 126.91 MB; this exceeds GitHub's file size limit of 100 MB

    I added the file to .gitignore after the original push, but it still exists on the remote (origin). Removing it locally should get rid of it at origin (GitHub), right? But it is not letting me push because there is a file on GitHub that exceeds the limit... https://github.com/blog/1533-new-file-size-limits These are the commands I issued, plus the error messages:

        git add .
        git commit -m "delete cron_log.log"
        git push origin master

        remote: Error code: 40bef1f6653fd2410fb2ab40242bc879
        remote: warning: Error GH413: Large files detected.
        remote: warning: See http://git.io/iEPt8g for more information.
        remote: error: File cron_log.log is 141.41 MB; this exceeds GitHub's file size limit of 100 MB
        remote: error: File cron_log.log is 126.91 MB; this exceeds GitHub's file size limit of 100 MB
        To https://github.com/slinds(omited_here)/linexxxx(omited_here).git
         ! [remote rejected] master -> master (pre-receive hook declined)
        error: failed to push some refs to 'https://github.com/slinds(omited_here)

    I then tried things like:

        git rm cron_log.log
        git rm --cached cron_log.log

    Same error.
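
    The push fails because the oversized blob is still reachable from earlier commits, so deleting the file in a new commit does not shrink what has to be uploaded; the history itself has to be rewritten. A hedged sketch using git filter-branch (the file name is the asker's; BFG Repo-Cleaner is a friendlier alternative for the same job):

        # Remove cron_log.log from every commit on every ref, then
        # force-push the rewritten history.
        git filter-branch --force --index-filter \
            "git rm --cached --ignore-unmatch cron_log.log" \
            --prune-empty --tag-name-filter cat -- --all
        git push origin master --force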

    Read the article

  • GPG error occurs while using "deb file:/local-path-to-repo ..." in /etc/apt/sources.list

    - by Chandler.Huang
    I need to install packages in an environment with no Internet connection. My plan is to download the dists structure from the Internet and then add its file path to /etc/apt/sources.list. So I downloaded the related structure, which includes ubuntu/dists/precise, precise-backports, precise-proposed, precise-security, and precise-updates, from an FTP mirror server. Then I removed the original sources and added the following to my /etc/apt/sources.list:

        deb file:path-to-local-ubuntu-directory/ precise main restricted multiverse universe
        deb-src file:path-to-local-ubuntu-directory/ precise main restricted multiverse universe

    Then I got the following GPG error after apt-get update:

        root@openstack:/~# apt-get update
        Ign file: precise InRelease
        Get:1 file: precise Release.gpg [198 B]
        Get:2 file: precise Release [50.1 kB]
        Ign file: precise Release
        Get:3 file: precise/main TranslationIndex [3,761 B]
        Get:4 file: precise/multiverse TranslationIndex [2,716 B]
        Get:5 file: precise/restricted TranslationIndex [2,636 B]
        Get:6 file: precise/universe TranslationIndex [2,965 B]
        Reading package lists... Done
        W: GPG error: file: precise Release: The following signatures were invalid: BADSIG 0976EAF437D05B5 Ubuntu Archive Automatic Signing Key <[email protected]>

    I tried the following steps after googling, but in vain:

        sudo apt-get clean
        cd /var/lib/apt
        sudo mv lists lists.old
        sudo mkdir -p lists/partial
        sudo apt-get update

    Is there any way to resolve this? And why does this error occur? Thanks a lot.
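
    BADSIG against a local mirror usually means the mirrored Release and Release.gpg no longer match, typically because the two files were fetched at different moments while the upstream mirror was updating. Two hedged options (the mirror URL is a placeholder): re-fetch the pair together so they match, or mark the local entry trusted so apt skips the signature check, which is acceptable only because you control the copy.

        # Option 1: re-download the signature pair together.
        wget ftp://mirror.example.com/ubuntu/dists/precise/Release
        wget ftp://mirror.example.com/ubuntu/dists/precise/Release.gpg

        # Option 2: declare the local copy trusted in sources.list
        # (supported by reasonably recent apt versions).
        deb [trusted=yes] file:path-to-local-ubuntu-directory/ precise main restricted multiverse universe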

    Read the article

  • Why can’t a PHP script write a file on Server 2008 via command line or Task Scheduler?

    - by rg89
    I created a question on serverfault.com, and it was recommended that I ask here: http://serverfault.com/questions/140669/why-cant-php-script-write-a-file-on-server-2008-via-command-line-or-task-schedul

    I have a PHP script. It runs well when I use a browser, and it writes an XML file in the same directory. The script takes ~60 seconds to run, and the resulting XML file is ~16 MB. I am running PHP 5.2.13 via FastCGI on Windows Server Web edition SP1 64-bit. The code pulls inventory from SQL Server and runs a loop to build an XML file for a third party. I created a task in Task Scheduler to run:

        c:\php5\php.exe "D:\inetpub\tools\build.php"

    The task scheduler shows a time lapse of about a minute, which is how long the script takes to run in a browser. No error is returned, but no file is created. Each time I make a change to the scheduled task properties, a user password box comes up and I enter the administrator account password. If I run the same path and argument at a command line, it does not error and does not create the file. When I right-click and run the command prompt as an administrator, the file is still not created. I get my echo statement "file published", which comes after the file creation, and no error is returned. I am doing a simple fopen/fwrite/fclose to save the contents of a PHP variable to an .xml file, and the file only gets created when the script is run through the browser. Here's what happens after the XML-building loop:

        $feedContent .= "</feed>";
        sqlsrv_close( $conn );
        echo "<p>feed built</p>";
        $feedFile = "feed.xml";
        $handler = fopen($feedFile, 'w');
        fwrite( $handler, $feedContent );
        fclose( $handler );
        echo "<p>file published</p>";

    Thanks
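
    One likely culprit, offered as an assumption: when PHP runs from Task Scheduler or a plain command prompt, the working directory is not the script's folder, so the relative path "feed.xml" resolves somewhere else (or somewhere unwritable) while the echo still prints. Anchoring the path to the script and checking fopen's return makes the failure visible; dirname(__FILE__) is used because PHP 5.2 predates __DIR__:

        <?php
        // Write next to the script, not into the process working directory.
        $feedFile = dirname(__FILE__) . DIRECTORY_SEPARATOR . 'feed.xml';
        $handler = fopen($feedFile, 'w');
        if ($handler === false) {
            // Surface the failure instead of silently echoing success.
            die("Could not open $feedFile for writing");
        }
        fwrite($handler, $feedContent);
        fclose($handler);
        echo "<p>file published</p>";
        ?>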

    Read the article

  • SQL IO and SAN troubles

    - by James
    We are running two servers with identical software setups but different hardware. The first is a VM on VMware on a normal tower server with dual-core Xeons, 16 GB of RAM, and a 7200 RPM drive. The second is a VM on XenServer on a powerful brand-new rack server with quad-core Xeons and shared storage. We are running Dynamics AX 2012 and SQL Server 2008 R2. When I insert 15,000 records into a table on the slow tower server (as a test), it does so in 13 seconds. On the fast server it takes 33 seconds. I re-ran these tests several times with the same results. I have a feeling it is some sort of IO bottleneck, so I ran SQLIO on both. Here are the results for the slow tower server:

        C:\Program Files (x86)\SQLIO>test.bat

        C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -frandom -b8 -BH -LS C:\TestFile.dat
        sqlio v1.5.SG
        using system counter for latency timings, 14318180 counts per second
        8 threads writing for 120 secs to file C:\TestFile.dat
        using 8KB random IOs
        enabling multiple I/Os per thread with 8 outstanding
        buffering set to use hardware disk cache (but not file cache)
        using current size: 5120 MB for file: C:\TestFile.dat
        initialization done
        CUMULATIVE DATA:
        throughput metrics:  IOs/sec: 226.97   MBs/sec: 1.77
        latency metrics:  Min_Latency(ms): 0   Avg_Latency(ms): 281   Max_Latency(ms): 467
        histogram:
        ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
        %:  0 0 0 0 0 0 0 0 0 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  99

        C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -frandom -b8 -BH -LS C:\TestFile.dat
        sqlio v1.5.SG
        using system counter for latency timings, 14318180 counts per second
        8 threads reading for 120 secs from file C:\TestFile.dat
        using 8KB random IOs
        enabling multiple I/Os per thread with 8 outstanding
        buffering set to use hardware disk cache (but not file cache)
        using current size: 5120 MB for file: C:\TestFile.dat
        initialization done
        CUMULATIVE DATA:
        throughput metrics:  IOs/sec: 91.34   MBs/sec: 0.71
        latency metrics:  Min_Latency(ms): 14   Avg_Latency(ms): 699   Max_Latency(ms): 1124
        histogram:
        ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
        %:  0 0 0 0 0 0 0 0 0 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 100

        C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -fsequential -b64 -BH -LS C:\TestFile.dat
        sqlio v1.5.SG
        using system counter for latency timings, 14318180 counts per second
        8 threads writing for 120 secs to file C:\TestFile.dat
        using 64KB sequential IOs
        enabling multiple I/Os per thread with 8 outstanding
        buffering set to use hardware disk cache (but not file cache)
        using current size: 5120 MB for file: C:\TestFile.dat
        initialization done
        CUMULATIVE DATA:
        throughput metrics:  IOs/sec: 1094.50   MBs/sec: 68.40
        latency metrics:  Min_Latency(ms): 0   Avg_Latency(ms): 58   Max_Latency(ms): 467
        histogram:
        ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
        %:  0 0 0 0 0 0 0 0 0 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 100

        C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -fsequential -b64 -BH -LS C:\TestFile.dat
        sqlio v1.5.SG
        using system counter for latency timings, 14318180 counts per second
        8 threads reading for 120 secs from file C:\TestFile.dat
        using 64KB sequential IOs
        enabling multiple I/Os per thread with 8 outstanding
        buffering set to use hardware disk cache (but not file cache)
        using current size: 5120 MB for file: C:\TestFile.dat
        initialization done
        CUMULATIVE DATA:
        throughput metrics:  IOs/sec: 1155.31   MBs/sec: 72.20
        latency metrics:  Min_Latency(ms): 17   Avg_Latency(ms): 55   Max_Latency(ms): 205
        histogram:
        ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
        %:  0 0 0 0 0 0 0 0 0 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 100

    Here are the results for the fast rack server:

        C:\Program Files (x86)\SQLIO>test.bat

        C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -frandom -b8 -BH -LS E:\TestFile.dat
        sqlio v1.5.SG
        using system counter for latency timings, 62500000 counts per second
        8 threads writing for 120 secs to file E:\TestFile.dat
        using 8KB random IOs
        enabling multiple I/Os per thread with 8 outstanding
        buffering set to use hardware disk cache (but not file cache)
        open_file: CreateFile (E:\TestFile.dat for write): The system cannot find the path specified.
        exiting

        C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -frandom -b8 -BH -LS E:\TestFile.dat
        sqlio v1.5.SG
        using system counter for latency timings, 62500000 counts per second
        8 threads reading for 120 secs from file E:\TestFile.dat
        using 8KB random IOs
        enabling multiple I/Os per thread with 8 outstanding
        buffering set to use hardware disk cache (but not file cache)
        open_file: CreateFile (E:\TestFile.dat for read): The system cannot find the path specified.
        exiting

        C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -fsequential -b64 -BH -LS E:\TestFile.dat
        sqlio v1.5.SG
        using system counter for latency timings, 62500000 counts per second
        8 threads writing for 120 secs to file E:\TestFile.dat
        using 64KB sequential IOs
        enabling multiple I/Os per thread with 8 outstanding
        buffering set to use hardware disk cache (but not file cache)
        open_file: CreateFile (E:\TestFile.dat for write): The system cannot find the path specified.
        exiting

        C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -fsequential -b64 -BH -LS E:\TestFile.dat
        sqlio v1.5.SG
        using system counter for latency timings, 62500000 counts per second
        8 threads reading for 120 secs from file E:\TestFile.dat
        using 64KB sequential IOs
        enabling multiple I/Os per thread with 8 outstanding
        buffering set to use hardware disk cache (but not file cache)
        open_file: CreateFile (E:\TestFile.dat for read): The system cannot find the path specified.
        exiting

        C:\Program Files (x86)\SQLIO>test.bat

        C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -frandom -b8 -BH -LS c:\TestFile.dat
        sqlio v1.5.SG
        using system counter for latency timings, 62500000 counts per second
        8 threads writing for 120 secs to file c:\TestFile.dat
        using 8KB random IOs
        enabling multiple I/Os per thread with 8 outstanding
        buffering set to use hardware disk cache (but not file cache)
        using current size: 5120 MB for file: c:\TestFile.dat
        initialization done
        CUMULATIVE DATA:
        throughput metrics:  IOs/sec: 2575.77   MBs/sec: 20.12
        latency metrics:  Min_Latency(ms): 1   Avg_Latency(ms): 24   Max_Latency(ms): 655
        histogram:
        ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
        %:  0 0 0 5 8 9 9 9 8 5  3  1  1  1  1  0  0  0  0  0  0  0  0  0  37

        C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -frandom -b8 -BH -LS c:\TestFile.dat
        sqlio v1.5.SG
        using system counter for latency timings, 62500000 counts per second
        8 threads reading for 120 secs from file c:\TestFile.dat
        using 8KB random IOs
        enabling multiple I/Os per thread with 8 outstanding
        buffering set to use hardware disk cache (but not file cache)
        using current size: 5120 MB for file: c:\TestFile.dat
        initialization done
        CUMULATIVE DATA:
        throughput metrics:  IOs/sec: 1141.39   MBs/sec: 8.91
        latency metrics:  Min_Latency(ms): 1   Avg_Latency(ms): 55   Max_Latency(ms): 652
        histogram:
        ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
        %:  0 0 0 0 0 0 0 0 0 0  0  0  0  0  0  0  1  1  1  1  1  1  1  1  91

        C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -fsequential -b64 -BH -LS c:\TestFile.dat
        sqlio v1.5.SG
        using system counter for latency timings, 62500000 counts per second
        8 threads writing for 120 secs to file c:\TestFile.dat
        using 64KB sequential IOs
        enabling multiple I/Os per thread with 8 outstanding
        buffering set to use hardware disk cache (but not file cache)
        using current size: 5120 MB for file: c:\TestFile.dat
        initialization done
        CUMULATIVE DATA:
        throughput metrics:  IOs/sec: 341.37   MBs/sec: 21.33
        latency metrics:  Min_Latency(ms): 5   Avg_Latency(ms): 186   Max_Latency(ms): 120037
        histogram:
        ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
        %:  0 0 0 0 0 0 0 0 0 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 100

        C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -fsequential -b64 -BH -LS c:\TestFile.dat
        sqlio v1.5.SG
        using system counter for latency timings, 62500000 counts per second
        8 threads reading for 120 secs from file c:\TestFile.dat
        using 64KB sequential IOs
        enabling multiple I/Os per thread with 8 outstanding
        buffering set to use hardware disk cache (but not file cache)
        using current size: 5120 MB for file: c:\TestFile.dat
        initialization done
        CUMULATIVE DATA:
        throughput metrics:  IOs/sec: 1024.07   MBs/sec: 64.00
        latency metrics:  Min_Latency(ms): 5   Avg_Latency(ms): 61   Max_Latency(ms): 81632
        histogram:
        ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
        %:  0 0 0 0 0 0 0 0 0 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 100

    Three of the four tests are, to my mind, within reasonable parameters for the rack server. However, the 64K sequential write test is incredibly slow on the rack server (68 MB/sec on the slow tower vs. 21 MB/sec on the rack). The read speed for 64K also seems slow. Is this enough to say there is some sort of bottleneck with the shared storage? I need to know if I can take this evidence and say we need to launch an investigation into this. Any help is appreciated.

    Read the article

  • MySQL Binary Storage using BLOB VS OS File System: large files, large quantities, large problems.

    - by Quantico773
    Hi guys,

    Versions I am running (basically the latest of everything): PHP 5.3.1, MySQL 5.1.41, Apache 2.2.14, OS: CentOS (latest).

    Here is the situation. I have thousands of very important documents, ranging from customer contracts to voice signatures (recordings of customer authorisation for contracts), with file types including, but not limited to, jpg, gif, png, tiff, doc, docx, xls, wav, mp3, pdf, etc. All of these documents are currently stored on several servers, including Windows 32-bit, CentOS, and Mac, among others. Some files are also stored on employees' desktop computers and laptops, and some are still hard copies stored in hundreds of boxes and filing cabinets.

    Now, because customers or lawyers could demand evidence of contracts at any time, my company has to be able to search for and locate the correct document(s) effectively. For this reason ALL of these files have to be digitised (if not already) and correlated into some sort of order for searching and accessing.

    As the programmer, I have created a full Customer Relations Management tool that the whole company uses. This includes customer profile management, order and job tracking tools, job/sale creation and management modules, etc., and at the moment any file that is needed at a customer profile level (driver's licence, credit authority, etc.) or at a job/sale level (contracts, voice signatures, etc.) can be uploaded to the server, where it sits in a parent/child hierarchy, just like Windows Explorer or any other typical file management model. The structure appears as such:

        drivers_license
        |- DL_123.jpg
        voice_signatures
        |- VS_123.wav
        |- VS_4567.wav
        contracts

    So the files are uploaded using PHP and Apache and are stored in the file system of the OS. At the time of uploading, certain information about the file(s) is stored in a MySQL database. Some of the information stored is:

        TABLE: FileUploads
        FileID
        CustomerID        (the customer ID that the file belongs to; they all have this)
        JobID/SaleID      (the ID of the associated job/sale, if any)
        FileSize
        FileType
        UploadedDateTime
        UploadedBy
        FilePath          (the directory path the file is stored in)
        FileName          (current name of the uploaded file, a combination of CustomerID and JobID/SaleID if applicable)
        FileDescription
        OriginalFileName  (original name of the source file when uploaded, including extension)

    So as you can see, the file is linked to the database by the file name. When I want to provide a customer's files for download to a user, all I have to do is "SELECT * FROM FileUploads WHERE CustomerID = 123 OR JobID = 2345;" and this outputs all the file details I require; with the FilePath and FileName I can provide the download link (http... server / FilePath / FileName).

    There are a number of problems with this method: storing files in this "database unconscious" environment means data integrity is not kept (if a record is deleted, the file may not be deleted as well, or vice versa); files are strewn all over the place on different servers, computers, etc.; and the file name is the ONLY thing matching the binary to the database and to the customer profile and records. There are so many reasons, some of which are described here: http://www.dreamwerx.net/site/article01 . There is also an interesting article here: sietch.net/ViewNewsItem.aspx?NewsItemID=124 .

    SO, after much research I have pretty much decided I am going to store ALL of these files in the database, as a BLOB or LONGBLOB, but there are still many considerations before I do this.

    I know that storing them in the database is a viable option; however, there are a number of methods of storing them. I also know storing them is one thing; correlating and accessing them in a manageable way is another thing entirely. The article at dreamwerx.net/site/article01 describes a way of splitting the uploaded binary files into 64 KB chunks, storing each chunk with the FileID, and then streaming the actual binary file to the client using headers. This is a really cool idea, since it alleviates pressure on the server's memory: instead of loading an entire 100 MB file into RAM and then sending it to the client, it is done 64 KB at a time. I have tried this (and updated his scripts), and it works, albeit in a very small frame of testing. (A sketch of the idea appears below.)

    So if you agree that this method is a viable, stable, and robust long-term option for storing moderately large files (1 KB to a couple of hundred megabytes), and large quantities of them, let me know what other considerations or ideas you have. Also, I am considering getting a current "file management" PHP script that provides an interface for managing files stored in the file system, and converting it to manage files stored in the database. If there is already any software out there that does this, please let me know. I guess there are many questions I could ask, and all the information is up there ^^ so please, discuss all aspects of this and we can pass ideas back and forth and teach each other.

    Cheers, Quantico773
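
    For reference, a minimal sketch of the chunked-streaming idea, assuming a hypothetical table file_chunks(file_id, chunk_no, data) holding 64 KB rows; the point is that only one chunk sits in PHP memory at a time:

        <?php
        // Stream a stored file to the client 64 KB at a time.
        // Table layout is an assumption for illustration:
        //   file_chunks(file_id INT, chunk_no INT, data BLOB)
        $mysqli = new mysqli('localhost', 'user', 'pass', 'documents');
        $fileId = 123;  // hypothetical file id

        $stmt = $mysqli->prepare(
            'SELECT data FROM file_chunks WHERE file_id = ? ORDER BY chunk_no');
        $stmt->bind_param('i', $fileId);
        $stmt->execute();
        $stmt->bind_result($chunk);

        header('Content-Type: application/octet-stream');
        while ($stmt->fetch()) {
            echo $chunk;   // emit one 64 KB chunk, then reuse the buffer
            flush();
        }
        $stmt->close();
        ?>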

    Read the article

  • Could not load type 'Default.DataMatch' in DataMatch.aspx file

    - by salvationishere
    I am developing a C# VS 2008 / SQL Server 2008 website, but now I am getting the above error when I build it. I included the Default.aspx, Default.aspx.cs, DataMatch.aspx, and DataMatch.aspx.cs files below. What do I need to do to fix this? Default.aspx: <%@ Page Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" Title="Untitled Page" %> ... DataMatch.aspx: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="DataMatch.aspx.cs" Inherits="_Default.DataMatch" %> ... Default.aspx.cs: using System; using System.Collections; using System.Configuration; using System.Data; using System.Linq; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.HtmlControls; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Xml.Linq; using System.Collections.Generic; using System.IO; using System.Drawing; using System.ComponentModel; using System.Data.SqlClient; using ADONET_namespace; using System.Security.Principal; //using System.Windows; public partial class _Default : System.Web.UI.Page //namespace AddFileToSQL { //protected System.Web.UI.HtmlControls.HtmlInputFile uploadFile; //protected System.Web.UI.HtmlControls.HtmlInputButton btnOWrite; //protected System.Web.UI.HtmlControls.HtmlInputButton btnAppend; protected System.Web.UI.WebControls.Label Label1; protected static string inputfile = ""; public static string targettable; public static string selection; // Number of controls added to view state protected int default_NumberOfControls { get { if (ViewState["default_NumberOfControls"] != null) { return (int)ViewState["default_NumberOfControls"]; } else { return 0; } } set { ViewState["default_NumberOfControls"] = value; } } protected void uploadFile_onclick(object sender, EventArgs e) { } protected void Load_GridData() { //GridView1.DataSource = ADONET_methods.DisplaySchemaTables(); //GridView1.DataBind(); } protected void btnOWrite_Click(object sender, EventArgs e) { if (uploadFile.PostedFile.ContentLength > 0) { feedbackLabel.Text = "You do not have sufficient access to overwrite table records."; } else { feedbackLabel.Text = "This file does not contain any data."; } } protected void btnAppend_Click(object sender, EventArgs e) { string fullpath = Page.Request.PhysicalApplicationPath; string path = uploadFile.PostedFile.FileName; if (File.Exists(path)) { // Create a file to write to. 
try { StreamReader sr = new StreamReader(path); string s = ""; while (sr.Peek() > 0) s = sr.ReadLine(); sr.Close(); } catch (IOException exc) { Console.WriteLine(exc.Message + "Cannot open file."); return; } } if (uploadFile.PostedFile.ContentLength > 0) { inputfile = System.IO.File.ReadAllText(path); Session["Message"] = inputfile; Response.Redirect("DataMatch.aspx"); } else { feedbackLabel.Text = "This file does not contain any data."; } } protected void Page_Load(object sender, EventArgs e) { if (Request.IsAuthenticated) { WelcomeBackMessage.Text = "Welcome back, " + User.Identity.Name + "!"; // Reference the CustomPrincipal / CustomIdentity CustomIdentity ident = User.Identity as CustomIdentity; if (ident != null) WelcomeBackMessage.Text += string.Format(" You are the {0} of {1}.", ident.Title, ident.CompanyName); AuthenticatedMessagePanel.Visible = true; AnonymousMessagePanel.Visible = false; if (!Page.IsPostBack) { Load_GridData(); } } else { AuthenticatedMessagePanel.Visible = false; AnonymousMessagePanel.Visible = true; } } protected void GridView1_SelectedIndexChanged(object sender, EventArgs e) { GridViewRow row = GridView1.SelectedRow; targettable = row.Cells[2].Text; } } DataMatch.aspx.cs: using System; using System.Collections; using System.Collections.Generic; using System.Data; using System.Data.SqlClient; using System.Diagnostics; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using ADONET_namespace; //using MatrixApp; //namespace AddFileToSQL //{ public partial class DataMatch : AddFileToSQL._Default { protected System.Web.UI.WebControls.PlaceHolder phTextBoxes; protected System.Web.UI.WebControls.PlaceHolder phDropDownLists; protected System.Web.UI.WebControls.Button btnAnotherRequest; protected System.Web.UI.WebControls.Panel pnlCreateData; protected System.Web.UI.WebControls.Literal lTextData; protected System.Web.UI.WebControls.Panel pnlDisplayData; protected static string inputfile2; static string[] headers = null; static string[] data = null; static string[] data2 = null; static DataTable myInputFile = new DataTable("MyInputFile"); static string[] myUserSelections; static bool restart = false; private DropDownList[] newcol; int @temp = 0; string @tempS = ""; string @tempT = ""; // a Property that manages a counter stored in ViewState protected int NumberOfControls { get { return (int)ViewState["NumControls"]; } set { ViewState["NumControls"] = value; } } private Hashtable ddl_ht { get { return (Hashtable)ViewState["ddl_ht"]; } set { ViewState["ddl_ht"] = value; } } // Page Load private void Page_Load(object sender, System.EventArgs e) { if (!Page.IsPostBack) { ddl_ht = new Hashtable(); this.NumberOfControls = 0; } } // This data comes from input file private void PopulateFileInputTable() { myInputFile.Columns.Clear(); string strInput, newrow; string[] oneRow; DataColumn myDataColumn; DataRow myDataRow; int result, numRows; //Read the input file strInput = Session["Message"].ToString(); data = strInput.Split('\r'); //Headers headers = data[0].Split('|'); //Data for (int i = 0; i < data.Length; i++) { newrow = data[i].TrimStart('\n'); data[i] = newrow; } result = String.Compare(data[data.Length - 1], ""); numRows = data.Length; if (result == 0) { numRows = numRows - 1; } data2 = new string[numRows]; for (int a = 0, b = 0; a < numRows; a++, b++) { data2[b] = data[a]; } // Create columns for (int col = 0; col < headers.Length; col++) { @temp = (col + 1); @tempS = @temp.ToString(); @tempT = "@col"+ @temp.ToString(); myDataColumn = new 
DataColumn(); myDataColumn.DataType = Type.GetType("System.String"); myDataColumn.ColumnName = headers[col]; myInputFile.Columns.Add(myDataColumn); ddl_ht.Add(@tempT, headers[col]); } // Create new DataRow objects and add to DataTable. for (int r = 0; r < numRows - 1; r++) { oneRow = data2[r + 1].Split('|'); myDataRow = myInputFile.NewRow(); for (int c = 0; c < headers.Length; c++) { myDataRow[c] = oneRow[c]; } myInputFile.Rows.Add(myDataRow); } NumberOfControls = headers.Length; myUserSelections = new string[NumberOfControls]; } //Create display panel private void CreateDisplayPanel() { btnSubmit.Style.Add("top", "auto"); btnSubmit.Style.Add("left", "auto"); btnSubmit.Style.Add("position", "absolute"); btnSubmit.Style.Add("top", "200px"); btnSubmit.Style.Add("left", "400px"); newcol = CreateDropDownLists(); for (int counter = 0; counter < NumberOfControls; counter++) { pnlDisplayData.Controls.Add(newcol[counter]); pnlDisplayData.Controls.Add(new LiteralControl("<br><br><br>")); pnlDisplayData.Visible = true; pnlDisplayData.FindControl(newcol[counter].ID); } } //Recreate display panel private void RecreateDisplayPanel() { btnSubmit.Style.Add("top", "auto"); btnSubmit.Style.Add("left", "auto"); btnSubmit.Style.Add("position", "absolute"); btnSubmit.Style.Add("top", "200px"); btnSubmit.Style.Add("left", "400px"); newcol = RecreateDropDownLists(); for (int counter = 0; counter < NumberOfControls; counter++) { pnlDisplayData.Controls.Add(newcol[counter]); pnlDisplayData.Controls.Add(new LiteralControl("<br><br><br>")); pnlDisplayData.Visible = true; pnlDisplayData.FindControl(newcol[counter].ID); } } // Add DropDownList Control to Placeholder private DropDownList[] CreateDropDownLists() { DropDownList[] dropDowns = new DropDownList[NumberOfControls]; for (int counter = 0; counter < NumberOfControls; counter++) { DropDownList ddl = new DropDownList(); SqlDataReader dr2 = ADONET_methods.DisplayTableColumns(targettable); ddl.ID = "DropDownListID" + counter.ToString(); int NumControls = targettable.Length; DataTable dt = new DataTable(); dt.Load(dr2); ddl.DataValueField = "COLUMN_NAME"; ddl.DataTextField = "COLUMN_NAME"; ddl.DataSource = dt; ddl.SelectedIndexChanged += new EventHandler(ddlList_SelectedIndexChanged); ddl.DataBind(); ddl.AutoPostBack = true; ddl.EnableViewState = true; //Preserves View State info on Postbacks dr2.Close(); ddl.Items.Add("IGNORE"); dropDowns[counter] = ddl; } return dropDowns; } protected void ddlList_SelectedIndexChanged(object sender, EventArgs e) { DropDownList ddl = (DropDownList)sender; string ID = ddl.ID; } // Add TextBoxes Control to Placeholder private DropDownList[] RecreateDropDownLists() { DropDownList[] dropDowns = new DropDownList[NumberOfControls]; for (int counter = 0; counter < NumberOfControls; counter++) { DropDownList ddl = new DropDownList(); SqlDataReader dr2 = ADONET_methods.DisplayTableColumns(targettable); ddl.ID = "DropDownListID" + counter.ToString(); int NumControls = targettable.Length; DataTable dt = new DataTable(); dt.Load(dr2); ddl.DataValueField = "COLUMN_NAME"; ddl.DataTextField = "COLUMN_NAME"; ddl.DataSource = dt; ddl.SelectedIndexChanged += new EventHandler(ddlList_SelectedIndexChanged); ddl.DataBind(); ddl.AutoPostBack = true; ddl.EnableViewState = false; //Preserves View State info on Postbacks dr2.Close(); ddl.Items.Add("IGNORE"); dropDowns[counter] = ddl; } return dropDowns; } private void CreateLabels() { for (int counter = 0; counter < NumberOfControls; counter++) { Label lbl = new Label(); lbl.ID = "Label" + 
counter.ToString(); lbl.Text = headers[counter]; lbl.Style["position"] = "absolute"; lbl.Style["top"] = 60 * counter + 10 + "px"; lbl.Style["left"] = 250 + "px"; pnlDisplayData.Controls.Add(lbl); pnlDisplayData.Controls.Add(new LiteralControl("<br><br><br>")); } } // Add TextBoxes Control to Placeholder private void RecreateLabels() { for (int counter = 0; counter < NumberOfControls; counter++) { Label lbl = new Label(); lbl.ID = "Label" + counter.ToString(); lbl.Text = headers[counter]; lbl.Style["position"] = "absolute"; lbl.Style["top"] = 60 * counter + 10 + "px"; lbl.Style["left"] = 250 + "px"; pnlDisplayData.Controls.Add(lbl); pnlDisplayData.Controls.Add(new LiteralControl("<br><br><br>")); } } // Create TextBoxes and DropDownList data here on postback. protected override void CreateChildControls() { // create the child controls if the server control does not contains child controls this.EnsureChildControls(); // Creates a new ControlCollection. this.CreateControlCollection(); // Here we are recreating controls to persist the ViewState on every post back if (Page.IsPostBack) { RecreateDisplayPanel(); RecreateLabels(); } // Create these conrols when asp.net page is created else { PopulateFileInputTable(); CreateDisplayPanel(); CreateLabels(); } // Prevent dropdownlists and labels from being created again. if (restart == false) { this.ChildControlsCreated = true; } else if (restart == true) { this.ChildControlsCreated = false; } } private void AppendRecords() { switch (targettable) { case "ContactType": for (int r = 0; r < myInputFile.Rows.Count; r++) { resultLabel.Text = ADONET_methods.AppendDataCT(myInputFile.Rows[r], ddl_ht); } break; case "Contact": for (int r = 0; r < myInputFile.Rows.Count; r++) { resultLabel.Text = ADONET_methods.AppendDataC(myInputFile.Rows[r], ddl_ht); } break; case "AddressType": for (int r = 0; r < myInputFile.Rows.Count; r++) { resultLabel.Text = ADONET_methods.AppendDataAT(myInputFile.Rows[r], ddl_ht); } break; default: resultLabel.Text = "You do not have access to modify this table. Please select a different target table and try again."; restart = true; break; //throw new ArgumentOutOfRangeException("targettable type", targettable); } } // Read all the data from TextBoxes and DropDownLists protected void btnSubmit_Click(object sender, System.EventArgs e) { //int cnt = FindOccurence("DropDownListID"); AppendRecords(); pnlDisplayData.Visible = false; btnSubmit.Visible = false; resultLabel.Attributes.Add("style", "align:center"); btnSubmit.Style.Add("top", "auto"); btnSubmit.Style.Add("left", "auto"); btnSubmit.Style.Add("position", "absolute"); int bSubmitPosition = NumberOfControls; btnSubmit.Style.Add("top", System.Convert.ToString(bSubmitPosition)+"px"); resultLabel.Visible = true; Instructions.Visible = false; if (restart == true) { CreateChildControls(); } } private int FindOccurence(string substr) { string reqstr = Request.Form.ToString(); return ((reqstr.Length - reqstr.Replace(substr, "").Length) / substr.Length); } #region Web Form Designer generated code override protected void OnInit(EventArgs e) { // // CODEGEN: This call is required by the ASP.NET Web Form Designer. // InitializeComponent(); base.OnInit(e); } /// <summary> /// Required method for Designer support - do not modify /// the contents of this method with the code editor. /// </summary> private void InitializeComponent() { } #endregion } //}
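
    For what it's worth, this error usually means the Inherits value in the @Page directive does not name a class the runtime can find. Here DataMatch.aspx says Inherits="_Default.DataMatch" (a namespace-qualified name) while the code-behind declares a top-level class DataMatch; the page also uses CodeBehind, which Web Site projects ignore (they expect CodeFile, as Default.aspx already uses). A hedged sketch of a matching pair:

        <%@ Page Language="C#" AutoEventWireup="true"
            CodeFile="DataMatch.aspx.cs" Inherits="DataMatch" %>

        // DataMatch.aspx.cs: the class name must match Inherits exactly.
        public partial class DataMatch : AddFileToSQL._Default
        {
            // ... existing page code ...
        }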

    Read the article

  • File observer problem

    - by Nemat
    Hi, I want to listen to the changes occured in file system.I am using FileObserver.Here is my code: Code: class MyDirObserver extends FileObserver { String superPath; public MyDirObserver(String path) { super(path, ALL_EVENTS); this.superPath=path; } public void onEvent(int event, String path) { Log.e("onEvent of Directory", "=== onEvent ==="); try{ _Dump("dir", event, path,superPath); } catch(NullPointerException ex) { Log.e("ERROR","I am getting error"); } } } private void _Dump(final String tag, int event, String path,String superPath) { Log.d(tag, "=== dump begin ==="); Log.d(tag, "path=" + path); Log.d(tag,"super path="+superPath); Log.d(tag, "event list:"); if ((event & FileObserver.OPEN) != 0) { Log.d(tag, " OPEN"); } if ((event & FileObserver.CLOSE_NOWRITE) != 0) { Log.d(tag, " CLOSE_NOWRITE"); } if ((event & FileObserver.CLOSE_WRITE) != 0) { Log.d(tag, " CLOSE_WRITE"); Log.i("NEWFILEOBSERVER","File is Modified"); if(path!=null) { Log.d("---------FilePath",superPath+path); } } if ((event & FileObserver.CREATE) != 0) { isCreate=true; Log.i("NEWFILEOBSERVER","File is Created "); if(path!=null) { Log.d("---------FilePath",superPath+path); } Log.d(tag, " CREATE"); } if ((event & FileObserver.DELETE) != 0) { Log.i("NEWFILEOBSERVER","File is deleted"); if(path!=null) { Log.d("---------FilePath",superPath+path); } // startMyActivity("A new file is deleted thats="+superPath); Log.d(tag, " DELETE"); } if ((event & FileObserver.DELETE_SELF) != 0) { Log.d(tag, " DELETE_SELF"); } if ((event & FileObserver.ACCESS) != 0) { Log.d(tag, " ACCESS"); } if ((event & FileObserver.MODIFY) != 0) { if(!isModified) isModified=true; if(isModified && isOpen) isAgainModified=true; Log.d(tag, " MODIFY"); } if ((event & FileObserver.MOVED_FROM) != 0) { Log.d(tag, " MOVED_FROM"); if(path!=null) { Log.d("---------FilePath",superPath+path); } } if ((event & FileObserver.MOVED_TO) != 0) { Log.d(tag, " MOVED_TO"); if(path!=null) { Log.d("---------FilePath",superPath+path); } } if ((event & FileObserver.MOVE_SELF) != 0) { Log.d(tag, " MOVE_SELF"); } if ((event & FileObserver.ATTRIB) != 0) { Log.d(tag, " ATTRIB"); } Log.d(tag, "=== dump end ==="); } it stops after some time.I dont get the exact time but doesnt work always though I call startWatching() in service in a loop which runs for all the folders of sdcard and calls startWatching() for each of them. It shows unpredictable behaviour and stops listening for some folders and runs perfectly for the others. I hope you guys help me.I tried many ways but it doesnt work perfectly.Am I doing something wrong??? or there we have some other way to do this....... Please help me........I have to get this done withing few days...... Any help is appreciated!!!! Thanks in Advance Nemat
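
    A common cause of observers silently stopping is garbage collection: the Android documentation warns that a FileObserver which is garbage collected stops sending events, so each instance must be held in a strong reference for as long as it should watch. A sketch, assuming a list field in the service (and noting that FileObserver does not watch subdirectories on its own, so one observer per directory is needed):

        // Hold strong references so the observers are not collected.
        private final List<FileObserver> observers = new ArrayList<FileObserver>();

        private void watchTree(File dir) {
            FileObserver o = new MyDirObserver(dir.getAbsolutePath());
            o.startWatching();
            observers.add(o);  // without this reference, GC can silently stop it
            File[] children = dir.listFiles();
            if (children != null) {
                for (File c : children) {
                    if (c.isDirectory()) {
                        watchTree(c);
                    }
                }
            }
        }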

    Read the article

  • reading a random access file

    - by user1067332
    The file I create has text in it, but when i try to read it, the output is null. here is the file the creates the text file, it creates the file and puts it in the right postion import java.util.*; import java.io.*; public class CreateCustomerFile { public static void main(String[] args) throws IOException { int pos; RandomAccessFile it = new RandomAccessFile("CustomerFile.txt", "rw"); Scanner input = new Scanner(System.in); int id; String name; int zip; final int RECORDSIZE = 100; final int NUMRECORDS = 1000; final int STOP = 99; try { for(int x = 0; x < NUMRECORDS; ++x) { it.writeInt(0); it.writeUTF(""); it.writeInt(0); } } catch(IOException e) { System.out.println("Error: " + e.getMessage()); } finally { it.close(); } it = new RandomAccessFile("CustomerFile.txt","rw"); try { System.out.print("Enter ID number" + " or " + STOP + " to quit "); id = input.nextInt(); while(id != STOP) { input.nextLine(); System.out.print("Enter last name"); name = input.nextLine(); System.out.print("Enter zipcode "); zip = input.nextInt(); pos = id - 1; it.seek(pos * RECORDSIZE); it.writeInt(id); it.writeUTF(name); it.writeInt(zip); System.out.print("Enter ID number" + " or " + STOP + " to quit "); id = input.nextInt(); } } catch(IOException e) { System.out.println("Error: " + e.getMessage()); } finally { it.close(); } } } Here is the file to read, the pos is correct but ouput always goes to the error: null. import javax.swing.*; import java.io.*; public class CustomerItemOrder { public static void main(String[] args) throws IOException { int pos; RandomAccessFile it = new RandomAccessFile("CustomerFile.txt","rw"); String inputString; int id, zip; String name; final int RECORDSIZE = 100; final int STOP = 99; inputString = JOptionPane.showInputDialog(null, "Enter id or "+ STOP + " to quit"); id = Integer.parseInt(inputString); try { while(id != STOP) { pos = id -1 ; it.seek(pos * RECORDSIZE ); id = it.readInt(); name = it.readLine(); zip = it.readInt(); JOptionPane.showMessageDialog(null, "ID:" + id + " last name is " + name + " and zipcode is " + zip); inputString = JOptionPane.showInputDialog(null, "Enter id or " + STOP + " to quit"); id = Integer.parseInt(inputString); } } catch(IOException e) { System.out.println("Error: " + e.getMessage()); } finally { it.close(); } } }
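
    One concrete mismatch stands out, offered as the likely cause: the writer stores the name with writeUTF(), but the reader pulls it back with readLine(). readLine() does not understand the two-byte length prefix that writeUTF() emits, so the stream ends up misaligned and the following readInt() reads garbage from mid-record. Reading each field with the counterpart of the method that wrote it is the usual fix:

        // Read fields back with the same methods used to write them.
        it.seek(pos * RECORDSIZE);
        id   = it.readInt();
        name = it.readUTF();   // was it.readLine(), which breaks alignment
        zip  = it.readInt();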

    Read the article

  • Cacti rrdtool graph with no values, NaN in .rrd file

    - by beicha
    Cacti 0.8.7h, with the latest RRDTool. I successfully graphed CPU and interface traffic, but I get blank graphs when it comes to memory/temperature monitoring. The problem/bug is actually archived here; however, that post didn't help. I can snmpget the value, e.g. SNMPv2-SMI::enterprises.9.9.13.1.3.1.3.1 = Gauge32: 26. However, the problem seems to be in storing these values to the .rrd file. Output of rrdtool info powerbseipv6testrouter_cisco_memfree_40.rrd (AVERAGE, cisco_memfree) below:

        filename = "powerbseipv6testrouter_cisco_memfree_40.rrd"
        rrd_version = "0003"
        step = 300
        last_update = 1321867894
        ds[cisco_memfree].type = "GAUGE"
        ds[cisco_memfree].minimal_heartbeat = 600
        ds[cisco_memfree].min = 0.0000000000e+00
        ds[cisco_memfree].max = 1.0000000000e+12
        ds[cisco_memfree].last_ds = "UNKN"
        ds[cisco_memfree].value = 0.0000000000e+00
        ds[cisco_memfree].unknown_sec = 94
        rra[0].cf = "AVERAGE"
        rra[0].rows = 600
        rra[0].pdp_per_row = 1
        rra[0].xff = 5.0000000000e-01
        rra[0].cdp_prep[0].value = NaN
        rra[0].cdp_prep[0].unknown_datapoints = 0
        rra[1].cf = "AVERAGE"
        rra[1].rows = 700
        rra[1].pdp_per_row = 6
        rra[1].xff = 5.0000000000e-01
        rra[1].cdp_prep[0].value = NaN
        rra[1].cdp_prep[0].unknown_datapoints = 0
        rra[2].cf = "AVERAGE"
        rra[2].rows = 775
        rra[2].pdp_per_row = 24
        rra[2].xff = 5.0000000000e-01
        rra[2].cdp_prep[0].value = NaN
        rra[2].cdp_prep[0].unknown_datapoints = 18
        rra[3].cf = "AVERAGE"
        rra[3].rows = 797
        rra[3].pdp_per_row = 288
        rra[3].xff = 5.0000000000e-01
        rra[3].cdp_prep[0].value = NaN
        rra[3].cdp_prep[0].unknown_datapoints = 114
        rra[4].cf = "MAX"
        rra[4].rows = 600
        rra[4].pdp_per_row = 1
        rra[4].xff = 5.0000000000e-01
        rra[4].cdp_prep[0].value = NaN
        rra[4].cdp_prep[0].unknown_datapoints = 0
        rra[5].cf = "MAX"
        rra[5].rows = 700
        rra[5].pdp_per_row = 6
        rra[5].xff = 5.0000000000e-01
        rra[5].cdp_prep[0].value = NaN
        rra[5].cdp_prep[0].unknown_datapoints = 0
        rra[6].cf = "MAX"
        rra[6].rows = 775
        rra[6].pdp_per_row = 24
        rra[6].xff = 5.0000000000e-01
        rra[6].cdp_prep[0].value = NaN
        rra[6].cdp_prep[0].unknown_datapoints = 18
        rra[7].cf = "MAX"
        rra[7].rows = 797
        rra[7].pdp_per_row = 288
        rra[7].xff = 5.0000000000e-01
        rra[7].cdp_prep[0].value = NaN
        rra[7].cdp_prep[0].unknown_datapoints = 114
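
    One hedged reading of that dump: last_ds = "UNKN" means no value has ever reached the data source, so the break is between the Cacti poller and rrdtool rather than in graphing. A quick way to confirm is to feed a value by hand; if a manual update sticks, the poller (or its data input script) is the place to dig (the file name is the asker's):

        # Write one test sample (N = now), then look at the recent rows.
        rrdtool update powerbseipv6testrouter_cisco_memfree_40.rrd N:26
        rrdtool fetch powerbseipv6testrouter_cisco_memfree_40.rrd AVERAGE | tail -5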

    Read the article

  • Excel techniques for perfmon csv log file analysis

    - by Aszurom
    I have perfmon running against several servers, outputting to a .csv file data like CPU % time, memory bytes free, and hard disk I/O metrics like s/write and writes/s. The ones graphing the SQL servers also collect SQL stats; the web servers collect .NET-relevant stuff. I am aware of PAL, and actually used it as a template for what data to capture based on server type. I just don't think the output it generates is detailed or flexible enough, though it does a pretty remarkable job of parsing logs and making graphs. I'm borderline incompetent with Excel, so I'm hoping to be directed to some knowledge of how to take a perfmon output .csv and mine it in Excel to produce numbers that are meaningful to me as a sysadmin. I could of course just pick a range of data, assemble a graph out of it, and look for spikes and trends, but I'm convinced there is some technique that makes this more manageable than staring at a monstrous spreadsheet of numbers and trying to graph it. Plus, that is time consuming and not something I can do as a "take a glance at the servers" routine. I'm also graphing CPU, disk use, network b/sec, etc. in Cacti, which is nice for seeing big trends. The problem is that those are 5-minute averages, so a server can have an intermittent problem that washes out in a 5-minute average. What do you do with perfmon data that I could learn from?
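
    A couple of hedged starting points: relog.exe (which ships with Windows) can thin a capture before Excel ever sees it, and a few worksheet functions summarize a counter column without graphing every sample. File names are examples, and column B stands for whichever counter you import:

        relog bigcapture.blg -f CSV -t 5 -o thinned.csv    (keeps every 5th sample)

        =AVERAGE(B2:B10000)             typical load for the counter
        =MAX(B2:B10000)                 worst spike in the window
        =PERCENTILE(B2:B10000,0.95)     95th percentile, which ignores momentary spikes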

    Read the article

  • postfix sasl "cannot connect to saslauthd server: No such file or directory"

    - by innotune
    I am trying to set up Postfix with SMTP authentication. I want to use /etc/shadow as my realm. Unfortunately I get a "generic error" when I try to authenticate:

        # nc localhost 25
        220 mail.foo ESMTP Postfix
        AUTH PLAIN _base_64_encoded_user_name_and_password_
        535 5.7.8 Error: authentication failed: generic failure

    In the mail.warn logfile I get the following entries:

        Oct 8 10:43:40 mail postfix/smtpd[1060]: warning: SASL authentication failure: cannot connect to saslauthd server: No such file or directory
        Oct 8 10:43:40 mail postfix/smtpd[1060]: warning: SASL authentication failure: Password verification failed
        Oct 8 10:43:40 mail postfix/smtpd[1060]: warning: _ip_: SASL PLAIN authentication failed: generic failure

    However, the SASL setup itself seems to be fine:

        $ testsaslauthd -u _user_ -p _pass_
        0: OK "Success."

    I added smtpd_sasl_auth_enable = yes to main.cf. This is my smtpd.conf:

        $ cat /etc/postfix/sasl/smtpd.conf
        pwcheck_method: saslauthd
        mech_list: PLAIN LOGIN
        saslauthd_path: /var/run/saslauthd/mux
        autotransition: true

    I tried this conf with and without the last two lines. I'm running Debian stable. How can Postfix find and connect to the saslauthd server?

    Edit: I'm not sure whether Postfix runs in a chroot. The master.cf looks like this: http://pastebin.com/Fz38TcUP

    saslauthd is located in /usr/sbin:

        $ which saslauthd
        /usr/sbin/saslauthd

    The EHLO has this response:

        EHLO _server_name_
        250-_server_name_
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-AUTH LOGIN PLAIN
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN
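
    On Debian, smtpd typically runs chrooted to /var/spool/postfix, so it cannot see the default socket at /var/run/saslauthd/mux; the stock fix is to move the saslauthd socket inside the chroot. A sketch of the usual Debian steps, assuming the packaged saslauthd:

        # /etc/default/saslauthd: place the socket inside the Postfix chroot
        OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd"

        # give Postfix access to the socket directory, then restart both
        dpkg-statoverride --add root sasl 710 /var/spool/postfix/var/run/saslauthd
        adduser postfix sasl
        /etc/init.d/saslauthd restart
        /etc/init.d/postfix restart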

    Read the article

  • What registry key or Windows file determines where monitors are placed in a multi-monitor environment

    - by dfree
    So I have a laptop with a USB-to-VGA adapter (http://www.startech.com/item/USB2VGAE2-USB-VGA-External-Multi-Monitor-Video-Adapter.aspx) which lets me add a third monitor (the second monitor uses the onboard port). It worked fine on Windows Vista - you could go into the Windows display settings, and Windows would recognize the third monitor and let you drag it around accordingly. With Windows 7, the third monitor literally is not there in the display settings. The driver lets you display to the third monitor, but you can't move where it is, and its placement is wrong relative to my other two monitors (if you drag windows over to it, they end up at the bottom when they should be aligned). I called tech support and they said that there isn't a driver with this functionality for Windows 7 yet. But here's my hunch: the monitor placement is still somewhat similar to where I had it on Vista, it's just off by about 500 pixels. I think there is either a registry key or a driver file somewhere telling this monitor where to exist. If I could just modify the number and move it up 500 pixels, it would be in the right place, and I wouldn't have to wait 6 months for the company to come out with a new driver. Any ideas???
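    For what it's worth, Windows 7 does persist monitor layout in the registry, including per-monitor position offsets. This is a place to look rather than a guaranteed fix - the subkey names are machine-specific, so the sketch below just dumps everything:

        :: Dump the stored display topologies; look for Position.cx / Position.cy
        :: values under each Configuration subkey
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers\Configuration" /s

    Editing those values is undocumented territory, so export the key first so you can restore it if the layout gets worse.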

    Read the article

  • rpm build from src file

    - by danielrutledge
    Hi all, I'm trying to build from a *.src.rpm file on FC 12 in such a way that the files are distributed across my system as they would be with a normal binary build (in this case, *.h files ending up in /usr/include). When I ran rpmbuild, the headers weren't present. Here's the command and its output:

        [root@localhost sphirewalld]# rpm -ivv /home/dan/Downloads/gtest-1.3.0-2.20090601svn257.fc12.src.rpm
        ============== /home/dan/Downloads/gtest-1.3.0-2.20090601svn257.fc12.src.rpm
        Expected size: 489395 = lead(96)+sigs(180)+pad(4)+data(489115)
        Actual size: 489395
        loading keyring from pubkeys in /var/lib/rpm/pubkeys/*.key
        couldn't find any keys in /var/lib/rpm/pubkeys/*.key
        loading keyring from rpmdb
        opening db environment /var/lib/rpm/Packages cdb:mpool:joinenv
        opening db index /var/lib/rpm/Packages rdonly mode=0x0
        locked db index /var/lib/rpm/Packages
        opening db index /var/lib/rpm/Name rdonly mode=0x0
        read h# 931 Header sanity check: OK
        added key gpg-pubkey-57bbccba-4a6f97af to keyring
        read h# 1327 Header sanity check: OK
        added key gpg-pubkey-7fac5991-4615767f to keyring
        read h# 1420 Header sanity check: OK
        added key gpg-pubkey-16ca1a56-4a100959 to keyring
        read h# 1896 Header sanity check: OK
        added key gpg-pubkey-a3a882c1-4a1009ef to keyring
        Using legacy gpg-pubkey(s) from rpmdb
        /home/dan/Downloads/gtest-1.3.0-2.20090601svn257.fc12.src.rpm: Header SHA1 digest: OK (3e98ed9b1631395d417e00f35c83ebe588ea9d3b)
        added source package [0]
        found 1 source and 0 binary packages
        Expected size: 489395 = lead(96)+sigs(180)+pad(4)+data(489115)
        Actual size: 489395
        InstallSourcePackage at: psm.c:232: Header SHA1 digest: OK (3e98ed9b1631395d417e00f35c83ebe588ea9d3b)
        gtest-1.3.0-2.20090601svn257.fc12
        ========== Directories not explicitly included in package:
        0 /root/rpmbuild/SOURCES/
        1 /root/rpmbuild/SPECS/
        ==========
        warning: user mockbuild does not exist - using root
        warning: group mockbuild does not exist - using root
        fini 100664 1 ( 0, 0) 478034 /root/rpmbuild/SOURCES/gtest-1.3.0.tar.bz2;4ba93ce1 unknown
        warning: user mockbuild does not exist - using root
        warning: group mockbuild does not exist - using root
        fini 100644 1 ( 0, 0) 30505 /root/rpmbuild/SOURCES/gtest-svnr257.patch;4ba93ce1 unknown
        warning: user mockbuild does not exist - using root
        warning: group mockbuild does not exist - using root
        fini 100644 1 ( 0, 0) 2732 /root/rpmbuild/SPECS/gtest.spec;4ba93ce1 unknown
        GZDIO: 63 reads, 511788 total bytes in 0.005930 secs
        closed db index /var/lib/rpm/Name
        closed db index /var/lib/rpm/Packages
        closed db environment /var/lib/rpm/Packages

    Thanks for your help.
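    Worth noting when reading the transcript above: the command shown is rpm -i, which only unpacks the sources and spec into /root/rpmbuild - it compiles and installs nothing, which is why no headers appear. A sketch of the usual two-step flow (the install glob is an assumption; Fedora's gtest typically puts headers in a -devel subpackage):

        # Step 1: compile binary RPMs from the source RPM
        rpmbuild --rebuild /home/dan/Downloads/gtest-1.3.0-2.20090601svn257.fc12.src.rpm

        # Step 2: install the results; this is what places files system-wide
        rpm -ivh /root/rpmbuild/RPMS/$(uname -m)/gtest-*.rpm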

    Read the article

  • File is not compatible with the version of Windows you're running

    - by vaccano
    I have a really old installer (legacy app) that we are trying to get running on a Windows 7 64-bit OS; previously it has only been installed on Windows XP 32-bit. I get the following error when I try to run it: "The version of this file is not compatible with the version of Windows you're running. Check your computer's system information to see whether you need an x86 (32-bit) or x64 (64-bit) version of the program, and then contact the software publisher." Contacting the software publisher is not an option (the software is super old). Is there a way to get this to work? Some sort of compatibility mode? The only thing I have heard of that will work is a virtual XP on the Windows 7 box. The problem is that this software is part of a whole software set; I would have to put all of the pieces on the virtual XP or none at all. Before I go down the road of putting it all on the virtual XP, I would like to know that there is no way to get it all on the Windows 7 OS.

    Read the article

  • Outlook 2007 OST File Indexing and OneNote 2007 Indexing are Broken

    - by Matt
    I'm running Outlook 2007 under Windows 7 Home Premium RTM. My OST file was previously being indexed properly, but searches eventually slowed down significantly, so I suspected a problem. Searching and indexing appear broken in OneNote 2007 as well; search time there is now significantly longer too. I brought up the Outlook 2007 Search Options dialog and noticed that my mailbox (running from an Exchange 2007 server) wasn't listed in the "Index messages in these data files:" list box. Next I ran the Windows "Find and fix problems with Windows Search" troubleshooter, which reported no errors. Then I brought up the Windows Indexing Options dialog, which shows Outlook listed (as shown here): then clicked Advanced and rebuilt the index. No dice - the list box in the Outlook 2007 dialog still didn't show my mailbox. When I click the Modify button in the Indexing Options dialog I see the following: when I hover over the "oneindex://..." entry, the alt text indicates "This location is currently unavailable". When I delete it and rebuild the index, the entry returns. UPDATE: Comparing the last screenshot above with a working PC shows that on the broken PC, the lower half of the dialog lists Outlook, but neither Outlook nor OneNote appears in the upper half. The working PC has Outlook and OneNote in both parts of the dialog.
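    When a rebuild from Indexing Options doesn't bring the Outlook store back, a heavier reset sometimes does: stop the search service and delete the index database so it is recreated from scratch. A sketch, run from an elevated prompt - the path is the Windows 7 default, and renaming rather than deleting is safer if unsure:

        :: Stop Windows Search, remove the index database, restart;
        :: the whole index, including the OST, is rebuilt afterwards
        net stop wsearch
        del /q "%ProgramData%\Microsoft\Search\Data\Applications\Windows\Windows.edb"
        net start wsearch

    Expect the rebuild to take a while, and check Search Options in Outlook again once indexing has caught up.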

    Read the article

< Previous Page | 312 313 314 315 316 317 318 319 320 321 322 323  | Next Page >