Search Results

Search found 43856 results on 1755 pages for 'header files'.

Page 332 of 1755

  • SQL IO and SAN troubles

    - by James
    We are running two servers with identical software setups but different hardware. The first one is a VM on VMware on a normal tower server with dual-core Xeons, 16 GB RAM and a 7200 RPM drive. The second one is a VM on XenServer on a powerful brand-new rack server, with quad-core Xeons and shared storage. We are running Dynamics AX 2012 and SQL Server 2008 R2. When I insert 15,000 records into a table on the slow tower server (as a test), it does so in 13 seconds. On the fast server it takes 33 seconds. I re-ran these tests several times with the same results. I have a feeling it is some sort of IO bottleneck, so I ran SQLIO on both. Here are the results for the slow tower server: C:\Program Files (x86)\SQLIO>test.bat C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -frandom -b8 -BH -LS C:\TestFile.dat sqlio v1.5.SG using system counter for latency timings, 14318180 counts per second 8 threads writing for 120 secs to file C:\TestFile.dat using 8KB random IOs enabling multiple I/Os per thread with 8 outstanding buffering set to use hardware disk cache (but not file cache) using current size: 5120 MB for file: C:\TestFile.dat initialization done CUMULATIVE DATA: throughput metrics: IOs/sec: 226.97 MBs/sec: 1.77 latency metrics: Min_Latency(ms): 0 Avg_Latency(ms): 281 Max_Latency(ms): 467 histogram: ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+ %: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 99 C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -frandom -b8 -BH -LS C:\TestFile.dat sqlio v1.5.SG using system counter for latency timings, 14318180 counts per second 8 threads reading for 120 secs from file C:\TestFile.dat using 8KB random IOs enabling multiple I/Os per thread with 8 outstanding buffering set to use hardware disk cache (but not file cache) using current size: 5120 MB for file: C:\TestFile.dat initialization done CUMULATIVE DATA: throughput metrics: IOs/sec: 91.34 MBs/sec: 0.71 latency metrics: Min_Latency(ms): 14 Avg_Latency(ms): 699 Max_Latency(ms): 1124 histogram: ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+ %: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100 C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -fsequential -b64 -BH -LS C:\TestFile.dat sqlio v1.5.SG using system counter for latency timings, 14318180 counts per second 8 threads writing for 120 secs to file C:\TestFile.dat using 64KB sequential IOs enabling multiple I/Os per thread with 8 outstanding buffering set to use hardware disk cache (but not file cache) using current size: 5120 MB for file: C:\TestFile.dat initialization done CUMULATIVE DATA: throughput metrics: IOs/sec: 1094.50 MBs/sec: 68.40 latency metrics: Min_Latency(ms): 0 Avg_Latency(ms): 58 Max_Latency(ms): 467 histogram: ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+ %: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100 C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -fsequential -b64 -BH -LS C:\TestFile.dat sqlio v1.5.SG using system counter for latency timings, 14318180 counts per second 8 threads reading for 120 secs from file C:\TestFile.dat using 64KB sequential IOs enabling multiple I/Os per thread with 8 outstanding buffering set to use hardware disk cache (but not file cache) using current size: 5120 MB for file: C:\TestFile.dat initialization done CUMULATIVE DATA: throughput metrics: IOs/sec: 1155.31 MBs/sec: 72.20 latency metrics: Min_Latency(ms): 17 Avg_Latency(ms): 55 Max_Latency(ms): 205 histogram: ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+ %: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
Here are the results of the fast rack server: C:\Program Files (x86)\SQLIO>test.bat C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -frandom -b8 -BH -LS E:\TestFile.dat sqlio v1.5.SG using system counter for latency timings, 62500000 counts per second 8 threads writing for 120 secs to file E:\TestFile.dat using 8KB random IOs enabling multiple I/Os per thread with 8 outstanding buffering set to use hardware disk cache (but not file cache) open_file: CreateFile (E:\TestFile.dat for write): The system cannot find the path specified. exiting C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -frandom -b8 -BH -LS E:\TestFile.dat sqlio v1.5.SG using system counter for latency timings, 62500000 counts per second 8 threads reading for 120 secs from file E:\TestFile.dat using 8KB random IOs enabling multiple I/Os per thread with 8 outstanding buffering set to use hardware disk cache (but not file cache) open_file: CreateFile (E:\TestFile.dat for read): The system cannot find the path specified. exiting C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -fsequential -b64 -BH -LS E:\TestFile.dat sqlio v1.5.SG using system counter for latency timings, 62500000 counts per second 8 threads writing for 120 secs to file E:\TestFile.dat using 64KB sequential IOs enabling multiple I/Os per thread with 8 outstanding buffering set to use hardware disk cache (but not file cache) open_file: CreateFile (E:\TestFile.dat for write): The system cannot find the path specified. exiting C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -fsequential -b64 -BH -LS E:\TestFile.dat sqlio v1.5.SG using system counter for latency timings, 62500000 counts per second 8 threads reading for 120 secs from file E:\TestFile.dat using 64KB sequential IOs enabling multiple I/Os per thread with 8 outstanding buffering set to use hardware disk cache (but not file cache) open_file: CreateFile (E:\TestFile.dat for read): The system cannot find the path specified. exiting
C:\Program Files (x86)\SQLIO>test.bat C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -frandom -b8 -BH -LS c:\TestFile.dat sqlio v1.5.SG using system counter for latency timings, 62500000 counts per second 8 threads writing for 120 secs to file c:\TestFile.dat using 8KB random IOs enabling multiple I/Os per thread with 8 outstanding buffering set to use hardware disk cache (but not file cache) using current size: 5120 MB for file: c:\TestFile.dat initialization done CUMULATIVE DATA: throughput metrics: IOs/sec: 2575.77 MBs/sec: 20.12 latency metrics: Min_Latency(ms): 1 Avg_Latency(ms): 24 Max_Latency(ms): 655 histogram: ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+ %: 0 0 0 5 8 9 9 9 8 5 3 1 1 1 1 0 0 0 0 0 0 0 0 0 37 C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -frandom -b8 -BH -LS c:\TestFile.dat sqlio v1.5.SG using system counter for latency timings, 62500000 counts per second 8 threads reading for 120 secs from file c:\TestFile.dat using 8KB random IOs enabling multiple I/Os per thread with 8 outstanding buffering set to use hardware disk cache (but not file cache) using current size: 5120 MB for file: c:\TestFile.dat initialization done CUMULATIVE DATA: throughput metrics: IOs/sec: 1141.39 MBs/sec: 8.91 latency metrics: Min_Latency(ms): 1 Avg_Latency(ms): 55 Max_Latency(ms): 652 histogram: ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+ %: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 91 C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -fsequential -b64 -BH -LS c:\TestFile.dat sqlio v1.5.SG using system counter for latency timings, 62500000 counts per second 8 threads writing for 120 secs to file c:\TestFile.dat using 64KB sequential IOs enabling multiple I/Os per thread with 8 outstanding buffering set to use hardware disk cache (but not file cache) using current size: 5120 MB for file: c:\TestFile.dat initialization done CUMULATIVE DATA: throughput metrics: IOs/sec: 341.37 MBs/sec: 21.33 latency metrics: Min_Latency(ms): 5 Avg_Latency(ms): 186 Max_Latency(ms): 120037 histogram: ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+ %: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100 C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -fsequential -b64 -BH -LS c:\TestFile.dat sqlio v1.5.SG using system counter for latency timings, 62500000 counts per second 8 threads reading for 120 secs from file c:\TestFile.dat using 64KB sequential IOs enabling multiple I/Os per thread with 8 outstanding buffering set to use hardware disk cache (but not file cache) using current size: 5120 MB for file: c:\TestFile.dat initialization done CUMULATIVE DATA: throughput metrics: IOs/sec: 1024.07 MBs/sec: 64.00 latency metrics: Min_Latency(ms): 5 Avg_Latency(ms): 61 Max_Latency(ms): 81632 histogram: ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+ %: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100
Three of the four tests are, to my mind, within reasonable parameters for the rack server. However, the 64K sequential write test is incredibly slow on the rack server (68 MB/s on the slow tower vs 21 MB/s on the rack). The 64K read speed also seems slow. Is this enough to say there is some sort of bottleneck in the shared storage? I need to know whether I can take this evidence and say we need to launch an investigation into this. Any help is appreciated.
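
    One hedged observation before escalating, not part of the original question: all four E: runs above failed with "The system cannot find the path specified", so the rack server numbers only cover c:\. If E: is the shared-storage volume, it was never actually benchmarked. sqlio only creates a missing test file when driven by a parameter file, so a sketch along these lines (the -F parameter-file mechanism is standard sqlio; the file size matches the tests above, but the drive letter and path should be verified first) would fill that gap:

        REM param.txt -- one target per line: path, thread count, mask, size in MB
        E:\TestFile.dat 8 0x0 5120

        REM a short warm-up run creates E:\TestFile.dat; then test.bat can be re-run against E:
        sqlio -kW -s10 -fsequential -o8 -b64 -BH -LS -Fparam.txt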

    Read the article

  • EMC/Legato/Networker failed to recover files: Cross Platform Recovery not supported.

    - by marc.riera
    Software used to back up: EMC / Legato Networker. Legato server: Windows. Legato clients: same hardware (two years ago Fedora-something, now Ubuntu). Trying to recover from an old client, which is no longer available. So this is the thing. On 07/20/2008 we backed up a Samba server (Fedora-something) to a tape, setting 1 year as the browse policy and retention policy. Now this tape is recyclable. We took down the DNS name. We deleted the Legato client configuration. That Legato client was reinstalled and is doing other stuff on Ubuntu 10.04, with a different name but the same IP. Now, two years and some months later, we need to recover a folder from the 2008 backup of the fedora-samba-server. First thing: Legato does not show the client name because the config was deleted, so we create it again. We set the old DNS name back on track, pointing at the same IP where the old server was (same MAC address, even). We created a new 'old client configuration' pointing to the new server (different Legato client ID, I suppose). The save set (ssid) holding the needed folder is on two tapes, 20 and 22. The index for that backup is on tape 21. We put these tapes in the jukebox (IBM T4000) -- not important for the issue. All three tapes have passed their browse and retention times, so they are recyclable. We get the clone id from the ssid with the following command: mminfo -avot -q "ssid=<ssid>" -r cloneid We set the tapes to not recyclable: nsrmm -S <ssid>/<cloneid> -o notrecyclable We change the retention for the tapes to a future date: nsrmm -S <ssid> -e 01/20/2011 We check the dates are correct: mminfo -avV -q "ssid=<ssid>" -r ssbrowse(26),ssretent(26),savetime So far it's OK. We close the terminal and restart the server, just to be sure. Finally, we recover the index for the ssid where the folder should be: nsrck -L7 -t "07/20/2008" oldservername.domain.org Then we open the Networker User, select the server, select the old client as source, select the new client as destination. And this is what I get (imgur screenshot of the output): http://i.imgur.com/1nOr8.png Should I understand that I need to install whatever operating system was running on the old "linux server"/"networker client" to be able to restore 26 MB of files? Thanks
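
    A hedged note on the error itself, beyond what the question already tried: "Cross Platform Recovery not supported" is typically NetWorker refusing a directed recover between unlike platforms (here, the Windows-based Networker User GUI restoring a UNIX client's save sets) rather than a tape or index problem. A sketch of a like-platform recover, assuming the standard NetWorker command-line client is installed on the new Ubuntu machine (the server name and folder below are placeholders):

        # run on the Linux client so the recover stays UNIX-to-UNIX
        recover -s legatoserver -c oldservername.domain.org -t "07/20/2008"
        recover> cd /path/to/needed/folder
        recover> add .
        recover> relocate /tmp/restore
        recover> recover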

    Read the article

  • How do you splice out a part of an xvid encoded avi file, with ffmpeg? (no problems with other files)

    - by yegor
    I'm using the following command, which works for most files, except what seems to be xvid encoded ones /usr/bin/ffmpeg -sameq -i file.avi -ss 00:01:00 -t 00:00:30 -ac 2 -r 25 -copyts output.avi So this should basically splice out 30 seconds of video + audio, starting from the 1 minute mark. It does START encoding at the 00:01:00 mark but it goes all the way to the end of the file for some reason, ignoring that I want just 30 seconds. The output looks like this. FFmpeg version git-ecc4bdd, Copyright (c) 2000-2010 the FFmpeg developers built on May 31 2010 04:52:24 with gcc 4.4.3 20100127 (Red Hat 4.4.3-4) configuration: --enable-libx264 --enable-libxvid --enable-libmp3lame --enable-libopenjpeg --enable-libfaac --enable-libvorbis --enable-gpl --enable-nonfree --enable-libxvid --enable-pthreads --enable-libfaad --extra-cflags=-fPIC --enable-postproc --enable-libtheora --enable-libvorbis --enable-shared libavutil 50.15. 2 / 50.15. 2 libavcodec 52.67. 0 / 52.67. 0 libavformat 52.62. 0 / 52.62. 0 libavdevice 52. 2. 0 / 52. 2. 0 libavfilter 1.20. 0 / 1.20. 0 libswscale 0.10. 0 / 0.10. 0 libpostproc 51. 2. 0 / 51. 2. 0 [mpeg4 @ 0x17cf770]Invalid and inefficient vfw-avi packed B frames detected Input #0, avi, from 'file.avi': Metadata: ISFT : VirtualDubMod 1.5.10.2 (build 2540/release) Duration: 00:02:00.00, start: 0.000000, bitrate: 1587 kb/s Stream #0.0: Video: mpeg4, yuv420p, 672x368 [PAR 1:1 DAR 42:23], 25 tbr, 25 tbn, 25 tbc Stream #0.1: Audio: ac3, 48000 Hz, 5.1, s16, 448 kb/s File 'lol6.avi' already exists. Overwrite ? [y/N] y Output #0, avi, to 'lol6.avi': Metadata: ISFT : Lavf52.62.0 Stream #0.0: Video: mpeg4, yuv420p, 672x368 [PAR 1:1 DAR 42:23], q=2-31, 200 kb/s, 25 tbn, 25 tbc Stream #0.1: Audio: mp2, 48000 Hz, 2 channels, s16, 64 kb/s Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 Press [q] to stop encoding [mpeg4 @ 0x17cf770]Invalid and inefficient vfw-avi packed B frames detected [buffer @ 0x184b610]Buffering several frames is not supported. Please consume all available frames before adding a new one. frame= 1501 fps=104 q=0.0 Lsize= 15612kB time=30.02 bitrate=4259.7kbits/s video:15303kB audio:235kB global headers:0kB muxing overhead 0.482620% if I convert this file to mp4 for example, and then perform the same action, it works perfectly.
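
    Two hedged things worth trying (this is the classic option-ordering fix, not something confirmed for this exact build): move -ss before -i so the seek happens on the input side, and drop -copyts, since carrying the source timestamps into the output can confuse the -t duration cut-off:

        /usr/bin/ffmpeg -ss 00:01:00 -i file.avi -t 00:00:30 -sameq -ac 2 -r 25 output.avi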

    Read the article

  • I have two 'WsusContent' folders. One has about 55 GB of files in it. How do I clear it out?

    - by MrVimes
    I ran the WSUS cleanup wizard in the WSUS MMC snap-in but this made no difference. I think the files are associated with System Center Essentials because they are in a folder called "data\SCE2007\WsusContent" I've looked for a similar cleanup wizard in System Center Essentials but can't find one. Is there a way to clear these files out that doesn't cause issues for SCE or anything else? FYI I've got both SCE and standalone WSUS settings already set to minimize the amount of updates downloaded from Microsoft to only those that are needed.
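
    A hedged pointer for the standalone-WSUS side (whether the SCE-managed data\SCE2007\WsusContent store is safe to touch this way is an assumption to verify against the SCE documentation first): WSUS ships a command-line tool that can verify the content store against the current update metadata and consolidate content directories:

        cd "C:\Program Files\Update Services\Tools"
        REM verify the store against the metadata, re-downloading what is missing
        wsusutil.exe reset
        REM or move/consolidate a content store (target path and log file are examples)
        wsusutil.exe movecontent D:\WsusContent D:\movecontent.log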

    Read the article

  • Regarding Unix Move Command

    - by user38993
    I need to write a Unix shell script, tran.sh, that moves the CSV input files from the /exp/files folder to the /exp/ready directory. The CSV input files are written into /exp/files by an FTP server whose behavior I cannot trivially change. In the tran.sh shell script I need to ensure, before moving a CSV input file out of /exp/files, that no other process is still writing to that file. How can I do it?
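
    A hedged sketch of one common approach, not taken from the question itself: skip any file that some process still holds open, using lsof (this assumes lsof is installed and that the FTP server writes files in place rather than uploading under a temporary name; an alternative is to record each file's size, sleep briefly, and only move files whose size has stopped changing):

        #!/bin/sh
        # tran.sh - move CSV files that no process has open any more
        for f in /exp/files/*.csv; do
            [ -e "$f" ] || continue
            # lsof exits non-zero when nothing has the file open
            if ! lsof -- "$f" >/dev/null 2>&1; then
                mv -- "$f" /exp/ready/
            fi
        done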

    Read the article

  • Zipping a folder by absolute path without keeping the tree of parent folders

    - by Preston
    I am attempting to use the zip -r command to zip a folder which contains two files. I need to pass the absolute path of the folder holding the two files (/path/to/my/files/), which causes the whole chain of parent folders to be stored in the zip, whereas I only need the last folder (files/) and its contents, so that when the archive is unzipped there is only one folder with the two files inside it. How can I modify the command so that I can still pass absolute paths as arguments while keeping only the last folder?
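
    A hedged sketch (zip has no flag for exactly this: -j junks all directories, which would lose the files/ folder too, so the usual trick is to change into the parent directory first; the archive path is an example):

        # run zip from the parent so only "files/" and its contents are stored
        ( cd /path/to/my && zip -r /tmp/files.zip files )

    The subshell keeps the caller's working directory unchanged, and an absolute path argument can still be split into the two halves with dirname and basename.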

    Read the article

  • Where to find a tag-based file manager?

    - by samoanbiscuit
    I'm a CS uni student in a country with limited bandwidth, and I find that I download files (particularly install files) for reuse/reinstallation/giving to my friends. I've kept these files in different folders, and now find it extremely hard to locate them and to keep track of obsolete files (the almost weekly iTunes releases, or the latest 7zip release, or the .NET Framework installers). What I would like to find is a file manager that supports tags, so that instead of hierarchical folders I could find and view files according to their tags...

    Read the article

  • Augeas - create new INI section

    - by Tim Brigham
    I have a config file in Augeas using a custom lens that outputs the data as follows. /files/opt/../server.conf/target[1] = "general" /files/opt/../server.conf/target[1]/serverName = "XXX" /files/opt/../server.conf/target[1]/guid = "XXX0XXX" /files/opt/../server.conf/target[2] = "sslConfig" /files/opt/../server.conf/target[2]/sslKeysfilePassword = "$1$XXXXX" This works well - some of the target names contain colons, etc., so I need to use the target[x] format. What is the correct ins syntax for creating a new section in my INI file with this node layout?
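
    A hedged augtool sketch (the real file path is elided above, so /files/etc/server.conf below is a stand-in, and the section and key names are examples): ins creates a new blank node, and because it is inserted after the current last target it can then be addressed as target[last()]:

        augtool <<'EOF'
        ins target after /files/etc/server.conf/target[last()]
        set /files/etc/server.conf/target[last()] "newSection"
        set /files/etc/server.conf/target[last()]/someKey "someValue"
        save
        EOF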

    Read the article

  • Multiple dex files define Lcom/google/api/client/auth/oauth/AbstractOAuthGetToken;

    - by Elad Benda
    I have just followed this tutorial: https://developers.google.com/drive/quickstart-android so I don't see a reason for duplicated libs in my project. I have added the Drive client lib via the Google Plugin for Eclipse. When I build my Android app with this manifest <uses-sdk android:minSdkVersion="15" android:targetSdkVersion="16" /> <uses-permission android:name="android.permission.READ_CALENDAR" /> <uses-permission android:name="android.permission.WRITE_CALENDAR" /> <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/> <uses-permission android:name="android.permission.GET_ACCOUNTS"/> <uses-permission android:name="android.permission.INTERNET" /> <application android:icon="@drawable/todo" android:label="@string/app_name" > <activity android:name=".TodosOverviewActivity" android:label="@string/app_name" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <activity android:name=".TodoDetailActivity" android:windowSoftInputMode="stateVisible|adjustResize" > <intent-filter> <action android:name="android.intent.action.SEND" /> <category android:name="android.intent.category.DEFAULT" /> <data android:mimeType="image/*" /> </intent-filter> </activity> <provider android:name=".contentprovider.MyTodoContentProvider" android:authorities="de.vogella.android.todos.contentprovider" > </provider> </application> I get the following error: [2013-10-27 00:43:58 - Dex Loader] Unable to execute dex: Multiple dex files define Lcom/google/api/client/auth/oauth/AbstractOAuthGetToken; [2013-10-27 00:43:58 - de.vogella.android.todos] Conversion to Dalvik format failed: Unable to execute dex: Multiple dex files define Lcom/google/api/client/auth/oauth/AbstractOAuthGetToken; How can I fix this?
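
    A hedged diagnostic, not from the tutorial: this error usually means the same library (here, google-api-client) ended up on the build path twice, for example once in libs/ and once pulled in by the Google Plugin for Eclipse. One way to list every jar that bundles the duplicated class, assuming a conventional libs/ directory:

        # print each jar containing the class that dex sees twice
        for j in libs/*.jar; do
          if unzip -l "$j" | grep -q 'com/google/api/client/auth/oauth/AbstractOAuthGetToken'; then
            echo "$j"
          fi
        done

    Removing, or excluding from packaging, all but one copy normally clears the Dalvik conversion error.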

    Read the article

  • Bash Templating: How to build configuration files from templates with Bash?

    - by FractalizeR
    Hello. I'm writing a script to automate creating configuration files for Apache and PHP for my own webserver. I don't want to use any GUIs like CPanel or ISPConfig. I have some templates of Apache and PHP configuration files. The Bash script needs to read the templates, do variable substitution and output the parsed templates into some folder. What is the best way to do that? I can think of several ways. Which one is the best, or maybe there are better ways to do it? I want to do it in pure Bash (it's easy in PHP, for example). 1) http://stackoverflow.com/questions/415677/how-to-repace-variables-in-a-nix-text-file template.txt: the number is ${i} the word is ${word} script.sh: #!/bin/sh #set variables i=1 word="dog" #read in template one line at a time, and replace variables #(more natural (and efficient) way, thanks to Jonathan Leffler) while read line do eval echo "$line" done < "./template.txt" BTW, how do I redirect output to an external file here? Do I need to escape something if variables contain, say, quotes? 2) Using cat & sed to replace each variable with its value: Given template.txt: The number is ${i} The word is ${word} Command: cat template.txt | sed -e "s/\${i}/1/" | sed -e "s/\${word}/dog/" Seems bad to me because of the need to escape many different symbols, and with many variables the line will be too long. Can you think of some other elegant and safe solution?
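
    A hedged third option beyond the two in the question: envsubst from GNU gettext does exactly this kind of ${var} substitution (the variable-list argument restricts it to the named variables, leaving any other dollar signs in the template untouched):

        # render the template; only ${i} and ${word} are replaced
        export i=1 word="dog"
        envsubst '${i} ${word}' < template.txt > output.txt

    As for the redirection question: it goes on the loop itself, done < template.txt > output.txt. And yes, the eval approach needs quotes, backticks and backslashes in the data escaped, which is a good argument for a dedicated tool.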

    Read the article

  • How to properly combine two files in XAML in Microsoft Blend?

    - by MartyIX
    Hello, I have a test project with the file MainWindow.xaml with the content: <Window x:Class="MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:ad="clr-namespace:AvalonDock;assembly=AvalonDock" xmlns:diag="clr-namespace:System.Diagnostics;assembly=WindowsBase" xmlns:view="clr-namespace:Sokoban.View;assembly=Solvers" Title="Window1" Height="300" Width="300" Loaded="Window_Loaded"> <ad:DockingManager x:Name="dockingManager"> <ad:ResizingPanel Orientation="Vertical"> <view:Solvers x:Name="solvers" diag:PresentationTraceSources.TraceLevel="High" /> <!-- LINE BELOW DEMONSTRATES WORKING CODE INSTEAD OF LINE ABOVE --> <!--<ad:DocumentPane Name="GamesDocumentPane" HorizontalAlignment="Stretch" VerticalAlignment="Stretch"> <ad:DockableContent x:Name="classesContent" Title="Classes"> <TextBlock>test</TextBlock> </ad:DockableContent> </ad:DocumentPane>--> </ad:ResizingPanel> </ad:DockingManager> </Window> and in another project I have the file Solvers.xaml: <ad:DocumentPane x:Class="Sokoban.View.Solvers" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:ad="clr-namespace:AvalonDock;assembly=AvalonDock" xmlns:diag="clr-namespace:System.Diagnostics;assembly=WindowsBase" Name="GamesDocumentPane" HorizontalAlignment="Stretch" VerticalAlignment="Stretch"> </ad:DocumentPane> When I open my Visual Studio solution in Microsoft Blend 4 then I see the error: InvalidOperationException: DocumentPane must be put under a DockingManager! when I open either MainWindow.xaml or Solvers.xaml. It is all right in Solvers.xaml because there really is no DockingManager but MainWindow.xaml should work, shouldn't it? How to solve the problem? Note: It seems to me that the files are processed separately and because the file Solvers.xaml contains the error the MainWindow.xaml file also contains the very same error. Note 2: XAML files use AvalonDock library Is there a way how to say that Solvers.xaml is only an extension of another file? Thank you for any help!

    Read the article

  • Efficient alternative to merge() when building dataframe from json files with R?

    - by Bryan
    I have written the following code, which works but is painfully slow once I start executing it over thousands of records: require("RJSONIO") people_data <- data.frame(person_id=numeric(0)) json_data <- fromJSON(json_file) n_people <- length(json_data) for(person in 1:n_people) { person_dataframe <- as.data.frame(t(unlist(json_data[[person]]))) people_data <- merge(people_data, person_dataframe, all=TRUE) } output_file <- paste("people_data", ".csv", sep="") write.csv(people_data, file=output_file) I am attempting to build a unified data table from a series of JSON-formatted files. The fromJSON() function reads in the data as lists of lists. Each element of the list is a person, which then contains a list of the attributes for that person. For example: [[1]] person_id name gender hair_color [[2]] person_id name location gender height [[...]] structure(list(person_id = "Amy123", name = "Amy", gender = "F", hair_color = "brown"), .Names = c("person_id", "name", "gender", "hair_color")) structure(list(person_id = "matt53", name = "Matt", location = structure(c(47231, "IN"), .Names = c("zip_code", "state")), gender = "M", height = 172), .Names = c("person_id", "name", "location", "gender", "height")) The end result of the code above is a matrix where the columns are every person-attribute that appears in the structure above, and the rows are the relevant values for each person. As you can see though, some data is missing for some of the people, so I need to ensure those show up as NA and make sure things end up in the right columns. Further, location itself is a vector with two components: state and zip_code, meaning it needs to be flattened to location.state and location.zip_code before it can be merged with another person record; this is what I use unlist() for. I then keep the running master table in people_data. The above code works, but do you know of a more efficient way to accomplish what I'm trying to do? It appears the merge() is slowing this to a crawl... I have hundreds of files with hundreds of people in each file. Thanks! Bryan

    Read the article

  • Better way to view Postfix mail queue files than postcat?

    - by Geekman
    So I got a call early this morning about a client needing to see what email they have waiting to be delivered sitting in our secondary mail server. Their link for the main server had (still is) been down for two days and they needed to see their email. So I wrote up a quick perl script to use mailq in combination with postcat to dump each email for their address into separate files, tar'd it up and sent it off. Horrible code, I know, but it was urgent. My solution works OK in that it at least gives a raw view, but I thought tonight it would be nice if I had a solution where I could provide their email attachments and maybe remove some "garbage" header text as well. Most of the important emails seem to have a PDF or similar attached. I've been looking around but the only method of viewing queue files I can see is the postcat command, and I really don't want to write my own parser - so I was wondering if any of you have already done so, or know of a better command to use? Here's the code for my current solution: #!/usr/bin/perl $qCmd="mailq | grep -B 2 \"someemailaddress@isp\" | cut -d \" \" -f 1"; @data = split(/\n/, `$qCmd`); $i = 0; foreach $line (@data) { $i++; $remainder = $i % 2; if ($remainder == 0) { next; } if ($line =~ /\(/ || $line =~ /\n/ || $line eq "") { next; } print "Processing: " . $line . "\n"; `postcat -q $line > $line.email.txt`; $subject=`cat $line.email.txt | grep "Subject:"`; #print "SUB" . $subject; #`cat $line.email.txt > \"$subject.$line.email.txt\"`; } Any advice appreciated.
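
    Two hedged pointers, assuming reasonably recent tools rather than anything from the original script: postcat in Postfix 2.7 and later can emit just the message headers and body for a queue ID (skipping the queue-file metadata), and munpack from the mpack package can then extract MIME attachments such as PDFs:

        qid=ABC123DEF                      # example queue ID, as printed by mailq
        postcat -qbh "$qid" > "$qid.eml"   # headers + body, without envelope records
        mkdir -p attachments
        ( cd attachments && munpack "../$qid.eml" )   # writes each MIME part to its own file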

    Read the article

  • How to compare two XML files and update one of them in Java?

    - by prasad.gai
    I have created two XML files, and I would like to compare one with the other and then update the first. I created the files as follows: main.xml <?xml version="1.0" encoding="utf-8"?> <ScrollView xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:background="#00BFFF"> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:orientation="vertical"> <LinearLayout android:id="@+id/linearLayout1" android:layout_width="fill_parent" android:layout_height="wrap_content" android:gravity="center" android:visibility="visible"> </LinearLayout> <LinearLayout android:id="@+id/linearLayout2" android:layout_width="fill_parent" android:layout_height="wrap_content" android:gravity="center" android:visibility="visible"> </LinearLayout> </LinearLayout> </ScrollView> properties.xml <?xml version='1.0' encoding='utf-8' standalone='yes' ?> <map> <int name="linearLayout1" value="8" /> <int name="linearLayout2" value="0" /> </map> From the two files above, I would like to match each LinearLayout android:id="@+id/linearLayout1" element in main.xml against the corresponding int name="linearLayout1" entry in properties.xml. Since properties.xml has name="linearLayout1" with value="8", I would like to update main.xml so that the element with android:id="@+id/linearLayout1" gets the property android:visibility="gone". How can I update main.xml by comparing it with properties.xml? Can anybody help me?

    Read the article

  • How do I add programmatically-generated new files to source control?

    - by Alex Basson
    This is something I've never really understood about source control, specifically Subversion (the only source control I've ever used, which isn't saying much). I'm considering moving to git or Mercurial, so if that affects the answer to my question, please indicate as such. Ok. As I understand it, every time I create a new file, I have to tell SVN about it, so that it knows to add it to the repository and place it under control. Something like: svn add newfile That's fine if I'm the one creating the file: I know I created it, I know its name, I know where it lives, so it's easy to tell SVN about it. But now suppose I'm using a framework of some kind, like Rails, Django, Symfony, etc., and suppose I've already done the initial commit. All of these frameworks create new files programmatically, often many at once, in different directories, etc. etc. How do I tell the source control about these new files? Do I have to hunt each one of them down individually and add them? Is there an easier way? (Or am I possibly misunderstanding something fundamental about source control?)
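
    A hedged sketch for the Subversion side (git and Mercurial make this a one-liner as well: git add -A, or hg addremove): there is no need to hunt the generated files down one by one:

        # schedule every unversioned file under the working copy for addition
        svn add --force .

        # or, more surgically, add exactly what 'svn status' reports as unknown
        # (assumes paths without spaces)
        svn status | awk '/^\?/ {print $2}' | xargs -r svn add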

    Read the article

  • Why won't SWFUpload execute the upload.aspx code, and why is it saving all files to the root directory?

    - by Nathan Fast
    I am using SWFUpload v2.2. In IE (8):   If I upload a very tiny file (16kb):      1) The file appears in the root directory where upload.aspx is located.      2) Page_Load on upload.aspx.cs is executed.      3) The file is actually processed by the Page_Load procedure, and the processed file is saved in the correct location.   If I upload a normal file (1.5 MB):      1) The file appears in the root directory where upload.aspx is located. In Firefox (3.5.7):   No matter what size the file is, it:      1) The file appears in the root directory where upload.aspx is located. I have maxRequestLength="30000" executionTimeout="3000" in the web.config just to be sure. In the setting object for the constructor I have:   file_size_limit: "10 MB",   file_types: ".",   file_types_description: "All Files", So my questions are:   How is the file getting saved in the root directory (and why)?   Why does Page_Load only execute when I am using IE and uploading very tiny files?

    Read the article

  • How should I use .ico files in a WinForms application?

    - by Brann
    I'm developing a WinForms C# 3.0 application. Our designer created quite a lot of .ico files containing all the needed art. The choice of .ico was made because quite often the same image is needed in several places in different dimensions. Now, it seems .ico files are really annoying to use in Visual Studio. The only way to use those images seems to be through image lists (which aren't supported by all controls). Compared to other resources, you can't write this: foo.Image = global::RFQHUB.RFQHUBClient.Properties.Resources.foo; // Cannot implicitly convert type 'System.Drawing.Icon' to 'System.Drawing.Image' Here are the options I'm considering: create ImageLists of all possible sizes referencing all my icons in my main window, link these ImageLists from other windows, and find a way to export Image objects from the ImageList when I can't use it directly; since ImageList contains a Draw() method, this should probably be possible. Or: convert each x.ico I've got into several x16.gif ... x48.gif files, and use those through resources. I'd be interested to know whether some people have been successfully using .ico resources in a WinForms application. If so, how did you set things up?

    Read the article

  • How to upload files and store them in a server local path when MS SQL Server allows remote connections?

    - by user193655
    I am developing a Win32 Windows application with Delphi and MS SQL Server. It works fine on a LAN, but I am trying to add support for SQL Server remote connections (= working with a DB that can be accessed via an external IP, as described in this article: http://support.microsoft.com/default.aspx?scid=kb;EN-US;914277). Basically I have a table in the DB where I keep the DocumentID, the document description and the document path (like \FILESERVER\MyApplicationDocuments\45.zip). Of course \FILESERVER is a local (LAN) path for the server but not for the client (as I am now trying to add support for remote connections). So I need a way to access \FILESERVER even though, of course, I cannot see it on the LAN. I found the following T-SQL code snippet that is perfect for the "download trick": SELECT BulkColumn as MyFile FROM OPENROWSET(BULK '\FILESERVER\MyApplicationDocuments\45.zip' , SINGLE_BLOB) AS X With the code above I can download a file to the client. But how do I upload one? I need an "upload trick" to be able to insert new files, but also to delete or replace existing files. Can anyone suggest one? If a trick is not available, could you suggest an alternative? Like an extended stored procedure, or calling some .NET assembly from the server.

    Read the article

  • Windows 7 Create Folder in "Program Files" failing in C# code even though I have admin rights!

    - by Shiva
    Hi, I am unable to create a file under the "Program Files" folder on my Windows 7 64-bit machine in VS 2008 WPF C# code. The error I get on the following code myFile = File.Create(logFile); is the following (this is the innerException stack trace): at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath) at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy) at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options) at System.IO.File.Create(String path) at MyFirm.MyPricingApp.UI.App.InitializeLogging() in C:\Projects\MyPricingApp\App.xaml.cs:line 150 at MyFirm.MyPricingApp.UI.App.Application_Startup(Object sender, StartupEventArgs e) in C:\Projects\MyPricingApp\App.xaml.cs:line 38 at System.Windows.Application.OnStartup(StartupEventArgs e) at System.Windows.Application.<.ctor>b__0(Object unused) at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter) at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler) This seems like it has something to do with UAC in Windows 7, because why else would I get this when my user is already an Admin on the machine?! Also, since the WinIOError has SECURITY_ATTRIBUTES, I am thinking this has something to do with the new way security is handled in Windows 7. I tried to browse to the "Program Files" folder, under which the log folder and file were to be created. I can create the folder by hand, but when I try to create the file, I get a similar "Access Denied" exception.

    Read the article

  • What is the fastest way to read huge files in Delphi?

    - by dummzeuch
    My program needs to read chunks from a huge binary file with random access. I have got a list of offsets and lengths which may have several thousand entries. The user selects an entry and the program seeks to the offset and reads length bytes. The program internally uses a TMemoryStream to store and process the chunks read from the file. Reading the data is done via a TFileStream like this: FileStream.Position := Offset; MemoryStream.CopyFrom(FileStream, Size); This works fine but unfortunately it becomes increasingly slower as the files get larger. The file size starts at a few megabytes but frequently reaches several tens of gigabytes. The chunks read are around 100 kbytes in size. The file's content is only read by my program. It is the only program accessing the file at the time. Also the files are stored locally so this is not a network issue. I am using Delphi 2007 on a Windows XP box. What can I do to speed up this file access?

    Read the article

  • Concise description of how .h and .m files interact in Objective-C?

    - by RJ86
    I have just started learning objective C and am really confused how the .h and .m files interact with each other. This simple program has 3 files: Fraction.h #import <Foundation/NSObject.h> @interface Fraction : NSObject { int numerator; int denominator; } - (void) print; - (void) setNumerator: (int) n; - (void) setDenominator: (int) d; - (int) numerator; - (int) denominator; @end Fraction.m #import "Fraction.h" #import <stdio.h> @implementation Fraction -(void) print { printf( "%i/%i", numerator, denominator ); } -(void) setNumerator: (int) n { numerator = n; } -(void) setDenominator: (int) d { denominator = d; } -(int) denominator { return denominator; } -(int) numerator { return numerator; } @end Main.m #import <stdio.h> #import "Fraction.h" int main(int argc, char *argv[]) { Fraction *frac = [[Fraction alloc] init]; [frac setNumerator: 1]; [frac setDenominator: 3]; printf( "The fraction is: " ); [frac print]; printf( "\n" ); [frac release]; return 0; } From what I understand, the program initially starts running the main.m file. I understand the basic C concepts but this whole "class" and "instance" stuff is really confusing. In the Fraction.h file the @interface is defining numerator and denominator as an integer, but what else is it doing below with the (void)? and what is the purpose of re-defining below? I am also quite confused as to what is happening with the (void) and (int) portions of the Fraction.m and how all of this is brought together in the main.m file. I guess what I am trying to say is that this seems like a fairly easy program to learn how the different portions work with each other - could anyone explain in non-tech jargon?

    Read the article

  • Can I use Eclipse JDT to create new 'working copies' of source files in memory only?

    - by RYates
    I'm using Eclipse JDT to build a Java refactoring platform, for exploring different refactorings in memory before choosing one and saving it. I can create collections of working copies of the source files, edit them in memory, and commit the changes to disk using the JDT framework. However, I also want to generate new 'working copy' source files in memory as part of refactorings, and only create the corresponding real source file if I commit the working copy. I have seen various hints that this is possible, e.g. http://www.jarvana.com/jarvana/view/org/eclipse/jdt/doc/isv/3.3.0-v20070613/isv-3.3.0-v20070613.jar!/guide/jdt%5Fapi%5Fmanip.htm says "Note that the compilation unit does not need to exist in the Java model in order for a working copy to be created". So far I have only been able to create a new real file, i.e. ICompilationUnit newICompilationUnit = myPackage.createCompilationUnit(newName, "package piffle; public class Baz{private int i=0;}", false, null); This is not what I want. Does anyone know how to create a new 'working copy' source file, that does not appear in my file system until I commit it? Or any other mechanism to achieve the same thing?

    Read the article
