Search Results

Search found 251 results on 11 pages for 'baseline'.

Page 4/11 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11  | Next Page >

  • Excel Data Organization: Array Formulas? Tables? Named Range?

    - by Joe Arasin
    I'm trying to make a huge Excel sheet reasonably maintainable, but it's huge in the "hundred-table-db" direction rather than the "hundred-thousand-row-table" direction. I want to have a baseline data table that looks something like this:

        | Indicator           | Units      | 2010 | 2015 | 2020 | 2025 | Source     |
        | GDP                 | $Gazillion | 300  | 350  | 400  | 450  | BLS        |
        | Population          | Millions   | 350  | 400  | 450  | 500  | Census     |
        | PetMonkeyPopulation | Thousands  | 50   | 60   | 70   | 80   | SimiansRUs |

    And then be able to have another sheet that looks like:

        |                  | 2010 | 2015 | 2020 | 2025 |
        | MonkeysPerCapita | .1   | .2   | .3   | .4   |
        | MonkeysPerDollar | .01  | .01  | .01  | .01  |
        | GDPPerCapita     | 300  | 400  | 450  | 600  |

    Is there some standard way to make this kind of thing maintainable?
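
    For comparison, the same idea (one tidy baseline table, with derived metrics computed by indicator name rather than by cell address) is easy to sketch outside Excel. Below is a rough pandas version; pandas itself is an assumption here, since the question is Excel-only, and unit handling is left aside.

        import pandas as pd

        # Baseline table: one row per indicator, one column per year (values from the question).
        baseline = pd.DataFrame(
            {
                "Indicator": ["GDP", "Population", "PetMonkeyPopulation"],
                "Units": ["$Gazillion", "Millions", "Thousands"],
                2010: [300, 350, 50],
                2015: [350, 400, 60],
                2020: [400, 450, 70],
                2025: [450, 500, 80],
            }
        ).set_index("Indicator")

        years = [2010, 2015, 2020, 2025]
        values = baseline[years]

        # Derived sheet: every "formula" refers to the baseline by indicator name,
        # the rough equivalent of named ranges / structured table references in Excel.
        derived = pd.DataFrame(
            {
                "MonkeysPerCapita": values.loc["PetMonkeyPopulation"] / values.loc["Population"],
                "GDPPerCapita": values.loc["GDP"] / values.loc["Population"],
            }
        ).T

        print(derived)

    The Excel-native equivalents of the same idea are Tables with structured references, or named ranges feeding INDEX/MATCH, so that the derived sheet never points at raw cell addresses.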

    Read the article

  • When splitting MP4s with ffmpeg, how do I include metadata?

    - by Josh
    I have a few MP4s that I want to upload to my Flickr account, but Flickr has a maximum size of 500 MB and mine are only about 550 MB, so I was planning to simply split them in half and then upload them. I want to make sure all the metadata is included, but it does not seem to be. I have tried each of the following with no luck (at the end of this post I have the original and the new ffprobe outputs):

        ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -map_metadata 0:0 SANY0069A.MP4

        ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -map_meta_data SANY0069.MP4:SANY0069A.MP4 SANY0069A.MP4

    With this next one I manually produced the individual metadata tags, which I took from the command ffmpeg -i SANY0069A.MP4 -f ffmetadata meta.txt:

        ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -metadata major_brand="mp42" -metadata minor_version="1" -metadata compatible_brands="mp42avc1" -metadata creation_time="2012-09-29 09:05:50" -metadata comment="SANYO DIGITAL CAMERA CA9" -metadata comment-eng="SANYO DIGITAL CAMERA CA9" SANY0069A.MP4

    Using the output of the former command I also tried this:

        ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -f ffmetadata -i meta.txt SANY0069A.MP4

    Output: sample output from my first command (ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -map_metadata 0:0 SANY0069A.MP4): ffmpeg version 0.8.12, Copyright (c) 2000-2011 the FFmpeg developers built on Jun 13 2012 09:57:38 with gcc 4.6.3 20120306 (Red Hat 4.6.3-2) configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --enable-bzlib --enable-libcelt --enable-libdc1394 --enable-libdirac --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-x11grab --enable-avfilter --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect libavutil 51. 9. 1 / 51. 9. 1 libavcodec 53. 8. 0 / 53. 8. 0 libavformat 53. 5. 0 / 53. 5. 0 libavdevice 53. 1. 1 / 53. 1. 1 libavfilter 2. 23. 0 / 2. 23. 0 libswscale 2. 0. 0 / 2. 0. 0 libpostproc 51. 2. 0 / 51. 2. 0 Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'SANY0069.MP4': Metadata: major_brand : mp42 minor_version : 1 compatible_brands: mp42avc1 creation_time : 2012-09-29 09:05:50 comment : SANYO DIGITAL CAMERA CA9 comment-eng : SANYO DIGITAL CAMERA CA9 Duration: 00:08:38.71, start: 0.000000, bitrate: 9142 kb/s Stream #0.0(eng): Video: h264 (Constrained Baseline), yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 9007 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc Metadata: creation_time : 2012-09-29 09:05:50 Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 127 kb/s Metadata: creation_time : 2012-09-29 09:05:50 File 'SANY0069A.MP4' already exists. Overwrite ? 
[y/N] y Output #0, mp4, to 'SANY0069A.MP4': Metadata: major_brand : mp42 minor_version : 1 compatible_brands: mp42avc1 creation_time : 2012-09-29 09:05:50 comment : SANYO DIGITAL CAMERA CA9 comment-eng : SANYO DIGITAL CAMERA CA9 encoder : Lavf53.5.0 Stream #0.0(eng): Video: libx264, yuv420p, 1280x720 [PAR 1:1 DAR 16:9], q=2-31, 9007 kb/s, 30k tbn, 29.97 tbc Metadata: creation_time : 2012-09-29 09:05:50 Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, 127 kb/s Metadata: creation_time : 2012-09-29 09:05:50 Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 Press [q] to stop, [?] for help frame= 7773 fps=4644 q=-1.0 Lsize= 289607kB time=00:04:19.35 bitrate=9147.4kbits/s video:285416kB audio:4033kB global headers:0kB muxing overhead 0.054571% and finaly, when i compare the ffprobe of the original and the first split part i get the 2 following outputs: original ffprobe version 0.8.12, Copyright (c) 2007-2011 the FFmpeg developers built on Jun 13 2012 09:57:38 with gcc 4.6.3 20120306 (Red Hat 4.6.3-2) configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --enable-bzlib --enable-libcelt --enable-libdc1394 --enable-libdirac --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-x11grab --enable-avfilter --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect libavutil 51. 9. 1 / 51. 9. 1 libavcodec 53. 8. 0 / 53. 8. 0 libavformat 53. 5. 0 / 53. 5. 0 libavdevice 53. 1. 1 / 53. 1. 1 libavfilter 2. 23. 0 / 2. 23. 0 libswscale 2. 0. 0 / 2. 0. 0 libpostproc 51. 2. 0 / 51. 2. 
0 Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'SANY0069.MP4': Metadata: major_brand : mp42 minor_version : 1 compatible_brands: mp42avc1 creation_time : 2012-09-29 09:05:50 comment : SANYO DIGITAL CAMERA CA9 comment-eng : SANYO DIGITAL CAMERA CA9 Duration: 00:08:38.71, start: 0.000000, bitrate: 9142 kb/s Stream #0.0(eng): Video: h264 (Constrained Baseline), yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 9007 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc Metadata: creation_time : 2012-09-29 09:05:50 Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 127 kb/s Metadata: creation_time : 2012-09-29 09:05:50 Split ffprobe version 0.8.12, Copyright (c) 2007-2011 the FFmpeg developers built on Jun 13 2012 09:57:38 with gcc 4.6.3 20120306 (Red Hat 4.6.3-2) configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --enable-bzlib --enable-libcelt --enable-libdc1394 --enable-libdirac --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-x11grab --enable-avfilter --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect libavutil 51. 9. 1 / 51. 9. 1 libavcodec 53. 8. 0 / 53. 8. 0 libavformat 53. 5. 0 / 53. 5. 0 libavdevice 53. 1. 1 / 53. 1. 1 libavfilter 2. 23. 0 / 2. 23. 0 libswscale 2. 0. 0 / 2. 0. 0 libpostproc 51. 2. 0 / 51. 2. 0 Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'SANY0069A.MP4': Metadata: major_brand : isom minor_version : 512 compatible_brands: isomiso2avc1mp41 creation_time : 1970-01-01 00:00:00 encoder : Lavf53.5.0 comment : SANYO DIGITAL CAMERA CA9 Duration: 00:04:19.37, start: 0.000000, bitrate: 9146 kb/s Stream #0.0(eng): Video: h264 (Constrained Baseline), yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 9015 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc Metadata: creation_time : 1970-01-01 00:00:00 Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 127 kb/s Metadata: creation_time : 1970-01-01 00:00:00 I know this is incredibly long but its actually a quite simple question. I thought it would be best to provide as much detail as possible. any advice here would be great, Thanks
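
    If it is useful to anyone else landing here, below is a minimal sketch of the same split as a small Python wrapper around ffmpeg. It assumes a newer ffmpeg build where -map_metadata 0 copies the container-level tags from the first input (the 0.8-era syntax differs, as the commands above show); the second output name and its duration are my own placeholders, and whether the Sanyo-specific comment tags survive still depends on what the MP4 muxer will write, so treat it as a starting point rather than a confirmed fix.

        import subprocess

        SRC = "SANY0069.MP4"   # source clip from the question (total duration 00:08:38.71)

        # (output name, start, duration); SANY0069B.MP4 and the second duration are placeholders.
        PARTS = [
            ("SANY0069A.MP4", "00:00:00.00", "00:04:19.35"),
            ("SANY0069B.MP4", "00:04:19.35", "00:04:19.36"),
        ]

        for dst, start, duration in PARTS:
            cmd = [
                "ffmpeg", "-y",
                "-ss", start,          # start of this part
                "-t", duration,        # length of this part
                "-i", SRC,
                "-c", "copy",          # remux only, no re-encode
                "-map_metadata", "0",  # copy global metadata from input 0
                dst,
            ]
            subprocess.run(cmd, check=True)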

    Read the article

  • MS Project 2010 Filter and Highlights

    - by claubervs
    I'd like to know if there is a way to highlight dates that differ from one another. I have two columns, "Baseline Finish Date" and "Re-forecast Finish Date", and for each task I would like to highlight the rows where the dates in those two columns are different; in other words, the tasks that were re-forecast due to other circumstances and no longer match the original dates. I would also like a filter that does the same thing, showing only the tasks with differing dates.

    Read the article

  • OBIEE 11.1.1 - (Updated) Best Practices Guide for Tuning Oracle® Business Intelligence Enterprise Edition (Whitepaper)

    - by Ahmed Awan
    Applies To: This whitepaper applies to OBIEE releases 11.1.1.3, 11.1.1.5 and 11.1.1.6.
    Introduction: One of the most challenging aspects of performance tuning is knowing where to begin. To maximize Oracle® Business Intelligence Enterprise Edition performance, you need to monitor, analyze, and tune all the Fusion Middleware / BI components. This guide describes the tools that you can use to monitor performance and the techniques for optimizing the performance of Oracle® Business Intelligence Enterprise Edition components. Click to Download the OBIEE Infrastructure Tuning Whitepaper (right-click or option-click the link and choose "Save As..." to download the file).
    Disclaimer: All tuning information stated in this guide is for orientation only; every modification has to be tested and its impact monitored and analyzed. Before implementing any of the tuning settings, it is recommended to carry out end-to-end performance testing: obtain baseline performance data for the default configuration, make incremental changes to the tuning settings, and then collect performance data again. Otherwise the changes may worsen system performance.

    Read the article

  • Troubleshooting Application Timeouts in SQL Server

    - by Tara Kizer
    I recently received the following email from a blog reader: "We have an OLTP database instance running SQL Server 2005 with light to moderate traffic (10-20 requests/min). There are also bulk imports that occur at regular intervals in this DB, and the import duration ranges between 10 secs and 1 min, depending on the data size. Intermittently (2-3 times in a week), we face an issue where queries get timed out (default of 30 secs set in the application). On analyzing, we found two stored procedures, containing queries with multiple table joins, taking a long time (5-10 mins) to execute, when ideally the execution duration ranges between 5-10 secs. The execution plan showed a Clustered Index Scan happening instead of a Clustered Index Seek. All required indexes are present and index fragmentation is minimal, as we rebuild indexes regularly along with updating statistics. With no other alternatives occurring to us, we restarted SQL Server and thereafter the performance was back on track. But sometimes it was still giving timeout errors for some hits, so we also restarted IIS and that has stopped the problem as of now." Rather than respond directly to the blog reader, I thought it would be more interesting to share my thoughts on this issue in a blog. There are a few things that I can think of that could cause abnormal timeouts:
    - Blocking
    - A bad plan in cache
    - Outdated statistics
    - A hardware bottleneck
    To determine if blocking is the issue, we can easily run sp_who/sp_who2 or a query directly on sysprocesses (select * from master..sysprocesses where blocked <> 0). If blocking is present and consistent, then you'll need to determine whether or not to kill the parent blocking process. Killing a process will cause its transaction to roll back, so you need to proceed with caution. Killing the parent blocking process is only a temporary solution, so you'll need to do more thorough analysis to figure out why the blocking was present. You should look into missing indexes and perhaps consider changing the database's isolation level to READ_COMMITTED_SNAPSHOT. The blog reader mentions that the execution plan shows a clustered index scan when a clustered index seek is normal for the stored procedure. A clustered index scan might have been chosen either because that is what is in cache already or because of out-of-date statistics. The blog reader mentions that bulk imports occur at regular intervals, so outdated statistics is definitely something that could cause this issue. The blog reader may need to update statistics after imports are done if the imports are changing a lot of data (greater than 10%). If the statistics are good, then the query optimizer might have chosen to scan rather than seek in a previous execution because the scan was determined to be less costly due to the value of an input parameter. If this parameter value is rare, then its execution plan in cache is what we call a bad plan. You want the best plan in cache for the most frequent parameter values. If a bad plan is a recurring problem on your system, then you should consider rewriting the stored procedure. You might want to break up the code into multiple stored procedures so that each can have a different execution plan in cache. To remove a bad plan from cache, you can recompile the stored procedure. An alternative method is to run DBCC FREEPROCCACHE, which drops the procedure cache.
It is better to recompile stored procedures rather than dropping the procedure cache as dropping the procedure cache affects all plans in cache rather than just the ones that were bad, so there will be a temporary performance penalty until the plans are loaded into cache again. To determine if there is a hardware bottleneck occurring such as slow I/O or high CPU utilization, you will need to run Performance Monitor on the database server.  Hopefully you already have a baseline of the server so you know what is normal and what is not.  Be on the lookout for I/O requests taking longer than 12 milliseconds and CPU utilization over 90%.  The servers that I support typically are under 30% CPU utilization, but your baseline could be higher and be within a normal range. If restarting the SQL Server service fixes the problem, then the problem was most likely due to blocking or a bad plan in the procedure cache.  Rather than restarting the SQL Server service, which causes downtime, the blog reader should instead analyze the above mentioned things.  Proceed with caution when restarting the SQL Server service as all transactions that have not completed will be rolled back at startup.  This crash recovery process could take longer than normal if there was a long-running transaction running when the service was stopped.  Until the crash recovery process is completed on the database, it is unavailable to your applications. If restarting IIS fixes the problem, then the problem might not have been inside SQL Server.  Prior to taking this step, you should do analysis of the above mentioned things. If you can think of other reasons why the blog reader is facing this issue a few times a week, I'd love to hear your thoughts via a blog comment.
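
    As a small illustration of the blocking check described above, here is a rough Python sketch that polls sysprocesses the same way the inline query does; pyodbc, the driver name and the connection string are my placeholders, not something from the post.

        import pyodbc

        # Placeholders: point this at the instance you are troubleshooting.
        CONN_STR = (
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=myserver;DATABASE=master;Trusted_Connection=yes;"
        )

        QUERY = """
        SELECT spid, blocked, waittime, lastwaittype, hostname, program_name
        FROM master..sysprocesses
        WHERE blocked <> 0;
        """

        with pyodbc.connect(CONN_STR) as conn:
            rows = conn.cursor().execute(QUERY).fetchall()

        if not rows:
            print("No blocked sessions right now.")
        else:
            for r in rows:
                # 'blocked' holds the spid of the session doing the blocking.
                print(f"spid {r.spid} blocked by spid {r.blocked} "
                      f"(wait {r.waittime} ms, {r.lastwaittype})")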

    Read the article

  • Oracle Data Integrator Demo Webcast - Next Webcast - November 21st, 2013

    - by Javier Puerta
    The ODI Product Management team will be hosting a demonstration webcast of Oracle Data Integrator regularly. We will be showing baseline functionality and covering special topics as requested by our customers. Attendance at these webcasts is open to customers and partners.
    Webcast format (the same format is followed for each presentation): 05 minutes - Background & Overview; 30 minutes - Introduction to ODI Features; 15 minutes - Drill-Down into Special Topics; 10 minutes - Questions and Answers.
    Next webcast special topic: Oracle Data Integrator 12c.
    Webcast details: Thursday, November 21st 2013, 10:00 AM PST | 1:00 PM EST | 6:00 PM CET (1 hour). Web conference link: 594 942 837 (https://oracleconferencing.webex.com). Dial-in number: AMER: 1-866-682-4770 (More Numbers). Phone meeting ID/passcode: 3096713/505638.
    More information on Oracle Data Integrator (ODI): learn more about Oracle Data Integrator, download Oracle Data Integrator 12c, or browse the Oracle Data Integrator Webcast Archive.

    Read the article

  • What's New in OIC Analytics 11g?

    - by LuciaC
    Oracle Incentive Compensation (OIC) Analytics for Oracle Data Integrator (ODI) breaks down traditional front office and back office silos, bringing sales performance data together with those responsible for the sale and with selling costs. It is a framework for Sales Performance Management based on a data mart of key performance metrics, regardless of whether or not these metrics are incentivized. Commissionable metrics are brought into OIC for commission calculation and brought back to enrich the performance data mart. Executives and Product Marketing/Product Line Managers are provided with actionable sales performance analytics. Incentivized sales reps and partners are provided with commission dashboards on a frequent basis to inform them how they are doing and how far they are from their goals. OIC Analytics is now certified with 11g and has additional features. Oracle continues to invest in OIC Analytics, but the baseline for the investments will be the 11gR1 certification version of OIC Analytics. Read about what's new and the certification details in Doc ID 1590729.1.

    Read the article

  • Working with Primary Keys and Generators - Quickstart with NHibernate (Part 4)

    - by BobPalmer
    In this NHibernate tutorial, I'll be digging into the ID tag and Generator classes.  I had originally planned on finishing up a series on relationships (parent/child, etc.) but felt this would be an interesting topic for folks, and I also wanted to start integrating some of the current NHibernate reference. Since this article also includes some reference sections (and since I have not had a chance to check for every possible parameter value), I used the current reference as a baseline, and would welcome any feedback or technical updates that I can incorporate. You can find the entire article up on Google Docs at this link: http://docs.google.com/Doc?id=dg3z7qxv_24f3ch2rf7 As always, feedback, suggestions, and technical corrections are greatly appreciated! Enjoy! - Bob

    Read the article

  • Validation and Verification explanation (Boehm) - I cannot understand its point

    - by user970696
    Hopefully this is my last thread about V&V, as I found that B. Boehm's text is one I just do not understand well (likely my technical English is not that good): http://csse.usc.edu/csse/TECHRPTS/1979/usccse79-501/usccse79-501.pdf Basically he says that verification is about checking that products derived from the requirements baseline correspond to it, and that any deviation leads only to changes in those derived products (design, code). But he also says verification begins with design and ends with acceptance tests (you can see the V model inside). The thing is, I have accepted the ISO 12207 view that all testing is validation, yet that does not make sense here: in order to be sure the product complies with the requirements (acceptance test), I need to test it. He also says that validation problems mean the requirements are bad and need to be changed, which is not what happens with the testing that testers do, since they just check correspondence with the requirements.

    Read the article

  • Is programming a SubCulture? [closed]

    - by Trufa
    I was going through this article: http://en.wikipedia.org/wiki/Subculture which got me thinking: is programming a subculture? After a while I started thinking about it really hard, and if you go in depth this is a very complex and interesting question to ask. You can even ask yourself whether (heavy) internet (social) users are a subculture and programmers a culture within it. I think it might be an interesting discussion; hope you like it! NOTE: I linked the wiki article because it might be a good baseline; maybe you can base your answer on Ken Gelder's proposal for distinguishing subcultures. But it should be based on a little bit more than intuition. Thanks in advance! Trufa

    Read the article

  • Requirements Gathering Form?

    - by Daveo
    Do you use a standard form to gather requirements from a customer prior to making a website? If so, what are the questions you ask? For example:
    - Purpose of the website? (sell products, reduce the number of enquiries, reduce workload, etc.)
    - Do you have an existing logo? (Can you send it via email?)
    - Do you have a preferred font for the website text?
    - Do you have a colour theme from your logo, business cards or shop that you would like to keep on the website?
    - Do you have photos of your products that you can provide?
    - etc.
    Of course it would need to be tweaked slightly for each customer, but I was looking for a generic document as a baseline.

    Read the article

  • Very original V&V explanation (Boehm) - I cannot understand its point

    - by user970696
    Hopefully this is my last thread about V&V, as I found that B. Boehm's text is one I just do not understand well (likely my technical English is not that good): http://csse.usc.edu/csse/TECHRPTS/1979/usccse79-501/usccse79-501.pdf Basically he says that verification is about checking that products derived from the requirements baseline correspond to it, and that any deviation leads only to changes in those derived products (design, code). But he also says verification begins with design and ends with acceptance tests (you can check the V model inside). The thing is, I have accepted the ISO 12207 view that all testing is validation, yet that does not make sense here: in order to be sure the product complies with the requirements (acceptance test), I need to test it. He also says that validation problems mean the requirements are bad and need to be changed, which is not what happens with the testing that testers do, since they just check correspondence with the requirements.

    Read the article

  • JDK 7u10 Released !

    - by user9148683
    The Java Development Kit 7 Update 10 (JDK 7u10) release is now live! You can download it from the Java SE Downloads page. The Java™ SE Development Kit 7, Update 10 Release Notes contains information about this release. The highlights of this release include:
    - New certified system configurations: Mac OS X 10.8 and Windows 8.
    - Security feature enhancements: the ability to disable any Java application from running in the browser (this mode can be set in the Java Control Panel or, on the Microsoft Windows platform only, using a command-line install argument), and new dialogs to warn you when the JRE is insecure (either expired or below the security baseline) and needs to be updated.
    The documentation at Setting the Level of Security for the Java Client and Java Control Panel explains these features in detail.

    Read the article

  • Home screen widget size for large screen or hdpi?

    - by kknight
    From the Android widget design guidelines, http://developer.android.com/guide/practices/ui_guidelines/widget_design.html, we know that the home screen has a 4*4 grid of cells and that, in portrait orientation, each cell is 80 pixels wide by 100 pixels tall. I think these figures are for the baseline HVGA screen. What about large screens and hdpi screens: do they still have a 4*4 grid for widgets, and is each cell in portrait orientation still 80 pixels * 100 pixels? Thanks.
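
    For what it is worth, the sizing rule I remember from the same guide is that a widget spanning n cells should request (74 * n) - 2 density-independent pixels, with the platform scaling dips to pixels by the screen's density factor, so on hdpi the cell count stays the same and only the pixel size changes. Treat that as a recollection to verify against the current docs, not a quote; a small sketch of the arithmetic, with the 74/-2 constants and the 1.5 hdpi factor as assumptions:

        # Cell-to-size arithmetic for home screen widgets.
        # The (74 * n) - 2 dp rule and the 1.5x hdpi factor are assumptions taken from
        # the old App Widgets guideline - verify them for your target release.
        def widget_min_size_dp(cells_wide, cells_tall):
            return 74 * cells_wide - 2, 74 * cells_tall - 2

        def dp_to_px(dp, density_factor):
            # density_factor: 1.0 for mdpi (the HVGA baseline), 1.5 for hdpi, 2.0 for xhdpi
            return round(dp * density_factor)

        w_dp, h_dp = widget_min_size_dp(4, 1)                   # a 4x1 widget
        print(w_dp, h_dp)                                       # 294 72 (dp)
        print(dp_to_px(w_dp, 1.5), dp_to_px(h_dp, 1.5))         # 441 108 (px on an hdpi screen)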

    Read the article

  • Creating an image of an Android phone

    - by Danny
    Does anyone know how to create a 1:1 disk image of an Android phone? I am taking a forensics course and our final project involves creating tools to recover information from a suspect's Android-based phone; to do this we need to be able to create a 1:1 image of the phone's disk for baseline comparisons. Also, would the image be loadable into an AVD?

    Read the article

  • How to Create Own HashMap in Java?

    - by Taranfx
    I know about hashing algorithms and hashCode(), which convert a "key" into an equivalent integer (using some mathematically random-looking expression) that is then compressed into a bucket index and stored. But can someone point me to an implementation, or at least the data structure that should be used as a baseline? I haven't found one anywhere on the web.
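
    Not Java, but as a language-agnostic baseline, here is a rough sketch (Python, purely for brevity; the class and method names are mine) of the structure most implementations share: an array of buckets, a hash value compressed into a bucket index, separate chaining for collisions, and a resize once the load factor passes a threshold. The java.util.HashMap source in the JDK is the obvious reference to compare against.

        class MyHashMap:
            """Toy hash map: separate chaining, resize at load factor 0.75."""

            def __init__(self, capacity=16):
                self._buckets = [[] for _ in range(capacity)]
                self._size = 0

            def _index(self, key):
                # hash() plays the role of hashCode(); modulo compresses it to a bucket index.
                return hash(key) % len(self._buckets)

            def put(self, key, value):
                bucket = self._buckets[self._index(key)]
                for i, (k, _) in enumerate(bucket):
                    if k == key:
                        bucket[i] = (key, value)      # overwrite an existing key
                        return
                bucket.append((key, value))
                self._size += 1
                if self._size / len(self._buckets) > 0.75:
                    self._resize()

            def get(self, key, default=None):
                for k, v in self._buckets[self._index(key)]:
                    if k == key:
                        return v
                return default

            def _resize(self):
                old_items = [item for bucket in self._buckets for item in bucket]
                self._buckets = [[] for _ in range(2 * len(self._buckets))]
                self._size = 0
                for k, v in old_items:
                    self.put(k, v)

        m = MyHashMap()
        m.put("answer", 42)
        print(m.get("answer"))   # 42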

    Read the article

  • Lifting a math symbol in LaTeX

    - by Chris Conway
    I'm using the symbol \otimes as a unary operator and its vertical alignment doesn't seem right to me: it wants to sit a bit below the baseline. I tried using \raisebox to fix this, e.g., \raisebox{1pt}{$\otimes$}, but \raisebox doesn't seem to be sensitive to subscripts: the operator stays the same size while everything around it shrinks. The problem, I think, is that \raisebox creates its own LR box, which doesn't inherit the settings of the surrounding math environment. Is there a version of \raisebox that "respects math"?
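
    A sketch of one common workaround (untested in your document, so adjust the raise amounts to taste): let \mathchoice supply the right math style in each of TeX's four contexts, so the raised symbol shrinks along with subscripts, and wrap the result in \mathord (or \mathbin, depending on the spacing you want).

        % A raised \otimes that still tracks the surrounding math style.
        % The raise amounts are to taste; only the \mathchoice structure matters.
        \newcommand{\rotimes}{\mathord{\mathchoice
          {\raisebox{1pt}{$\displaystyle\otimes$}}
          {\raisebox{1pt}{$\textstyle\otimes$}}
          {\raisebox{0.7pt}{$\scriptstyle\otimes$}}
          {\raisebox{0.5pt}{$\scriptscriptstyle\otimes$}}}}

        % Usage: $\rotimes A$ and $X_{\rotimes A}$ now scale consistently.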

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting. Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (eg, uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards). Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central as it is physically closest to us) and do not represent intra-cloud results… we have performed intra-cloud tests and the overall results are similar in notion but the data rates are significantly different as well as the tipping points for the various block sizes… this will be detailed separately). We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files for the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here. The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially and thereby spreading the affects of periodic Internet delays across the collection of results.  We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results. 
    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see the linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process accompanies the original post.
    We then tested the effects of blocking and parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn't worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.
    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a "negative optimization" due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as is supported in the raw data provided in the linked worksheet), the charts and discussion below ignore source file sizes less than 1MB.
    The first chart in the original post illustrates some interesting points about the results:
    - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    - For some of the moderately-sized source files, small blocks (256KB) are best.
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, the increased number of individual transfer requests, and reassembly/committal costs).
    - Once you pass the 250MB source file size, the difference in rate between 1MB and 4MB blocks is more-or-less constant.
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.
    The second chart is another view of the same data, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but also highlights the benefits of some of the other block sizes at different source file sizes.
    The last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than this view of the data highlights the negative effects of poorly choosing a block size for smaller files.
    Summary: What we have found so far is that blocking your file uploads and uploading the blocks in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.
    Related Resources:
    - Source code for the upload test application
    - Source code for the random file generator
    - OData feed of raw data from the non-optimized transfer tests: Experiment Metadata; Experiment Datasets (2KB, 32KB, 64KB, 128KB, 256KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB and 1GB uploads); Raw Data
    - OData feeds of raw data from the blocked/parallelized transfer tests: Experiment Metadata; Experiment Datasets; Raw Data (256KB, 512KB, 1MB, 2MB and 4MB blocks)
    - Excel worksheet showing summarizations and comparisons
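
    For readers outside .NET, the block-and-parallelize pattern itself is easy to sketch in other stacks. Below is a rough Python equivalent using the azure-storage-blob package and a thread pool; the connection string, container and blob names are placeholders, the exact SDK calls (stage_block/commit_block_list) are from memory rather than from the post, and it omits the per-block MD5 check described above, so treat it as an outline of the approach rather than a port of the test harness.

        import base64
        import os
        import uuid
        from concurrent.futures import ThreadPoolExecutor

        from azure.storage.blob import BlobBlock, BlobClient

        BLOCK_SIZE = 1 * 1024 * 1024   # 1 MB: the best all-round block size in the tests above

        def upload_blocked(path, conn_str, container, blob_name):
            blob = BlobClient.from_connection_string(conn_str, container, blob_name)

            # Split the file into (block_id, offset, length) descriptors.
            size = os.path.getsize(path)
            blocks, offset = [], 0
            while offset < size:
                block_id = base64.b64encode(uuid.uuid4().bytes).decode()
                blocks.append((block_id, offset, min(BLOCK_SIZE, size - offset)))
                offset += BLOCK_SIZE

            def put_block(block):
                block_id, off, length = block
                with open(path, "rb") as f:      # separate handle per worker thread
                    f.seek(off)
                    blob.stage_block(block_id, f.read(length))

            # Upload the blocks in parallel, then commit the list to assemble the blob.
            with ThreadPoolExecutor(max_workers=8) as pool:
                list(pool.map(put_block, blocks))
            blob.commit_block_list([BlobBlock(block_id=b[0]) for b in blocks])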

    Read the article

  • VMWare ESX Updates - Which to Apply?

    - by Aaron Alton
    Wondering what more experienced ESX admins typically do... I just brought our ESX hosts up to 3.5 Update 5 (Yes, I know we're behind still). I then applied the "Critical Host Updates" baseline in VMWare update manager, and found that we're still short on 14 "critical updates". My question is, do most people go ahead and apply any update flagged as critical, or do they evaluate each update one-by-one to determine whether or not the issue that has been addressed is likely to affect them. In the SQL Server world (my alma mater, so to speak), we regularly apply service packs, and sometimes cumulative updates, but we only apply hotfixes when the issue that they are targeted towards affects us. Does the same logic hold fast in VMWare land?

    Read the article

  • Create a Windows Image for Deployment

    - by Kiranu
    In my company we have 8 laptops that we deploy in the field. These machines get assigned to a user for a certain time and run Windows Vista. All the machines are the same model. After a machine is returned, it is company policy to completely format it and go back to a predetermined configuration. Right now, what we do is use the recovery utility on the laptop (we are a small shop, so we use the OEM Windows license that the laptops come with) and then manually uninstall software and change the configuration to bring it to our baseline config. I know that there are ways to create an image that gets copied to the hard drive with a specific configuration and specific software installed (that's what OEMs do, right?). I'm looking for a tool or a tutorial that explains as simply as possible how to create such an image. Thanks a lot.

    Read the article

  • best practice? Consumer data in MySQL on Amazon EBS (Elastic block store)

    - by jeff7091
    This is a consumer app, so I care about storage costs: I don't want 5x copies of the data lying about. The app shards very well, so I can use MySQL and not have scaling issues. Amazon EBS has a nice baseline+snapshot backup capability that uses S3, which should have a light footprint in terms of storage cost. BUT: the magnolia.com story scares the crap out of me: a basically flawless block-level backup of an already corrupt DB or filesystem. Is there anything that is nearly as storage-efficient as EBS but works at the MySQL level?

    Read the article

  • MBSA: failed to create empty document

    - by Scott
    We just purchased a Windows-based VPS that I've been tasked to set up as a web server. It's running Windows 2003 Server Datacenter Edition. I downloaded the latest version of Microsoft Baseline Security Analyzer and installed it, but when I try to run it I'm given an error message "Failed to create empty document." A search on Google gave the suggestion to change the path of the TEMP and TMP environment variables, which I tried but it made no difference. I also saw suggestions that this problem is caused by MMC, but I was just in MMC setting up a user account. What am I missing?

    Read the article

  • XML Schema For MBSA Reports

    - by Steve Hawkins
    I'm in the process of creating a script to run the command-line version of Microsoft Baseline Security Analyzer (mbsacli.exe) against all of our servers. Since the MBSA reports are provided as XML documents, I should be able to write a script or small program to parse the XML looking for errors / issues. I'm wondering if anyone knows whether or not the XML schema for the MBSA reports is documented anywhere -- I have googled this, and can't seem to find any trace of it. I've run across a few articles that address bits and pieces, but nothing that addresses the complete schema. Yes, I could just reverse-engineer the XML, but I would like to understand a little more about the meaning of some of the tags. Thanks...
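
    In the absence of an official schema, one practical move is to dump the element and attribute names from a real report before deciding what to key on. Below is a rough Python sketch; the 'Check', 'Grade' and 'Name' names used in the second pass are guesses about the mbsacli output layout, so verify them against the inventory the first pass prints.

        import sys
        import xml.etree.ElementTree as ET
        from collections import Counter

        root = ET.parse(sys.argv[1]).getroot()   # path to an mbsacli XML report

        # Pass 1: inventory every element tag and its attribute names to map out the schema.
        inventory = Counter()
        for elem in root.iter():
            inventory[(elem.tag, tuple(sorted(elem.attrib)))] += 1
        for (tag, attrs), count in sorted(inventory.items()):
            print(f"{count:5d}  <{tag}>  attrs={list(attrs)}")

        # Pass 2 (guesswork): flag checks whose grade looks like a failure. The element
        # and attribute names here are assumptions - adjust them after reading pass 1.
        for check in root.iter("Check"):
            grade = check.get("Grade")
            if grade is not None and grade not in ("1", "2"):   # thresholds are guesses too
                print("Possible issue:", check.get("Name"), "grade", grade)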

    Read the article

  • Where default settings are stored after applying GPO?

    - by tester5566
    When I apply a GPO that changes service startup settings, where are the default service startup settings kept? And how can I read and modify them? The reason for the question is that I have a hundred servers where most services are disabled by a baseline GPO for hardening purposes. I want to relax this GPO by removing some services from it, but I do not want the service startup settings to revert to the defaults after the GPO is relaxed. In other words, I want to keep the current hardened state as the default state but allow local admins to change it if necessary. Thank you.
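
    As far as I know, the effective start types live in the registry under HKLM\SYSTEM\CurrentControlSet\Services\<service>\Start (2 = Automatic, 3 = Manual, 4 = Disabled) and a GPO simply overwrites those values, so there is no separate stash of "defaults" to fall back on; the usual workaround is to snapshot the current values before relaxing the GPO. A small sketch of that snapshot, using Python with the standard winreg module (the output file name is mine):

        import csv
        import winreg

        SERVICES_KEY = r"SYSTEM\CurrentControlSet\Services"
        START_NAMES = {0: "Boot", 1: "System", 2: "Automatic", 3: "Manual", 4: "Disabled"}

        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SERVICES_KEY) as services, \
             open("service_baseline.csv", "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(["service", "start_value", "start_name"])
            i = 0
            while True:
                try:
                    name = winreg.EnumKey(services, i)
                except OSError:                  # no more subkeys
                    break
                i += 1
                try:
                    with winreg.OpenKey(services, name) as svc:
                        start, _ = winreg.QueryValueEx(svc, "Start")
                except OSError:                  # entries without a Start value
                    continue
                writer.writerow([name, start, START_NAMES.get(start, "?")])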

    Read the article
