Search Results

Search found 8367 results on 335 pages for 'temporal difference'.


  • Which order to define getters and setters in? [closed]

    - by N.N.
    Is there a best practice for the order in which to define getters and setters? There seem to be two practices: getter/setter pairs, or all getters first, then all setters (or the other way around). To illustrate the difference, here is a Java example of getter/setter pairs: public class Foo { private int var1, var2, var3; public int getVar1() { return var1; } public void setVar1(int var1) { this.var1 = var1; } public int getVar2() { return var2; } public void setVar2(int var2) { this.var2 = var2; } public int getVar3() { return var3; } public void setVar3(int var3) { this.var3 = var3; } } And here is a Java example of first getters, then setters: public class Foo { private int var1, var2, var3; public int getVar1() { return var1; } public int getVar2() { return var2; } public int getVar3() { return var3; } public void setVar1(int var1) { this.var1 = var1; } public void setVar2(int var2) { this.var2 = var2; } public void setVar3(int var3) { this.var3 = var3; } } I think the latter ordering is clearer both in code and in class diagrams, but I do not know if that is enough to rule out the other ordering.

    Read the article

  • jQuery: small refactoring of repeated JSON calls

    - by Alexander Corotchi
    Hi everybody, I need your suggestions for refactoring this jQuery code, because right now it looks terrible to me. I have 4 JSON calls, but the only difference between them is the URL. For example: var userId = MyuserID; var perPage = '45'; var showOnPage = '45'; var tag = 'tag1'; var tag1 = 'tag2'; var tag2 = 'tagn'; $.getJSON('http://api.flickr.com/services/rest/?format=json&method='+ 'flickr.photos.search&api_key=' + apiKey + '&user_id=' + userId + '&tags=' + tag + '&per_page=' + perPage + '&jsoncallback=?', function(data){ var classShown = 'class="lightbox"'; var classHidden = 'class="lightbox hidden"'; $.each(data.photos.photo, function(i, rPhoto){ var basePhotoURL = 'http://farm' + rPhoto.farm + '.static.flickr.com/' + rPhoto.server + '/' + rPhoto.id + '_' + rPhoto.secret; var thumbPhotoURL = basePhotoURL + '_s.jpg'; var mediumPhotoURL = basePhotoURL + '.jpg'; var photoStringStart = '<li><a '; var photoStringEnd = 'title="' + rPhoto.title + '" href="'+ mediumPhotoURL +'"><img src="' + thumbPhotoURL + '" alt="' + rPhoto.title + '"/></a><span>'+rPhoto.title+'</span></li>'; var photoString = (i < showOnPage) ? photoStringStart + classShown + photoStringEnd : photoStringStart + classHidden + photoStringEnd; $(photoString).appendTo("#flickr ul"); }); $("#flickr a").fancybox(); }); $.getJSON('http://api.flickr.com/services/rest/?format=json&method='+ 'flickr.photos.search&api_key=' + apiKey + '&user_id=' + userId + '&tags=' + tag1 + '&per_page=' + perPage + '&jsoncallback=?', function(data){ var classShown = 'class="lightbox"'; var classHidden = 'class="lightbox hidden"'; $.each(data.photos.photo, function(i, rPhoto){ var basePhotoURL = 'http://farm' + rPhoto.farm + '.static.flickr.com/' + rPhoto.server + '/' + rPhoto.id + '_' + rPhoto.secret; var thumbPhotoURL = basePhotoURL + '_s.jpg'; var mediumPhotoURL = basePhotoURL + '.jpg'; var photoStringStart = '<li><a '; var photoStringEnd = 'title="' + rPhoto.title + '" href="'+ mediumPhotoURL +'"><img src="' + thumbPhotoURL + '" alt="' + rPhoto.title + '"/></a><span>'+rPhoto.title+'</span></li>'; var photoString = (i < showOnPage) ? photoStringStart + classShown + photoStringEnd : photoStringStart + classHidden + photoStringEnd; $(photoString).appendTo(".SetPinos1 ul"); }); $(".Sets a").fancybox(); }); $.getJSON('http://api.flickr.com/services/rest/?format=json&method='+ 'flickr.photos.search&api_key=' + apiKey + '&user_id=' + userId + '&tags=' + tag2 + '&per_page=' + perPage + '&jsoncallback=?', function(data){ var classShown = 'class="lightbox"'; var classHidden = 'class="lightbox hidden"'; $.each(data.photos.photo, function(i, rPhoto){ var basePhotoURL = 'http://farm' + rPhoto.farm + '.static.flickr.com/' + rPhoto.server + '/' + rPhoto.id + '_' + rPhoto.secret; var thumbPhotoURL = basePhotoURL + '_s.jpg'; var mediumPhotoURL = basePhotoURL + '.jpg'; var photoStringStart = '<li><a '; var photoStringEnd = 'title="' + rPhoto.title + '" href="'+ mediumPhotoURL +'"><img src="' + thumbPhotoURL + '" alt="' + rPhoto.title + '"/></a><span>'+rPhoto.title+'</span></li>'; var photoString = (i < showOnPage) ? photoStringStart + classShown + photoStringEnd : photoStringStart + classHidden + photoStringEnd; $(photoString).appendTo(".SetPinos ul"); }); $(".Sets a").fancybox(); }); The tag variable is the only difference in these URLs. Can somebody help me avoid repeating all this code? Sorry for dumping so much of it :(
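
    One way to collapse the repeated calls into a single helper - sketched here in TypeScript-flavoured jQuery, untested, with the selector names taken from the code above:

        // The code above is loaded on a page with jQuery and the fancybox plugin.
        declare const $: any;
        declare const apiKey: string, userId: string, perPage: string, showOnPage: string;
        declare const tag: string, tag1: string, tag2: string;

        function loadFlickrPhotos(tag: string, listSelector: string, lightboxSelector: string): void {
          const url = 'http://api.flickr.com/services/rest/?format=json' +
                      '&method=flickr.photos.search&api_key=' + apiKey +
                      '&user_id=' + userId + '&tags=' + tag +
                      '&per_page=' + perPage + '&jsoncallback=?';
          $.getJSON(url, function (data: any) {
            $.each(data.photos.photo, function (i: number, p: any) {
              const base = 'http://farm' + p.farm + '.static.flickr.com/' +
                           p.server + '/' + p.id + '_' + p.secret;
              const cls  = i < Number(showOnPage) ? 'lightbox' : 'lightbox hidden';
              $('<li><a class="' + cls + '" title="' + p.title + '" href="' + base + '.jpg">' +
                '<img src="' + base + '_s.jpg" alt="' + p.title + '"/></a>' +
                '<span>' + p.title + '</span></li>').appendTo(listSelector + ' ul');
            });
            $(lightboxSelector + ' a').fancybox();
          });
        }

        // One call per gallery instead of one copied block per gallery:
        loadFlickrPhotos(tag,  '#flickr',    '#flickr');
        loadFlickrPhotos(tag1, '.SetPinos1', '.Sets');
        loadFlickrPhotos(tag2, '.SetPinos',  '.Sets');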

    Read the article

  • Padding error - when using AES Encryption in Java and Decryption in C

    - by user234445
    Hi all, I have a problem decrypting an Excel file in the rijndael C code (the file was encrypted in Java through JCE), and it only happens for Excel files that contain formulas. Encryption/decryption works properly for all other file types. (If I decrypt the same file in Java, the output comes out fine.) When I dump the files I can see the difference between the Java decryption and the C decryption. od -c -b filename (file decrypted in C) 0034620 005 006 \0 \0 \0 \0 022 \0 022 \0 320 004 \0 \0 276 4 005 006 000 000 000 000 022 000 022 000 320 004 000 000 276 064 0034640 \0 \0 \0 \0 \f \f \f \f \f \f \f \f \f \f \f \f 000 000 000 000 014 014 014 014 014 014 014 014 014 014 014 014 0034660 od -c -b filename (file decrypted in Java) 0034620 005 006 \0 \0 \0 \0 022 \0 022 \0 320 004 \0 \0 276 4 005 006 000 000 000 000 022 000 022 000 320 004 000 000 276 064 0034640 \0 \0 \0 \0 000 000 000 000 0034644 (the above is the difference between the dumped files) The following Java code was used to encrypt the file: public class AES { /** * Turns array of bytes into string * * @param buf Array of bytes to convert to hex string * @return Generated hex string */ public static void main(String[] args) throws Exception { File file = new File("testxls.xls"); byte[] lContents = new byte[(int) file.length()]; try { FileInputStream fileInputStream = new FileInputStream(file); fileInputStream.read(lContents); } catch (FileNotFoundException e) { e.printStackTrace(); } catch (IOException e1) { e1.printStackTrace(); } try { KeyGenerator kgen = KeyGenerator.getInstance("AES"); kgen.init(256); // 192 and 256 bits may not be available // Generate the secret key specs. SecretKey skey = kgen.generateKey(); //byte[] raw = skey.getEncoded(); byte[] raw = "aabbccddeeffgghhaabbccddeeffgghh".getBytes(); SecretKeySpec skeySpec = new SecretKeySpec(raw, "AES"); Cipher cipher = Cipher.getInstance("AES"); cipher.init(Cipher.ENCRYPT_MODE, skeySpec); byte[] encrypted = cipher.doFinal(lContents); cipher.init(Cipher.DECRYPT_MODE, skeySpec); byte[] original = cipher.doFinal(encrypted); FileOutputStream f1 = new FileOutputStream("testxls_java.xls"); f1.write(original); } catch (Exception e) { // TODO Auto-generated catch block e.printStackTrace(); } } } I used the following file for decryption in C: #include <stdio.h> #include "rijndael.h" #define KEYBITS 256 int main(int argc, char **argv) { unsigned long rk[RKLENGTH(KEYBITS)]; unsigned char key[KEYLENGTH(KEYBITS)]; int i; int nrounds; char dummy[100] = "aabbccddeeffgghhaabbccddeeffgghh"; char *password; FILE *input,*output; password = dummy; for (i = 0; i < sizeof(key); i++) key[i] = *password != 0 ? *password++ : 0; input = fopen("doc_for_logu.xlsb", "rb"); if (input == NULL) { fputs("File read error", stderr); return 1; } output = fopen("ori_c_res.xlsb","w"); nrounds = rijndaelSetupDecrypt(rk, key, 256); while (1) { unsigned char plaintext[16]; unsigned char ciphertext[16]; int j; if (fread(ciphertext, sizeof(ciphertext), 1, input) != 1) break; rijndaelDecrypt(rk, nrounds, ciphertext, plaintext); fwrite(plaintext, sizeof(plaintext), 1, output); } fclose(input); fclose(output); }
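
    Note what the od dump is showing: the C output ends in twelve 014 (0x0C) bytes. Java's Cipher.getInstance("AES") defaults to AES/ECB/PKCS5Padding, so the last block carries N pad bytes each of value N, and Java's doFinal strips them on decryption, while the C loop writes every decrypted block verbatim. A minimal sketch of the unpadding step (TypeScript for illustration; in the C program the equivalent is trimming the final block before the last fwrite):

        // PKCS#5/PKCS#7 unpadding: the value of the last byte is the pad length.
        function stripPkcs5Padding(plain: Uint8Array): Uint8Array {
          const n = plain[plain.length - 1];            // e.g. 0x0C = 12 pad bytes
          if (n < 1 || n > 16) throw new Error('corrupt padding');
          for (let i = plain.length - n; i < plain.length; i++) {
            if (plain[i] !== n) throw new Error('corrupt padding');
          }
          return plain.subarray(0, plain.length - n);   // drop the pad bytes
        }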

    Read the article

  • "Local transaction already has 1 non-XA Resource: cannot add more resources" error

    - by jthg
    After reading previous questions about this error, it seems like all of them conclude that you need to enable XA on all of the data sources. But: What if I don't want a distributed transaction? What would I do if I want to start transactions on two different databases at the same time, but commit the transaction on one database and roll back the transaction on the other? I'm wondering how my code actually initiated a distributed transaction. It looks to me like I'm starting completely separate transactions on each of the databases. Info about the application: The application is an EJB running on a Sun Java Application Server 9.1. I use something like the following Spring context to set up the Hibernate session factories: <bean id="dbADatasource" class="org.springframework.jndi.JndiObjectFactoryBean"> <property name="jndiName" value="jdbc/dbA"/> </bean> <bean id="dbASessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"> <property name="dataSource" ref="dbADatasource" /> <property name="hibernateProperties"> [hibernate properties...] </property> <property name="mappingResources"> [mapping resources...] </property> </bean> <bean id="dbBDatasource" class="org.springframework.jndi.JndiObjectFactoryBean"> <property name="jndiName" value="jdbc/dbB"/> </bean> <bean id="dbBSessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"> <property name="dataSource" ref="dbBDatasource" /> <property name="hibernateProperties"> [hibernate properties...] </property> <property name="mappingResources"> [mapping resources...] </property> </bean> Both of the JNDI resources are javax.sql.ConnectionPoolDataSources. They actually both point to the same connection pool, but we have two different JNDI resources because there's the possibility that the two groups of tables will move to different databases in the future. Then in code, I do: sessionA = dbASessionFactory.openSession(); sessionB = dbBSessionFactory.openSession(); sessionA.beginTransaction(); sessionB.beginTransaction(); The sessionB.beginTransaction() line produces the error in the title of this post - sometimes. I ran the app on two different Sun application servers. One runs it fine, the other throws the error. I don't see any difference in how the two servers are configured, although they do connect to different, but equivalent, databases. So the questions are: 1. Why doesn't the above code start completely independent transactions? 2. How can I force it to start independent transactions rather than a distributed transaction? 3. What configuration could cause the difference in behavior between the two application servers? Thanks.
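
    For contrast, here is what two genuinely independent local transactions look like outside a container - a sketch in TypeScript with node-postgres, purely to show the shape. Inside an EJB, the application server typically enlists every connection obtained from a managed datasource in the one container (JTA) transaction, and only one non-XA resource may join it - which is a likely reading of the error above:

        import { Client } from 'pg';

        // Two connections, each with its own BEGIN/COMMIT/ROLLBACK.
        // The connection strings are placeholders.
        async function independentTransactions(): Promise<void> {
          const dbA = new Client({ connectionString: 'postgres://localhost/dbA' });
          const dbB = new Client({ connectionString: 'postgres://localhost/dbB' });
          await dbA.connect();
          await dbB.connect();
          try {
            await dbA.query('BEGIN');
            await dbB.query('BEGIN');
            // ... do work on both ...
            await dbA.query('COMMIT');    // each outcome is decided separately:
            await dbB.query('ROLLBACK');  // commit one, roll back the other
          } finally {
            await dbA.end();
            await dbB.end();
          }
        }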

    Read the article

  • wpf exit thread automatically when application closes

    - by toni
    Hi, I have a main WPF window, and one of its controls is a user control that I have created. This user control is an analog clock and contains a thread that updates the hour, minute and second hands. Initially it wasn't a thread - it was a timer event that updated the hands - but the application does some heavy work when the user presses a start button, and then the clock stopped updating, so I changed it to a thread. Code snippet of the WPF window: <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="clr-namespace:GParts" xmlns:Microsoft_Windows_Themes="clr-namespace:Microsoft.Windows.Themes;assembly=PresentationFramework.Aero" xmlns:UC="clr-namespace:GParts.UserControls" x:Class="GParts.WinMain" Title="GParts" WindowState="Maximized" Closing="Window_Closing" Icon="/Resources/Calendar-clock.png" x:Name="WMain" > <...> <!-- this is my user control --> <UC:AnalogClock Grid.Row="1" x:Name="AnalogClock" Background="Transparent" Margin="0" Height="Auto" Width="Auto"/> <...> </Window> My problem is that when the user exits the application, the thread seems to continue executing. I would like the thread to finish automatically when the main window closes. Code snippet of the user control constructor: namespace GParts.UserControls { /// <summary> /// Interaction logic for AnalogClock.xaml /// </summary> public partial class AnalogClock : UserControl { System.Timers.Timer timer = new System.Timers.Timer(1000); public AnalogClock() { InitializeComponent(); MDCalendar mdCalendar = new MDCalendar(); DateTime date = DateTime.Now; TimeZone time = TimeZone.CurrentTimeZone; TimeSpan difference = time.GetUtcOffset(date); uint currentTime = mdCalendar.Time() + (uint)difference.TotalSeconds; christianityCalendar.Content = mdCalendar.Date("d/e/Z", currentTime, false); // this was before implementing the thread //timer.Elapsed += new System.Timers.ElapsedEventHandler(timer_Elapsed); //timer.Enabled = true; // The work to perform ThreadStart start = delegate() { // With this condition the thread exits when the main window closes, but // it still seems to keep executing after the application exits - in // Task Manager the CPU stays very busy while ((this.IsInitialized) && (this.Dispatcher.HasShutdownFinished == false)) { this.Dispatcher.Invoke(DispatcherPriority.Normal, (Action)(() => { DateTime hora = DateTime.Now; secondHand.Angle = hora.Second * 6; minuteHand.Angle = hora.Minute * 6; hourHand.Angle = (hora.Hour * 30) + (hora.Minute * 0.5); DigitalClock.CurrentTime = hora; })); } Console.Write("Quit ok"); }; // Create the thread and kick it started! new Thread(start).Start(); } // this was before implementing the thread void timer_Elapsed(object sender, System.Timers.ElapsedEventArgs e) { this.Dispatcher.Invoke(DispatcherPriority.Normal, (Action)(() => { DateTime hora = DateTime.Now; secondHand.Angle = hora.Second * 6; minuteHand.Angle = hora.Minute * 6; hourHand.Angle = (hora.Hour * 30) + (hora.Minute * 0.5); DigitalClock.CurrentTime = hora; })); } } // end class } // end namespace How can I make the thread exit automatically when the main window closes and the application exits? Thanks very much!

    Read the article

  • A rocket that follows the track's height (not a homing missile)

    - by confusedEj
    What I am trying to create is a rocket that hugs the track while travelling in a straight direction, i.e. the rocket travels straight ahead and can only orientate around its local x axis. This is so it can go up/down ramps and never hit the ground. Currently I am using PhysX, OpenGL and C++. This is the method I'm trying right now: 1. Ray cast downwards from a point ahead of the missile. 2. If the ray cast distance is less than the expected length, orientate up. 3. If the ray cast distance is more than the expected length, orientate down. The problem I am having is that my missile orientates by an arbitrary fixed angle (I'm giving it 1 degree). I think this is a bad approach because the frame rate is lower than I assumed, so the rocket can still run into a ramp. My main question is: is there a better way of approaching this, and how? NxVec3 frontRayLoc = m_rocketConfig->getValueForKey<NxVec3>("r_frontRayCastLocation"); float threshhold = m_rocketConfig->getValueForKey<float>("r_angleThreshhold"); float predRayCastHeight = m_rocketConfig->getValueForKey<float>("r_predRayCastHeight"); NxVec3 rayGlobalPos_1 = m_actor->getGlobalPosition() + m_actor->getGlobalOrientation() * frontRayLoc; NxVec3 dir = m_actor->getGlobalOrientation() * NxVec3(0,-1.0,0); NxReal dist1 = castRay(rayGlobalPos_1, dir); // Get the percentage difference float actualFrontHeight = abs(1 - (dist1/predRayCastHeight)); // See if the percentage difference is greater then threshold // Also check if we are being shot off track if ((actualFrontHeight > threshhold) && (dist1 != m_rayMaxDist)){ // Dip Down if (dist1 > predRayCastHeight){ printf("DOWN - Distance 1: %f\n", dist1); // Get axis of rotation NxVec3 newAxis = m_actor->getGlobalOrientation() * NxVec3(1.0,0,0.0); // Rotate based on that axis m_orientateAngle = -1.0 * m_orientateAngle; // For rotating clockwise NxQuat newOrientation(m_orientateAngle, newAxis); NxMat33 orientation(newOrientation); m_orientation = m_orientation * orientation; // Orientate the linear velocity to keep speed of rocket and direct away from road NxVec3 linVel = m_actor->getLinearVelocity(); m_actor->setLinearVelocity(m_orientation * linVel); } // Go Up else if (dist1 < predRayCastHeight){ printf("UP - Distance 1: %f\n", dist1); // Get axis of rotation NxVec3 newAxis = m_actor->getGlobalOrientation() * NxVec3(1.0,0,0.0); // Rotate around axis NxQuat newOrientation(m_orientateAngle, newAxis); m_actor->setGlobalOrientationQuat(newOrientation); NxMat33 orientation(newOrientation); m_orientation = m_orientation * orientation; // Orientate the linear velocity to keep speed of rocket and direct away from road NxVec3 linVel = m_actor->getLinearVelocity(); m_actor->setLinearVelocity(m_orientation*linVel); } m_actor->setGlobalOrientation(m_orientation); } Thanks for the support :)
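
    One common alternative to a fixed 1-degree step is a proportional correction scaled by the height error and the elapsed frame time, so a low frame rate produces a proportionally larger per-frame correction instead of a missed ramp. A minimal, engine-neutral sketch (TypeScript; the gain and clamp values are made-up tuning parameters, not values from the question):

        // Returns the angle (radians) to rotate about the local X axis this frame.
        function pitchCorrection(
          measuredHeight: number,   // downward raycast hit distance
          desiredHeight: number,    // expected ray length (r_predRayCastHeight)
          dt: number,               // seconds since last frame
          gain = 2.0,               // rad/s of pitch per unit of height error
          maxPitchRate = Math.PI    // clamp: at most 180 deg/s
        ): number {
          const error = desiredHeight - measuredHeight; // > 0: too low, pitch up
          const rate  = Math.max(-maxPitchRate, Math.min(maxPitchRate, gain * error));
          return rate * dt;         // frame-rate independent step
        }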

    Read the article

  • Building an external list while filtering in LINQ

    - by Khnle
    I have an array of input strings that contains either email addresses or account names in the form of domain\account. I would like to build a List of string that contains only email addresses. If an element in the input array is of the form domain\account, I will perform a lookup in the dictionary. If the key is found in the dictionary, that value is the email address. If not found, it won't get added to the result list. The code below will make the above description clear: private bool where(string input, Dictionary<string, string> dict) { if (input.Contains("@")) { return true; } else { try { string value = dict[input]; return true; } catch (KeyNotFoundException) { return false; } } } private string select(string input, Dictionary<string, string> dict) { if (input.Contains("@")) { return input; } else { try { string value = dict[input]; return value; } catch (KeyNotFoundException) { return null; } } } public void run() { Dictionary<string, string> dict = new Dictionary<string, string>() { { "gmail\\nameless", "[email protected]"} }; string[] s = { "[email protected]", "gmail\\nameless", "gmail\\unknown" }; var q = s.Where(p => where(p, dict)).Select(p => select(p, dict)); List<string> resultList = q.ToList<string>(); } While the above code works (hope I don't have any typo here), there are 2 problems that I do not like with the above: The code in where() and select() seems to be redundant/repetitive. It takes 2 passes. The second pass converts from the query expression to List. So I would like to add to the List resultList directly in the where() method. It seems like I should be able to do so. Here's the code: private bool where(string input, Dictionary<string, string> dict, List<string> resultList) { if (input.Contains("@")) { resultList.Add(input); //note the difference from above return true; } else { try { string value = dict[input]; resultList.Add(value); //note the difference from above return true; } catch (KeyNotFoundException) { return false; } } } Then my LINQ expression fits nicely in a single statement: List<string> resultList = new List<string>(); s.Where(p => where(p, dict, resultList)); Or var q = s.Where(p => where(p, dict, resultList)); //do nothing with q afterward Which seems like perfect and legal C# LINQ. The result: sometimes it works and sometimes it doesn't. So why doesn't my code work reliably, and how can I make it do so?
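
    A note on the unreliability: Where is lazily evaluated, so the side-effecting version only runs when something actually enumerates q, which is why it fires sometimes and not others. The filter and the projection can instead be collapsed into one side-effect-free pass with a single resolve step that yields the value or nothing - sketched below in TypeScript with hypothetical sample data; the same shape maps to C# as a SelectMany over a TryGetValue-based helper:

        // One pass, no side effects: resolve() yields the email or null,
        // and flatMap drops the nulls.
        function resolve(input: string, dict: Map<string, string>): string | null {
          if (input.includes('@')) return input;   // already an email address
          return dict.get(input) ?? null;          // domain\account lookup, or drop
        }

        const dict = new Map([['gmail\\nameless', 'nameless@example.com']]);
        const inputs = ['someone@example.com', 'gmail\\nameless', 'gmail\\unknown'];

        const resultList: string[] = inputs.flatMap(p => {
          const r = resolve(p, dict);
          return r === null ? [] : [r];
        });
        // resultList -> ['someone@example.com', 'nameless@example.com']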

    Read the article

  • Generated LinqtoSql Sql 5x slower than SAME EXACT hand-written sql

    - by JasonM
    I have a sql statement which is hardcoded in an existing VB6 app. I'm upgrading a new version in C# and using Linq To Sql. I was able to get LinqToSql to generate the same sql (before I start refactoring), but for some reason the Sql generated by LinqToSql is 5x slower than the original sql. This is running the generated Sql Directly in LinqPad. The only real difference my meager sql eyes can spot is the WITH (NOLOCK), which if I add into the LinqToSql generated sql, makes no difference. Can someone point out what I'm doing wrong here? Thanks! Existing Hard Coded Sql (5.0 Seconds) SELECT DISTINCT CH.ClaimNum, CH.AcnProvID, CH.AcnPatID, CH.TinNum, CH.Diag1, CH.GroupNum, CH.AllowedTotal FROM Claims.dbo.T_ClaimsHeader AS CH WITH (NOLOCK) WHERE CH.ContractID IN ('123A','123B','123C','123D','123E','123F','123G','123H') AND ( ( (CH.Transmited Is Null or CH.Transmited = '') AND CH.DateTransmit Is Null AND CH.EobDate Is Null AND CH.ProcessFlag IN ('Y','E') AND CH.DataSource NOT IN ('A','EC','EU') AND CH.AllowedTotal > 0 ) ) ORDER BY CH.AcnPatID, CH.ClaimNum Generated Sql from LinqToSql (27.6 Seconds) -- Region Parameters DECLARE @p0 NVarChar(4) SET @p0 = '123A' DECLARE @p1 NVarChar(4) SET @p1 = '123B' DECLARE @p2 NVarChar(4) SET @p2 = '123C' DECLARE @p3 NVarChar(4) SET @p3 = '123D' DECLARE @p4 NVarChar(4) SET @p4 = '123E' DECLARE @p5 NVarChar(4) SET @p5 = '123F' DECLARE @p6 NVarChar(4) SET @p6 = '123G' DECLARE @p7 NVarChar(4) SET @p7 = '123H' DECLARE @p8 VarChar(1) SET @p8 = '' DECLARE @p9 NVarChar(1) SET @p9 = 'Y' DECLARE @p10 NVarChar(1) SET @p10 = 'E' DECLARE @p11 NVarChar(1) SET @p11 = 'A' DECLARE @p12 NVarChar(2) SET @p12 = 'EC' DECLARE @p13 NVarChar(2) SET @p13 = 'EU' DECLARE @p14 Decimal(5,4) SET @p14 = 0 -- EndRegion SELECT DISTINCT [t0].[ClaimNum], [t0].[acnprovid] AS [AcnProvID], [t0].[acnpatid] AS [AcnPatID], [t0].[tinnum] AS [TinNum], [t0].[diag1] AS [Diag1], [t0].[GroupNum], [t0].[allowedtotal] AS [AllowedTotal] FROM [Claims].[dbo].[T_ClaimsHeader] AS [t0] WHERE ([t0].[contractid] IN (@p0, @p1, @p2, @p3, @p4, @p5, @p6, @p7)) AND (([t0].[Transmited] IS NULL) OR ([t0].[Transmited] = @p8)) AND ([t0].[DATETRANSMIT] IS NULL) AND ([t0].[EOBDATE] IS NULL) AND ([t0].[PROCESSFLAG] IN (@p9, @p10)) AND (NOT ([t0].[DataSource] IN (@p11, @p12, @p13))) AND ([t0].[allowedtotal] > @p14) ORDER BY [t0].[acnpatid], [t0].[ClaimNum] New LinqToSql Code (30+ seconds... Times out ) var contractIds = T_ContractDatas.Where(x => x.EdiSubmissionGroupID == "123-01").Select(x => x.CONTRACTID).ToList(); var processFlags = new List<string> {"Y","E"}; var dataSource = new List<string> {"A","EC","EU"}; var results = (from claims in T_ClaimsHeaders where contractIds.Contains(claims.contractid) && (claims.Transmited == null || claims.Transmited == string.Empty ) && claims.DATETRANSMIT == null && claims.EOBDATE == null && processFlags.Contains(claims.PROCESSFLAG) && !dataSource.Contains(claims.DataSource) && claims.allowedtotal > 0 select new { ClaimNum = claims.ClaimNum, AcnProvID = claims.acnprovid, AcnPatID = claims.acnpatid, TinNum = claims.tinnum, Diag1 = claims.diag1, GroupNum = claims.GroupNum, AllowedTotal = claims.allowedtotal }).OrderBy(x => x.ClaimNum).OrderBy(x => x.AcnPatID).Distinct(); I'm using the list of constants above to make LinqToSql Generate IN ('xxx','xxx',etc) Otherwise it uses subqueries which are just as slow...

    Read the article

  • Use DivX settings to encode to mp4 with ffmpeg

    - by sjngm
    I'm used to use VirtualDub to encode a video to AVI container with DivX-codec (and MP3 for audio). Now I'm planning to use ffmpeg to encode videos to MP4 container with h264-codec. What I've figured out is that I need to use libx264 and one of those presets to make anything work. However, I'm amazed about the video bitrate ffmpeg uses for encoding. What I currently have is this little batch file: @ECHO OFF SETLOCAL SET IN=source.avs SET FFMPEG_PATH=C:\Program Files (x86)\ffmpeg SET PRESET=-fpre "%FFMPEG_PATH%\presets\libx264-lossless_slow.ffpreset" SET AUDIO=-acodec libmp3lame -ab 128000 SET VIDEO=-vcodec libx264 -vb 1978000 "%FFMPEG_PATH%\ffmpeg.exe" -i %IN% %AUDIO% %VIDEO% %PRESET% test.mp4 ENDLOCAL With this I tell ffmpeg to use 1978k as the bitrate, but ffmpeg uses 15000k+! I tried other presets, but they don't use my specified bitrate. Here are the presets I have: libx264-baseline.ffpreset libx264-ipod320.ffpreset libx264-ipod640.ffpreset libx264-lossless_fast.ffpreset libx264-lossless_max.ffpreset libx264-lossless_medium.ffpreset libx264-lossless_slow.ffpreset libx264-lossless_slower.ffpreset libx264-lossless_ultrafast.ffpreset ffmpeg version: FFmpeg git-N-29181-ga304071 libavutil 50. 40. 1 / 50. 40. 1 libavcodec 52.120. 0 / 52.120. 0 libavformat 52.108. 0 / 52.108. 0 libavdevice 52. 4. 0 / 52. 4. 0 libavfilter 1. 79. 0 / 1. 79. 0 libswscale 0. 13. 0 / 0. 13. 0 Note that I don't use the latest version as it has problems with spaces in filenames. Here's what seems to be the full parameter list DivX 6.9.2 uses: -bvnn 1978000 -vbv 218691200,100663296,100663296 -dir "C:\Users\sjngm\AppData\Roaming\DivX\DivX Codec" -w -b 1 -use_presets=1 -preset=10 -windowed_fullsearch=2 -thread_delay=1 What command line parameters would that be for ffmpeg? EDIT: Going with slhck's suggestion I tried a new 32-bit version. I have no idea if that is 0.9 or newer, I can't find that info. ffmpeg version N-36890-g67f5650 libavutil 51. 34.100 / 51. 34.100 libavcodec 53. 56.105 / 53. 56.105 libavformat 53. 30.100 / 53. 30.100 libavdevice 53. 4.100 / 53. 4.100 libavfilter 2. 59.100 / 2. 59.100 libswscale 2. 1.100 / 2. 1.100 libswresample 0. 6.100 / 0. 6.100 libpostproc 51. 2.100 / 51. 2.100 I reworked my batch file to look like this (interestingly enough I can't find parameter -vprofile in the documentation): @ECHO OFF SETLOCAL SET IN=VTS_01_1.avs SET FFMPEG_PATH=C:\Program Files (x86)\ffmpeg SET PRESET=-vprofile high -preset veryslow SET AUDIO=-acodec libmp3lame -ab 128000 SET VIDEO=-vcodec libx264 -vb 1978000 "%FFMPEG_PATH%\ffmpeg.exe" -i %IN% %AUDIO% %PRESET% %VIDEO% test.mp4 ENDLOCAL I see that it now uses the bitrate properly (thanks to LongNeckbeard for pointing out that the lossless-stuff ignores the bitrate!). Just in case you wonder how I came up with the 1978000, I'm using this formula which I found valid for DivX-files (I'm guessing the bitrate won't change that much for h264): width * height * 25 * 0.22 / 1000 I'm not sure if the 0.22 correlates with the CRF somehow. Overall I forgot to say the I will use a two-pass scenario, which is why I don't use the CRF here. I will try to read more about this. Currently I'm just trying to get something running that shows me that I'm doing something right (ffmpeg isn't the easiest tool to understand ;)). 
C:\Program Files (x86)\ffmpeg\ffmpeg.exe" -i VTS_01_1.avs -acodec libmp3lame -ab 128000 -vcodec libx264 -vb 1978000 -vprofile high -preset veryslow test.mp4 The output is now: ffmpeg version N-36890-g67f5650 Copyright (c) 2000-2012 the FFmpeg developers built on Jan 16 2012 21:57:13 with gcc 4.6.2 configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib libavutil 51. 34.100 / 51. 34.100 libavcodec 53. 56.105 / 53. 56.105 libavformat 53. 30.100 / 53. 30.100 libavdevice 53. 4.100 / 53. 4.100 libavfilter 2. 59.100 / 2. 59.100 libswscale 2. 1.100 / 2. 1.100 libswresample 0. 6.100 / 0. 6.100 libpostproc 51. 2.100 / 51. 2.100 Input #0, avs, from 'VTS_01_1.avs': Duration: 00:58:46.12, start: 0.000000, bitrate: 0 kb/s Stream #0:0: Video: rawvideo (YV12 / 0x32315659), yuv420p, 576x448, 77414 kb/s, 25 tbr, 25 tbn, 25 tbc Stream #0:1: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 48000 Hz, 2 channels, s16, 1536 kb/s File 'test.mp4' already exists. Overwrite ? [y/N] y w:576 h:448 pixfmt:yuv420p tb:1/1000000 sar:0/1 sws_param: [libx264 @ 05A2C400] using cpu capabilities: MMX2 SSE2Fast FastShuffle SSEMisalign LZCNT [libx264 @ 05A2C400] profile High, level 3.1 [libx264 @ 05A2C400] 264 - core 120 r2120 0c7dab9 - H.264/MPEG-4 AVC codec - Copyleft 2003-2011 - http://www.videolan.org/x264.html - options: cabac=1 ref=16 deblock=1:0:0 analyse=0x3:0x133 me=umh subme=10 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=24 chroma_me=1 trellis=2 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=8 b_pyramid=2 b_adapt=2 b_bias=0 direct=3 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=60 rc=abr mbtree=1 bitrate=1978 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 Output #0, mp4, to 'test.mp4': Metadata: encoder : Lavf53.30.100 Stream #0:0: Video: h264 (![0][0][0] / 0x0021), yuv420p, 576x448, q=-1--1, 1978 kb/s, 25 tbn, 25 tbc Stream #0:1: Audio: mp3 (i[0][0][0] / 0x0069), 48000 Hz, 2 channels, s16, 128 kb/s Stream mapping: Stream #0:0 -> #0:0 (rawvideo -> libx264) Stream #0:1 -> #0:1 (pcm_s16le -> libmp3lame) Press [q] to stop, [?] 
for help frame= 0 fps= 0 q=0.0 size= 0kB time=00:00:00.00 bitrate= 0.0kbits/s frame= 0 fps= 0 q=0.0 size= 0kB time=00:00:00.00 bitrate= 0.0kbits/s frame= 0 fps= 0 q=0.0 size= 0kB time=00:00:00.00 bitrate= 0.0kbits/s frame= 3 fps= 1 q=22.0 size= 39kB time=00:00:00.04 bitrate=8063.8kbits/ frame= 8 fps= 2 q=22.0 size= 82kB time=00:00:00.24 bitrate=2801.3kbits/ frame= 13 fps= 3 q=23.0 size= 120kB time=00:00:00.44 bitrate=2229.5kbits/ frame= 16 fps= 4 q=23.0 size= 147kB time=00:00:00.56 bitrate=2156.7kbits/ frame= 20 fps= 4 q=22.0 size= 175kB time=00:00:00.72 bitrate=1987.4kbits/ : video:4387kB audio:273kB global headers:0kB muxing overhead 0.260038% [libx264 @ 05A2C400] frame I:2 Avg QP:19.53 size: 29850 [libx264 @ 05A2C400] frame P:76 Avg QP:22.24 size: 19541 [libx264 @ 05A2C400] frame B:359 Avg QP:25.93 size: 8210 [libx264 @ 05A2C400] consecutive B-frames: 0.5% 0.5% 0.0% 8.2% 17.2% 52.2% 16.0% 5.5% 0.0% [libx264 @ 05A2C400] mb I I16..4: 5.4% 75.3% 19.3% [libx264 @ 05A2C400] mb P I16..4: 1.3% 16.5% 2.2% P16..4: 36.3% 28.6% 12.7% 1.8% 0.2% skip: 0.4% [libx264 @ 05A2C400] mb B I16..4: 0.4% 3.8% 0.3% B16..8: 40.0% 18.4% 4.7% direct:18.5% skip:13.9% L0:45.4% L1:38.1% BI:16.5% [libx264 @ 05A2C400] final ratefactor: 20.35 [libx264 @ 05A2C400] 8x8 transform intra:83.1% inter:68.5% [libx264 @ 05A2C400] direct mvs spatial:99.2% temporal:0.8% [libx264 @ 05A2C400] coded y,uvDC,uvAC intra: 64.9% 83.4% 49.2% inter: 49.0% 50.4% 4.4% [libx264 @ 05A2C400] i16 v,h,dc,p: 25% 22% 27% 26% [libx264 @ 05A2C400] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 10% 7% 23% 9% 10% 10% 10%10% 13% [libx264 @ 05A2C400] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 12% 11% 13% 9% 12% 11% 10% 9% 12% [libx264 @ 05A2C400] i8c dc,h,v,p: 42% 28% 16% 14% [libx264 @ 05A2C400] Weighted P-Frames: Y:18.4% UV:7.9% [libx264 @ 05A2C400] ref P L0: 29.1% 11.3% 15.7% 7.3% 6.9% 4.9% 5.1% 3.4%3.9% 2.7% 2.8% 1.8% 1.7% 1.2% 1.4% 0.9% [libx264 @ 05A2C400] ref B L0: 68.8% 11.4% 5.5% 2.9% 2.3% 1.9% 1.5% 1.1%1.1% 1.0% 0.9% 0.7% 0.5% 0.3% 0.1% [libx264 @ 05A2C400] ref B L1: 91.9% 8.1% [libx264 @ 05A2C400] kb/s:2055.88 As far as I'm concerned it doesn't look that bad to me.
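
    As a sanity check on the bitrate formula quoted above (width * height * 25 * 0.22 / 1000), here is the arithmetic for the 576x448 source shown in the log - it comes out near 1419 kbit/s, so the 1978k figure presumably came from a different frame size:

        // width * height * fps * quality-factor / 1000, per the formula in the post
        function bitrateKbps(width: number, height: number, fps = 25, factor = 0.22): number {
          return (width * height * fps * factor) / 1000;
        }

        console.log(bitrateKbps(576, 448)); // 1419.264 -> ~1419 kbit/s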

    Read the article

  • Is That REST API Really RPC? Roy Fielding Seems to Think So.

    - by Rich Apodaca
    A large amount of what I thought I knew about REST is apparently wrong - and I'm not alone. This question has a long lead-in, but it seems to be necessary because the information is a bit scattered. The actual question comes at the end if you're already familiar with this topic. From the first paragraph of Roy Fielding's REST APIs must be hypertext-driven, it's pretty clear he believes his work is being widely misinterpreted: I am getting frustrated by the number of people calling any HTTP-based interface a REST API. Today’s example is the SocialSite REST API. That is RPC. It screams RPC. There is so much coupling on display that it should be given an X rating. Fielding goes on to list several attributes of a REST API. Some of them seem to go against both common practice and common advice on SO and other forums. For example: A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). ... A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server). ... A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. ... The idea of "hypertext" plays a central role - much more so than URI structure or what HTTP verbs mean. "Hypertext" is defined in one of the comments: When I [Fielding] say hypertext, I mean the simultaneous presentation of information and controls such that the information becomes the affordance through which the user (or automaton) obtains choices and selects actions. Hypermedia is just an expansion on what text means to include temporal anchors within a media stream; most researchers have dropped the distinction. Hypertext does not need to be HTML on a browser. Machines can follow links when they understand the data format and relationship types. I'm guessing at this point, but the first two points above seem to suggest that API documentation for a Foo resource that looks like the following leads to tight coupling between client and server and has no place in a RESTful system. GET /foos/{id} # read a Foo POST /foos/{id} # create a Foo PUT /foos/{id} # update a Foo Instead, an agent should be forced to discover the URIs for all Foos by, for example, issuing a GET request against /foos. (Those URIs may turn out to follow the pattern above, but that's beside the point.) The response uses a media type that is capable of conveying how to access each item and what can be done with it, giving rise to the third point above. For this reason, API documentation should focus on explaining how to interpret the hypertext contained in the response. Furthermore, every time a URI to a Foo resource is requested, the response contains all of the information needed for an agent to discover how to proceed by, for example, accessing associated and parent resources through their URIs, or by taking action after the creation/deletion of a resource. The key to the entire system is that the response consists of hypertext contained in a media type that itself conveys to the agent options for proceeding. It's not unlike the way a browser works for humans. But this is just my best guess at this particular moment. 
Fielding posted a follow-up in which he responded to criticism that his discussion was too abstract, lacking in examples, and jargon-rich: Others will try to decipher what I have written in ways that are more direct or applicable to some practical concern of today. I probably won’t, because I am too busy grappling with the next topic, preparing for a conference, writing another standard, traveling to some distant place, or just doing the little things that let me feel I have earned my paycheck. So, two simple questions for the REST experts out there with a practical mindset: how do you interpret what Fielding is saying and how do you put it into practice when documenting/implementing REST APIs? Edit: this question is an example of how hard it can be to learn something if you don't have a name for what you're talking about. The name in this case is "Hypermedia as the Engine of Application State" (HATEOAS).
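
    Read that way, a client never assembles /foos/{id} itself; it starts at the bookmark URI and follows link relations the server hands back. A minimal sketch (TypeScript; the media type, relation names and URLs are invented for illustration):

        interface Link { rel: string; href: string; }
        interface Resource { links: Link[]; [field: string]: unknown; }

        // Follow a named link relation offered by the server's last response.
        async function follow(from: Resource, rel: string): Promise<Resource> {
          const link = from.links.find(l => l.rel === rel);
          if (!link) throw new Error(`server offered no "${rel}" transition`);
          const res = await fetch(link.href, {
            headers: { Accept: 'application/vnd.example+json' }, // hypothetical media type
          });
          return (await res.json()) as Resource;
        }

        async function main(): Promise<void> {
          // Entered with no prior knowledge beyond the bookmark URI:
          const entry = (await (await fetch('https://api.example.com/')).json()) as Resource;
          const foos  = await follow(entry, 'foos');  // collection discovered, not hardcoded
          const foo   = await follow(foos,  'item');  // next step offered by the response
        }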

    Read the article

  • Where would you document standardized complex data that is passed between many objects and methods?

    - by Eli
    Hi All, I often find myself with fairly complex data that represents something that my objects will be working on. For example, in a task-list app, several objects might work with an array of tasks, each of which has attributes, temporal expressions, sub tasks and sub sub tasks, etc. One object will collect data from web forms, standardize it into a format consumable by the class that will save them to the database, another object will pull them from the database, put them in the standard format and pass them to the display object, or the update object, etc. The data itself can become a fairly complex series of arrays and sub arrays, representing a 'task' or list of tasks. For example, the below might be one entry in a task list, in the format that is consumable by the various objects that will work on it. Normally, I just document this in a file somewhere with an example. However, I am thinking about the best way to add it to something like PHPDoc, or another standard doc system. Where would you document your consumable data formats that are for many or all of the objects / methods in your app? Array ( [Meta] => Array ( //etc. ) [Sched] => Array ( [SchedID] => 32 [OwnerID] => 2 [StatusID] => 1 [DateFirstTask] => 2011-02-28 [DateLastTask] => [MarginMonths] => 3 ) [TemporalExpressions] => Array ( [0] => Array ( [type] => dw [TemporalExpID] => 3 [ord] => 2 [day] => 6 [month] => 4 ) [1] => Array ( [type] => dm [TemporalExpID] => 32 [day] => 28 [month] => 2 ) ) [Task] => Array ( [SchedTaskID] => 32 [SchedID] => 32 [OwnerID] => 2 [UserID] => 5 [ClientID] => 9 [Title] => Close Prior Year [Body] => [DueTime] => ) [SubTasks] => Array ( [101] => Array ( [SchedSubTaskID] => 101 [ParentST] => [RootT] => 32 [UserID] => 2 [Title] => Review Profit and Loss by Class [Body] => [DueDiff] => 0 ) [102] => Array ( [SchedSubTaskID] => 102 [ParentST] => [RootT] => 32 [UserID] => 2 [Title] => Review Balance Sheet [Body] => [DueDiff] => 0 ) [103] => Array ( [SchedSubTaskID] => 103 [ParentST] => [RootT] => 32 [UserID] => 2 [Title] => Review Current Year for Prior Year Expenses to Accrue [Body] => Look at Journal Entries that are templates as well. [DueDiff] => 0 ) [104] => Array ( [SchedSubTaskID] => 104 [ParentST] => [RootT] => 32 [UserID] => 2 [Title] => Review Prior Year Membership from 11/1 - 12/31 to Accrue to Current Year [Body] => [DueDiff] => 0 ) [105] => Array ( [SchedSubTaskID] => 105 [ParentST] => [RootT] => 32 [UserID] => 2 [Title] => Enter Vacation Accrual [Body] => [DueDiff] => 0 ) [106] => Array ( [SchedSubTaskID] => 106 [ParentST] => 105 [RootT] => 32 [UserID] => 2 [Title] => Email Peter requesting Vacation Status of Employees at Year End [Body] => We need Employee Name, Rate and Days of Vacation left to use. We also need to know if the employee used any of the prior year's vacation. [DueDiff] => 43 ) [107] => Array ( [SchedSubTaskID] => 107 [ParentST] => [RootT] => 32 [UserID] => 2 [Title] => Grants Receivable at Year End [Body] => [DueDiff] => 0 ) [108] => Array ( [SchedSubTaskID] => 108 [ParentST] => 107 [RootT] => 32 [UserID] => 2 [Title] => Email Peter Requesting if there were and Grants Receivable at year end [Body] => [DueDiff] => 43 ) ) )
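
    One complementary option is to pin the "standard format" down as a typed definition that lives beside the code, so every producer and consumer documents against the same shape. A partial sketch of the structure above as TypeScript interfaces (field meanings inferred from the example dump):

        interface Sched {
          SchedID: number;
          OwnerID: number;
          StatusID: number;
          DateFirstTask: string;        // 'YYYY-MM-DD'
          DateLastTask: string | null;
          MarginMonths: number;
        }

        interface TemporalExpression {
          type: 'dw' | 'dm';            // day-of-week vs day-of-month rule
          TemporalExpID: number;
          ord?: number;                 // only present for 'dw' rules
          day: number;
          month: number;
        }

        interface SubTask {
          SchedSubTaskID: number;
          ParentST: number | null;      // parent subtask, null at the top level
          RootT: number;                // owning task
          UserID: number;
          Title: string;
          Body: string;
          DueDiff: number;              // day offset from the task's due date
        }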

    Read the article

  • CodePlex Daily Summary for Friday, June 17, 2011

    CodePlex Daily Summary for Friday, June 17, 2011Popular ReleasesPowerGUI Visual Studio Extension: PowerGUI VSX 1.3.5: Changes - VS SDK no longer required to be installed (a bug in v. 1.3.4).EnhSim: EnhSim 2.4.7 BETA: 2.4.7 BETAThis release supports WoW patch 4.1 at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Updated the Stormst...Media Companion: MC 3.407b Weekly: Important! A movie rebuild & restart is required after installing this update. Fixes Movie Actor cache path settings should now be created correctly after rescrape or recrape specific TV 'The' will be moved to the end of TVShow titles when renaming if the 'ignore article' preference is enabled Improved the TVDB log details for the stream type returned by TVDB Right Clicking on a TVShow now gives the option to display episodes in a new window in aired date order - ideal to see where special...Gendering Add-In for Microsoft Office Word 2010: Gendering Add-In: This is the first stable Version of the Gendering Add-In. Unzip the package and start "setup.exe". The .reg file shows how to config an alternate path for suggestion table.TerrariViewer: TerrariViewer v3.1 [Terraria Inventory Editor]: This version adds tool tips. Almost every picture box you mouse over will tell you what item is in that box. I have also cleaned up the GUI a little more to make things easier on my end. There are various bug fixes including ones associated with opening different characters in the same instance of the program. As always, please bring any bugs you find to my attention.Kinect Paint: KinectPaint V1.0: This is version 1.0 of Kinect Paint. To run it, follow the steps: Install the Kinect SDK for Windows (available at http://research.microsoft.com/en-us/um/redmond/projects/kinectsdk/download.aspx) Connect your Kinect device to the computer and to the power. Download the Zip file. Unblock the Zip file by right clicking on it, and pressing the Unblock button in the file properties (if available). Extract the content of the Zip file. Run KinectPaint.exe.CommonLibrary.NET: CommonLibrary.NET - 0.9.7 Beta: A collection of very reusable code and components in C# 3.5 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars, Logging, Authentication, and much more. Samples in <root>\src\Lib\CommonLibrary.NET\Samples CommonLibrary.NET 0.9.7Documentation 6738 6503 New 6535 Enhancements 6583 6737DropBox Linker: DropBox Linker 1.2: Public sub-folders are now monitored for changes as well (thanks to mcm69) Automatic public sync folder detection (thanks to mcm69) Non-Latin and special characters encoded correctly in URLs Pop-ups are now slot-based (use first free slot and will never be overlapped — test it while previewing timeout) Public sync folder setting is hidden when auto-detected Timeout interval is displayed in popup previews A lot of major and minor code refactoring performed .NET Framework 4.0 Client...Terraria World Viewer: Version 1.3: Update June 15th Removed "Draw Markers" checkbox from main window because of redundancy/confusing. (Select all or no items from the Settings tab for the same effect.) Fixed Marker preferences not being saved. 
It is now possible to render more than one map without having to restart the application. World file will not be locked while the world is being rendered. Note: The World Viewer might render an inaccurate map or even crash if Terraria decides to modify the World file during the pro...MVC Controls Toolkit: Mvc Controls Toolkit 1.1.5 RC: Added Extended Dropdown allows a prompt item to be inserted as first element. RequiredAttribute, if present, trggers if no element is chosen Client side javascript function to set/get the values of DateTimeInput, TypedTextBox, TypedEditDisplay, and to bind/unbind a "change" handler The selected page in the pager is applied the attribute selected-page="selected" that can be used in the definition of CSS rules to style the selected page items controls now interpret a null value as an empr...Umbraco CMS: Umbraco CMS 5.0 CTP 1: Umbraco 5 Community Technology Preview Umbraco 5 will be the next version of everyone's favourite, friendly ASP.NET CMS that already powers over 100,000 websites worldwide. Try out our first CTP of version 5 today! If you're new to Umbraco and would like to get a quick low-down on our popular and easy-to-learn approach to content management, check out our intro video here. What's in the v5 CTP box? This is a preview version of version 5 and includes support for the following familiar Umbr...Coding4Fun Kinect Toolkit: Coding4Fun.Kinect Toolkit: Version 1.0Kinect Mouse Cursor: Kinect Mouse Cursor v1.0: The initial release of the Kinect Mouse Cursor project!patterns & practices: Project Silk: Project Silk Community Drop 11 - June 14, 2011: Changes from previous drop: Many code changes: please see the readme.mht for details. New "Client Data Management and Caching" chapter. Updated "Application Notifications" chapter. Updated "Architecture" chapter. Updated "jQuery UI Widget" chapter. Updated "Widget QuickStart" appendix and code. Guidance Chapters Ready for Review The Word documents for the chapters are included with the source code in addition to the CHM to help you provide feedback. The PDF is provided as a separat...Orchard Project: Orchard 1.2: Build: 1.2.41 Published: 6/14/2010 How to Install Orchard To install Orchard using Web PI, follow these instructions: http://www.orchardproject.net/docs/Installing-Orchard.ashx. Web PI will detect your hardware environment and install the application. Alternatively, to install the release manually, download the Orchard.Web.1.2.41.zip file. http://orchardproject.net/docs/Manually-installing-Orchard-zip-file.ashx The zip contents are pre-built and ready-to-run. Simply extract the contents o...Snippet Designer: Snippet Designer 1.4.0: Snippet Designer 1.4.0 for Visual Studio 2010 Change logSnippet Explorer ChangesReworked language filter UI to work better in the side bar. Added result count drop down which lets you choose how many results to see. Language filter and result count choices are persisted after Visual Studio is closed. Added file name to search criteria. Search is now case insensitive. Snippet Editor Changes Snippet Editor ChangesAdded menu option for the $end$ symbol which indicates where the c...Mobile Device Detection and Redirection: 1.0.4.1: Stable Release 51 Degrees.mobi Foundation is the best way to detect and redirect mobile devices and their capabilities on ASP.NET and is being used on thousands of websites worldwide. We’re highly confident in our software and we recommend all users update to this version. 
Changes to Version 1.0.4.1: Changed the BlackberryHandler and BlackberryVersion6Handler to have equal CONFIDENCE values to ensure they both get a chance at detecting BlackBerry version 4&5 and version 6 devices. Prior to thi...Rawr: Rawr 4.1.06: This is the downloadable WPF version of Rawr! For the web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon: We now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in WoW tooltips and lets you export your character to Rawr (including bag and bank items) like Char...AcDown - Anime&Comic Downloader: AcDown v3.0 Beta6: Chinese release notes (garbled in the source); the legible fragments mention Acfun downloads, 32- and 64-bit Windows XP/Vista/7, a .NET Framework 2.0 (x86/x64) requirement, and new support for the comic site imanhua.com. Pulse: Pulse Beta 2: - Added new wallpapers provider http://wallbase.cc. Supports English search, multiple keywords* - Improved font rendering in Options window - Added "Set wallpaper as logon background" option* - Fixed crashes if there is no internet connection - Fixed: Rewalls downloads empty images sometimes - Added filters* Note 1: the wallbase provider supports only English search. The Rewalls provider supports only Russian search, but Pulse automatically translates your English keyword into Russian using Google Tr...New Projects: ASP.NET Menu Extender for Unordered List ( UL ) HTML: The MenuExtender is an ASP.NET extender which will turn hierarchical HTML lists (rendered with UL and LI) into a multi-level dynamic cascading menu. Azure Proxy Handler: This project helps you provide access to your Azure development application through a direct address like http://azureproxy.com (without a port number), even if your application is on an unknown port. It also allows using the HTTPS protocol. BD Simple Status: BD Simple Status is a straightforward server up/down indication website. Cmic Reader: Cmic Reader is a reader that allows you to download txt files from SkyDrive and read them on Windows Phone 7. It supports txt files in English and East Asian languages, for example Chinese or Japanese, in UTF-8 encoding. Coding4Fun Kinect Toolkit: kinect! EeRePeSIA: This one does work.... EIGHT.one - Tile-Based Browser Start Page: Windows-8-inspired, tile-based browser start page. EntityGraph: EntityGraph is a technology to separate generic structural operations from your data model. Examples are: copying/cloning, querying, and data validation. These operations can be applied to parts of your data model as defined by graph shapes. EventController: A control for an event service. Interceptor: Interceptor is an open source library used to intercept managed methods. The library can be used whenever a hook of a managed method is necessary. 
Currently Interceptor works only on 32-bit machines. Interval Tree: Interval Tree implementation for C#. This is the data structure you need to store a lot of temporal data and retrieve it super fast by point in time or time range. Kinect Mouse Cursor: Kinect Mouse Cursor is a demo application that uses the Kinect for Windows SDK and its skeletal tracking features to allow a user to use their hands to control the Windows mouse cursor. Kinect Paint: Kinect Paint is a skeleton tracking application that allows you to become the paint brush! Learning: get more fun. :) lendlesscms: lendlesscms. lendlesstools: lendlesstools. Little Wiki Plugins: Little Wiki Plugins is a small project for developing plugins for two popular wikis, i.e. DokuWiki and TiddlyWiki. Mapas do Google: Studying the Google Maps API. MetroBackUp: MetroBackUp is an application for synchronizing directories. Several pairs of directories to be synchronized can be grouped in jobs and updated together. mvcblogengine: an ASP.NET MVC take on BlogEngine (the original Chinese description is garbled in the source). MvcPipe: Facebook Bigpipe implementation for ASP.NET MVC. Allows executing asynchronous actions to update the visible client UI at different times, improving perceived client latency. My Task List Rollup: Here I am again with a new utility for your SharePoint deployment – "My Task List Rollup". NIntegra: Continuous Integration Server. PlusForum MSDN: A happy way to browse MSDN Forums. Popup Multi-Time Zone Clock: MultiClock provides a display of multiple analog clocks in various time zones. The application hides along the right side of the user's screen, and appears on mouse-over. It's developed in C# and uses components from http://www.codeproject.com/KB/miscctrl/yaac.aspx Project Jellybean, a kinect drivable lounge chair: Jellybean is a Kinect-drivable lounge chair. RoboPowerCopy: A PowerShell "clone" of the famous "ROBOCOPY" tool. In fact I've adapted the functionality of "robocopy" in PowerShell to provide you and myself a "robust file copy" tool. Silverlight TreeGrid: Silverlight TreeGrid is a component that inherits DataGrid and adds the ability to display hierarchical data. simplemock-dot-net: SimpleMock is a mocking framework that is designed to be lightweight, simplistic, minimalistic, strongly typed and generally very easy to use via a Fluent API. SQL Server Replication Explorer: SQL Server Replication Explorer is a client tool for browsing through a Microsoft SQL Server replication topology. It can also be used for troubleshooting and monitoring of Microsoft SQL Server replication. System.Media.SoundPlayer example with ThreadPool: Play audio files (.wav, .mp3) with the SoundPlayer class. I have already wrapped it with ThreadPool, so it won't freeze your GUI. Grab it and see how .NET plays an audio file. The Dot Net Download Manager: A fast and user-friendly download manager that aims to keep it simple. The aim is to provide all the important features of commercial download managers while keeping it simple, functional and user friendly. The application is written in C# WPF. TibiaPingFixer: TibiaPingFixer will reduce your online gaming latency significantly by increasing the frequency of TCP acknowledgements sent to the game server. For the technically minded, this is a script which will modify TCPAckFrequency. TimeOffManager: Time Off Manager. WeatherUS: WeatherUS

    Read the article

  • 12c - flashforward, flashback or see it as of now...

    - by Thomas Kyte
    Oracle 9i exposed flashback query to developers for the first time.  The ability to flashback query dates back to version 4, however (it just wasn't exposed).  Every time you run a query in Oracle it is in fact a flashback query - it is what multi-versioning is all about. However, there was never a flashforward query (well, ok, the workspace manager has this capability - but with lots of extra baggage).  We've never been able to ask a table "what will you look like tomorrow" - but now we can. The capability is called Temporal Validity.  If you have a table with data that is effective dated - has a "start date" and "end date" column in it - we can now query it using flashback-query-like syntax.  The twist is - the date we "flashback" to can be in the future.  It works by rewriting the query to transparently add the necessary where clause and filter out the right rows for the right period of time - and since you can have records whose start date is in the future - you can query a table and see what it would look like at some future time. Here is a quick example; we'll start with a table: ops$tkyte%ORA12CR1> create table addresses  2  ( empno       number,  3    addr_data   varchar2(30),  4    start_date  date,  5    end_date    date,  6    period for valid(start_date,end_date)  7  )  8  / Table created. The new bit is on line 6 (the period definition can also be added to an existing table with ALTER - so any table you have with start/end date columns is a candidate).  The keyword is PERIOD; valid is an identifier I chose - it could have been foobar, valid just sounds nice in the query later.  You identify the columns in your table - or we can create them for you if they don't exist.  Then you just create some data: ops$tkyte%ORA12CR1> insert into addresses (empno, addr_data, start_date, end_date )  2  values ( 1234, '123 Main Street', trunc(sysdate-5), trunc(sysdate-2) ); 1 row created. ops$tkyte%ORA12CR1> insert into addresses (empno, addr_data, start_date, end_date )  2  values ( 1234, '456 Fleet Street', trunc(sysdate-1), trunc(sysdate+1) ); 1 row created. ops$tkyte%ORA12CR1> insert into addresses (empno, addr_data, start_date, end_date )  2  values ( 1234, '789 1st Ave', trunc(sysdate+2), null ); 1 row created. and you can either see all of the data: ops$tkyte%ORA12CR1> select * from addresses;     EMPNO ADDR_DATA                      START_DAT END_DATE ---------- ------------------------------ --------- ---------      1234 123 Main Street                27-JUN-13 30-JUN-13      1234 456 Fleet Street               01-JUL-13 03-JUL-13      1234 789 1st Ave                    04-JUL-13 or query "as of" some point in time - as you can see in the predicate section, it is just doing a query rewrite to automate the "where" filters: ops$tkyte%ORA12CR1> select * from addresses as of period for valid sysdate-3;     EMPNO ADDR_DATA                      START_DAT END_DATE ---------- ------------------------------ --------- ---------      1234 123 Main Street                27-JUN-13 30-JUN-13 ops$tkyte%ORA12CR1> @plan ops$tkyte%ORA12CR1> select * from table(dbms_xplan.display_cursor); PLAN_TABLE_OUTPUT ------------------------------------------------------------------------------- SQL_ID  cthtvvm0dxvva, child number 0 ------------------------------------- select * from addresses as of period for valid sysdate-3 Plan hash value: 3184888728 ------------------------------------------------------------------------------- | Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     
|-------------------------------------------------------------------------------|   0 | SELECT STATEMENT  |           |       |       |     3 (100)|          ||*  1 |  TABLE ACCESS FULL| ADDRESSES |     1 |    48 |     3   (0)| 00:00:01 |-------------------------------------------------------------------------------Predicate Information (identified by operation id):---------------------------------------------------   1 - filter((("T"."START_DATE" IS NULL OR              "T"."START_DATE"<=SYSDATE@!-3) AND ("T"."END_DATE" IS NULL OR              "T"."END_DATE">SYSDATE@!-3)))Note-----   - dynamic statistics used: dynamic sampling (level=2)24 rows selected.ops$tkyte%ORA12CR1> select * from addresses as of period for valid sysdate;     EMPNO ADDR_DATA                      START_DAT END_DATE---------- ------------------------------ --------- ---------      1234 456 Fleet Street               01-JUL-13 03-JUL-13ops$tkyte%ORA12CR1> @planops$tkyte%ORA12CR1> select * from table(dbms_xplan.display_cursor);PLAN_TABLE_OUTPUT-------------------------------------------------------------------------------SQL_ID  26ubyhw9hgk7z, child number 0-------------------------------------select * from addresses as of period for valid sysdatePlan hash value: 3184888728-------------------------------------------------------------------------------| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |-------------------------------------------------------------------------------|   0 | SELECT STATEMENT  |           |       |       |     3 (100)|          ||*  1 |  TABLE ACCESS FULL| ADDRESSES |     1 |    48 |     3   (0)| 00:00:01 |-------------------------------------------------------------------------------Predicate Information (identified by operation id):---------------------------------------------------   1 - filter((("T"."START_DATE" IS NULL OR              "T"."START_DATE"<=SYSDATE@!) 
AND ("T"."END_DATE" IS NULL OR              "T"."END_DATE">SYSDATE@!)))Note-----   - dynamic statistics used: dynamic sampling (level=2)24 rows selected.ops$tkyte%ORA12CR1> select * from addresses as of period for valid sysdate+3;     EMPNO ADDR_DATA                      START_DAT END_DATE---------- ------------------------------ --------- ---------      1234 789 1st Ave                    04-JUL-13ops$tkyte%ORA12CR1> @planops$tkyte%ORA12CR1> select * from table(dbms_xplan.display_cursor);PLAN_TABLE_OUTPUT-------------------------------------------------------------------------------SQL_ID  36bq7shnhc888, child number 0-------------------------------------select * from addresses as of period for valid sysdate+3Plan hash value: 3184888728-------------------------------------------------------------------------------| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |-------------------------------------------------------------------------------|   0 | SELECT STATEMENT  |           |       |       |     3 (100)|          ||*  1 |  TABLE ACCESS FULL| ADDRESSES |     1 |    48 |     3   (0)| 00:00:01 |-------------------------------------------------------------------------------Predicate Information (identified by operation id):---------------------------------------------------   1 - filter((("T"."START_DATE" IS NULL OR              "T"."START_DATE"<=SYSDATE@!+3) AND ("T"."END_DATE" IS NULL OR              "T"."END_DATE">SYSDATE@!+3)))Note-----   - dynamic statistics used: dynamic sampling (level=2)24 rows selected.All in all a nice, easy way to query effective dated information as of a point in time without a complex where clause.  You need to maintain the data - it isn't that a delete will turn into an update the end dates a record or anything - but if you have tables with start/end dates, this will make it much easier to query them.

    Read the article

  • Oracle Database 12c New Features: Information Lifecycle Management (ILM) Storage Enhancements

    - by Liu Maclean
Among the new features of Oracle Database 12c are the Information Lifecycle Management (ILM) storage enhancements; at the heart of ILM is Automatic Data Placement (ADP). In addition, 12c introduces online datafile movement (Online Move Datafile): a datafile can now be relocated while it remains online and in use, without interrupting the application.

In the current release (12.1.0.1), Automatic Data Optimization (ADO) and Heat Map carry a number of restrictions. Most notably, a multitenant container database (CDB) does not support Automatic Data Optimization or Heat Map. Further:

- Row-level policies for ADO are not supported for Temporal Validity. Partition-level ADO and compression are supported if partitioned on the end-time columns.
- Row-level policies for ADO are not supported for in-database archiving. Partition-level ADO and compression are supported if partitioned on the ORA_ARCHIVE_STATE column.
- Custom policies (user-defined functions) for ADO are not supported if the policies default at the tablespace level.
- ADO does not perform checks for storage space in a target tablespace when using storage tiering.
- ADO is not supported on tables with object types or materialized views.
- ADO concurrency (the number of simultaneous policy jobs for ADO) depends on the concurrency of the Oracle scheduler. If a policy job for ADO fails more than two times, then the job is marked disabled and the job must be manually enabled later.
- Policies for ADO are only run in the Oracle Scheduler maintenance windows. Outside of the maintenance windows all policies are stopped. The only exceptions are those jobs for rebuilding indexes in ADO offline mode.
- ADO has restrictions related to moving tables and table partitions.

ADO policies can be specified at the row or segment level and are added with CREATE TABLE or ALTER TABLE. Once defined, the database evaluates and executes them automatically, with no manual intervention: data can be compressed in place or moved to a different storage tier, and a policy's scope can be a segment, a row, or a group. The ILM clause of CREATE TABLE and ALTER TABLE declares the ADO policy, and policies can be inherited by dependent objects. For example, the following creates a table with an ADO policy; the same policy could equally be added to an existing table with ALTER TABLE:

CREATE TABLE sales_ado
(PROD_ID NUMBER NOT NULL,
 CUST_ID NUMBER NOT NULL,
 TIME_ID DATE NOT NULL,
 CHANNEL_ID NUMBER NOT NULL,
 PROMO_ID NUMBER NOT NULL,
 QUANTITY_SOLD NUMBER(10,2) NOT NULL,
 AMOUNT_SOLD NUMBER(10,2) NOT NULL
)
ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT
AFTER 6 MONTHS OF NO ACCESS;

SQL> SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled
  2  FROM USER_ILMPOLICIES;

POLICY_NAME          POLICY_TYPE                ENABLED
-------------------- -------------------------- --------------
P41                  DATA MOVEMENT              YES

ALTER TABLE sales MODIFY PARTITION sales_1995
ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT
AFTER 6 MONTHS OF NO ACCESS;

SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled
FROM USER_ILMPOLICIES;

POLICY_NAME              POLICY_TYPE   ENABLE
------------------------ ------------- ------
P1                       DATA MOVEMENT YES
P2                       DATA MOVEMENT YES

/* You can disable an ADO policy with the following */
ALTER TABLE sales_ado ILM DISABLE POLICY P1;
/* You can delete an ADO policy with the following */
ALTER TABLE sales_ado ILM DELETE POLICY P1;
/* You can disable all ADO policies with the following */
ALTER TABLE sales_ado ILM DISABLE_ALL;
/* You can delete all ADO policies with the following */
ALTER TABLE sales_ado ILM DELETE_ALL;
/* You can disable an ADO policy in a partition with the following */
ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DISABLE POLICY P2;
/* You can delete an ADO policy in a partition with the following */
ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DELETE POLICY P2;

For ILM and ADP to work, the database needs activity tracking. Activity tracking comes in two granularities:

- SEGMENT-LEVEL tracking records access at the segment level.
- ROW-LEVEL tracking records modification times at the row level.

Examples:

1. Enable segment-level activity tracking:

ALTER TABLE interval_sales ILM ENABLE ACTIVITY TRACKING SEGMENT ACCESS

This enables segment-level activity tracking on the INTERVAL_SALES table.

2. Track creation time and write time:

ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING (CREATE TIME , WRITE TIME);

3. Track read time:

ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING (READ TIME);

In 12.1.0.1 the HEAT_MAP initialization parameter controls Heat Map tracking; it can be set at the system or the session level. To enable Heat Map:

ALTER SYSTEM SET HEAT_MAP = ON;

With Heat Map enabled, access to all segments is tracked - except for objects in the SYSTEM and SYSAUX tablespaces, which are excluded. To disable it again:

ALTER SYSTEM SET HEAT_MAP = OFF;

The HEAT_MAP parameter is also what drives Automatic Data Optimization (ADO): for ADO to work, Heat Map must be enabled.

The view V$HEAT_MAP_SEGMENT shows real-time Heat Map information:

SQL> select * from V$heat_map_segment;

no rows selected

SQL> alter session set heat_map=on;

Session altered.

SQL> select * from scott.emp;

EMPNO ENAME  JOB       MGR  HIREDATE   SAL  COMM DEPTNO
----- ------ --------- ---- --------- ----- ----- ------
 7369 SMITH  CLERK     7902 17-DEC-80   800          20
 7499 ALLEN  SALESMAN  7698 20-FEB-81  1600   300    30
 7521 WARD   SALESMAN  7698 22-FEB-81  1250   500    30
 7566 JONES  MANAGER   7839 02-APR-81  2975          20
 7654 MARTIN SALESMAN  7698 28-SEP-81  1250  1400    30
 7698 BLAKE  MANAGER   7839 01-MAY-81  2850          30
 7782 CLARK  MANAGER   7839 09-JUN-81  2450          10
 7788 SCOTT  ANALYST   7566 19-APR-87  3000          20
 7839 KING   PRESIDENT      17-NOV-81  5000          10
 7844 TURNER SALESMAN  7698 08-SEP-81  1500     0    30
 7876 ADAMS  CLERK     7788 23-MAY-87  1100          20
 7900 JAMES  CLERK     7698 03-DEC-81   950          30
 7902 FORD   ANALYST   7566 03-DEC-81  3000          20
 7934 MILLER CLERK     7782 23-JAN-82  1300          10

14 rows selected.

SQL> select * from v$heat_map_segment;

OBJECT_NAME  SUBOBJECT_NAME  OBJ#   DATAOBJ#  TRACK_TIM  SEG SEG FUL LOO CON_ID
------------ --------------- ------ --------- ---------- --- --- --- --- ------
EMP                          92997  92997     23-JUL-13  NO  NO  YES NO  0

V$HEAT_MAP_SEGMENT is built on the fixed table X$HEATMAPSEGMENT. Per the documentation, V$HEAT_MAP_SEGMENT displays real-time segment access information:

- OBJECT_NAME VARCHAR2(128): Name of the object
- SUBOBJECT_NAME VARCHAR2(128): Name of the subobject
- OBJ# NUMBER: Object number
- DATAOBJ# NUMBER: Data object number
- TRACK_TIME DATE: Timestamp of current activity tracking
- SEGMENT_WRITE VARCHAR2(3): Indicates whether the segment has write access (YES or NO)
- SEGMENT_READ VARCHAR2(3): Indicates whether the segment has read access (YES or NO)
- FULL_SCAN VARCHAR2(3): Indicates whether the segment has full table scan (YES or NO)
- LOOKUP_SCAN VARCHAR2(3): Indicates whether the segment has lookup scan (YES or NO)
- CON_ID NUMBER: The ID of the container to which the data pertains. Possible values: 0 (rows pertaining to the entire CDB, or to non-CDBs), 1 (rows pertaining to only the root), n (the applicable container ID for the rows containing data). The Heat Map feature is not supported in CDBs in Oracle Database 12c, so the value in this column can be ignored.

Historical Heat Map information is exposed through DBA_HEAT_MAP_SEGMENT and is persisted in the HEAT_MAP_STAT$ base table.

A walkthrough of Automatic Data Optimization follows.

Use case 1:

SQL> alter system set heat_map=on;

System altered.

First build the demo environment and the SCOTT schema (http://www.askmaclean.com/archives/scott-schema-script.html):

SQL> grant all on dbms_lock to scott;

Grant succeeded.

SQL> grant dba to scott;

Grant succeeded.

@ilm_setup_basic C:\APP\XIANGBLI\ORADATA\MACLEAN\ilm.dbf
@tktgilm_demo_env_setup

SQL> connect scott/tiger ;
Connected.

SQL> select count(*) from scott.employee;

  COUNT(*)
----------
      3072

1 row selected.

SQL> set serveroutput on
SQL> exec print_compression_stats('SCOTT','EMPLOYEE');

Compression Stats
------------------
Uncmpressed : 3072
Adv/basic compressed : 0
Others : 0

PL/SQL procedure successfully completed.

The table holds 3072 uncompressed rows. Now add a row-level ADO policy: compress rows with advanced compression after 3 days of no modification:

alter table employee ilm add policy
row store compress advanced row
after 3 days of no modification
/

SQL> set serveroutput on
SQL> execute list_ilm_policies;
--------------------------------------------------
Policies defined for SCOTT
--------------------------------------------------
Object Name------ : EMPLOYEE
Subobject Name--- :
Object Type------ : TABLE
Inherited from--- : POLICY NOT INHERITED
Policy Name------ : P1
Action Type------ : COMPRESSION
Scope------------ : ROW
Compression level : ADVANCED
Tier Tablespace-- :
Condition type--- : LAST MODIFICATION TIME
Condition days--- : 3
Enabled---------- : YES
--------------------------------------------------
PL/SQL procedure successfully completed.

SQL> select sysdate from dual;

SYSDATE
--------------
29-JUL-13

SQL> execute set_back_chktime(get_policy_name('EMPLOYEE',null,'COMPRESSION','ROW','ADVANCED',3,null,null),'EMPLOYEE',null,6);

Object check time reset ...
--------------------------------------
Object Name : EMPLOYEE
Object Number : 93123
D.Object Numbr : 93123
Policy Number : 1
Object chktime : 23-JUL-13 08.13.42.000000 AM
Distnt chktime : 0
--------------------------------------
PL/SQL procedure successfully completed.

set_back_chktime moves the policy check time back 6 days: it simulates the passage of time for the demo, so we do not have to wait for the condition to actually age.

Flush the caches:

alter system flush buffer_cache;
alter system flush buffer_cache;
alter system flush shared_pool;
alter system flush shared_pool;

SQL> execute set_window('MONDAY_WINDOW','OPEN');

Set Maint. Window OPEN
-----------------------------
Window Name : MONDAY_WINDOW
Enabled? : TRUE
Active? : TRUE
-----------------------------
PL/SQL procedure successfully completed.

SQL> exec dbms_lock.sleep(60) ;

PL/SQL procedure successfully completed.

SQL> exec print_compression_stats('SCOTT', 'EMPLOYEE');

Compression Stats
------------------
Uncmpressed : 338
Adv/basic compressed : 2734
Others : 0

PL/SQL procedure successfully completed.

Once the maintenance window has run, the policy has compressed 2734 rows.

SQL> col object_name for a20
SQL> select object_id,object_name from dba_objects where object_name='EMPLOYEE';

 OBJECT_ID OBJECT_NAME
---------- --------------------
     93123 EMPLOYEE

SQL> execute list_ilm_policy_executions ;
--------------------------------------------------
Policies execution details for SCOTT
--------------------------------------------------
Policy Name------ : P22
Job Name--------- : ILMJOB48
Start time------- : 29-JUL-13 08.37.45.061000 AM
End time--------- : 29-JUL-13 08.37.48.629000 AM
-----------------
Object Name------ : EMPLOYEE
Sub_obj Name----- :
Obj Type--------- : TABLE
-----------------
Exec-state------- : SELECTED FOR EXECUTION
Job state-------- : COMPLETED SUCCESSFULLY
Exec comments---- :
Results comments- : ---
--------------------------------------------------
PL/SQL procedure successfully completed.

ILMJOB48 is the job that executed the policy. In 12.1.0.1 the background evaluation is carried out by MMON slave processes (M00x), sampled roughly every 15 minutes, as the ASH data shows:

select sample_time,program,module,action
from v$active_session_history
where action ='KDILM background EXEcution'
order by sample_time;

29-JUL-13 08.16.38.369000000 AM  ORACLE.EXE (M000)  MMON_SLAVE  KDILM background EXEcution
29-JUL-13 08.17.38.388000000 AM  ORACLE.EXE (M000)  MMON_SLAVE  KDILM background EXEcution
29-JUL-13 08.17.39.390000000 AM  ORACLE.EXE (M000)  MMON_SLAVE  KDILM background EXEcution
29-JUL-13 08.23.38.681000000 AM  ORACLE.EXE (M002)  MMON_SLAVE  KDILM background EXEcution
29-JUL-13 08.32.38.968000000 AM  ORACLE.EXE (M000)  MMON_SLAVE  KDILM background EXEcution
29-JUL-13 08.33.39.993000000 AM  ORACLE.EXE (M003)  MMON_SLAVE  KDILM background EXEcution
29-JUL-13 08.33.40.993000000 AM  ORACLE.EXE (M003)  MMON_SLAVE  KDILM background EXEcution
29-JUL-13 08.36.40.066000000 AM  ORACLE.EXE (M000)  MMON_SLAVE  KDILM background EXEcution
29-JUL-13 08.37.42.258000000 AM  ORACLE.EXE (M000)  MMON_SLAVE  KDILM background EXEcution
29-JUL-13 08.37.43.258000000 AM  ORACLE.EXE (M000)  MMON_SLAVE  KDILM background EXEcution
29-JUL-13 08.37.44.258000000 AM  ORACLE.EXE (M000)  MMON_SLAVE  KDILM background EXEcution
29-JUL-13 08.38.42.386000000 AM  ORACLE.EXE (M001)  MMON_SLAVE  KDILM background EXEcution

select distinct action from v$active_session_history where action like 'KDILM%'

KDILM background CLeaNup
KDILM background EXEcution

SQL> execute set_window('MONDAY_WINDOW','CLOSE');

Set Maint. Window CLOSE
-----------------------------
Window Name : MONDAY_WINDOW
Enabled? : TRUE
Active? : FALSE
-----------------------------
PL/SQL procedure successfully completed.

SQL> drop table employee purge ;

Table dropped.

Finally, clean up the demo environment:

spool ilm_usecase_1_cleanup.lst
@ilm_demo_cleanup ;
spool off
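Beyond compression, the same ILM clause can express storage tiering. As a minimal sketch - the tablespace name low_cost_ts is hypothetical, and the syntax should be checked against the 12c documentation:

ALTER TABLE sales_ado ILM ADD POLICY
  TIER TO low_cost_ts;          -- low_cost_ts: hypothetical low-cost target tablespace

SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled
FROM USER_ILMPOLICIES;          -- confirm the new tiering policy was registered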

    Read the article

  • Using VLOOKUP in Excel

    - by Mark Virtue
VLOOKUP is one of Excel's most useful functions, and it's also one of the least understood. In this article, we demystify VLOOKUP by way of a real-life example. We'll create a usable Invoice Template for a fictitious company.

So what is VLOOKUP? Well, of course it's an Excel function. This article will assume that the reader already has a passing understanding of Excel functions, and can use basic functions such as SUM, AVERAGE, and TODAY. In its most common usage, VLOOKUP is a database function, meaning that it works with database tables - or more simply, lists of things in an Excel worksheet. What sort of things? Well, any sort of thing. You may have a worksheet that contains a list of employees, or products, or customers, or CDs in your CD collection, or stars in the night sky. It doesn't really matter.

Here's an example of a list, or database. In this case it's a list of products that our fictitious company sells. Usually lists like this have some sort of unique identifier for each item in the list. In this case, the unique identifier is in the "Item Code" column.

Note: For the VLOOKUP function to work with a database/list, that list must have a column containing the unique identifier (or "key", or "ID"), and that column must be the first column in the table. Our sample database above satisfies this criterion.

The hardest part of using VLOOKUP is understanding exactly what it's for. So let's see if we can get that clear first: VLOOKUP retrieves information from a database/list based on a supplied instance of the unique identifier.

Put another way, if you put the VLOOKUP function into a cell and pass it one of the unique identifiers from your database, it will return you one of the pieces of information associated with that unique identifier. In the example above, you would pass VLOOKUP an item code, and it would return to you either the corresponding item's description, its price, or its availability (its "In stock" quantity). Which of these pieces of information will it pass you back? Well, you get to decide this when you're creating the formula.

If all you need is one piece of information from the database, it would be a lot of trouble to go to just to construct a formula with a VLOOKUP function in it. Typically you would use this sort of functionality in a reusable spreadsheet, such as a template. Each time someone enters a valid item code, the system would retrieve all the necessary information about the corresponding item.

Let's create an example of this: an Invoice Template that we can reuse over and over in our fictitious company.

First we start Excel, and we create ourselves a blank invoice. This is how it's going to work: the person using the invoice template will fill in a series of item codes in column "A", and the system will retrieve each item's description and price, which will be used to calculate the line total for each item (assuming we enter a valid quantity).

For the purposes of keeping this example simple, we will locate the product database on a separate sheet in the same workbook. In reality, it's more likely that the product database would be located in a separate workbook. It makes little difference to the VLOOKUP function, which doesn't really care if the database is located on the same sheet, a different sheet, or a completely different workbook.
In order to test the VLOOKUP formula we're about to write, we first enter a valid item code into cell A11.

Next, we move the active cell to the cell in which we want information retrieved from the database by VLOOKUP to be stored. Interestingly, this is the step that most people get wrong. To explain further: we are about to create a VLOOKUP formula that will retrieve the description that corresponds to the item code in cell A11. Where do we want this description put when we get it? In cell B11, of course. So that's where we write the VLOOKUP formula - in cell B11. Select cell B11.

We need to locate the list of all available functions that Excel has to offer, so that we can choose VLOOKUP and get some assistance in completing the formula. This is found by first clicking the Formulas tab, and then clicking Insert Function. A box appears that allows us to select any of the functions available in Excel. To find the one we're looking for, we could type a search term like "lookup" (because the function we're interested in is a lookup function). The system would return us a list of all lookup-related functions in Excel. VLOOKUP is the second one in the list. Select it and click OK.

The Function Arguments box appears, prompting us for all the arguments (or parameters) needed in order to complete the VLOOKUP function. You can think of this box as the function asking us the following questions:

What unique identifier are you looking up in the database?
Where is the database?
Which piece of information from the database, associated with the unique identifier, do you wish to have retrieved for you?

The first three arguments are shown in bold, indicating that they are mandatory arguments (the VLOOKUP function is incomplete without them and will not return a valid value). The fourth argument is not bold, meaning that it's optional. We will complete the arguments in order, top to bottom.

The first argument we need to complete is the Lookup_value argument. The function needs us to tell it where to find the unique identifier (the item code in this case) that it should be returning the description of. We must select the item code we entered earlier (in A11). Click on the selector icon to the right of the first argument, then click once on the cell containing the item code (A11), and press Enter. The value of "A11" is inserted into the first argument.

Now we need to enter a value for the Table_array argument. In other words, we need to tell VLOOKUP where to find the database/list. Click on the selector icon next to the second argument. Now locate the database/list and select the entire list - not including the header line. The database is located on a separate worksheet, so we first click on that worksheet tab, then select the entire database, not including the header line, and press Enter. The range of cells that represents the database (in this case "'Product Database'!A2:D7") is entered automatically for us into the second argument.

Now we need to enter the third argument, Col_index_num. We use this argument to specify to VLOOKUP which piece of information from the database, associated with our item code in A11, we wish to have returned to us. In this particular example, we wish to have the item's description returned to us. If you look on the database worksheet, you'll notice that the "Description" column is the second column in the database.
This means that we must enter a value of "2" into the Col_index_num box. It is important to note that we are not entering a "2" here because the "Description" column is in the B column on that worksheet. If the database happened to start in column K of the worksheet, we would still enter a "2" in this field.

Finally, we need to decide whether to enter a value into the final VLOOKUP argument, Range_lookup. This argument requires either a true or false value, or it should be left blank. When using VLOOKUP with databases (as is true 90% of the time), the way to decide what to put in this argument can be thought of as follows:

If the first column of the database (the column that contains the unique identifiers) is sorted alphabetically/numerically in ascending order, then it's possible to enter a value of true into this argument, or leave it blank.
If the first column of the database is not sorted, or it's sorted in descending order, then you must enter a value of false into this argument.

As the first column of our database is not sorted, we enter false into this argument.

That's it! We've entered all the information required for VLOOKUP to return the value we need. Click the OK button and notice that the description corresponding to item code "R99245" has been correctly entered into cell B11. If we enter a different item code into cell A11, we will begin to see the power of the VLOOKUP function: the description cell changes to match the new item code.

We can perform a similar set of steps to get the item's price returned into cell E11. Note that the new formula must be created in cell E11. Note that the only difference between the two formulae is that the third argument (Col_index_num) has changed from a "2" to a "3" (because we want data retrieved from the 3rd column in the database).

If we decided to buy 2 of these items, we would enter a "2" into cell D11. We would then enter a simple formula into cell F11 to get the line total:

=D11*E11

Completing the Invoice Template

We've learned a lot about VLOOKUP so far. In fact, we've learned all we're going to learn in this article. It's important to note that VLOOKUP can be used in other circumstances besides databases. This is less common, and may be covered in future How-To Geek articles.

Our invoice template is not yet complete. In order to complete it, we would do the following:

We would remove the sample item code from cell A11 and the "2" from cell D11. This will cause our newly created VLOOKUP formulae to display error messages. We can remedy this by judicious use of Excel's IF() and ISBLANK() functions. We change our formula from this...

      =VLOOKUP(A11,'Product Database'!A2:D7,2,FALSE)

...to this...

      =IF(ISBLANK(A11),"",VLOOKUP(A11,'Product Database'!A2:D7,2,FALSE))

We would copy the formulas in cells B11, E11 and F11 down to the remainder of the item rows of the invoice. Note that if we do this, the resulting formulas will no longer correctly refer to the database table. We could fix this by changing the cell references for the database to absolute cell references. Alternatively - and even better - we could create a range name for the entire product database (such as "Products"), and use this range name instead of the cell references.
The formula would change from this...

      =IF(ISBLANK(A11),"",VLOOKUP(A11,'Product Database'!A2:D7,2,FALSE))

...to this...

      =IF(ISBLANK(A11),"",VLOOKUP(A11,Products,2,FALSE))

...and then we would copy the formulas down to the rest of the invoice item rows.

We would probably "lock" the cells that contain our formulae (or rather unlock the other cells), and then protect the worksheet, in order to ensure that our carefully constructed formulae are not accidentally overwritten when someone comes to fill in the invoice.

We would save the file as a template, so that it could be reused by everyone in our company.

If we were feeling really clever, we would create a database of all our customers in another worksheet, and then use the customer ID entered in cell F5 to automatically fill in the customer's name and address in cells B6, B7 and B8.

If you would like to practice with VLOOKUP, or simply see our resulting Invoice Template, it can be downloaded from here.
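Following the same pattern, the completed price formula for cell E11 - combining the ISBLANK() guard with the "Products" range name described above - would look like this (a sketch, assuming the range name has been created):

      =IF(ISBLANK(A11),"",VLOOKUP(A11,Products,3,FALSE))

The only change from the description formula in B11 is the third argument: "3" rather than "2", because the price lives in the third column of the database.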

    Read the article

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter's #sqlhelp hash tag: "Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn't referenced in the query?"

Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven't asked for. In general, that's quite correct; however there are cases where SQL Server might sneakily retrieve a LOB column...

Example Table

Here's a T-SQL script to create that table and populate it with 1,000 rows:

CREATE TABLE dbo.LOBtest
(
    pk INTEGER IDENTITY NOT NULL,
    some_value INTEGER NULL,
    lob_data VARCHAR(MAX) NULL,
    another_column CHAR(5) NULL,
    CONSTRAINT [PK dbo.LOBtest pk]
        PRIMARY KEY CLUSTERED (pk ASC)
);
GO
DECLARE @Data VARCHAR(MAX);
SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);

WITH Numbers (n) AS
(
    SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0))
    FROM master.sys.columns C1, master.sys.columns C2
)
INSERT LOBtest WITH (TABLOCKX)
(
    some_value,
    lob_data
)
SELECT TOP (1000)
    N.n,
    @Data
FROM Numbers N
WHERE N.n <= 1000;

Test 1: A Simple Update

Let's run a query to subtract one from every value in the some_value column:

UPDATE dbo.LOBtest WITH (TABLOCKX)
SET some_value = some_value - 1;

As you might expect, modifying this integer column in 1,000 rows doesn't take very long, or use many resources. The STATISTICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time. The query plan is also very simple. Looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan.

The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed. The some_value column is used by the Compute Scalar to calculate the new value. (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT.)

Test 2: Simple Update with an Index

Now let's create a nonclustered index keyed on the some_value column, with lob_data as an included column:

CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)]
ON dbo.LOBtest (some_value)
INCLUDE
(
    lob_data
)
WITH
(
    FILLFACTOR = 100,
    MAXDOP = 1,
    SORT_IN_TEMPDB = ON
);

This is not a useful index for our simple update query; imagine that someone else created it for a different purpose. Let's run our update query again:

UPDATE dbo.LOBtest WITH (TABLOCKX)
SET some_value = some_value - 1;

We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms. The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index.

The query plan is very similar to before (click to enlarge). The Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update. SQL Server may be able to skip maintaining the nonclustered index if the value hasn't changed (see my previous post on non-updating updates for details). Our simple query does change the value of some_value in every row, so this optimization doesn't add any value in this specific case.

The output list of columns from the Clustered Index Scan hasn't changed from the one shown previously: SQL Server still just reads the pk and some_value columns. Cool.
Overall then, adding the nonclustered index hasn't had any startling effects, and the LOB column data still isn't being read from the table. Let's see what happens if we make the nonclustered index unique.

Test 3: Simple Update with a Unique Index

Here's the script to create a new unique index, and drop the old one:

CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)]
ON dbo.LOBtest (some_value)
INCLUDE
(
    lob_data
)
WITH
(
    FILLFACTOR = 100,
    MAXDOP = 1,
    SORT_IN_TEMPDB = ON
);
GO
DROP INDEX [IX dbo.LOBtest some_value (lob_data)]
ON dbo.LOBtest;

Remember that SQL Server only enforces uniqueness on index keys (the some_value column). The lob_data column is simply stored at the leaf-level of the non-clustered index. With that in mind, we might expect this change to make very little difference. Let's see:

UPDATE dbo.LOBtest WITH (TABLOCKX)
SET some_value = some_value - 1;

Whoa! Now look at the elapsed time and logical reads:

Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0,
lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.

CPU time = 172 ms, elapsed time = 16172 ms.

Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column?

The Query Plan

The query plan for test 3 looks a bit more complex than before. In fact, the bottom level is exactly the same as we saw with the non-unique index. The top level has heaps of new stuff though, which I'll come to in a moment.

You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason). After all, we need to explain where all the LOB logical reads are coming from. Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns - so what's doing the LOB reads?

Updates that Sneakily Read Data

We have to go as far as the Clustered Index Update operator before we see LOB data in the output list. [Expr1020] is a bit flag added by an earlier Compute Scalar. It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier).

The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD. The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column. At this point, the clustered index has already been updated with the new value, but we haven't touched the nonclustered index yet.

An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation. SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update. The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then. Sneaky!

Why the LOB Data Is Needed

This is all very interesting (I hope), but why is SQL Server reading the LOB data? For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3.
I’ll reproduce it here for convenience: Notice that this is a wide (per-index) update plan.  SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too.  I’ll talk more about this difference shortly. The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient.  It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index. Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3.  If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4.  The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4. By applying net changes, SQL Server can also avoid false unique-key violations.  If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3 of course) and the query would fail.  You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down.  That’s fine, but it’s not possible to generalize this logic to work with every possible update query. SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations.  It’s worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits.  As a result, you will often receive a wide update plan, even when you can see that no violations are possible. Another benefit of this optimization is that it includes a sort on the index key as part of its work.  Processing the index changes in index key order promotes sequential I/O against the nonclustered index. A side-effect of all this is that the net changes might include one or more inserts.  In order to insert a new row in the index, SQL Server obviously needs all the columns – the key column and the included LOB column.  This is the reason SQL Server reads the LOB data as part of the Clustered Index Update. In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs).  In order to generate the correct index key delete operation, it needs the old key value. The irony is that in this case the Split/Sort/Collapse optimization is anything but.  Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it. Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates. Beating the Set-Based Update with a Cursor One situation where SQL Server can see that false unique-key violations aren’t possible is where it can guarantee that only one row is being updated.  
Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data: SET NOCOUNT ON; SET STATISTICS XML, IO, TIME OFF;   DECLARE @PK INTEGER, @StartTime DATETIME; SET @StartTime = GETUTCDATE();   DECLARE curUpdate CURSOR LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS FOR SELECT L.pk FROM LOBtest L ORDER BY L.pk ASC;   OPEN curUpdate;   WHILE (1 = 1) BEGIN FETCH NEXT FROM curUpdate INTO @PK;   IF @@FETCH_STATUS = -1 BREAK; IF @@FETCH_STATUS = -2 CONTINUE;   UPDATE dbo.LOBtest SET some_value = some_value - 1 WHERE CURRENT OF curUpdate; END;   CLOSE curUpdate; DEALLOCATE curUpdate;   SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE()); That completes the update in 1280 milliseconds (remember test 3 took over 16 seconds!) I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it.  One could just as well use a WHERE clause that specified the primary key value instead. Clustered Indexes A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index.  Let’s re-create the test table and data with an updatable primary key, and without any non-clustered indexes: IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL DROP TABLE dbo.LOBtest; GO CREATE TABLE dbo.LOBtest ( pk INTEGER NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( pk, some_value, lob_data ) SELECT TOP (1000) N.n, N.n, @Data FROM Numbers N WHERE N.n <= 1000; Now here’s a query to modify the cluster keys: UPDATE dbo.LOBtest SET pk = pk + 1; The query plan is: As you can see, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection.  In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan: The performance is not great, as you might expect (even though there is no non-clustered index to maintain): Table 'LOBtest'. Scan count 1, logical reads 2011, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   Table 'Worktable'. Scan count 1, logical reads 2040, physical reads 0, read-ahead reads 0, lob logical reads 34000, lob physical reads 0, lob read-ahead reads 8000.   SQL Server Execution Times: CPU time = 483 ms, elapsed time = 17884 ms. Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool. If you try the same test with a non-unique clustered index (rather than a primary key), you’ll get a much more efficient plan that just passes the cluster key (including uniqueifier) around (no LOB data or other non-key columns): A unique non-clustered index (on a heap) works well too: Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads.  (In fact the heap is rather more efficient). There are lots more fun combinations to try that I don’t have space for here. Final Thoughts The behaviour shown in this post is not limited to LOB data by any means.  
If the conditions are met, any unique index that has included columns can produce similar behaviour – something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps. Paul White Email: [email protected] Twitter: @PaulWhiteNZ
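As a rough sketch of the WHILE-loop alternative mentioned above - updating one row at a time by primary key, so SQL Server can see that no false unique-key violations are possible (illustrative only; test against your own schema before drawing conclusions):

DECLARE @PK INTEGER;

-- Walk the primary key in ascending order, one single-row UPDATE per iteration
SELECT @PK = MIN(L.pk) FROM dbo.LOBtest L;

WHILE @PK IS NOT NULL
BEGIN
    UPDATE dbo.LOBtest
    SET some_value = some_value - 1
    WHERE pk = @PK;

    SELECT @PK = MIN(L.pk) FROM dbo.LOBtest L WHERE L.pk > @PK;
END;

Because each UPDATE is guaranteed to touch exactly one row, the optimizer can use a narrow per-row plan and skip the Split/Sort/Collapse machinery - and with it, the LOB reads.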

    Read the article

  • Mouse Clicks, Reactive Extensions and StreamInsight Mashup

I had an hour spare this afternoon so I wanted to have another play with Reactive Extensions in .Net and StreamInsight. I also didn't want to simply use a console window as a way of gathering events, so I decided to use a windows form instead.

The task I set myself was this. Whenever I click on my form I want to subscribe to the event and output its location to the console window, and also the timestamp of the event. In addition to this I want to know, for every mouse click I do, how many mouse clicks have happened in the last 5 seconds.

The second point here is really interesting. I have often found this when working with people on problems: it is how you ask the question that determines how you tackle the problem. I will show 2 ways of possibly answering the second question, depending on how the question was interpreted. As a side effect of this example I will show how time in StreamInsight can stand still. This is an important concept and we can see it in the output later.

Now to the code. I will break it all down in this blogpost but you can download the solution and see it all together.

I created a Console application and then instantiate a windows form.

frm = new Form();
Thread g = new Thread(CallUI);
g.SetApartmentState(ApartmentState.STA);
g.Start();

CallUI looks like this:

static void CallUI()
{
    System.Windows.Forms.Application.Run(frm);
    frm.Activate();
    frm.BringToFront();
}

Now what we need to do is create an observable from the MouseClick event on the form. For this we use the Reactive Extensions.

var lblevt = Observable.FromEvent<MouseEventArgs>(frm, "MouseClick").Timestamp();

As mentioned earlier, I have two objectives in this example, and to solve the first I am going to again use the Reactive Extensions. Let's subscribe to the MouseClick event and output the location and timestamp to the console.

lblevt.Subscribe(evt =>
{
    Console.WriteLine("Clicked: {0}, {1} ", evt.Value.EventArgs.Location, evt.Timestamp);
});

That should take care of objective #1, but what about the second objective? For that we need some temporal windowing, and this means StreamInsight. First we need to turn our observable collection of MouseClick events into a PointStream:

Server s = Server.Create("Default");
Microsoft.ComplexEventProcessing.Application a = s.CreateApplication("MouseClicks");

var input = lblevt.ToPointStream(
    a,
    evt => PointEvent.CreateInsert(
        evt.Timestamp,
        new
        {
            loc = evt.Value.EventArgs.Location.ToString(),
            ts = evt.Timestamp.ToLocalTime().ToString()
        }),
    AdvanceTimeSettings.IncreasingStartTime);

Now that we have created our PointStream we need to do something with it, and this is where we get to our second objective. It is pretty clear that we want some kind of windowing, but what? Here is one way of doing it. It might not be what you wanted, but again it depends on how the second objective is interpreted.

var q = from i in input.TumblingWindow(
            TimeSpan.FromSeconds(5),
            HoppingWindowOutputPolicy.ClipToWindowEnd)
        select new { CountOfClicks = i.Count() };

The above code creates tumbling windows of 5 seconds and counts the number of events in the windows. If there are no events in the window then no result is output. Likewise, until an event (MouseClick) is issued then we do not see anything in the output (that is not strictly true, because it is the CTI strapped to our MouseClick events that flushes the events through the StreamInsight engine, not the events themselves). This approach is centred around the windows and not the events. Until the windows complete and a CTI is issued, no events are pushed through.

An alternate way of answering our second question is below.

var q = from i in input.AlterEventDuration(evt => TimeSpan.FromSeconds(5))
                       .SnapshotWindow(SnapshotWindowOutputPolicy.Clip)
        select new { CountOfClicks = i.Count() };

In this code we extend the duration of each MouseClick to five seconds. We then create Snapshot Windows over those events. Snapshot windows are discussed in detail here. With this solution we are centred around the events: it is the events that are driving the output. Let's have a look at the output from this solution, as it may be a little confusing.

First though, let me show how we get the output from StreamInsight into the console window:

foreach (var x in q.ToPointEnumerable().Where(e => e.EventKind != EventKind.Cti))
{
    Console.WriteLine(x.Payload.CountOfClicks);
}

OK, so now to the output. The clicks I issued and the counts they produced were as follows (the original post showed this as two tables: one of clicks and counts, and a timeline diagram marking each click, the five-second stretched duration of each event, and the count at each snapshot boundary):

Clicked 22:40:16  (no output yet)
Clicked 22:40:18  ->  1
Clicked 22:40:20  ->  2
Clicked 22:40:22  ->  3, 2
Clicked 22:40:24  ->  3, 2
Clicked 22:40:32  ->  3, 2, 1

One of the things that will help to explain this is that, for our PointStream, we set the issuing of CTIs to IncreasingStartTime. What this means is that the CTI is placed right at the start of the event, so it will not flush the event with which it was issued, but will flush those prior to it.

What we can see here in the output is that the counts include all the end edges that have occurred between the mouse clicks. If we look specifically at the mouse click at 22:40:32, we see that 3 events are returned to us. These include the following:

End Edge count at 22:40:25
End Edge count at 22:40:27
End Edge count at 22:40:29

Another thing we notice is that until we actually issue a CTI at 22:40:32, those last 3 snapshot window counts would never be reported.

Hopefully this has helped to explain a few concepts around StreamInsight and the IObservable() pattern. You can download this solution from here and play. You will need the Reactive Framework from here and StreamInsight 1.1.
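If you only need the tumbling-window flavour of the answer and don't want to involve StreamInsight at all, plain Rx can approximate it. A sketch (operator names vary between Rx releases; in older builds Buffer(TimeSpan) was called BufferWithTime):

var clicks = Observable.FromEvent<MouseEventArgs>(frm, "MouseClick");

clicks.Buffer(TimeSpan.FromSeconds(5))          // tumbling 5-second windows
      .Where(window => window.Count > 0)        // mimic "no events, no output"
      .Subscribe(window =>
          Console.WriteLine("Clicks in the last 5 seconds: {0}", window.Count));

What you lose, of course, is the application-time semantics: these Rx windows are driven by the subscriber's wall clock, whereas the StreamInsight queries above are driven by the CTIs stamped on the events themselves.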

    Read the article

  • Active and Passive Federation in WIF

    - by user102533
I am trying to understand the difference between Active and Passive federation in WIF. It appears that one would use Active federation if the Relying Party (RP) is a WCF service, and Passive federation if the RP is an ASP.NET application. Is this accurate? So, in a scenario in which an ASP.NET application uses a WCF service in the backend, the MS articles suggest using a 'bootstrap' security token that is obtained by the ASP.NET app using an ActAs STS, and this token is used to authenticate with the WCF service. In this scenario, it appears that we are doing a combination of Passive (user - STS - ASP.NET RP) and Active (ASP.NET - ActAs STS - WCF) federation?
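For reference, the passive (WS-Federation) leg in WIF is typically wired up in web.config rather than in code. A minimal sketch - the issuer and realm URLs are placeholders, and the surrounding configSections declarations are omitted:

<microsoft.identityModel>
  <service>
    <federatedAuthentication>
      <wsFederation passiveRedirectEnabled="true"
                    issuer="https://sts.example.com/FederationPassive/"
                    realm="https://rp.example.com/" />
      <cookieHandler requireSsl="true" />
    </federatedAuthentication>
  </service>
</microsoft.identityModel>

On the active (WCF) leg, the equivalent plumbing is a ws2007FederationHttpBinding on the service, with the ActAs token requested through WIF's WSTrustChannelFactory.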

    Read the article

  • What is the difference between DataTemplate and DataContext in WPF?

    - by Ashish Ashu
I can set the relationship between the View Model and the View through the following DataContext syntax:

<UserControl.DataContext>
    <view_model:MainMenuModel />
</UserControl.DataContext>

And I can also set the relationship between the View Model and the View through the following DataTemplate syntax:

<DataTemplate DataType="{x:Type viewModel:UserViewModel}">
    <view:UserView />
</DataTemplate>

Please let me know what the difference is between the two. Does the second XAML snippet not set the DataContext of the view?
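To the question itself: the typed DataTemplate does set the DataContext, implicitly. When WPF instantiates a template for a piece of content, that content object becomes the DataContext of the template's visual tree. A sketch showing how the two pieces meet (CurrentViewModel is an assumed property on the parent view model):

<!-- In a resource dictionary: map the VM type to its view -->
<DataTemplate DataType="{x:Type viewModel:UserViewModel}">
    <view:UserView />
</DataTemplate>

<!-- Somewhere in the UI: WPF picks the template based on the runtime type
     of CurrentViewModel, and UserView.DataContext becomes that instance -->
<ContentControl Content="{Binding CurrentViewModel}" />

The <UserControl.DataContext> form hard-codes one view-model instance into one view; the DataTemplate form is a type-to-view mapping that WPF resolves dynamically at runtime.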

    Read the article

  • "There were build errors. Would you like to continue and run the last successful build?" not showing

    - by Kevin Wilson
Hi, Not a massive problem but something that has been bugging the life out of me... One of my colleagues was trying out some code on my machine and got the "There were build errors. Would you like to continue and run the last successful build?" pop-up when the build failed in Visual Studio. He clicked the "don't show this again" checkbox and closed the dialogue. My problem is that I can't get the dialogue to show up again.

I found these instructions online: "Select Tools, Options, Projects and Solutions, Build and Run. Then set the option 'On run, when build or deployment errors occur' to Prompt to Launch." but that doesn't work. Resetting the IDE settings to default doesn't make any difference either. Is there any way to get this dialogue back, or has it gone forever?

Thanks, K
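One blunt instrument, if the Options setting really won't take: reset the environment from the command line and reconfigure afterwards (this discards all customizations, so export them first via Tools > Import and Export Settings):

devenv.exe /ResetSettings

Alternatively, exporting the current settings to a .vssettings file, editing it, and re-importing can restore a single suppressed prompt - though which entry in the file controls this particular dialog is something you would have to hunt for; the setting name is not documented.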

    Read the article

  • Why does calling setScaleX during pinch zoom gesture cause flicker?

    - by numan
I am trying to create a zoomable container and I am targeting API 14+. In my onScale (I am using the ScaleGestureDetector to detect pinch-zoom) I am doing something like this:

public boolean onScale(ScaleGestureDetector detector) {
    float scaleFactor = detector.getScaleFactor();
    setScaleX(getScaleX() * scaleFactor);
    setScaleY(getScaleY() * scaleFactor);
    return true;
}

It works, but the zoom is not smooth; in fact it noticeably flickers. I also tried it with a hardware layer, thinking that the scaling would happen on the GPU once the texture was uploaded and thus would be super fast. But it made no difference - the zoom is not smooth and flickers weirdly sometimes. What am I doing wrong?
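One pattern that often helps (a sketch, not a guaranteed fix): scale around the gesture's focal point and hold the hardware layer only for the duration of the gesture, so the layer texture is captured once rather than re-uploaded on every frame. All the APIs used below exist from API 11 onwards, inside the custom container View:

private final ScaleGestureDetector.SimpleOnScaleGestureListener scaleListener =
        new ScaleGestureDetector.SimpleOnScaleGestureListener() {

    @Override
    public boolean onScaleBegin(ScaleGestureDetector detector) {
        setLayerType(View.LAYER_TYPE_HARDWARE, null);   // cache once, scale on GPU
        return true;
    }

    @Override
    public boolean onScale(ScaleGestureDetector detector) {
        float factor = detector.getScaleFactor();
        setPivotX(detector.getFocusX());                // zoom around the fingers
        setPivotY(detector.getFocusY());
        setScaleX(getScaleX() * factor);
        setScaleY(getScaleY() * factor);
        return true;
    }

    @Override
    public void onScaleEnd(ScaleGestureDetector detector) {
        setLayerType(View.LAYER_TYPE_NONE, null);       // release the layer
    }
};

Note that if the container's children invalidate during the gesture, the hardware layer is rebuilt each frame and can make things worse rather than better - which may well be the flicker described here.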

    Read the article

  • Why won't OpenCV compile in NVCC?

    - by zenna
Hi there, I am trying to integrate CUDA and OpenCV in a project. The problem is OpenCV won't compile when NVCC is used, while a normal C++ project compiles just fine. This seems odd to me, as I thought NVCC passed all host code to the C/C++ compiler, in this case the Visual Studio compiler. The errors I get are:

c:\opencv2.0\include\opencv\cxoperations.hpp(1137): error: no operator "=" matches these operands
    operand types are: const cv::Range = cv::Range

c:\opencv2.0\include\opencv\cxoperations.hpp(2469): error: more than one instance of overloaded function "std::abs" matches the argument list:
    function "abs(long double)"
    function "abs(float)"
    function "abs(double)"
    function "abs(long)"
    function "abs(int)"
    argument types are: (ptrdiff_t)

So my question is: why the difference, considering the same compiler is (or should be) being used? And secondly, how could I remedy this?
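The usual workaround is to keep OpenCV headers out of any .cu file entirely: put the kernels in a .cu compiled by NVCC, expose them through a plain C interface, and do all the OpenCV work in .cpp files compiled by cl.exe. A minimal sketch (the function names are illustrative):

// kernels.cu  -- compiled by NVCC; no OpenCV headers in this translation unit
__global__ void invertKernel(unsigned char* d, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] = 255 - d[i];
}

extern "C" void invertOnGpu(unsigned char* hostData, int size)
{
    unsigned char* d = 0;
    cudaMalloc(&d, size);
    cudaMemcpy(d, hostData, size, cudaMemcpyHostToDevice);
    invertKernel<<<(size + 255) / 256, 256>>>(d, size);
    cudaMemcpy(hostData, d, size, cudaMemcpyDeviceToHost);
    cudaFree(d);
}

// main.cpp  -- compiled by the Visual Studio compiler; OpenCV lives here
#include <opencv/cv.h>
extern "C" void invertOnGpu(unsigned char* hostData, int size);

As to the "why": NVCC does hand host code to cl.exe in the end, but it first preprocesses and parses the whole file with its own (EDG-based) front end in order to split host and device code, and OpenCV 2.0's headers trip over that front end - hence errors in cxoperations.hpp even for host-only code.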

    Read the article

  • iTextSharp renders image with poor quality in PDF

    - by Sebastian
Hello, I'm using iTextSharp to print a PDF document. Everything goes OK until I have to print the company logo in it. First I noticed that the logo had poor quality, but after testing with several images, I realized that it was iTextSharp rendering it poorly. The test that convinced me was to print the PDF using my code, then edit the same document with Acrobat 8.0 and place the image there instead. Printing the two documents showed a noticeable difference.

My question is whether this could be due to a scaling problem, where I'm failing to tell iTextSharp how it must render the image, or whether it is an iTextSharp limitation. The code to render the image is the following:

Dim para As Paragraph = New Paragraph
para.Alignment = Image.RIGHT_ALIGN
para.Add(text)
Dim imageFile As String = String.Format("{0}{1}", GetAppSetting("UploadDirectory"), myCompany.LogoUrl)
Dim thisImage As Image = Image.GetInstance(imageFile)
thisImage.Alignment = Image.LEFT_ALIGN
para.Add(thisImage)

The printed images are the following: the image printed directly with iTextSharp, and the image edited and printed with Acrobat 8. Please let me know if I need to provide more info. Thanks
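One thing worth trying before blaming the library: by default a PDF image is placed at 72 points per inch of its pixel size, so a logo that looks fine on screen ends up at only 72 DPI on paper. Using a larger source bitmap and scaling it down inside the PDF raises the effective DPI. A sketch assuming a hypothetical 600px-wide logo file (ScaleToFit is a standard iTextSharp Image method):

Dim thisImage As Image = Image.GetInstance(imageFile)
' Scale the 600px-wide source down to ~150pt on the page,
' for an effective resolution of roughly 288 DPI instead of 72.
thisImage.ScaleToFit(150.0F, 60.0F)
thisImage.Alignment = Image.LEFT_ALIGN
para.Add(thisImage)

If the source logo is already small - say, exactly the size it appears at on screen - no amount of placement tweaking will help; the fix is a higher-resolution original.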

    Read the article

  • netbeans 6.8 php, xdebug, mamp - cannot establish a connection

    - by Mike
I'm having no luck whatsoever with debuggers on either Zend Studio or NetBeans. I cannot believe the difficulty I'm having with these IDEs. I've read all the prior questions on NetBeans-Xdebug problems. According to those answers, my configuration is correct. Nonetheless, when I try to run Xdebug, NetBeans hangs. I cancel the connection and get the popup dialog: "There is no connection, blah blah blah..." Here's my php.ini:

[xdebug]
; Xdebug config for Mac OS X and NetBeans IDE
zend_extension=/Applications/MAMP/bin/php5/lib/php/extensions/no-debug-non-zts-20060613/xdebug.so
xdebug.remote_enable=on
xdebug.remote_handler=dbgp
xdebug.remote_mode=req
xdebug.remote_host=127.0.0.1
xdebug.remote_port=9000
xdebug.idekey=netbeans-xdebug

In NetBeans preferences, my session ID = netbeans-xdebug. I tried xdebug.idekey="netbeans-xdebug"; that made no difference. I restarted the web server every time. Anyone have any other suggestions I might try? Thanks.
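Two things worth checking in this setup (the paths below are typical for MAMP of this vintage and may differ on your install): MAMP ships separate php.ini files for the command line and for Apache, and it is easy to edit the wrong one. A quick way to see whether the extension is loaded at all:

# Ask the CLI PHP whether Xdebug is present (note: this reads the CLI ini)
/Applications/MAMP/bin/php5/bin/php -i | grep -i xdebug

The authoritative check for the web side is a phpinfo() page served through MAMP: if no xdebug section appears there, edit /Applications/MAMP/conf/php5/php.ini (the Apache-side ini) rather than the one under bin, then restart the MAMP servers and retry the NetBeans session.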

    Read the article

  • Automation testing tool for Regression testing of desktop application

    - by user285037
Hi, I am working on a desktop application which uses Infragistics grids. We need to automate the regression tests for it. QTP alone does not support this; we would need to buy a new plug-in for it, which my company is not very interested in doing. Do we have any open source tool for automating regression testing of a desktop application? The application is in .NET, but I do not think that makes much of a difference. Please suggest something; I have zeroed in on TestComplete, but again it is a licensed product. I need something open source.
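One open-source option worth evaluating for .NET desktop UI automation is the White framework, which drives applications through UI Automation and so generally copes with third-party controls better than coordinate-based scripting - though its coverage of Infragistics grids should be verified against your own application. A minimal sketch of launching and attaching to an app (names follow the White.Core API of that era; the paths and window title are placeholders):

using White.Core;
using White.Core.UIItems.WindowItems;

// Launch the application under test and attach to its main window
Application application = Application.Launch(@"C:\path\to\MyApp.exe");
Window window = application.GetWindow("Main Window Title");

// ... locate controls, drive them, assert on state, then shut down
application.Kill();

AutoIt and Sikuli are other no-cost routes, trading API-level access to controls for scripting simplicity.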

    Read the article

< Previous Page | 196 197 198 199 200 201 202 203 204 205 206 207  | Next Page >