Search Results

Search found 20953 results on 839 pages for 'chart control'.

  • Deny Switching to a TabPage in a TabControl

    - by Maneesh
    I have a C# form with a tab control that has around five tab pages, each with a few textboxes. 1) If the user is on, say, Tab A and edits certain fields, I need to validate the text entered; if it is invalid, I should not allow any tab switch. Is that possible? 2) Another case: the user edits some values and clicks on another tab; at that point I need to check whether the values entered on Tab A are correct. Can I do this? I am a novice to C#, so these questions may sound very basic; any help will be appreciated. I would also like to know what the Leave, Validated, and Validating events of a tab page are for.
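
    One way to block a tab switch when validation fails is to handle the TabControl's Deselecting event, which fires for the page being left and can be cancelled. The sketch below uses placeholder names and a placeholder validation rule (every TextBox must be non-empty); it is an illustration, not a drop-in fix:

        using System;
        using System.Windows.Forms;

        public class TabValidationForm : Form
        {
            private readonly TabControl tabs = new TabControl();

            public TabValidationForm()
            {
                tabs.Dock = DockStyle.Fill;
                tabs.TabPages.Add("tabA", "Tab A");
                tabs.TabPages.Add("tabB", "Tab B");
                // Deselecting fires before the switch happens and can be cancelled,
                // which covers both case 1 (block the switch) and case 2 (check on click).
                tabs.Deselecting += OnTabDeselecting;
                Controls.Add(tabs);
            }

            private void OnTabDeselecting(object sender, TabControlCancelEventArgs e)
            {
                // e.TabPage is the page being left; cancelling keeps it selected.
                if (!PageIsValid(e.TabPage))
                {
                    e.Cancel = true;
                    MessageBox.Show("Please correct the highlighted fields first.");
                }
            }

            private bool PageIsValid(TabPage page)
            {
                // Placeholder rule: every TextBox on the page must be non-empty.
                foreach (Control c in page.Controls)
                {
                    TextBox box = c as TextBox;
                    if (box != null && box.Text.Trim().Length == 0)
                        return false;
                }
                return true;
            }
        }

    The per-control Validating and Validated events (and the TabPage's Leave event) serve the same purpose at field granularity, but cancelling Deselecting is what actually keeps the user on the current tab.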

    Read the article

  • Design problem in ASP.NET?

    - by Surya sasidhar
    Hi, in my application I display an image (90px wide, 90px high) with its title below it in a DataList control (horizontal layout). I used table tags in the source code; the image and title are loaded dynamically from the database. My problem is that when the title is too long it disturbs my design. I want the title to be constrained to the same width as the picture, so a long title wraps onto a second line instead of stretching the layout. For example, if the image's title is "surya sasidhar rao", the word "rao" should drop to a second line under the image rather than widening the cell. This is my source code:

        ' CommandName="Play" CommandArgument='<%#Eval("VideoName")+"|"+Eval("videoid")+","+Eval("username")%' Width ="90px" Height ="90px" /
        ' CommandName="title" CommandArgument='<%#Eval("VideoName")+"|"+Eval("videoid")+","+Eval("username")%' ' '

    Read the article

  • Optimizing Solaris 11 SHA-1 on Intel Processors

    - by danx
    SHA-1 is a "hash" or "digest" operation that produces a 160-bit (20-byte) checksum value for arbitrary data, such as a file. It is intended to uniquely identify text and to verify it hasn't been modified. Max Locktyukhin and others at Intel have improved the performance of the SHA-1 digest algorithm using multiple techniques. This code has been incorporated into Solaris 11 and is available in the Solaris Crypto Framework via libmd(3LIB), the industry-standard libpkcs11(3LIB) library, and the Solaris kernel module sha1. The optimized code is used automatically on systems with an x86 CPU supporting SSSE3 (Intel Supplemental SSSE3). Intel microprocessor architectures that support SSSE3 include the Nehalem, Westmere, and Sandy Bridge families. Further optimizations are available for microprocessors that support AVX (such as Sandy Bridge).

    Although SHA-1 is considered obsolete because of weaknesses found in the algorithm (NIST recommends using at least SHA-256), SHA-1 is still widely used and will be with us for a while longer. Collisions (the same SHA-1 result for two different inputs) can be found with moderate effort. SHA-1 is still used heavily in SSL/TLS, for example, and it is stronger than the older MD5 digest algorithm, another digest option defined in SSL/TLS.

    Optimizations Review

    SHA-1 operates by reading an arbitrary amount of data. The data is read in 512-bit (64-byte) blocks (the last block is padded in a specific way to ensure it's a full 64 bytes). Each 64-byte block has 80 "rounds" of calculations (consisting of a mixture of ROTATE-LEFT, AND, and XOR) applied to it. Each round produces a 32-bit intermediate result, called W[i]. Here's how each round operates: the first 16 rounds, rounds 0 to 15, read the 512-bit block 32 bits at a time, and those 32 bits are used as input to the round. The remaining rounds, rounds 16 to 79, use the results from the previous rounds as input. Specifically, round i XORs the results of rounds i-3, i-8, i-14, and i-16 and rotates the result left 1 bit. The remaining calculations for the round are a series of AND, XOR, and ROTATE-LEFT operations on the 32-bit input and some constants. The 32-bit result is saved as W[i] for round i. After the final round, round 79, the accumulated 160-bit state is the SHA-1 checksum.

    Optimization: Vectorization

    The first 16 rounds can be vectorized (computed in parallel) because they don't depend on the output of a previous round. For the remaining rounds, because of step 2 above, computing round i depends on the result of round i-3, W[i-3], so one can vectorize only 3 rounds at a time. Max Locktyukhin found, through simple factoring explained in detail in his article referenced below, that the dependencies of round i on the results of rounds i-3, i-8, i-14, and i-16 can be replaced with dependencies on the results of rounds i-6, i-16, i-28, and i-32. That is, instead of initializing the intermediate result W[i] with:

        W[i] = (W[i-3] XOR W[i-8] XOR W[i-14] XOR W[i-16]) ROTATE-LEFT 1

    initialize W[i] as follows:

        W[i] = (W[i-6] XOR W[i-16] XOR W[i-28] XOR W[i-32]) ROTATE-LEFT 2

    That means 6 rounds can be vectorized at once, with no additional calculations, instead of just 3! This optimization is independent of Intel or any other microprocessor architecture (although the microprocessor has to support vectorization to use it) and exploits one of the weaknesses of SHA-1.
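
    That factoring is easy to sanity-check outside of assembly. The following C# snippet is not from the article; it is a quick illustration of the two formulas quoted above. It builds one block's 80-entry message schedule with the original W[i-3] recurrence and again with the W[i-6]/rotate-by-2 form for rounds 32 and up, and verifies that both produce identical W values:

        using System;

        class ShaScheduleCheck
        {
            static uint Rotl(uint x, int n) { return (x << n) | (x >> (32 - n)); }

            static void Main()
            {
                var rnd = new Random(1);
                uint[] w1 = new uint[80];
                uint[] w2 = new uint[80];

                // Rounds 0-15 come straight from the 512-bit message block.
                for (int i = 0; i < 16; i++)
                    w1[i] = w2[i] = (uint)rnd.Next() ^ ((uint)rnd.Next() << 16);

                // Original recurrence: nearest dependency is W[i-3], so only 3 rounds vectorize.
                for (int i = 16; i < 80; i++)
                    w1[i] = Rotl(w1[i - 3] ^ w1[i - 8] ^ w1[i - 14] ^ w1[i - 16], 1);

                // Refactored recurrence: nearest dependency is W[i-6].  It only holds
                // once all substituted terms exist, i.e. for i >= 32; the earlier
                // entries still use the original rule.
                for (int i = 16; i < 32; i++)
                    w2[i] = Rotl(w2[i - 3] ^ w2[i - 8] ^ w2[i - 14] ^ w2[i - 16], 1);
                for (int i = 32; i < 80; i++)
                    w2[i] = Rotl(w2[i - 6] ^ w2[i - 16] ^ w2[i - 28] ^ w2[i - 32], 2);

                bool same = true;
                for (int i = 0; i < 80; i++) same &= (w1[i] == w2[i]);
                Console.WriteLine(same ? "Schedules match" : "Mismatch!");
            }
        }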

    Optimization: SSSE3

    Intel SSSE3 makes use of 16 %xmm registers, each 128 bits wide. The four 32-bit inputs to a round, W[i-6], W[i-16], W[i-28], and W[i-32], all fit in one %xmm register. The following code snippet, from Max Locktyukhin's article, converted to ATT assembly syntax, computes 4 rounds in parallel with just a dozen or so SSSE3 instructions:

        movdqa  W_minus_04, W_TMP
        pxor    W_minus_28, W         // W equals W[i-32:i-29] before XOR
                                      // W = W[i-32:i-29] ^ W[i-28:i-25]
        palignr $8, W_minus_08, W_TMP // W_TMP = W[i-6:i-3], combined from
                                      // W[i-4:i-1] and W[i-8:i-5] vectors
        pxor    W_minus_16, W         // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13]
        pxor    W_TMP, W              // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]
        movdqa  W, W_TMP              // 4 dwords in W are rotated left by 2
        psrld   $30, W                // rotate left by 2: W = (W >> 30) | (W << 2)
        pslld   $2, W_TMP
        por     W, W_TMP
        movdqa  W_TMP, W              // four new W values W[i:i+3] are now calculated
        paddd   (K_XMM), W_TMP        // adding 4 current round's values of K
        movdqa  W_TMP, (WK(i))        // storing for downstream GPR instructions to read

    A window of the 32 previous results, W[i-1] to W[i-32], is saved in memory on the stack. This is best illustrated with a chart. Without vectorization, computing the rounds looks like this (each "R" represents 1 round of SHA-1 computation):

        RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR

    With vectorization, 4 rounds can be computed in parallel:

        RRRRRRRRRRRRRRRRRRRR
        RRRRRRRRRRRRRRRRRRRR
        RRRRRRRRRRRRRRRRRRRR
        RRRRRRRRRRRRRRRRRRRR

    Optimization: AVX

    The new "Sandy Bridge" microprocessor architecture, which supports AVX, allows another interesting optimization. SSSE3 instructions have two operands, an input and an output. AVX allows three operands: two inputs and an output. In many cases two SSSE3 instructions can be combined into one AVX instruction. The difference is best illustrated with an example. Consider these two instructions from the snippet above:

        pxor    W_minus_16, W         // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13]
        pxor    W_TMP, W              // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]

    With AVX they can be combined into one instruction:

        vpxor   W_minus_16, W, W_TMP  // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]

    This optimization is also in Solaris, although Sandy Bridge-based systems aren't widely available yet. As an exercise for the reader: AVX also has 256-bit media registers, %ymm0 - %ymm15 (a superset of the 128-bit %xmm0 - %xmm15). Can the %ymm registers be used to parallelize the code even more?

    Optimization: Solaris-specific

    In addition to using the Intel code described above, I performed other minor optimizations to the Solaris SHA-1 code. I increased the digest(1) and mac(1) commands' buffer size from 4K to 64K, as previously done for decrypt(1) and encrypt(1); this size is well suited for ZFS file systems, but helps for other file systems as well. I optimized the encode functions, which byte-swap the input and output data, to copy/byte-swap 4 or 8 bytes at a time instead of 1 byte at a time. I enhanced the Solaris mdb(1) and kmdb(1) debuggers to display all 16 %xmm and %ymm registers (the mdb "$x" command); previously they only displayed the first 8 that are available in 32-bit mode. Can't optimize if you can't debug :-). I also changed the SHA-1 code to allow processing in "chunks" greater than 2 gigabytes (64 bits).

    Performance

    I measured performance on a Sun Ultra 27 (which has a Nehalem-class Intel Xeon W3570 microprocessor @ 3.2GHz). Turbo mode is disabled for consistent performance measurement.
    Graphs are better than words and numbers, so here they are. The first graph shows the Solaris digest(1) command before and after the optimizations discussed here, contained in libmd(3LIB). I ran the digest command on a half-gigabyte file in swapfs (/tmp), and execution time decreased from 1.35 seconds to 0.98 seconds. The second graph shows the results of an internal microbenchmark that uses the Solaris libpkcs11(3LIB) library. The operations are on a 128-byte buffer with 10,000 iterations; the results show throughput increased from 320,000 to 416,000 operations per second. Finally, the third graph shows the results of an internal kernel microbenchmark that uses the Solaris /kernel/crypto/amd64/sha1 module. The operations are on a 64-Kbyte buffer with 100 iterations; the results show that for 1 kernel thread, throughput increased from 410 to 600 MBytes/second, and for 8 kernel threads, from 1540 to 1940 MBytes/second.

    Availability

    This code is in Solaris 11 FCS. It is available in the 64-bit libmd(3LIB) library for 64-bit programs and is in the Solaris kernel. You must be running hardware that supports Intel's SSSE3 instructions (for example, the Intel Nehalem, Westmere, or Sandy Bridge microprocessor architectures). The easiest way to determine if SSSE3 is available is with the isainfo(1) command. For example:

        nehalem $ isainfo -v
        64-bit amd64 applications
            sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
        32-bit i386 applications
            sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu

    If the output also shows "avx", Solaris executes the even-more-optimized 3-operand AVX instructions for SHA-1 mentioned above:

        sandybridge $ isainfo -v
        64-bit amd64 applications
            avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
        32-bit i386 applications
            avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu

    No special configuration or setup is needed to take advantage of this code. The Solaris libraries and kernel automatically determine whether they are running on an SSSE3- or AVX-capable machine and execute the correctly tuned code for that microprocessor.

    Summary

    The Solaris 11 Crypto Framework, via the sha1 kernel module and the libmd(3LIB) and libpkcs11(3LIB) libraries, incorporated a useful SHA-1 optimization from Intel for SSSE3-capable microprocessors. As with other Solaris optimizations, it comes automatically "under the hood" with the current Solaris release.

    References

        "Improving the Performance of the Secure Hash Algorithm (SHA-1)" by Max Locktyukhin (Intel, March 2010) - the source for the SHA-1 optimizations used in Solaris
        "SHA-1", Wikipedia - a good overview of SHA-1
        FIPS 180-1, the SHA-1 standard (FIPS, 1995)
        NIST Comments on Cryptanalytic Attacks on SHA-1 (2005, revised 2006)

    Read the article

  • DataGrid row and MVVM

    - by Titan
    Hi, I have a WPF DataGrid with many rows, and each row has some specific behaviors: changing the selection in a row's column 1 combo filters its column 2 combo, and whatever is selected in row 1's column 1 combo cannot be selected in row 2's column 1 combo, and so on. So I am thinking of having one view model for the main DataGrid and another for each row. Is that a good MVVM implementation? It would let me handle each row's change events effectively. The question is: how do I create "each row" as a user control view within the DataGrid? I want to implement something like this: <TreeView Padding="0,4,12,0"> <controls:CommandTreeViewItem Header="Sales Orders" Command="{Binding SelectViewModelCommand}" CommandParameter="Sales Orders"/> </TreeView> where instead of a TreeView I want a DataGrid, and instead of controls:CommandTreeViewItem a DataGrid row in WPF. Thanks in advance.
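
    A rough sketch of that shape, with purely illustrative names: a grid-level view model owns a collection of row view models, the DataGrid's ItemsSource binds to that collection, and each row's combos bind to its own RowViewModel, while cross-row rules stay in the parent:

        using System.Collections.ObjectModel;
        using System.ComponentModel;

        public class RowViewModel : INotifyPropertyChanged
        {
            private readonly GridViewModel parent;
            private string selectedCategory;

            public RowViewModel(GridViewModel parent)
            {
                this.parent = parent;
                Categories = new ObservableCollection<string>();
                Items = new ObservableCollection<string>();
            }

            public ObservableCollection<string> Categories { get; private set; }
            public ObservableCollection<string> Items { get; private set; }

            public string SelectedCategory
            {
                get { return selectedCategory; }
                set
                {
                    selectedCategory = value;
                    OnPropertyChanged("SelectedCategory");
                    parent.OnCategoryChosen(this);   // cross-row rule lives in the parent
                    RefreshItems();                  // column 1 selection filters column 2
                }
            }

            private void RefreshItems()
            {
                Items.Clear();
                // Placeholder: repopulate Items based on selectedCategory.
            }

            public event PropertyChangedEventHandler PropertyChanged;
            protected void OnPropertyChanged(string name)
            {
                PropertyChangedEventHandler handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }

        public class GridViewModel
        {
            public GridViewModel()
            {
                Rows = new ObservableCollection<RowViewModel>();
            }

            public ObservableCollection<RowViewModel> Rows { get; private set; }

            public void OnCategoryChosen(RowViewModel changedRow)
            {
                // Remove the chosen value from the other rows' Categories here;
                // only the grid-level view model can see every row.
            }
        }

    The "row as a view" part is then typically just the DataGrid's column templates (for example DataGridTemplateColumn), whose DataContext is the RowViewModel, rather than a separate UserControl per row.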

    Read the article

  • BlackBerry - Custom centered cyclic HorizontalFieldManager

    - by Hezi
    I'm trying to create a custom cyclic horizontal manager that works as follows: it controls several button fields, and the buttons are always positioned so that the focused button is in the middle of the screen. Because the manager is cyclic, once focus moves to the right or left button, that button moves to the center of the screen and all the other buttons shift accordingly (the last button becomes the first, to give it a cyclic, endless-list feel). Any idea how to address this? I tried implementing a custom manager that aligns the buttons according to the required layout: each time moveFocus() is called I remove all fields (deleteAll()) and add them again in the right order. Unfortunately this does not work.

    Read the article

  • How to get default Ctrl+Tab functionality in WinForms MDI app when hosting WPF UserControls

    - by jpierson
    I have a WinForms-based app with a traditional MDI implementation, except that I'm hosting WPF-based UserControls via the ElementHost control as the main content of each MDI child. This is the solution recommended by Microsoft for achieving MDI with WPF, although it unfortunately has various side effects. One of them is that the Ctrl+Tab functionality for switching between MDI children is gone, because the Tab key seems to be swallowed up by the WPF controls. Is there a simple solution that will let the Ctrl+Tab key sequence reach my WinForms MDI parent so that I can get the built-in tab-switching functionality?
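
    One workaround, sketched below with assumed form names, is to catch Ctrl+Tab in ProcessCmdKey and cycle the MDI children manually. Whether ProcessCmdKey still fires when focus sits inside the ElementHost depends on how the WinForms/WPF interop forwards the keystroke, so treat this as a starting point rather than a confirmed fix:

        using System;
        using System.Windows.Forms;

        // Hypothetical MDI child form that hosts an ElementHost.
        public class MdiChildForm : Form
        {
            protected override bool ProcessCmdKey(ref Message msg, Keys keyData)
            {
                if ((keyData & ~Keys.Shift) == (Keys.Control | Keys.Tab) && MdiParent != null)
                {
                    Form[] children = MdiParent.MdiChildren;
                    if (children.Length > 1)
                    {
                        int index = Array.IndexOf(children, this);
                        // Shift reverses direction, mirroring the built-in behavior.
                        int step = (keyData & Keys.Shift) == Keys.Shift ? -1 : 1;
                        int next = (index + step + children.Length) % children.Length;
                        children[next].Activate();
                        return true;   // handled; stop further processing
                    }
                }
                return base.ProcessCmdKey(ref msg, keyData);
            }
        }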

    Read the article

  • Enabling Direct3D-specific features (transparency AA)

    - by Bill Kotsias
    Hello. I am trying to enable transparency antialiasing in my Ogre/Direct3D application, but it just won't work. HRESULT hres = d3dSystem->getDevice()->SetRenderState(D3DRS_ADAPTIVETESS_Y, (D3DFORMAT)MAKEFOURCC('S', 'S', 'A', 'A')); // returned value: hres == S_OK! This method is taken from NVIDIA's technical report. I can enable transparency AA manually through the NVIDIA Control Panel, but surely I can't ask my users to do it like that. Does anyone have any idea? Thank you for your time, Bill

    Read the article

  • Programmatically retrieve disconnected network adapter information in .NET

    - by Soo Wei Tan
    I have an application written in C# that needs to retrieve information like the IP address and subnet mask from a disconnected network adapter. I've tried using various methods such as WMI and the .NET NetworkAdapter class, but they don't return any useful data when the network adapter is disconnected. I'm pretty sure Windows keeps this information somewhere, since I can apply network settings using netsh and they appear correctly in the Control Panel. One thing that worked for me in XP was to parse the output of the netsh tool, which would return information even for a disconnected adapter. However, this doesn't seem to work on Windows 7.

    Win XP output:

        Configuration for interface "Local Area Connection 5"
        DHCP enabled:      No
        IP Address:        169.254.0.128
        SubnetMask:        255.255.255.0
        InterfaceMetric:   0

    Win7 output:

        Configuration for interface "Local Area Connection 2"
        DHCP enabled:      No
        InterfaceMetric:   5

    Any ideas?
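
    One avenue worth trying, sketched below and not verified on every Windows version: statically configured TCP/IP settings are persisted per interface under the Tcpip service key in the registry, and those values remain readable even when the adapter is unplugged. This reads the stored configuration, not live state:

        using System;
        using Microsoft.Win32;

        class DisconnectedAdapterInfo
        {
            static void Main()
            {
                const string root = @"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces";
                using (RegistryKey interfaces = Registry.LocalMachine.OpenSubKey(root))
                {
                    if (interfaces == null) return;
                    foreach (string guid in interfaces.GetSubKeyNames())
                    {
                        using (RegistryKey nic = interfaces.OpenSubKey(guid))
                        {
                            // Static settings are stored as REG_MULTI_SZ; DHCP adapters keep
                            // theirs under DhcpIPAddress / DhcpSubnetMask instead.
                            string[] ip = nic.GetValue("IPAddress") as string[];
                            string[] mask = nic.GetValue("SubnetMask") as string[];
                            if (ip != null && ip.Length > 0 && ip[0].Length > 0)
                                Console.WriteLine("{0}: {1} / {2}", guid, ip[0],
                                    mask != null && mask.Length > 0 ? mask[0] : "?");
                        }
                    }
                }
            }
        }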

    Read the article

  • How to pass EventArgument information from view to view model in WPF?

    - by Ashish Ashu
    I have a ListView control in my application which is bound to a collection of CustomObject (List<CustomObject>). The CustomObject has a separate view, and the ListView has a separate view model; the List<CustomObject> collection is contained in the ListView's view model class. My query: I want to invoke a view that shows the properties of the CustomObject when the user double-clicks any row of the ListView. The ListView's double-click is bound to a ListViewDoubleClick command in the view model, and the CustomObject is in the event arguments of the double-click event. To achieve this I have to pass the CustomObject (or a unique id property through which I can retrieve it from the collection) as the command parameter. Please suggest a solution!
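
    One common pattern, shown as a sketch with illustrative names: give the command a typed parameter and bind CommandParameter to the ListView's SelectedItem, so the clicked object itself reaches the view model and no EventArgs has to cross the view boundary:

        using System;
        using System.Windows.Input;

        // Minimal generic command (often called RelayCommand/DelegateCommand in
        // MVVM toolkits); the type parameter lets the CommandParameter arrive typed.
        public class RelayCommand<T> : ICommand
        {
            private readonly Action<T> execute;
            public RelayCommand(Action<T> execute) { this.execute = execute; }
            public bool CanExecute(object parameter) { return true; }
            public void Execute(object parameter) { execute((T)parameter); }
            public event EventHandler CanExecuteChanged { add { } remove { } }
        }

        public class CustomObject
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class ListViewModel
        {
            public ICommand ListViewDoubleClick { get; private set; }

            public ListViewModel()
            {
                // In the XAML (illustrative): CommandParameter="{Binding SelectedItem,
                // ElementName=MyListView}" so the double-clicked row's object lands here.
                ListViewDoubleClick = new RelayCommand<CustomObject>(OpenDetailView);
            }

            private void OpenDetailView(CustomObject item)
            {
                // Raise a navigation request / open the properties view for 'item'.
                Console.WriteLine("Open details for " + item.Name);
            }
        }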

    Read the article

  • Java 64-bit installer throwing "not a valid Win32 application" error on 64-bit Windows 7

    - by ThunderWolf
    The JRE and JDK 64-bit install executables are throwing a Win32 compatibility error: jre_7u1_windows-x64bit.exe is not a valid Win32 application. I thought this could be a system environment variable problem, but from what I can tell it is not: the variable PROCESSOR_ARCHITECTURE is set to AMD64 and the variable PROCESSOR_IDENTIFIER is set to Intel64 Family 6 Model 37 Stepping 5, GenuineIntel. I am not sure which variables the installer reads from, if any. I have tried the Java 6 installer and the same thing happens. I can install other programs designed for a 64-bit architecture, and I have checked Control Panel > System and Security > System, which does in fact say "System type: 64-bit Operating System".

    Read the article

  • Disabling Alt-F4 on a Win Forms NotifyIcon

    - by Jippers
    I am using a NotifyIcon from Windows Forms to put a systray icon up for my C# application. I have a bug where, if the user right-clicks the icon for the context menu and then presses Alt-F4, the icon disappears from the tray but the main WPF application is still running. This is especially a problem when the app has been "minimized to the systray" and that icon was the only remaining way to control the application. Does anyone know how to handle this specifically on the systray? I've looked at the NotifyIcon documentation and there isn't anything relating to keypress events.
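
    If the Alt-F4 is actually reaching the hidden form that owns the NotifyIcon (one plausible explanation, not a confirmed diagnosis), a common guard is to cancel user-initiated closes while the app lives in the tray and only tear the icon down on an explicit Exit. A sketch with assumed names:

        using System;
        using System.Windows.Forms;

        public class TrayHostForm : Form
        {
            private readonly NotifyIcon trayIcon = new NotifyIcon();
            private bool exitRequested;

            public TrayHostForm()
            {
                trayIcon.Text = "My App";
                trayIcon.Icon = System.Drawing.SystemIcons.Application;
                trayIcon.ContextMenuStrip = new ContextMenuStrip();
                trayIcon.ContextMenuStrip.Items.Add("Exit", null, (s, e) => ReallyExit());
                trayIcon.Visible = true;
            }

            private void ReallyExit()
            {
                exitRequested = true;
                Close();
            }

            protected override void OnFormClosing(FormClosingEventArgs e)
            {
                // Swallow Alt-F4 / user closes while minimized to the tray; only the
                // explicit Exit menu item (or Windows shutdown) ends the app.
                if (!exitRequested && e.CloseReason == CloseReason.UserClosing)
                {
                    e.Cancel = true;
                    Hide();
                    return;
                }
                trayIcon.Visible = false;   // remove the icon before the process exits
                base.OnFormClosing(e);
            }
        }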

    Read the article

  • Increase column widths in a Silverlight DataGrid to fill the whole DataGrid width

    - by Henrik P. Hessel
    Hello, I have a DataGrid control which is bound to a SQL table. The XAML code is: <data:DataGrid x:Name="dg_sql_data" Grid.Row="1" Visibility="Collapsed" Height="auto" Margin="0,5,5,5" AutoGenerateColumns="false" AlternatingRowBackground="Aqua" Opacity="80" > <data:DataGrid.Columns> <data:DataGridTextColumn Header="Latitude" Binding="{Binding lat}" /> <data:DataGridTextColumn Header="Longitude" Binding="{Binding long}" /> <data:DataGridTextColumn Header="Time" Binding="{Binding time}" /> </data:DataGrid.Columns> </data:DataGrid> Is it possible to increase the individual column sizes to fill out the complete width of the DataGrid? thx!, rAyt Edit: Columns with "*" as the width are coming with the Silverlight SDK 4.

    Read the article

  • Why does this JSON fail only on the iPhone?

    - by 4thSpace
    I'm using the JSON framework from http://code.google.com/p/json-framework. The JSON below fails with this error: -JSONValue failed. Error trace is: ( Error Domain=org.brautaset.JSON.ErrorDomain Code=5 UserInfo=0x124a20 "Unescaped control character '0xd'", Error Domain=org.brautaset.JSON.ErrorDomain Code=3 UserInfo=0x11bc20 "Object value expected for key: Phone", Error Domain=org.brautaset.JSON.ErrorDomain Code=3 UserInfo=0x1ac6e0 "Expected value while parsing array" ) JSON being parsed: [{"id" :"2422","name" :"BusinessA","address" :"7100 U.S. 50","lat" :"38.342945","lng" :"-90.390701","CityId" :"11","StateId" :"38","CategoryId" :"1","Phone" :"(200) 200-2000","zip" :"00010"}] I think 0xd represents a carriage return, but when I put the above JSON in TextWrangler, I don't see any carriage returns. I got the JSON by doing "po myjson" in the debugger. It passes this validator: http://json.parser.online.fr/. Can anyone see what the problem may be?

    Read the article

  • Intercept UITableView scroll touches

    - by Jonesy
    Is it possible to control from my own code when the UITableView scrolls? I am trying to get behaviour where a vertical swipe scrolls and a horizontal swipe gets passed through to my code (of which there are many examples), BUT I want a DIAGONAL swipe to do nothing, i.e. the UITableView should not even begin scrolling. I tried catching it in - (void)scrollViewWillBeginDragging:(UIScrollView *)scrollView but the scrollView.contentOffset.x is always 0, so I cannot detect a horizontal movement. I also tried subclassing UITableView and implementing - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event and - (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event etc., but the UITableView (and I guess its parent UIScrollView) starts to scroll before the touches are notified. To reiterate, I want the UITableView scrolling to be locked if a diagonal swipe is made, but to scroll vertically as normal. (This behaviour can be seen in Tweetie (Twitter) for the iPhone.) Thanks for any help!

    Read the article

  • WPF ComboBox item not being selected

    - by Greg R
    I am trying to bind a ComboBox to a list of objects, and it works great except for the selected value. Am I missing something? <ComboBox ItemsSource="{Binding OrderInfoVm.AllCountries}" SelectedValuePath="country_code" DisplayMemberPath="country_name" SelectedValue="{Binding OrderInfoVm.BillingCountry}" /> Basically I want to bind the value to country codes and set the selected value to the country code bound to OrderInfoVm.BillingCountry (which implements INotifyPropertyChanged). Initially, when the control loads, the selected value is empty, but on click BillingCountry is populated. The selected value does not seem to change. How can I remedy that?
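
    A couple of things worth double-checking, sketched below with the property names from the question: the value stored in BillingCountry has to be a country_code (matching SelectedValuePath, not a display name), and the setter has to raise PropertyChanged for that exact property name so the ComboBox hears about the change:

        using System.ComponentModel;

        public class OrderInfoViewModel : INotifyPropertyChanged
        {
            private string billingCountry;

            // Must hold a value from the country_code column, because the
            // ComboBox's SelectedValuePath is "country_code".
            public string BillingCountry
            {
                get { return billingCountry; }
                set
                {
                    if (billingCountry != value)
                    {
                        billingCountry = value;
                        OnPropertyChanged("BillingCountry");
                    }
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            protected void OnPropertyChanged(string name)
            {
                PropertyChangedEventHandler handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(name));
            }
        }

    Spelling the binding out as SelectedValue="{Binding OrderInfoVm.BillingCountry, Mode=TwoWay}" rules out one more variable, and a quick check that the code values match exactly (casing and whitespace included) usually finds the rest.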

    Read the article

  • Custom UISwitch & App Store approval

    - by pix0r
    After doing some reading, I've found that you can customize the text and color on a UISwitch control. I'm curious whether these methods will cause problems getting my app approved and included in the App Store. Sample code taken from the iPhone Developer's Cookbook: // Custom font, color switchView = [[UICustomSwitch alloc] initWithFrame:CGRectZero]; [switchView setCenter:CGPointMake(160.0f,260.0f)]; [switchView setLeftLabelText: @"Foo"]; [switchView setRightLabelText: @"Bar"]; [[switchView rightLabel] setFont:[UIFont fontWithName:@"Georgia" size:16.0f]]; [[switchView leftLabel] setFont:[UIFont fontWithName:@"Georgia" size:16.0f]]; [[switchView leftLabel] setTextColor:[UIColor yellowColor]];

    Read the article

  • TFSBuild.Proj and Manual SQL Server Work Help?

    - by ScSub
    Using the VS 2008 GDR update, I have created a database project, a SQL Server deployment package, and a database unit test. Using some wizards, those pieces got wired into my tfsbuild.proj file, so near the end of the automated build process a database is created. I now see that I lack a little control over the whole process. What I would like to do is manually deploy the DB, run 3 custom scripts against the DB, and then manually start the DB unit test. I have other non-DB unit tests that already run. I do not want to use VSMDI or the ordered unit test stuff, because in our multi-developer environment it gets messy. Help!

    Read the article

  • How to remove default header from WCF REST Outgoing Response?

    - by bmsantos
    The following C# WCF-based REST service gives me some undesired headers that I'm not sure I can remove through the API. The interface:

        [ServiceContract]
        public interface IControlSystem
        {
            [OperationContract]
            [WebGet]
            System.IO.Stream About();
        }

    The implementation:

        public class ControlSystem : IControlSystem
        {
            public System.IO.Stream About()
            {
                return new System.IO.MemoryStream(ASCIIEncoding.Default.GetBytes("Hello World"));
            }
        }

    Over a raw socket connection it gives the following response:

        HTTP/1.1 200 OK
        Server: ASP.NET Development Server/9.0.0.0
        Date: Tue, 15 Jun 2010 13:12:51 GMT
        X-AspNet-Version: 2.0.50727
        Cache-Control: private
        Content-Type: application/octet-stream
        Content-Length: 39
        Connection: Close

        Hello World

    The question is: is it possible to get the server to not report anything other than the actual message? I need this for some calls because some small embedded-device clients will connect to the server, and I would like to minimize the amount of parsing they have to do. Thanks, B.
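
    For what it's worth, only part of that response is under WCF's control. A hedged sketch of what can be shaped from inside the operation; the Server and X-AspNet-Version lines come from the hosting layer (the Visual Studio development server / ASP.NET), so they are normally removed through host configuration rather than service code:

        using System.IO;
        using System.Net;
        using System.ServiceModel.Web;
        using System.Text;

        public class ControlSystem : IControlSystem
        {
            public Stream About()
            {
                OutgoingWebResponseContext response =
                    WebOperationContext.Current.OutgoingResponse;

                // Headers WCF itself emits can be adjusted here...
                response.ContentType = "text/plain";
                response.Headers[HttpResponseHeader.CacheControl] = "no-cache";

                // ...but Server and X-AspNet-Version are stamped by the host.  Under
                // IIS/ASP.NET they are usually suppressed with web.config settings
                // (for example <httpRuntime enableVersionHeader="false" />) or an
                // HttpModule, not from the operation body.
                return new MemoryStream(Encoding.ASCII.GetBytes("Hello World"));
            }
        }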

    Read the article

  • Examples of when to use PageAsyncTask (asynchronous ASP.NET pages)

    - by Tony_Henrich
    From my understanding of what I've read about ASP.NET asynchronous pages, the method which executes when the asynchronous task begins ALWAYS EXECUTES between the PreRender and PreRenderComplete events. Because the page's control events run between the page's Load and PreRender events, is it true that whatever the begin-task handler (the handler for BeginAsync below) produces can't be used in the controls' events? So, for example, if the handler gets data from a database, the data can't be used in any of the controls' postback events? Would you bind data to a data control after PreRender? PageAsyncTask pat = new PageAsyncTask(BeginAsync, EndAsync, null, null, true); this.RegisterAsyncTask(pat);
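
    One way to picture the timing, as a sketch rather than a canonical pattern (it assumes <%@ Page Async="true" %> and a GridView named ReportGrid in the markup): register the task in Page_Load, do the slow work in BeginAsync, and bind the result in EndAsync, which runs after PreRender but before PreRenderComplete, instead of trying to use the data inside a control's click handler:

        using System;
        using System.Data;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        public partial class ReportPage : Page
        {
            protected GridView ReportGrid;   // normally declared by the designer file
            private DataTable results;

            protected void Page_Load(object sender, EventArgs e)
            {
                // Registered here, but BeginAsync/EndAsync run between PreRender and
                // PreRenderComplete - after the control event handlers have fired.
                RegisterAsyncTask(new PageAsyncTask(BeginAsync, EndAsync, TimeoutAsync, null, true));
            }

            private IAsyncResult BeginAsync(object sender, EventArgs e, AsyncCallback cb, object state)
            {
                // Kick off the slow query; a real page would use BeginExecuteReader etc.
                Func<DataTable> work = LoadReportData;
                return work.BeginInvoke(cb, work);
            }

            private void EndAsync(IAsyncResult ar)
            {
                Func<DataTable> work = (Func<DataTable>)ar.AsyncState;
                results = work.EndInvoke(ar);

                // Safe to bind here: PreRenderComplete has not happened yet, so the
                // grid still renders with the freshly fetched data.
                ReportGrid.DataSource = results;
                ReportGrid.DataBind();
            }

            private void TimeoutAsync(IAsyncResult ar)
            {
                // Called instead of EndAsync if the page's AsyncTimeout elapses.
            }

            private DataTable LoadReportData()
            {
                DataTable table = new DataTable();
                table.Columns.Add("Value");
                table.Rows.Add("example");
                return table;
            }
        }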

    Read the article

  • Can a struct become deallocated?

    - by brettr
    I've declared a struct in my header file like this: typedef struct { NSString *department; NSString *departmentId; } Department; Department currentDepartment; This struct is in a fairly simple class. I assign the struct values in viewDidLoad, and just before leaving viewDidLoad I can see the struct values are still there. After the user taps a segmented control, I reassign the struct values; before assigning them, I see the two struct values are 0x0. I do have NSZombieEnabled on, which prints this out when I mouse over the struct while the app is running and one of my breakpoints has been hit: MyApp[25722:207] *** -[CFString _cfTypeID]: message sent to deallocated instance 0xfc0e90 I'm not creating an instance of the struct or deallocating it. How can it be getting deallocated?

    Read the article

  • Managed language for scientific computing software

    - by heisen
    Scientific computing is algorithm intensive and can also be data intensive. It often needs a lot of memory to run an analysis and must release it before continuing with the next one. Sometimes it also uses a memory pool to recycle memory across analyses. Managed languages are interesting here because they let the developer concentrate on the application logic. Since the software might need to deal with huge datasets, performance is important too. But how can we control memory and performance with a managed language?

    Read the article

  • CCSprite following a sequence of CGPoints

    - by pgb
    I'm developing a line-drawing game, similar to Flight Control, Harbor Master, and others in the App Store, using Cocos2D. For this game, I need a CCSprite to follow a line that the user has drawn. I'm storing a sequence of CGPoint structs in an NSArray, based on the points I get in the touchesBegan and touchesMoved messages. I now have the problem of how to make my sprite follow them. I have a tick method that is called at the frame rate; in that tick method, based on the speed and current position of the sprite, I need to calculate its next position. Is there any standard way to achieve this? My current approach is to calculate the line between the last "reference point" and the next one, and calculate the next point on that line. The problem I have is when the sprite "turns" (moves from one segment of the line to another). Any hint will be greatly appreciated.
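
    The geometry is the same regardless of language; here is a small C# sketch (illustrative only, with a simple Point2 type standing in for CGPoint) of the usual approach: advance by speed * dt along the current segment, and when that distance overshoots the segment's end, snap to the corner and carry the leftover distance into the next segment, so the "turn" happens smoothly within a single tick:

        using System;
        using System.Collections.Generic;

        public struct Point2
        {
            public double X, Y;
            public Point2(double x, double y) { X = x; Y = y; }
        }

        public class PathFollower
        {
            private readonly List<Point2> points;   // the user-drawn line
            private int segment;                    // index of the segment we're on
            private Point2 position;

            public PathFollower(List<Point2> pathPoints)
            {
                points = pathPoints;
                position = pathPoints[0];
            }

            public Point2 Position { get { return position; } }

            // Called from the tick handler with distance = speed * dt.
            public void Advance(double distance)
            {
                while (distance > 0 && segment < points.Count - 1)
                {
                    Point2 target = points[segment + 1];
                    double dx = target.X - position.X;
                    double dy = target.Y - position.Y;
                    double remaining = Math.Sqrt(dx * dx + dy * dy);

                    if (distance < remaining)
                    {
                        // Stay on this segment: move proportionally toward the target.
                        position = new Point2(position.X + dx / remaining * distance,
                                              position.Y + dy / remaining * distance);
                        return;
                    }

                    // Reached the corner: snap to it and spend the leftover distance
                    // on the next segment.
                    position = target;
                    distance -= remaining;
                    segment++;
                }
            }
        }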

    Read the article

  • PHP's SimpleXML not handling &#8217; properly

    - by Matty
    I'm parsing an RSS feed that has an &#8217; in it. SimpleXML turns this into a ’. What can I do to stop this? Just to answer some of the questions that have come up: I'm pulling the RSS feed using cURL. If I output the feed directly to the browser, the &#8217; displays as ’, which is what's expected. When I create a new SimpleXMLElement from it (e.g. $xml = new SimpleXMLElement($raw_feed);) and dump the $xml variable, every instance of &#8217; is replaced with ’. It appears that SimpleXML is having trouble with UTF-8 ampersand-encoded characters. (The XML declaration specifies UTF-8.) I do have control over the feed after cURL has retrieved it, before it's used to construct a SimpleXML element.

    Read the article

  • How significant is the bazaar performance factor?

    - by memodda
    I hear all this stuff about Bazaar being slower than Git. I haven't used distributed version control much yet, but in "Bazaar vs. Git" on the Bazaar site, they say that most complaints about performance aren't true anymore. Have you found this to be true? Is performance pretty much on par now? I've heard that speed can affect workflow (people are more likely to do good thing X if X is fast). In which specific cases does performance currently affect workflow in Bazaar vs. other systems (especially Git), and how? I'm just trying to get at why performance is of particular importance. Usually when I check something in or update, I expect it to take a little while, and it doesn't matter: I commit/update when I have a second, so it doesn't interfere with my productivity. But then I haven't used a DVCS yet, so maybe that has something to do with it?

    Read the article

  • How to Visualize your Audit Data with BI Publisher?

    - by kanichiro.nishida
    Do you know how many reports on your BI Publisher server were accessed yesterday? Or how many users accessed the reports yesterday, or the average number of users who accessed the reports during the week vs. the weekend, or in the morning vs. the afternoon? With BI Publisher 11G you can now audit your users' report access and understand the state of the reporting environment at the server, user, or report level. In a previous post I talked about what the BI Publisher auditing functionality is and how to enable it so that BI Publisher can start collecting such data (How to Audit and Monitor BI Publisher Reports Access?). Now, how can you visualize that auditing data to get a better understanding and gain more insights?

    With the Fusion Middleware Audit Framework you have the option to store the auditing data in a database instead of a log file, which is the default. Once you enable the database storage option, your auditing data (your user report access data) sits in database tables, and then, no brainer, you can start visualizing the data, creating reports, analyzing, and sharing with BI Publisher. So first, let's take a look at how to enable the database storage option for the auditing data.

    How to Feed the Auditing Data into the Database

    Create the Audit Schema with RCU

    First you need to create a database schema for the Fusion Middleware Audit Framework with RCU (Repository Creation Utility). If you have already installed BI Publisher 11G you should be familiar with RCU: it creates any database schema necessary to run Fusion Middleware products, including the BI ones, and you can use the same RCU that you used for your BI or BI Publisher installation to create this audit schema. Here are the steps:

        1. Go to $RCU_HOME/bin and execute the 'rcu' command.
        2. Choose Create at the starting screen and click Next.
        3. Enter your database details and click Next.
        4. Choose the option to create a new prefix, for example 'BIP', 'KAN', etc.
        5. Select 'Audit Services' from the list of schemas.
        6. Click Next and accept the tablespace creation.
        7. Click Finish to start the process.

    After this, the following three audit-related schemas should have been created in your database:

        <prefix>_IAU (e.g. KAN_IAU)
        <prefix>_IAU_APPEND (e.g. KAN_IAU_APPEND)
        <prefix>_IAU_VIEWER (e.g. KAN_IAU_VIEWER)

    Set Up the Datasource in WebLogic

    After you create the database schema for your auditing data, you need to create a JDBC connection on your WebLogic Server so the Audit Framework can access the database schema that was created with RCU in the previous step.

        1. Connect to the Oracle WebLogic Server administration console: http://hostname:port/console (e.g. http://report.oracle.com:7001/console).
        2. Under Services, click the Data Sources link.
        3. Click 'Lock & Edit' so that you can make changes.
        4. Click New -> 'Generic Datasource' to create a new data source.
        5. Enter the following details for the new data source. Name: a name such as Audit Data Source-0. JNDI Name: jdbc/AuditDB. Database Type: Oracle.
        6. Click Next and select 'Oracle's Driver (Thin XA) Versions: 9.0.1 or later' as the database driver (if you're using an Oracle database), then click Next.
        7. On the Connection Properties page, enter the following information. Database Name: the name of the database (SID) to which you will connect. Host Name: the hostname of the database. Port: the database port. Database User Name: the name of the audit schema that you created in RCU; the suffix is always IAU for the audit schema, so for example, if you gave the prefix as 'KAN', the schema name would be 'KAN_IAU'. Password: the password for the audit schema that you created in RCU.
        8. Click Next. Accept the defaults, and click Test Configuration to verify the connection.
        9. Click Next, check the listed servers where you want to make this JDBC connection available, and click 'Finish'.

    After that, make sure you click 'Activate Changes' at the top left-hand side so the new JDBC connection takes effect.

    Register the Audit Data Store with Your Domain

    Finally, you can register the JNDI/JDBC datasource as your audit data store with Fusion Middleware Control (EM). Here are the steps:

        1. Log in to Fusion Middleware Control.
        2. Navigate to WebLogic Domain, right-click on 'bifoundation…..', select Security, then Audit Store.
        3. Click the searchlight icon next to the Datasource JNDI Name field.
        4. Select the audit JNDI/JDBC datasource you created in the previous step in the pop-up window and click OK.
        5. Click Apply to continue.
        6. Restart all the WebLogic Servers in the domain.

    After this, BI Publisher should start feeding all the auditing data into the database table called 'IAU_BASE'. Try logging in to BI Publisher and opening a couple of reports; you should see the activity audited in the 'IAU_BASE' table. If it's not working, you might want to check the log file, which is located at $BI_HOME/user_projects/domains/bifoundation_domain/servers/AdminServer/logs/AdminServer-diagnostic.log, to see if there is any error.

    Once you have the data in the database table, it's time to visualize it with BI Publisher reports!

    Create a First BI Publisher Auditing Report

    Register the Auditing Datasource as a JNDI Datasource

    The first thing you need to do is register the audit datasource (the JNDI/JDBC connection you created in the previous step) as a JNDI data source in BI Publisher. Because it is a JDBC connection registered as JNDI, you don't need to create a new JDBC connection by typing the connection URL, username/password, etc.; you can just register it using the JNDI name (e.g. jdbc/AuditDB).

        1. Log in to BI Publisher as Administrator (e.g. weblogic).
        2. Go to the Administration page.
        3. Click 'JNDI Connection' under Data Sources and click 'New'.
        4. Type the Data Source Name and JNDI Name. The JNDI Name is the one you created in the WebLogic console as the auditing datasource (e.g. jdbc/AuditDB).
        5. Click 'Test Connection' to make sure the datasource connection works.
        6. Provide the appropriate roles so that report developers or viewers can share this data source to view reports.
        7. Click 'Apply' to save.

    Create the Data Model

        1. Select Data Model from the 'New' toolbar menu.
        2. Set 'Default Data Source' to the audit JNDI data source you created in the previous step.
        3. Select 'SQL Query' for your data set.
        4. Use Query Builder to build a query, or just type a SQL query. Either way, the table you want to report against is 'IAU_BASE'.

    The IAU_BASE table contains the auditing data for the other products running on the WebLogic Server as well, such as JPS, OID, etc. So if you care only about BI Publisher, filter on the 'IAU_COMPONENTTYPE' column, which contains the product name (e.g. 'xmlpserver' for BI Publisher). Here is my sample SQL query:

        select
            "IAU_BASE"."IAU_COMPONENTTYPE" as "IAU_COMPONENTTYPE",
            "IAU_BASE"."IAU_EVENTTYPE" as "IAU_EVENTTYPE",
            "IAU_BASE"."IAU_EVENTCATEGORY" as "IAU_EVENTCATEGORY",
            "IAU_BASE"."IAU_TSTZORIGINATING" as "IAU_TSTZORIGINATING",
            to_char("IAU_TSTZORIGINATING", 'YYYY-MM-DD') IAU_DATE,
            to_char("IAU_TSTZORIGINATING", 'DAY') as IAU_DAY,
            to_char("IAU_TSTZORIGINATING", 'HH24') as IAU_HH24,
            to_char("IAU_TSTZORIGINATING", 'WW') as IAU_WEEK_OF_YEAR,
            "IAU_BASE"."IAU_INITIATOR" as "IAU_INITIATOR",
            "IAU_BASE"."IAU_RESOURCE" as "IAU_RESOURCE",
            "IAU_BASE"."IAU_TARGET" as "IAU_TARGET",
            "IAU_BASE"."IAU_MESSAGETEXT" as "IAU_MESSAGETEXT",
            "IAU_BASE"."IAU_FAILURECODE" as "IAU_FAILURECODE",
            "IAU_BASE"."IAU_REMOTEIP" as "IAU_REMOTEIP"
        from "KAN3_IAU"."IAU_BASE" "IAU_BASE"
        where "IAU_BASE"."IAU_COMPONENTTYPE" = 'xmlpserver'

    Once you have saved a sample XML for this data model, you can create a report with it.

    Create the Report

    Now you can use one of BI Publisher's layout options to design the report layout and visualize the auditing data. I'm a big fan of the Online Layout Editor; it's just so easy and simple to create reports, and on top of that, all reports created with the Online Layout Editor get the Interactive View with automatic data linking and filtering, without any setting or coding. If you haven't checked out the Interactive View or the Online Layout Editor, you might want to look at these previous blog posts (Interactive Reporting with BI Publisher 11G, Interactive Master Detail Report Just A Few Clicks Away!). But of course, you can use other layout design options such as an RTF template. Here are some sample screenshots of my report design with the Online Layout Editor.

    Visualize and Gain More Insights about Your Customers (Users)!

    Now you can visualize your auditing data to better understand and gain more insights about the reporting environment you manage. Personally, it has been helping me answer questions like these:

        How many reports were accessed or opened yesterday, today, last week?
        Who is accessing which report at what time?
        In which time windows does most of the report access happen?
        What are the most viewed reports?
        Who are the active users?
        What is the trend in report access or user access for the last month, the last 6 months, the last 12 months, etc.?

    I was talking with one of the best concierges in the world at a hotel the other day, and he was telling me that the best concierges know their customers inside out, so they can provide a very personal service customized to each customer's specific needs. Well, the same is true when it comes to administering and managing your reporting environment, right? The best way to serve your customers (report users, both viewers and developers) is to understand how, what, and when they use it. Auditing is not just about compliance; it's a way to improve your customer service. The BI Publisher 11G auditing feature enables just that, helping you understand your customers better.

    Happy customer service; be the best reporting concierge!

    p.s. Please share with us what other information would be helpful for you in the auditing! As always, any feedback is a great value and inspiration for us!

    Read the article
