Search Results

Search found 4940 results on 198 pages for 'understanding'.

Page 163/198

  • Is a control's OnInit called even when attaching it during parent's OnPreRender?

    - by Xerion
    My original understanding was that the asp.net page lifecycle is run once for all pages and controls under normal circumstances. When I attached a control during a container's OnPreRender, I encountered a situation where the control's OnInit was not called. OK, I considered that a bug in my code and fixed as such, by attaching the control earlier. But just today, I encountered a situation where OnInit for a control seems to be called after the normal OnInit has been done for everyone else. See stack below. It seems that during the page's PreRender, the control's OnInit is called as it is being dynamically added. So I just want to confirm exactly what ASP.NET's behavior is? Does it actually keep track of the stage of each control's lifecycle, and upon adding a new control, it will run from the very beginning? [HttpException (0x80004005): The control collection cannot be modified during DataBind, Init, Load, PreRender or Unload phases.] System.Web.UI.ControlCollection.Add(Control child) +8678663 MyCompany.Web.Controls.SetStartPageWrapper.Initialize() MyCompany.Web.Controls.SetStartPageWrapper.OnInit(EventArgs e) System.Web.UI.Control.InitRecursive(Control namingContainer) +333 System.Web.UI.Control.InitRecursive(Control namingContainer) +210 System.Web.UI.Control.AddedControl(Control control, Int32 index) +198 System.Web.UI.ControlCollection.Add(Control child) +80 MyCompany.Web.Controls.PageHeader.OnPreRender(EventArgs e) in System.Web.UI.Control.PreRenderRecursiveInternal() +80 System.Web.UI.Control.PreRenderRecursiveInternal() +171 System.Web.UI.Control.PreRenderRecursiveInternal() +171 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +842
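
    The short answer the trace suggests is yes: ASP.NET tracks each control's lifecycle stage, and a control added late is "caught up", with AddedControl/InitRecursive running its missed stages (Init, then Load, and so on) immediately, even while the parent is in PreRender. The quoted HttpException is a separate guard: ControlCollection.Add refuses the modification while the collection is locked during the phases listed in the message. A minimal sketch of the catch-up behavior, with illustrative class names rather than the original app's:

        using System;
        using System.Web.UI;

        public class ChattyControl : Control
        {
            protected override void OnInit(EventArgs e)
            {
                base.OnInit(e);
                // Fires as soon as the control is added, whatever stage the page is in.
                System.Diagnostics.Debug.WriteLine("ChattyControl.OnInit");
            }
        }

        public class HostControl : Control
        {
            protected override void OnPreRender(EventArgs e)
            {
                base.OnPreRender(e);
                // The child's missed stages run from inside this Add call,
                // via AddedControl -> InitRecursive, as in the stack trace above.
                Controls.Add(new ChattyControl());
            }
        }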

    Read the article

  • What is happening in this T-SQL code? (Concatenating the results of a SELECT statement)

    - by Ben McCormack
    I'm just starting to learn T-SQL and could use some help in understanding what's going on in a particular block of code. I modified some code in an answer I received in a previous question, and here is the code in question: DECLARE @column_list AS varchar(max) SELECT @column_list = COALESCE(@column_list, ',') + 'SUM(Case When Sku2=' + CONVERT(varchar, Sku2) + ' Then Quantity Else 0 End) As [' + CONVERT(varchar, Sku2) + ' - ' + Convert(varchar,Description) +'],' FROM OrderDetailDeliveryReview Inner Join InvMast on SKU2 = SKU and LocationTypeID=4 GROUP BY Sku2 , Description ORDER BY Sku2 Set @column_list = Left(@column_list,Len(@column_list)-1) Select @column_list ---------------------------------------- 1 row is returned: ,SUM(Case When Sku2=157 Then Quantity Else 0 End) As [157 -..., SUM(Case ... The T-SQL code does exactly what I want, which is to make a single result based on the results of a query, which will then be used in another query. However, I can't figure out how the SELECT @column_list =... statement is putting multiple values into a single string of characters by being inside a SELECT statement. Without the assignment to @column_list, the SELECT statement would simply return multiple rows. How is it that by having the variable within the SELECT statement that the results get "flattened" down into one value? How should I read this T-SQL to properly understand what's going on?
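
    A minimal sketch of the same mechanism using throwaway data rather than the tables above: when the assignment appears inside a SELECT that returns several rows, the expression is evaluated once per row, and because each evaluation starts from the variable's current value, the rows accumulate into it. Worth knowing: this row-by-row accumulation is long-standing SQL Server behavior rather than something the documentation guarantees, particularly in combination with ORDER BY.

        DECLARE @names varchar(max);

        SELECT @names = COALESCE(@names + ', ', '') + name
        FROM (VALUES ('alpha'), ('beta'), ('gamma')) AS t(name);

        SELECT @names;  -- returns: alpha, beta, gamma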

    Read the article

  • Linux C debugging library to detect memory corruptions

    - by calandoa
    When working some time ago on an embedded system with a simple MMU, I used to program this MMU dynamically to detect memory corruptions. For instance, at some moment at runtime, the foo variable was overwritten with some unexpected data (probably by a dangling pointer or whatever). So I added the following debugging code: at init, the memory used by foo was marked as a forbidden region to the MMU; each time foo was accessed on purpose, access to the region was allowed just before and forbidden just after; an MMU irq handler was added to dump the master and the address responsible for the violation. This was actually some kind of watchpoint, but handled directly by the code itself. Now, I would like to reuse the same trick, but on an x86 platform. The problem is that I am very far from understanding how the MMU works on this platform and how it is used by Linux, but I wonder if any library/tool/system call already exists to deal with this problem. Note that I am aware that various tools like Valgrind or GDB exist to manage memory problems, but as far as I know, none of these tools can be dynamically reconfigured by the debugged code. I am mainly interested in user space under Linux, but any info on kernel mode or Windows is also welcome!
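
    For the user-space case, the closest equivalent of the embedded trick is mprotect() plus a SIGSEGV handler: the page holding foo stays PROT_NONE, intentional accesses briefly re-enable it, and the handler logs the faulting address from siginfo for anything else. A hedged sketch, assuming the protected object sits alone in its own page:

        #include <signal.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        static size_t pagesz;
        static int *foo;                 /* the protected variable, alone in its page */

        static void on_violation(int sig, siginfo_t *si, void *ctx)
        {
            /* fprintf is not strictly async-signal-safe; fine for a debugging sketch */
            fprintf(stderr, "unexpected access at %p\n", si->si_addr);
            /* unprotect so the faulting instruction can complete (or abort() here) */
            mprotect((void *)((uintptr_t)si->si_addr & ~(uintptr_t)(pagesz - 1)),
                     pagesz, PROT_READ | PROT_WRITE);
        }

        static void allow(int on)        /* bracket intentional accesses: allow(1); ... allow(0); */
        {
            mprotect(foo, pagesz, on ? (PROT_READ | PROT_WRITE) : PROT_NONE);
        }

        int main(void)
        {
            pagesz = (size_t)sysconf(_SC_PAGESIZE);
            foo = mmap(NULL, pagesz, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            struct sigaction sa;
            memset(&sa, 0, sizeof sa);
            sa.sa_sigaction = on_violation;
            sa.sa_flags = SA_SIGINFO;
            sigaction(SIGSEGV, &sa, NULL);

            allow(1); *foo = 42; allow(0);   /* intentional access: no report */
            *foo = 7;                        /* "corruption": the handler reports this one */
            return 0;
        }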

    Read the article

  • Life Scope of Temporary Variable

    - by Yan Cheng CHEOK
    #include <cstdio> #include <string> void fun(const char* c) { printf("--> %s\n", c); } std::string get() { std::string str = "Hello World"; return str; } int main() { const char *cc = get().c_str(); // cc is not valid at this point. As it is pointing to // temporary string internal buffer, and the temporary string // has already been destroyed at this point. fun(cc); // But I am surprised this call yields a valid result. // It seems that the returned temporary string is valid within // scope (...) // What my understanding is, scope means {...} // Is this behavior guaranteed by the C++ standard? Or does it depend // on your compiler vendor's implementation? fun(get().c_str()); getchar(); } The output is: --> --> Hello World Hello, may I know whether the correct behavior is guaranteed by the C++ standard, or whether it depends on your compiler vendor's implementation? I have tested this under VC2008 and VC6; it works fine for both.
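
    A short sketch of the rule involved: a temporary lives until the end of the full expression it appears in, so fun(get().c_str()) is guaranteed to work, while keeping only the c_str() pointer past the statement is not. The relevant scope is the statement, not the enclosing braces, and using the dangling pointer is undefined behavior rather than merely compiler-dependent.

        #include <cstdio>
        #include <string>

        std::string get() { return "Hello World"; }
        void fun(const char* c) { std::printf("--> %s\n", c); }

        int main() {
            // Guaranteed: the temporary returned by get() survives until the end
            // of this full expression, so the c_str() pointer is valid inside fun().
            fun(get().c_str());

            // Not guaranteed: the temporary dies at the ';', so cc dangles afterwards.
            const char* cc = get().c_str();
            (void)cc; // using cc here would be undefined behavior

            // Safe alternative: give the string a name so it lives as long as needed.
            std::string kept = get();
            fun(kept.c_str());
        }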

    Read the article

  • Using pointers to adjust global objects in objective-c

    - by Rob
    Ok, so I am working with two sets of data that are extremely similar, and at the same time, these data sets are both global NSMutableArrays within the object. data_set_one = [[NSMutableArray alloc] init]; data_set_two = [[NSMutableArray alloc] init]; Two new NSMutableArrays are loaded, which need to be added to the old, existing data. These Arrays are also global. xml_dataset_one = [[NSMutableArray alloc] init]; xml_dataset_two = [[NSMutableArray alloc] init]; To reduce code duplication (and because these data sets are so similar) I wrote a void method within the class to handle the data combination process for both Arrays: -(void)constructData:(NSMutableArray *)data fromDownloadArray:(NSMutableArray *)down withMatchSelector:(NSString *)sel_str Now, I have a decent understanding of object oriented programming, so I was thinking that if I were to invoke the method with the global Arrays in the data like so... [self constructData:data_set_one fromDownloadArray:xml_dataset_one withMatchSelector:@"id"]; Then the global NSMutableArrays (data_set_one) would reflect the changes that happen to "array" within the method. Sadly, this is not the case, data_set_one doesn't reflect the changes (ex: new objects within the Array) outside of the method. Here is a code snippet of the problem // data_set_one is empty // xml_dataset_one has a few objects [constructData:(NSMutableArray *)data_set_one fromDownloadArray:(NSMutableArray *)xml_dataset_one withMatchSelector:(NSString *)@"id"]; // data_set_one should now be xml_dataset_one, but when echoed to screen, it appears to remain empty And here is the gist of the code for the method, any help is appreciated. -(void)constructData:(NSMutableArray *)data fromDownloadArray:(NSMutableArray *)down withMatchSelector:(NSString *)sel_str { if ([data count] == 0) { data = down; // set data equal to downloaded data } else if ([down count] == 0) { // download yields no results, do nothing } else { // combine the two arrays here } } This project is not ARC enabled. Thanks for the help guys! Rob
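
    A minimal sketch (same method shape as above, details trimmed) of what is happening: Objective-C passes the pointer itself by value, so data = down only rebinds the local parameter and the global array never changes. Mutating the array the parameter points to, for example with addObjectsFromArray:, is visible to the caller; the alternatives are passing an NSMutableArray ** or returning the result.

        - (void)constructData:(NSMutableArray *)data
            fromDownloadArray:(NSMutableArray *)down
            withMatchSelector:(NSString *)sel_str
        {
            if ([data count] == 0) {
                // data = down;                     // only reassigns the local copy of the pointer
                [data addObjectsFromArray:down];    // mutates the array the caller also sees
            } else if ([down count] == 0) {
                // nothing downloaded, nothing to merge
            } else {
                // combine the two arrays here, again by mutating `data` in place
            }
        }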

    Read the article

  • gluLookAt vectors and FPS-style camera

    - by Kevin Pamplona
    I am attempting to implement an FPS-style camera by updating three vectors: EYE, DIR, UP. These vectors are the same ones used by gluLookAt (since gluLookAt is specified by the position of the camera, the direction it is looking at, and an up vector). I have already implemented the left-right and up-down strafing movements, but I'm having a lot of trouble understanding the math behind making the camera look around while remaining stationary. In this case, the EYE vector remains the same, while I must update DIR and UP. Below is the code I tried, but it doesn't seem to work properly. Any suggestions? Thanks. void Transform::left(float degrees, vec3& dir, vec3& up) { vec3 axis; axis = glm::normalize(up); mat3 R = rotate(-degrees, axis); dir = R*dir; dir = R*up; }; void Transform::up(float degrees, vec3& dir, vec3& up) { vec3 axis; axis=glm::normalize(glm::cross(dir,up)); mat3 R = rotate(-degrees, axis); dir = R*dir; up = R*up; };
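
    A hedged sketch of the usual fix, keeping the poster's rotate() helper as given: yawing left/right rotates only DIR about the UP axis (UP is the axis, so it does not change), while pitching rotates both DIR and UP about the camera's right vector so they stay perpendicular. The second assignment in left() above overwrites dir with a rotated up, which is probably the visible bug.

        void Transform::left(float degrees, vec3& dir, vec3& up) {
            mat3 R = rotate(-degrees, glm::normalize(up));
            dir = R * dir;                    // up is the rotation axis, leave it alone
        }

        void Transform::up(float degrees, vec3& dir, vec3& up) {
            vec3 right = glm::normalize(glm::cross(dir, up));
            mat3 R = rotate(-degrees, right);
            dir = R * dir;
            up  = R * up;                     // keep up perpendicular to dir
        }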

    Read the article

  • Beginner Question: To extract a large subset of a table from MySQL, how does indexing, order of tab

    - by chongman
    Sorry if this is too simple, but thanks in advance for helping. This is for MySQL but might be relevant for other RDBMSs. tblA has 4 columns: colA, colB, colC, mydata, A_id. It has about 10^9 records, with 10^3 distinct values for colA, colB, colC. tblB has 3 columns: colA, colB, B_id. It has about 10^4 records. I want all the records from tblA (except the A_id) that have a match in tblB. In other words, I want to use tblB to describe the subset that I want to extract and then extract those records from tblA. Namely: SELECT a.colA, a.colB, a.colC, a.mydata FROM tblA as a INNER JOIN tblB as b ON a.colA=b.colA AND a.colB=b.colB ; It's taking a really long time (more than an hour) on a newish computer (4GB, Core2Quad, ubuntu), and I just want to check my understanding of the following optimization steps. Suppose this is the only query I will ever run on these tables, so ignore the need to run other queries. Now my questions: 1) What indexes should I create to optimize this query? I think I just need a multi-column index on (colA, colB) for both tables. I don't think I need separate indexes for colA and colB. Another Stack Overflow article (that I can't find) mentioned that adding new indexes is slower when there are existing indexes, so that might be a reason to use the multi-column index. 2) Is INNER JOIN correct? I just want results where a match is found. 3) Is it faster if I join (tblA to tblB) or the other way around (tblB to tblA)? This previous answer says that the optimizer should take care of that. 4) Does the order of the part after ON matter? This previous answer says that the optimizer also takes care of the execution order.
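
    A sketch of point 1, with illustrative index names: a composite index per table on the two join columns is normally what this query needs, and EXPLAIN shows whether the optimizer actually uses them and in which order it joins the tables (which also touches on points 3 and 4).

        CREATE INDEX idx_tblA_colA_colB ON tblA (colA, colB);
        CREATE INDEX idx_tblB_colA_colB ON tblB (colA, colB);

        EXPLAIN
        SELECT a.colA, a.colB, a.colC, a.mydata
        FROM tblA AS a
        INNER JOIN tblB AS b
                ON a.colA = b.colA
               AND a.colB = b.colB;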

    Read the article

  • PHP, is_array() generating an "Undefined property: stdClass::$crit " error

    - by Brook Julias
    is_array($src2->crit) is generating an "Undefined property: stdClass::$crit" error. The line throwing the error is: if(is_array($src2->crit) && count($src->crit) > 0){ $src2->crit is initialized here. $src2->crit = array(); $src2->crit[0] = new dataSet(); $src2->crit[0]->tblName = $tbl2; $src2->crit[0]->colName = "ID"; $src2->crit[0]->val = $elm->editID; When testing $src2->crit with this code. print("\$src->crit is a ".$src->crit."<br />"); print_r($src->crit); print("<br />"); This is returned. $src2->crit is a Array Array ( [0] => dataSet Object ( [tblName] => sExam [colName] => ID [val] => 10 ) ) What am I not seeing/understanding correctly? If print("\$src2->crit is a ".$src->crit."<br />") returns that it is an array then why is is_array($src2->$crit) generating an error?
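
    A minimal sketch of a guard that avoids the notice: check that the property exists before asking whether it is an array. Separately, the snippets above mix $src and $src2, so it may simply be that the object being tested is not the one that was initialized; the names below are illustrative.

        <?php
        $src2 = new stdClass();
        // ... $src2->crit may or may not have been set on this code path ...

        if (isset($src2->crit) && is_array($src2->crit) && count($src2->crit) > 0) {
            // safe: crit exists, is an array, and has elements
        }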

    Read the article

  • Any danger in calling flash messages html_safe?

    - by PreciousBodilyFluids
    I want a flash message that looks something like: "That confirmation link is invalid or expired. Click here to have a new one generated." Where "click here" is of course a link to another action in the app where a new confirmation link can be generated. Two drawbacks: One, since link_to isn't defined in the controller where the flash message is being set, I have to put the link html in myself. No big deal, but kind of messy. Number two: In order for the link to actually display properly on the page I have to html_safe the flash display function in the view, so now it looks like (using Haml): - flash.each do |name, message| = content_tag :div, message.html_safe This gives me pause. Everything else I html_safe has been HTML I've written myself in helpers and whatnot, but the contents of the flash hash are stored in a cookie client-side, and could conceivably be changed. I've thought through it, and I don't see how this could result in an XSS attack, but XSS isn't something I have a great understanding of anyway. So, two questions: 1. Is there any danger in always html_safe-ing all flash contents like this? 2. The fact that this solution is so messy (breaking MVC by using HTML in the controller, always html_safe-ing all flash contents) make me think I'm going about this wrong. Is there a more elegant, Rails-ish way to do this? I'm using Rails 3.0.0.beta3.
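
    One hedged alternative that sidesteps both drawbacks: store a plain flag in the flash and let the view build the link with link_to, so nothing coming out of the cookie is ever rendered as raw HTML. The path helper and flash key below are invented for the sketch.

        # controller
        flash[:confirmation_expired] = true
        redirect_to sign_in_path

        -# view (Haml)
        - if flash[:confirmation_expired]
          %div
            That confirmation link is invalid or expired.
            = link_to "Click here", new_confirmation_path
            to have a new one generated.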

    Read the article

  • Generic InBetween Function.

    - by Luiscencio
    I am tired of writing x > min && x < max, so I want to write a simple function, but I am not sure if I am doing it right... actually I am not, cuz I get an error: bool inBetween<T>(T x, T min, T max) where T:IComparable { return (x > min && x < max); } errors: Operator '>' cannot be applied to operands of type 'T' and 'T' Operator '<' cannot be applied to operands of type 'T' and 'T' Maybe I have a bad understanding of the where part of the function declaration. Note: for those who are going to tell me that I will be writing more code than before... think about readability =) any help will be appreciated EDIT deleted cuz it was resolved =) ANOTHER EDIT so after some headache I came out with this (ummm) thing following @Jay's idea of extreme readability: public static class test { public static comparision Between<T>(this T a,T b) where T : IComparable { var ttt = new comparision(); ttt.init(a); ttt.result = a.CompareTo(b) > 0; return ttt; } public static bool And<T>(this comparision state, T c) where T : IComparable { return state.a.CompareTo(c) < 0 && state.result; } public class comparision { public IComparable a; public bool result; public void init<T>(T ia) where T : IComparable { a = ia; } } } now you can compare anything with extreme readability =) what do you think.. I am no performance guru so any tweaks are welcome
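
    For reference, a minimal sketch of the straightforward version: the IComparable constraint never hands you < or > on T, only CompareTo, so the comparison has to be spelled with that. The generic IComparable<T> form used here also avoids boxing.

        using System;

        static class RangeCheck
        {
            public static bool InBetween<T>(T x, T min, T max) where T : IComparable<T>
            {
                return x.CompareTo(min) > 0 && x.CompareTo(max) < 0;
            }
        }

        // usage: RangeCheck.InBetween(5, 1, 10) == true, RangeCheck.InBetween(11, 1, 10) == false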

    Read the article

  • Question about how to implement a c# host application with a plugin-like architecture

    - by devoured elysium
    I want to have an application that works as a Host to many other small applications. Each one of those applications should work as kind of plugin to this main application. I call them plugins not in the sense they add something to the main application, but because they can only work with this Host application as they depend on some of its services. My idea was to have each of those plugins run in a different app domain. The problem seems to be that my host application should have a set of services that my plugins will want to use and from what is my understanding making data flow in and out from different app domains is not that great of a thing. On one hand I'd like them to behave as stand-alone applications(although, as I said, they need to use lots of times the host application services), but on the other hand I'd like that if any of them crashes, my main application wouldn't suffer from it. What is the best (.NET) approach to this kind of situation? Make them all run on the same AppDomain but each one in a different Thread? Use different AppDomains? One for each "plugin"? How would I make them communicate with the Host Application? Any other way of doing this? Although speed is not an issue here, I wouldn't like for function calls to be that much slower than they are when we're working with just a regular .NET application. Thanks
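
    A rough sketch of the classic pre-.NET-Core answer to this trade-off (assembly and type names below are invented): each plugin lives in its own AppDomain, its entry type derives from MarshalByRefObject so the host talks to it through a proxy, and a stuck or leaking plugin can be discarded by unloading just that domain. Cross-domain calls are slower than in-process calls but usually fine when the chatter is moderate; note that an unhandled exception on a plugin thread can still end the whole process, so plugins should catch their own exceptions.

        using System;

        public abstract class PluginBase : MarshalByRefObject
        {
            public abstract void Run();
        }

        static class PluginHost
        {
            static void Main()
            {
                AppDomain domain = AppDomain.CreateDomain("PluginDomain");
                try
                {
                    var plugin = (PluginBase)domain.CreateInstanceAndUnwrap(
                        "MyPlugins",                // assembly name (hypothetical)
                        "MyPlugins.SamplePlugin");  // type name (hypothetical)
                    plugin.Run();                   // call crosses the domain boundary via a proxy
                }
                finally
                {
                    AppDomain.Unload(domain);       // confine failures to the plugin's domain
                }
            }
        }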

    Read the article

  • SSL signed certificates for internal use

    - by rogueprocess
    I have a distributed application consisting of many components that communicate over TCP (for examle JMS) and HTTP. All components run on internal hardware, with internal IP addresses, and are not accessible to the public. I want to make the communication secure using SSL. Does it make sense to purchase signed certificates from a well-known certificate authority? Or should I just use self-signed certs? My understanding of the advantage of trusted certs is that the authority is an entity that can be trusted by the general public - but that is only an issue when the general public needs to be sure that the entity at a particular domain is who they say they are. Therefore, in my case, where the same organization is responsible for the components at both ends of the communication, and everything in between, a publicly trusted authority would be pointless. In other words, if I generate and sign a certificate for my own server, I know that it's trustworthy. And no one from outside the organization will ever be asked to trust this certificate. That is my reasoning - am I correct, or is there some potential advantage to using certs from a known authority?

    Read the article

  • How can the Three-Phase Commit Protocol (3PC) guarantee atomicity?

    - by AndiDog
    I'm currently exploring worst case scenarios of atomic commit protocols like 2PC and 3PC and am stuck at the point that I can't find out why 3PC can guarantee atomicity. That is, how does it guarantee that if cohort A commits, cohort B also commits? Here's the simplified 3PC from the Wikipedia article: Now let's assume the following case: Two cohorts participate in the transaction (A and B) Both do their work, then vote for commit Coordinator now sends precommit messages... A receives the precommit message, acknowledges, and then goes offline for a long time B doesn't receive the precommit message (whatever the reason might be) and is thus still in "uncertain" state The results: Coordinator aborts the transaction because not all precommit messages were sent and acknowledged successfully A, who is in precommit state, is still offline, thus times out and commits B aborts in any case: He either stays offline and times out (causes abort) or comes online and receives the abort command from the coordinator And there you have it: One cohort committed, another aborted. The transaction is screwed. So what am I missing here? In my understanding, if the automatic commit on timeout (in precommit state) was replaced by infinitely waiting for a coordinator command, that case should work fine.

    Read the article

  • 'WebException' error on back button even when calling 'void' async method

    - by BlazingFrog
    I have a windows phone app that allows the user to interact with it. Each interaction will always result in an async WCF call. In addition to that, some interactions will result in opening the browser, maps, email, etc... The problem is that, when hitting the back button, I sometime get the following error "An error (WebException) occurred while transmitting data over the HTTP channel." with the following stack trace: at System.ServiceModel.Channels.HttpChannelUtilities.ProcessGetResponseWebException(WebException webException, HttpWebRequest request, HttpAbortReason abortReason) at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelAsyncRequest.CompleteGetResponse(IAsyncResult result) at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelAsyncRequest.OnGetResponse(IAsyncResult result) at System.Net.Browser.ClientHttpWebRequest.<>c__DisplayClassa.<InvokeGetResponseCallback>b__8(Object state2) at System.Threading.ThreadPool.WorkItem.WaitCallback_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadPool.WorkItem.doWork(Object o) at System.Threading.Timer.ring() My understanding is that it's happening because my app opened another app (browser, maps, etc) before it had the time to execute the EndMyAsyncMethod(System.IAsyncResult result). Fair enough... What's really annoying is that it seems it should get fixed by cloning the server-side method, only making it void with the following operation contract [OperationContract(IsOneWay = true)] but I'm still getting the error. What's worse is that the exception is thrown in a system-generated part of the code and, thus, cannot be manually caught causing the app to just crash. I simply don't understand the need to execute an Endxxx method when it's explicitely marked as OneWay and void. EDIT I did find a similar issue here. It does seem that it is related to the message getting to the service (not the client callback). My next question is: if I'm now calling a method marked AsyncPattern and OneWay, what exactly should I be waiting for on the client to be sure the message was transmitted successfully? This is new service definition: [OperationContract(IsOneWay = true, AsyncPattern = true)] IAsyncResult BeginCacheQueryWithoutCallback(string param1, QueryInfoDataContract queryInfo, AsyncCallback cb, Object s); void EndCacheQueryWithoutCallback(IAsyncResult r); And the implementation: public IAsyncResult BeginCacheQueryWithoutCallback(string param1, QueryInfoDataContract queryInfo, AsyncCallback cb, Object s) { // do some stuff return new CompletedAsyncResult<string>(""); } public void EndCacheQueryWithoutCallback(IAsyncResult r) { }

    Read the article

  • Alignment in assembly

    - by jena
    Hi, I'm spending some time on assembly programming (Gas, in particular) and recently I learned about the align directive. I think I've understood the very basics, but I would like to gain a deeper understanding of its nature and when to use alignment. For instance, I wondered about the assembly code of a simple C++ switch statement. I know that under certain circumstances switch statements are based on jump tables, as in the following few lines of code: .section .rodata .align 4 .align 4 .L8: .long .L2 .long .L3 .long .L4 .long .L5 ... .align 4 aligns the following data on the next 4-byte boundary which ensures that fetching these memory locations is efficient, right? I think this is done because there might be things happening before the switch statement which caused misalignment. But why are there actually two calls to .align? Are there any rules of thumb when to call .align or should it simply be done whenever a new block of data is stored in memory and something prior to this could have caused misalignment? In case of arrays, it seems that alignment is done on 32-byte boundaries as soon as the array occupies at least 32 byte. Is it more efficient to do it this way or is there another reason for the 32-byte boundary? I'd appreciate any explanation or hint on literature.

    Read the article

  • Where is the "ListViewItemPlaceholderBackgroundThemeBrush" located?

    - by Dimi Toulakis
    I have a problem understanding one style definition in Windows 8 metro apps. When you create a metro style application with VS, there is also a folder named Common created. Inside this folder there is file called StandardStyles.xaml Now the following snippet is from this file: <!-- Grid-appropriate 250 pixel square item template as seen in the GroupedItemsPage and ItemsPage --> <DataTemplate x:Key="Standard250x250ItemTemplate"> <Grid HorizontalAlignment="Left" Width="250" Height="250"> <Border Background="{StaticResource ListViewItemPlaceholderBackgroundThemeBrush}"> <Image Source="{Binding Image}" Stretch="UniformToFill"/> </Border> <StackPanel VerticalAlignment="Bottom" Background="{StaticResource ListViewItemOverlayBackgroundThemeBrush}"> <TextBlock Text="{Binding Title}" Foreground="{StaticResource ListViewItemOverlayForegroundThemeBrush}" Style="{StaticResource TitleTextStyle}" Height="60" Margin="15,0,15,0"/> <TextBlock Text="{Binding Subtitle}" Foreground="{StaticResource ListViewItemOverlaySecondaryForegroundThemeBrush}" Style="{StaticResource CaptionTextStyle}" TextWrapping="NoWrap" Margin="15,0,15,10"/> </StackPanel> </Grid> </DataTemplate> What I do not understand here is the static resource definition, e.g. for the Border Background="{StaticResource ListViewItemPlaceholderBackgroundThemeBrush}" It is not about how you work with templates and binding and resources. Where is this ListViewItemPlaceholderBackgroundThemeBrush located? Many thanks for your help. Dimi

    Read the article

  • What are the common compliance standards for software products?

    - by Jay
    This is a very generic question about software products. I would like to know what compliance standards are applicable to any software product. I know that question gives away nothing. So, here is an example to what I am referring to. CiSecurity Security Certification/Compliance lists out products ceritified by them to be compliant to the standards published at their website, i.e, cisecurity.org. Compliance could be as simple as answering a questionnaire for your product and approved by a thirdparty like cisecurity or it could apply to your whole organization, for instance, PCI-DSS compliance. I would be very interested in knowing the standards that products you know/designed/created, comply to. To give you the context behind this question: I am the developer of a data-masking tool. The said tool helps mask onscreen html text in a banking web application using filters. So, for instance, if the bank application lists out user information with ssn, my product when integrated with the banking product, automatically identifies ssn pattern and masks it into a pre-defined format.So, I have my product marketing team wanting more buzz words like compliance to be able to sell it to more banking clients. Hence, understanding "compliances that apply to products" is a key research item for me at this point. By which I meant, security compliances. Appreciate all your help and suggestions.

    Read the article

  • Swing JTable Scrolling not working properly

    - by Marko
    I'm building an application that uses a Swing JTable. I used drag and drop in NetBeans to add the JTable; when I add the JTable, a JScrollPane is added automatically. All the look is done with drag and drop. By pressing the first button, I set the number of rows in the table. This is the code to set my DataModel: int size = 50; String[] colNames = {"Indeks", "Ime", "Priimek", "Drzava"}; DefaultTableModel model = new DefaultTableModel(size, colNames.length); model.setColumnIdentifiers(colNames); table.setModel(model); So if the size is, let's say, 10, it works OK, since there is no scroll bar. When I add 50 empty rows and try to scroll for 5 seconds, it doesn't work properly and crashes after some time. I added a picture for better understanding (this is what happens to the rows when I scroll up and down for a while). What could be wrong? Am I not using the DataModel as it is supposed to be used? Should I .revalidate() or .repaint() the JTable?
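
    If the model is being swapped from outside the Event Dispatch Thread (for instance from a worker thread behind that button), this kind of repaint corruption is a classic symptom. A hedged sketch of the safe version of the snippet above (imports: javax.swing.SwingUtilities, javax.swing.table.DefaultTableModel); setModel already fires the right events, so no manual revalidate()/repaint() should be needed.

        SwingUtilities.invokeLater(new Runnable() {
            @Override
            public void run() {
                String[] colNames = {"Indeks", "Ime", "Priimek", "Drzava"};
                DefaultTableModel model = new DefaultTableModel(50, colNames.length);
                model.setColumnIdentifiers(colNames);
                table.setModel(model);   // runs on the EDT, so painting state stays consistent
            }
        });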

    Read the article

  • C# iterator is executed twice when composing two IEnumerable methods

    - by achristoph
    I just started learning about C# iterator but got confused with the flow of the program after reading the output of the program. The foreach with uniqueVals seems to be executed twice. My understanding is that the first few lines up to the line before "Nums in Square: 3" should not be there. Can anyone help to explain why this happens? The output is: Unique: 1 Adding to uniqueVals: 1 Unique: 2 Adding to uniqueVals: 2 Unique: 2 Unique: 3 Adding to uniqueVals: 3 Nums in Square: 3 Unique: 1 Adding to uniqueVals: 1 Square: 1 Number returned from Unique: 1 Unique: 2 Adding to uniqueVals: 2 Square: 2 Number returned from Unique: 4 Unique: 2 Unique: 3 Adding to uniqueVals: 3 Square: 3 Number returned from Unique: 9 static class Program { public static IEnumerable<T> Unique<T>(IEnumerable<T> sequence) { Dictionary<T, T> uniqueVals = new Dictionary<T, T>(); foreach (T item in sequence) { Console.WriteLine("Unique: {0}", item); if (!uniqueVals.ContainsKey(item)) { Console.WriteLine("Adding to uniqueVals: {0}", item); uniqueVals.Add(item, item); yield return item; Console.WriteLine("After Unique yield: {0}", item); } } } public static IEnumerable<int> Square(IEnumerable<int> nums) { Console.WriteLine("Nums in Square: {0}", nums.Count()); foreach (int num in nums) { Console.WriteLine("Square: {0}", num); yield return num * num; Console.WriteLine("After Square yield: {0}", num); } } static void Main(string[] args) { var nums = new int[] { 1, 2, 2, 3 }; foreach (int num in Square(Unique(nums))) Console.WriteLine("Number returned from Unique: {0}", num); Console.Read(); } }
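
    What the output shows is two lazy iterators combined: nums.Count() inside Square() has to enumerate the whole Unique() sequence once just to count it (that is everything up to "Nums in Square: 3"), and the foreach then enumerates it again from the start, because each GetEnumerator() call restarts an iterator method. A small sketch of the difference, with throwaway names:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class IteratorDemo
        {
            static IEnumerable<int> Numbers()
            {
                foreach (int i in new[] { 1, 2, 3 })
                {
                    Console.WriteLine("yielding {0}", i);
                    yield return i;
                }
            }

            static void Main()
            {
                IEnumerable<int> lazy = Numbers();
                Console.WriteLine("count = {0}", lazy.Count());  // first full pass
                foreach (int i in lazy) { }                      // second full pass

                List<int> once = Numbers().ToList();             // enumerate exactly once
                Console.WriteLine("count = {0}", once.Count);    // Count property, no re-enumeration
                foreach (int i in once) { }
            }
        }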

    Read the article

  • How is a functional programming-based javascript app laid out?

    - by user321521
    I've been working with node.js for awhile on a chat app (I know, very original, but I figured it'd be a good learning project). Underscore.js provides a lot of functional programming concepts which look interesting, so I'd like to understand how a functional program in javascript would be setup. From my understanding of functional programming (which may be wrong), the whole idea is to avoid side effects, which are basically having a function which updates another variable outside of the function so something like var external; function foo() { external = 'bar'; } foo(); would be creating a side effect, correct? So as a general rule, you want to avoid disturbing variables in the global scope. Ok, so how does that work when you're dealing with objects and what not? For example, a lot of times, I'll have a constructor and an init method that initializes the object, like so: var Foo = function(initVars) { this.init(initVars); } Foo.prototype.init = function(initVars) { this.bar1 = initVars['bar1']; this.bar2 = initVars['bar2']; //.... } var myFoo = new Foo({'bar1': '1', 'bar2': '2'}); So my init method is intentionally causing side effects, but what would be a functional way to handle the same sort of situation? Also, if anyone could point me to either a python or javascript source code of a program that tries to be as functional as possible, that would also be much appreciated. I feel like I'm close to "getting it", but I'm just not quite there. Mainly I'm interested in how functional programming works with traditional OOP classes concept (or does away with it for something different if that's the case).
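
    A minimal sketch of how the constructor/init example above usually looks when written functionally: instead of a method that mutates this, a plain function builds and returns a value, and an "update" returns a fresh object while the original stays untouched.

        function makeFoo(initVars) {
          return { bar1: initVars.bar1, bar2: initVars.bar2 };
        }

        function withBar1(foo, bar1) {
          return { bar1: bar1, bar2: foo.bar2 };  // copy, don't mutate
        }

        var myFoo  = makeFoo({ bar1: '1', bar2: '2' });
        var myFoo2 = withBar1(myFoo, '42');       // myFoo is unchanged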

    Read the article

  • how to return a list using SwingWorker

    - by Ender
    I have an assignment where I have to create an Image Gallery that uses a SwingWorker to load the images from a file; once the images are loaded you can flip through them and have a slideshow play. I am having trouble getting the list of loaded images using SwingWorker. This is what happens in the background; it just publishes the results to a TextArea: // In a thread @Override public List<Image> doInBackground() { List<Image> images = new ArrayList<Image>(); for (File filename : filenames) { try { //File file = new File(filename); System.out.println("Attempting to add: " + filename.getAbsolutePath()); images.add(ImageIO.read(filename)); publish("Loaded " + filename); System.out.println("Added file" + filename.getAbsolutePath()); } catch (IOException ioe) { publish("Error loading " + filename); } } return images; } } When it is done, I just insert the images into a List<Image>, and that is all it does: // In the EDT @Override protected void done() { try { for (Image image : get()) { list.add(image); } } catch (Exception e) { } } Also, I created a method called getImages() that returns the list. What I need is the list from getImages(), but it doesn't seem to work when I call execute(). For example: MySwingWorkerClass swingworker = new MySwingWorkerClass(log,list,filenames); swingworker.execute(); imageList = swingworker.getImage() Once it reaches imageList it doesn't return anything. The only way I was able to get the list was when I used run() instead of execute(). Is there another way to get the list, or is the run() method the only way? Or perhaps I am not understanding the SwingWorker class.
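
    A minimal sketch of the timing problem: execute() returns immediately, so reading the list right after it still sees nothing. Either block with get(), which waits for doInBackground() to finish, or (better for a GUI) consume the result inside done(), which runs on the EDT once the list is ready. Names below are throwaway, not the assignment's classes.

        import java.util.Arrays;
        import java.util.List;
        import javax.swing.SwingWorker;

        public class WorkerDemo {
            public static void main(String[] args) throws Exception {
                SwingWorker<List<String>, Void> worker = new SwingWorker<List<String>, Void>() {
                    @Override
                    protected List<String> doInBackground() {
                        return Arrays.asList("a.png", "b.png");   // stand-in for the loaded images
                    }
                    @Override
                    protected void done() {
                        try {
                            System.out.println("done(): " + get()); // result is guaranteed ready here
                        } catch (Exception e) { e.printStackTrace(); }
                    }
                };
                worker.execute();
                // Reading a field here would still be empty; get() blocks until the work is done:
                System.out.println("main: " + worker.get());
            }
        }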

    Read the article

  • how to update UI controls in cocoa application from background thread

    - by AmitSri
    The following is the .m code: #import "ThreadLabAppDelegate.h" @interface ThreadLabAppDelegate() - (void)processStart; - (void)processCompleted; @end @implementation ThreadLabAppDelegate @synthesize isProcessStarted; - (void)awakeFromNib { //Set levelindicator's maximum value [levelIndicator setMaxValue:1000]; } - (void)dealloc { //Never called while debugging ???? [super dealloc]; } - (IBAction)startProcess:(id)sender { //Set process flag to true self.isProcessStarted=YES; //Start Animation [spinIndicator startAnimation:nil]; //perform selector in background thread [self performSelectorInBackground:@selector(processStart) withObject:nil]; } - (IBAction)stopProcess:(id)sender { //Stop Animation [spinIndicator stopAnimation:nil]; //set process flag to false self.isProcessStarted=NO; } - (void)processStart { int counter = 0; while (counter != 1000) { NSLog(@"Counter : %d",counter); //Sleep background thread to reduce CPU usage [NSThread sleepForTimeInterval:0.01]; //set the level indicator value to showing progress [levelIndicator setIntValue:counter]; //increment counter counter++; } //Notify main thread for process completed [self performSelectorOnMainThread:@selector(processCompleted) withObject:nil waitUntilDone:NO]; } - (void)processCompleted { //Stop Animation [spinIndicator stopAnimation:nil]; //set process flag to false self.isProcessStarted=NO; } @end I need to clarify the following things about the above code: How do I interrupt/cancel the processStart while loop from a UI control? I also need to show the counter value in the main UI, which I suppose I should do with performSelectorOnMainThread and passing an argument; I just want to know, is there any other way to do that? When my app starts it shows 1 thread in Activity Monitor, but when I start processStart() in a background thread it creates two new threads, which makes 3 threads in total until the loop finishes. After the loop completes I can see 2 threads. So my understanding is that 2 threads are created when I call performSelectorInBackground, but what about the third thread, where did it come from? What if the thread count increases on every call of the selector? How do I control that, or is my implementation bad for this kind of requirement? Thanks
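
    A hedged sketch of the first two points, reusing the flags above: the loop checks isProcessStarted on every iteration so the Stop button can end it, and UI updates hop back to the main thread with performSelectorOnMainThread: (updateIndicator: is a helper invented for this sketch, not part of the original code).

        - (void)processStart {
            for (int counter = 0; counter < 1000 && self.isProcessStarted; counter++) {
                [NSThread sleepForTimeInterval:0.01];

                // Never touch AppKit controls from a background thread; hop back instead:
                [self performSelectorOnMainThread:@selector(updateIndicator:)
                                       withObject:[NSNumber numberWithInt:counter]
                                    waitUntilDone:NO];
            }
            [self performSelectorOnMainThread:@selector(processCompleted)
                                   withObject:nil
                                waitUntilDone:NO];
        }

        // Helper invented for this sketch; runs on the main thread.
        - (void)updateIndicator:(NSNumber *)value {
            [levelIndicator setIntValue:[value intValue]];
        }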

    Read the article

  • Should I use a regular server instead of AWS?

    - by Jon Ramvi
    Reading about and using the Amazon Web Services, I'm not really able to grasp how to use it correctly. Sorry about the long question: I have an EC2 instance which mostly does the work of a web server (apache for file sharing and Tomcat with Play Framework for the web app). As it's a web server, the instance is running 24/7. It just came to my attention that the data on the EC2 instance is non-persistent. This means I lose my database and files if it's stopped. But I guess it also means my server settings and installed applications are lost, as they are just files in the same way as the other data. This means that I will either have to rewrite the whole app to use Amazon CloudDB, or write some code which stores the db on S3 and make my own AMI with the correct applications installed and configured. Or can this be quick-fixed by using EBS somehow? My questions are: 1. Is my understanding of AWS correct? 2. Is it worth it? It could be a possibility to just set up a regular dedicated server where everything is persistent, as you would expect. Would love to have the scalability of AWS, though...

    Read the article

  • Trying to not need two separate solutions for x86 and x64 program.

    - by Sean Anderson
    Hi all, I have a program which needs to function in both an x86 and an x64 environment. It is using Oracle's ODBC drivers. I have a reference to Oracle.DataAccess.DLL. This DLL is different depending on whether the system is x64 or x86, though. Currently, I have two separate solutions and I am maintaining the code on both. This is atrocious. I was wondering what the proper solution is? I have my platform set to "Any CPU." and it is my understanding that VS should compile the DLL to an intermediary language such that it should not matter if I use the x86 or x64 version. Yet, if I attempt to use the x64 DLL I receive the error "Could not load file or assembly 'Oracle.DataAccess, Version=2.102.3.2, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load a program with an incorrect format." I am running on a 32 bit machine, so the error message makes sense, but it leaves me wondering how I am supposed to efficiently develop this program when it needs to work on x64. Thanks.
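
    The BadImageFormatException quoted above suggests the Oracle.DataAccess build is platform-specific (separate x86 and x64 assemblies tied to the native Oracle client), so "Any CPU" on your own code alone cannot bridge it. One hedged way to keep a single solution is a conditional reference in the project file, roughly like the sketch below (HintPaths are made up), combined with explicit x86 and x64 build configurations.

        <!-- Sketch only: hand-edited .csproj ItemGroups choosing the matching
             Oracle.DataAccess.dll per build platform. -->
        <ItemGroup Condition=" '$(Platform)' == 'x86' ">
          <Reference Include="Oracle.DataAccess">
            <HintPath>..\libs\x86\Oracle.DataAccess.dll</HintPath>
          </Reference>
        </ItemGroup>
        <ItemGroup Condition=" '$(Platform)' == 'x64' ">
          <Reference Include="Oracle.DataAccess">
            <HintPath>..\libs\x64\Oracle.DataAccess.dll</HintPath>
          </Reference>
        </ItemGroup>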

    Read the article

  • Declaration, allocation and assignment of an array of pointers to function pointers

    - by manneorama
    Hello Stack Overflow! This is my first post, so please be gentle. I've been playing around with C from time to time in the past. Now I've gotten to the point where I've started a real project (a 2D graphics engine using SDL, but that's irrelevant for the question), to be able to say that I have some real C experience. Yesterday, while working on the event system, I ran into a problem which I couldn't solve. There's this typedef, //the void parameter is really an SDL_Event*. //but that is irrelevant for this question. typedef void (*event_callback)(void); which specifies the signature of a function to be called on engine events. I want to be able to support multiple event_callbacks, so an array of these callbacks would be an idea, but do not want to limit the amount of callbacks, so I need some sort of dynamic allocation. This is where the problem arose. My first attempt went like this: //initial size of callback vector static const int initial_vecsize = 32; //our event callback vector static event_callback* vec = 0; //size static unsigned int vecsize = 0; void register_event_callback(event_callback func) { if (!vec) __engine_allocate_vec(vec); vec[vecsize++] = func; //error here! } static void __engine_allocate_vec(engine_callback* vec) { vec = (engine_callback*) malloc(sizeof(engine_callback*) * initial_vecsize); } First of all, I have omitted some error checking as well as the code that reallocates the callback vector when the number of callbacks exceed the vector size. However, when I run this code, the program crashes as described in the code. I'm guessing segmentation fault but I can't be sure since no output is given. I'm also guessing that the error comes from a somewhat flawed understanding on how to declare and allocate an array of pointers to function pointers. Please Stack Overflow, guide me.
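
    A minimal sketch of the likely crash: the malloc'd address is assigned to the copy of vec that __engine_allocate_vec received, so the file-scope vec is still NULL when vec[vecsize++] runs. Passing the pointer's address (or just assigning to the global inside the helper) fixes it; the typedef itself is fine, and the engine_callback/event_callback mismatch above is presumably just a transcription slip.

        #include <stdlib.h>

        typedef void (*event_callback)(void);

        static const int initial_vecsize = 32;
        static event_callback *vec = NULL;
        static unsigned int vecsize = 0;

        static void engine_allocate_vec(event_callback **out)
        {
            /* allocate room for the callbacks and hand the block back through the
               out-parameter so the caller's pointer actually changes */
            *out = malloc(sizeof(event_callback) * initial_vecsize);
        }

        void register_event_callback(event_callback func)
        {
            if (!vec)
                engine_allocate_vec(&vec);
            if (vec)
                vec[vecsize++] = func;
        }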

    Read the article
