Search Results

Search found 25579 results on 1024 pages for 'complex event processing'.

Page 763 of 1024

  • How to handle 'this' pointer in constructor?

    - by Kyle
    I have objects which create other child objects within their constructors, passing 'this' so the child can save a pointer back to its parent. I use boost::shared_ptr extensively in my programming as a safer alternative to std::auto_ptr or raw pointers. So the child would hold something like shared_ptr<Parent>, and boost provides the shared_from_this() method which the parent can give to the child. My problem is that shared_from_this() cannot be used in a constructor, which isn't really a crime, because 'this' should not be used in a constructor anyway unless you know what you're doing and don't mind the limitations. Google's C++ Style Guide states that constructors should merely set member variables to their initial values; any complex initialization should go in an explicit Init() method. This solves the 'this-in-constructor' problem as well as a few others. What bothers me is that people using your code now must remember to call Init() every time they construct one of your objects. The only way I can think of to enforce this is by having an assertion that Init() has already been called at the top of every member function, but this is tedious to write and cumbersome to execute. Are there any idioms out there that solve this problem at any step along the way?
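
    One common idiom worth mentioning here (a minimal sketch, not necessarily the one the poster settled on): make the constructor private and force construction through a static factory that calls Init() before handing the object out, so callers cannot forget the second phase.

        #include <boost/shared_ptr.hpp>
        #include <boost/enable_shared_from_this.hpp>

        class Parent : public boost::enable_shared_from_this<Parent> {
        public:
            // The only way to create a Parent; guarantees Init() has run.
            static boost::shared_ptr<Parent> Create() {
                boost::shared_ptr<Parent> p(new Parent());
                p->Init();  // shared_from_this() is safe from here on
                return p;
            }
        private:
            Parent() {}  // constructor does trivial setup only
            void Init() {
                // complex initialization; may call shared_from_this() to
                // hand children a pointer back to this parent
            }
        };

    Because the shared_ptr already owns the object by the time Init() runs, shared_from_this() works inside it, and the factory is the single enforcement point instead of an assertion in every member function.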

    Read the article

  • Error accessing uncompiled pages using ISA Server 2006 SP1

    - by Bravax
    We are in the process of configuring a portal to use ISA Server as our front-end security provider, so we are using ISA Server 2006 SP1. Unfortunately, when we access .NET applications through ISA Server for the first time (i.e. before they have been compiled), the following error appears:

        Error Code: 500 Internal Server Error. The parameter is incorrect. (87)

    In the ISA monitoring logs, this shows:

        Failed Connection Attempt
        Log type: Web Proxy (Reverse)
        Status: 87 The parameter is incorrect.

    Once the application is compiled, the error never appears. Does anyone know how to resolve this, so the site works correctly the first time? Some additional information: the websites accessed are running on Windows Server 2008 64-bit Standard edition, and the problem occurs for SharePoint as well as standard .NET websites. ISA Server is running on Windows Server 2003 R2 SP2 Standard edition. The firewall on the Windows Server 2008 box allows all access (to rule this out). Nothing odd appears in the IIS logs or firewall logs.

    Read the article

  • Call .NET Webservice with Android

    - by Lasse P
    Hi, I know this question has been asked here before, but I don't think those answers were adequate for my needs. We have a SOAP webservice that is used for an iPhone application, but it is possible that we will need an Android-specific version or a proxy of the service, so we have the option to go with either SOAP or JSON. I have a few concerns about both methods. SOAP solution: Is it possible to generate Java source code from a WSDL file? If so, will it include some kind of proxy class to invoke the webservice, and will it work in the Android environment at all? Google has not provided any SOAP library in Android, so I need to use a 3rd-party one; any suggestions? What about the performance/overhead of parsing and transmitting SOAP XML over the wire versus the JSON solution? JSON solution: There are a few classes in the Android SDK that will let me parse JSON, but do they support generic parsing, as when I want the result to be parsed as a complex type? Or would I need to implement that myself? I have read about two libraries before here on Stack Overflow, GSON and Jackson. How do they compare performance- and usability-wise (from a developer's perspective)? Do you guys have any experience with either of those libraries? So I guess the big question is: which method should I go with? I hope you can help me out. Thanks in advance :-)
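
    On the 'complex type' concern, a minimal Gson sketch (the result class and JSON shape are invented for illustration): Gson can deserialize a JSON document straight into a typed object graph, with no hand-written parsing.

        import com.google.gson.Gson;

        public class GsonDemo {
            // Hypothetical result type matching the service's JSON
            static class Order {
                String id;
                double total;
            }

            public static void main(String[] args) {
                String json = "{\"id\":\"A-17\",\"total\":42.5}";
                // Complex type materialized in one call
                Order order = new Gson().fromJson(json, Order.class);
                System.out.println(order.id + " costs " + order.total);
            }
        }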

    Read the article

  • What database strategy to choose for a large web application

    - by Snoopy
    I have to rewrite a large database application, running on 32 servers. The hardware is up to date; each machine has two quad-core Xeons and 32 GB of RAM. The database is multi-tenant; each customer has his own file, around 5 to 10 GB each. I run around 50 databases on this hardware. The app is open to the web, so I have no control over the load. There are no really complex queries, so SQL is not required if there is a better solution. The databases get updated via FTP every day at midnight. The database is read-only. C# is my favourite language and I want to use ASP.NET MVC. I thought about the following options: use two big SQL Server 2012 machines to serve data to the 32 servers, which run IIS and host REST services; denormalize the database and use Redis on each webserver, with Booksleeve as the Redis client; use a combination of SQL Server and Redis; use SQL Server 2012 together with Hadoop; or use Hadoop without SQL Server. What is the best way, for a read-only database, to get the best performance without losing maintainability? Does Map-Reduce make sense at all in such a scenario? The reason for the rewrite is that the old app, written in C++ with ISAM technology, is too slow, and its interfaces are old-fashioned and not nice to use from a website, especially when using Ajax. The app uses a relational data model with many tables, but it is possible to build one accelerator table that all queries can be performed against, with all other information from the other tables available by a simple key lookup.

    Read the article

  • Whose fault is a NullReferenceException?

    - by stefan.at.wpf
    I'm currently working on a class which exposes an internal List through a property. The list shall and can be modified. The problem is that entries in the internal list could be set to null from outside the class. My code actually looks like this:

        class ClassWithList
        {
            List<object> _list = new List<object>();

            // Getter which returns the reference to the list itself,
            // so the list can be modified from outside (this is intended).
            public List<object> Data
            {
                get { return _list; }
            }

            private void doSomeWorkWithTheList()
            {
                foreach (object obj in _list)
                {
                    // do some work with the objects in the list,
                    // without checking for null
                }
            }
        }

    So now in doSomeWorkWithTheList() I could always check whether the current list entry is null, or I could just assume that the person using this class doesn't have the bright idea of setting entries to null. So finally it comes down to these questions: whose fault is a NullReferenceException in this case? Is it the fault of the class developer for not checking everything for null (which would make code generally - not only in this class - more complex), or is it the fault of the user of this class, since setting a list entry to null doesn't really make sense? I'd tend to generally not check values for null except in some really special cases. Is this bad style, or the de facto standard in practice? I know there's probably no ultimate answer to this; I'm just missing the experience for such things, and am therefore wondering what other developers think about such cases and what is done in reality about checking for null (or not).
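
    If the class wants to rule the question out entirely, one option (a sketch, not necessarily the right trade-off here) is to expose a thin wrapper instead of the raw List, rejecting nulls at the boundary so doSomeWorkWithTheList() can rely on the invariant:

        using System;
        using System.Collections.ObjectModel;

        // A Collection<T> subclass that refuses null entries on insert and update.
        class NonNullList<T> : Collection<T> where T : class
        {
            protected override void InsertItem(int index, T item)
            {
                if (item == null) throw new ArgumentNullException("item");
                base.InsertItem(index, item);
            }

            protected override void SetItem(int index, T item)
            {
                if (item == null) throw new ArgumentNullException("item");
                base.SetItem(index, item);
            }
        }

    With this, the ArgumentNullException points at the caller who inserted the null, rather than surfacing later as a NullReferenceException deep inside the class.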

    Read the article

  • Cutting large XML file into smaller pieces in C#

    - by NDraskovic
    I have a problem that I've been working on for quite some time now. I have an XML file with over 50,000 records (one record has 3 levels). This file is used by one of my applications to control document sending (the record holds, among other information, the type of document that has to be sent to a certain person). So in my application I load the XML file into an XmlDocument, and then by using the SelectNodes method I create an XmlNodeList from which I read the data I want. The process is like this: our worker takes the person's ID card (simple, with a barcode) and reads it with a barcode reader. When the barcode value has been read, my application finds the person with that ID in the XML file, and stores the type of the document in a string variable. Then the worker takes the document and reads its barcode, and if the value of the document's barcode and the value in the string variable match, the application makes a record that a document of type xxxxxxxx will be sent to the person with ID yyyyyyyyy. This is very simple code and it works perfectly for now. On the textBox1_TextChanged event (worker reads the person's ID):

        foreach (XmlNode node in NodeList)
        {
            if (String.Compare(node.Attributes.GetNamedItem("ID").Value.ToString(), textBox1.Text) == 0)
            {
                ControlString = node.ChildNodes[3].FirstChild.Attributes.GetNamedItem("doctype").Value.ToString();
                break;
            }
        }
        textBox2.Focus();

    And on the textBox2_TextChanged event (worker reads the document's barcode):

        if (String.Compare(textBox2.Text, ControlString) == 0)
        {
            // Create a record and insert it into a SQL database
        }

    My question is: how will my application perform with larger XML files (I was told that the XML file might grow to 500,000 records)? Will this approach still be valid, or will I need to cut the file into smaller files? If I have to cut it, please give me an idea with some code samples. I've tried to do it like this - reading an entire record and storing it in a string:

        private void WriteXml(XmlNode record)
        {
            tempXML = record.InnerXml;
            temp = "<" + record.Name + " code=\"" + record.Attributes.GetNamedItem("code").Value + "\">" + Environment.NewLine;
            temp += tempXML + Environment.NewLine;
            temp += "</" + record.Name + ">";
            SmallerXMLDocument += temp + Environment.NewLine;
            temp = "";
            i++;
        }

    tempXML, temp and SmallerXMLDocument are all string variables. Then, in a button_Click method, I load the XML file into an XmlNodeList (again by using the XmlDocument.SelectNodes method) and try to build one big string value holding all the records, like this:

        foreach (XmlNode node in nodes)
        {
            if (String.Compare(node.ChildNodes[3].FirstChild.Attributes.GetNamedItem("doctype").Value.ToString(), doctype1) == 0)
            {
                WriteXml(node);
            }
        }

    My idea was to build up a string value (in this case called SmallerXMLDocument) and, once I have passed through the entire XML file, simply copy the value of that string into a new file. This works, but only for files that have up to 2,000 records (and mine has way more than that). So, if I need to cut the file into smaller pieces, what would be the best way to do it (keeping in mind that there could be up to half a million records in an XML file)? Thanks
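
    Rather than splitting the file, a streaming approach usually scales better for this kind of lookup; here is a minimal XmlReader sketch (the element and attribute names are guessed from the code above) that scans the file without ever loading it all into memory. Separately, building SmallerXMLDocument with string += is what caps the splitting approach at a few thousand records - each += copies the whole string - so a StringBuilder or a direct stream writer would lift that limit.

        using System.Xml;

        // Stream through records.xml and return the doctype for a given person ID.
        static string FindDoctype(string path, string personId)
        {
            using (XmlReader reader = XmlReader.Create(path))
            {
                while (reader.ReadToFollowing("record"))          // assumed record element name
                {
                    if (reader.GetAttribute("ID") == personId)    // attribute name taken from the code above
                    {
                        XmlReader record = reader.ReadSubtree();  // only this one record is materialized
                        if (record.ReadToDescendant("document"))  // assumed child element name
                            return record.GetAttribute("doctype");
                    }
                }
            }
            return null;
        }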

    Read the article

  • PHP curl timing mismatch

    - by JonoB
    I am running a PHP script that: queries a local database to retrieve an amount; executes a curl statement to update an external database with the above amount + x; and queries the local database again to insert a new row recording that the curl statement has been executed. One of the problems I am having is that the curl statement takes 2-4 seconds to execute, so if two different users from the same company run the same script at the same time, the execution time of the curl command can cause a mismatch in what should be updated in the external database. This is because the curl statement has not yet returned for the first user, so the second user is working off incorrect figures. I am not sure of the best options here, but basically I need to prevent two or more curl statements being run at the same time. I thought of storing a value in the database that indicates that the curl statement is being executed at that time, and preventing any other curl statements being run until it has completed. Once the first curl statement has been executed, the database flag is updated and the next one can run. If this field is 'locked', I could loop through the code and sleep for 5 seconds, then check again whether the flag has been reset. After 3 loops I would reset the flag automatically (I've never seen the curl take longer than 5 seconds) and continue processing. Are there any other (more elegant) ways of approaching this?
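
    One more elegant option (a sketch; the lock-file path is arbitrary) is to let the operating system arbitrate with an exclusive file lock. Unlike a database flag, the lock is released automatically even if the script dies mid-request, so no stale flag can be left behind.

        <?php
        // Serialize the curl section across concurrent requests with flock().
        $fp = fopen('/tmp/external-update.lock', 'c');   // arbitrary lock file
        if (flock($fp, LOCK_EX)) {                       // blocks until the lock is free
            // 1. read the amount from the local DB
            // 2. run the curl update against the external DB
            // 3. insert the confirmation row locally
            flock($fp, LOCK_UN);                         // release explicitly
        }
        fclose($fp);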

    Read the article

  • EF4 Code-First CTP5 Many-to-one

    - by Kevin McKelvin
    I've been trying to get EF4 CTP5 to play nicely with an existing database, but I am struggling with some basic mapping issues. I have two model classes so far:

        public class Job
        {
            [Key, Column(Order = 0)]
            public int JobNumber { get; set; }

            [Key, Column(Order = 1)]
            public int VersionNumber { get; set; }

            public virtual User OwnedBy { get; set; }
        }

    and

        [Table("Usernames")]
        public class User
        {
            [Key]
            public string Username { get; set; }

            public string EmailAddress { get; set; }
            public bool IsAdministrator { get; set; }
        }

    My DbContext class exposes these as IDbSet. I can query my users, but as soon as I added the OwnedBy field to the Job class I began getting this error in all my tests for the jobs: Invalid column name 'UserUsername'. I want this to behave like NHibernate's many-to-one, but I think EF4 is treating it as a complex type. How should this be done?
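
    The 'UserUsername' column name is EF's convention for the foreign key it invented (type name + key name); if the existing table's column is called something else, the relationship has to be mapped explicitly. A sketch using the fluent API as it stabilized in EF 4.1, which CTP5 closely resembles (method names shifted between CTPs, so treat this as a direction rather than gospel; the column name is made up):

        public class MyContext : DbContext
        {
            public IDbSet<Job> Jobs { get; set; }
            public IDbSet<User> Users { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // Map Job.OwnedBy as a many-to-one against the real FK column.
                modelBuilder.Entity<Job>()
                    .HasRequired(j => j.OwnedBy)
                    .WithMany()
                    .Map(m => m.MapKey("OwnerUsername")); // hypothetical column name
            }
        }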

    Read the article

  • Out of memory when creating a lot of objects C#

    - by Bas
    I'm processing 1 million records in my application, which I retrieve from a MySQL database. To do so I'm using LINQ to get the records, and I use .Skip() and .Take() to process 250 records at a time. For each retrieved record I need to create 0 to 4 items, which I then add to the database, so the average total number of items to create is around 2 million.

        while (objects.Count != 0)
        {
            using (dataContext = new LinqToSqlContext(new DataContext()))
            {
                foreach (Object objectRecord in objects)
                {
                    // Create a list of 0 - 4 random items and add each item to the object
                    for (int i = 0; i < Random.Next(0, 4); i++)
                    {
                        Item item = new Item();
                        item.Id = Guid.NewGuid();
                        item.Object = objectRecord.Id;
                        item.Created = DateTime.Now;
                        item.Changed = DateTime.Now;
                        dataContext.InsertOnSubmit(item);
                    }
                }
                dataContext.SubmitChanges();
            }
            amountToSkip += 250;
            objects = objectCollection.Skip(amountToSkip).Take(250).ToList();
        }

    Now the problem arises when creating the items. When running the application (and not even using the dataContext) the memory increases consistently. It's as if the items are never getting disposed. Does anyone see what I'm doing wrong? Thanks in advance!
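
    Two small things worth ruling out before hunting the leak (a sketch, not a diagnosis): Random.Next(0, 4) has an exclusive upper bound, so it never yields 4, and because it sits in the loop condition it is re-evaluated on every iteration, changing the target count mid-loop. Drawing the count once per record fixes both:

        // One Random instance reused; Next's upper bound is exclusive.
        Random rng = new Random();

        foreach (Object objectRecord in objects)
        {
            int itemCount = rng.Next(0, 5);   // 0..4 inclusive, drawn once per record
            for (int i = 0; i < itemCount; i++)
            {
                Item item = new Item();
                item.Id = Guid.NewGuid();
                item.Object = objectRecord.Id;
                item.Created = DateTime.Now;
                item.Changed = DateTime.Now;
                dataContext.InsertOnSubmit(item);
            }
        }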

    Read the article

  • Need a better way to execute console commands from python and log the results

    - by Wim Coenen
    I have a Python script which needs to execute several command-line utilities. The stdout output is sometimes used for further processing. In all cases, I want to log the results and raise an exception if an error is detected. I use the following function to achieve this:

        def execute(cmd, logsink):
            logsink.log("executing: %s\n" % cmd)
            popen_obj = subprocess.Popen(
                cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            (stdout, stderr) = popen_obj.communicate()
            returncode = popen_obj.returncode
            if returncode != 0:
                logsink.log("  RETURN CODE: %s\n" % str(returncode))
            if len(stdout.strip()) > 0:
                logsink.log("  STDOUT:\n%s\n" % stdout)
            if len(stderr.strip()) > 0:
                logsink.log("  STDERR:\n%s\n" % stderr)
            if returncode != 0:
                raise Exception("execute failed with error output:\n%s" % stderr)
            return stdout

    'logsink' can be any Python object with a log method. I typically use this to forward the logging data to a specific file, or echo it to the console, or both, or something else... This works pretty well, except for three problems where I need more fine-grained control than the communicate() method provides: stdout and stderr output can be interleaved on the console, but the above function logs them separately, which can complicate interpretation of the log - how do I log stdout and stderr lines interleaved, in the same order as they were output? The above function only logs the command output once the command has completed, which complicates diagnosis when commands get stuck in an infinite loop or take a very long time for some other reason - how do I get the log in real time, while the command is still executing? And if the logs are large, it can get hard to tell which command generated which output - is there a way to prefix each line with something (e.g. the first word of the cmd string followed by ':')?
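
    All three points can be addressed at once by merging stderr into stdout and reading the pipe line by line; a minimal sketch (Python 2, like the original; the prefix rule is the one the poster suggested):

        import subprocess

        def execute_logged(cmd, logsink):
            # Merge stderr into stdout so lines arrive in output order,
            # and read line by line so they are logged in real time.
            prefix = cmd.split()[0]  # first word of the command, used as a tag
            proc = subprocess.Popen(cmd, shell=True,
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.STDOUT)
            lines = []
            for line in iter(proc.stdout.readline, ''):
                logsink.log("%s: %s" % (prefix, line))
                lines.append(line)
            proc.stdout.close()
            if proc.wait() != 0:
                raise Exception("%r failed with return code %s" % (cmd, proc.returncode))
            return ''.join(lines)

    The trade-off is that stdout and stderr can no longer be told apart afterwards, which the original separate capture did allow.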

    Read the article

  • java.lang.IllegalArgumentException: View not attached to window manager

    - by alex2k8
    I have an activity that starts an AsyncTask and shows a progress dialog for the duration of the operation. The activity is declared NOT to be recreated by rotation or keyboard slide:

        <activity android:name=".MyActivity"
                  android:label="@string/app_name"
                  android:configChanges="keyboardHidden|orientation">
            <intent-filter>
            </intent-filter>
        </activity>

    Once the task completes, I dismiss the dialog, but on some phones (framework: 1.5, 1.6) this error is thrown:

        java.lang.IllegalArgumentException: View not attached to window manager
        at android.view.WindowManagerImpl.findViewLocked(WindowManagerImpl.java:356)
        at android.view.WindowManagerImpl.removeView(WindowManagerImpl.java:201)
        at android.view.Window$LocalWindowManager.removeView(Window.java:400)
        at android.app.Dialog.dismissDialog(Dialog.java:268)
        at android.app.Dialog.access$000(Dialog.java:69)
        at android.app.Dialog$1.run(Dialog.java:103)
        at android.app.Dialog.dismiss(Dialog.java:252)
        at xxx.onPostExecute(xxx$1.java:xxx)

    My code is:

        final Dialog dialog = new AlertDialog.Builder(context)
                .setTitle("Processing...")
                .setCancelable(true)
                .create();

        final AsyncTask<MyParams, Object, MyResult> task = new AsyncTask<MyParams, Object, MyResult>() {
            @Override
            protected MyResult doInBackground(MyParams... params) {
                // Long operation goes here
            }

            @Override
            protected void onPostExecute(MyResult result) {
                dialog.dismiss();
                onCompletion(result);
            }
        };
        task.execute(...);

        dialog.setOnCancelListener(new OnCancelListener() {
            @Override
            public void onCancel(DialogInterface arg0) {
                task.cancel(false);
            }
        });
        dialog.show();

    From what I have read (http://bend-ing.blogspot.com/2008/11/properly-handle-progress-dialog-in.html) and seen in the Android sources, it looks like the only situation in which that exception can occur is when the activity has been destroyed. But as I have mentioned, I forbid activity recreation for the basic events. So any suggestions are very much appreciated.
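
    Even with configChanges set, the activity can still be destroyed while the task runs (the user backs out, or the system kills it), so a defensive dismiss is the usual workaround. A minimal sketch, assuming context is the hosting Activity:

        @Override
        protected void onPostExecute(MyResult result) {
            // Only dismiss if the dialog is still attached to a live activity.
            if (dialog.isShowing() && !((Activity) context).isFinishing()) {
                dialog.dismiss();
            }
            onCompletion(result);
        }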

    Read the article

  • Is it possible in Perl to require a subroutine call is made?

    - by MitchelWB
    I don't know enough about Perl to even know what I'm asking for exactly, but I'm writing a series of subroutines to be available for many individual scripts that each process different incoming flat files. The process is far from perfect, but it's what I've got to deal with, and I'm trying to build myself a small library of subs that make it easier to manage it all. Each script handles a different incoming flat file with its own formatting, sorting, grouping and outputting requirements. One common aspect is that we have small text files housing counters which are used to name the output files so that we have no duplicate file names. Because the processing of the data is different for each file, each script needs to read the counter file itself to get the counter value. Since this is a common operation, I'd like to put it in a sub that retrieves the counter. Each script then runs its own specific code to process the data, and I'd like a second sub that lets me write the new counter value back once I've processed the data. Is there a way to make the call to the second sub a requirement whenever the first one is called? Ideally it would even be an error that prevents the script from running at all, much like a syntax error. EDIT: Here is a little [ugly and simplified] pseudo-code to give a better feel for the current process:

        require "importLibrary.plx";

        # open data source file
        open DataIn, $filename;

        # call getCounterInfo from importLibrary.plx
        # to get the counter value from the counter file
        $counter = &getCounterInfo($counterFileName);

        while (<DataIn>) {
            # process data based on unique formatting and requirements
            # output to task files based on requirements, naming files using $counter
            # increment $counter
        }

        # update counter file with new value of $counter
        &updateCounterInfo($counter);
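
    Perl cannot enforce this at compile time, but a guard object gets close at run time: if the counter is fetched and the script ends without committing it, the object's destructor raises the alarm. A sketch (package and method names are invented, and the file I/O is left as comments):

        package CounterGuard;
        use strict;
        use warnings;

        sub new {
            my ($class, $file) = @_;
            my $self = bless { file => $file, committed => 0 }, $class;
            # ... read the counter from $file into $self->{value} here ...
            return $self;
        }

        sub value { $_[0]->{value} }

        sub commit {              # the "second sub": writes back and marks done
            my ($self, $new_value) = @_;
            # ... write $new_value back to $self->{file} here ...
            $self->{committed} = 1;
        }

        sub DESTROY {             # fires when the guard goes out of scope
            my $self = shift;
            warn "counter file $self->{file} was read but never updated!\n"
                unless $self->{committed};
        }

        1;

    Usage would be my $guard = CounterGuard->new($counterFileName); ... $guard->commit($counter); - and die instead of warn in DESTROY if forgetting should be fatal.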

    Read the article

  • Char C question about encoding signed/unsigned.

    - by drigoSkalWalker
    Hi guys. I read that C does not define whether a char is signed or unsigned, and the GCC page says it can be signed on x86 and unsigned on PowerPC and ARM. OK, I'm writing a program with GLib, which defines char as gchar (nothing more than that, only a standardized name). My question is: what about UTF-8? Does it use more than one byte per character? Say I have a variable

        unsigned char *string = "My string with UTF8 encoding ~ çã";

    See, if I declare my variable as unsigned, do I only have 127 values available (so my program has to store more bytes), or do the UTF-8 byte values become negative when char is signed? Sorry if I can't explain it correctly, but I think this is a bit complex. NOTE: Thanks for all the answers. I still don't understand how it is interpreted normally. I think that, like with ASCII, if I have a signed and an unsigned char in my program, the strings have different values, and that leads to confusion; imagine it with UTF-8, then.
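
    The bytes themselves are identical either way; signedness only changes how the same bit patterns print when treated as integers. A small C sketch of the difference:

        #include <stdio.h>

        int main(void) {
            /* "ç" is encoded in UTF-8 as the two bytes 0xC3 0xA7 */
            const char *s = "\xc3\xa7";

            /* Same storage, two interpretations of the first byte: */
            printf("signed:   %d\n", (signed char)s[0]);    /* -61 */
            printf("unsigned: %u\n", (unsigned char)s[0]);  /* 195 */
            return 0;
        }

    No extra memory is used in either case; a multi-byte UTF-8 character occupies the same bytes whether char happens to be signed or unsigned on the platform.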

    Read the article

  • How is this Perl code selecting two different elements from an array?

    - by Mike
    I have inherited some code from a guy whose favorite pastime was to shorten every line to its absolute minimum (and sometimes only to make it look cool). His code is hard to understand, but I managed to understand (and rewrite) most of it. Now I have stumbled on a piece of code which, no matter how hard I try, I cannot understand.

        my @heads = grep {s/\.txt$//} OSA::Fast::IO::Ls->ls($SysKey,'fo','osr/tiparlo',qr{^\d+\.txt$}) || ();
        my @selected_heads = ();
        for my $i (0..1) {
            $selected_heads[$i] = int rand scalar @heads;
            for my $j (0..@heads-1) {
                last if (!grep $j eq $_, @selected_heads[0..$i-1]);
                $selected_heads[$i] = ($selected_heads[$i] + 1) % @heads; # WTF?
            }
            my $head_nr = sprintf "%04d", $i;
            OSA::Fast::IO::Cp->cp($SysKey,'',"osr/tiparlo/$heads[$selected_heads[$i]].txt","$recdir/heads/$head_nr.txt");
            OSA::Fast::IO::Cp->cp($SysKey,'',"osr/tiparlo/$heads[$selected_heads[$i]].cache","$recdir/heads/$head_nr.cache");
        }

    From what I can understand, this is supposed to be some kind of randomizer, but I have never seen a more complex way to achieve randomness. Or are my assumptions wrong? At least, that's what this code is supposed to do: select 2 random files and copy them. === NOTES === The OSA framework is a framework of our own. Its modules are named after their UNIX counterparts and do some basic testing so that the application does not need to bother with that.
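
    The inner loop appears to be duplicate avoidance - nudging the drawn index forward when it collides with an earlier pick - done in a very roundabout way. For comparison, the conventional way to pick two distinct random indices is a shuffle-and-slice, a sketch with List::Util:

        use strict;
        use warnings;
        use List::Util qw(shuffle);

        my @heads = ('0001', '0002', '0003');                 # stand-in for the real list
        my @selected_heads = (shuffle 0 .. $#heads)[0 .. 1];  # two distinct random indices

        # Each index would then be used to copy the matching .txt/.cache pair, as above.
        print "picked: @selected_heads\n";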

    Read the article

  • Regarding the ViewModel

    - by mizipzor
    I'm struggling to understand the ViewModel part of the MVVM pattern. My current approach is to have a class with no logic whatsoever (important), except that it implements INotifyPropertyChanged. The class is just a collection of properties, a struct if you like, describing as small a part of the data as possible. I consider this my Model. Most of the WPF code I write consists of settings dialogs that configure said Model. The code-behind of the dialog exposes a property which returns an instance of the Model. In the XAML code I bind to subproperties of that property, thereby binding directly to the Model's properties, which works quite well since it implements INotifyPropertyChanged. I consider this settings dialog the View. However, I haven't really been able to figure out what in all this is the ViewModel. The articles I've read suggest that the ViewModel should tie the View and the Model together, providing the logic the Model lacks but that is too complex to go directly into the View. Is this correct? Would, in my example, the code-behind of the settings dialog be considered the ViewModel? I just feel a bit lost and would like my peers to debunk some of my assumptions. Am I completely off track here?
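
    For orientation, a minimal sketch of the usual shape (names invented): the ViewModel is a separate class rather than the code-behind; it wraps the Model, carries the change notification and any view-facing logic, and is set as the dialog's DataContext.

        using System.ComponentModel;

        // Model: dumb data holder, as in the question.
        public class ConnectionSettings
        {
            public string Host { get; set; }
        }

        // ViewModel: wraps the Model, raises change notifications,
        // and is what the View binds to via DataContext.
        public class ConnectionSettingsViewModel : INotifyPropertyChanged
        {
            private readonly ConnectionSettings _model;

            public ConnectionSettingsViewModel(ConnectionSettings model) { _model = model; }

            public string Host
            {
                get { return _model.Host; }
                set
                {
                    _model.Host = value;  // view-facing logic (validation etc.) goes here
                    var h = PropertyChanged;
                    if (h != null) h(this, new PropertyChangedEventArgs("Host"));
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;
        }

    In this arrangement the Model no longer needs INotifyPropertyChanged at all, and the code-behind shrinks to little more than setting the DataContext.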

    Read the article

  • Why is my PHP upload script not working?

    - by Turner
    Hello all, I am doing some simple work with uploading a file. I am ignoring error checking and exceptions at this point just to get my uploads working. I have this HTML form:

        <form action='addResult.php' method='post' enctype='multipart/form-data'
              target='results_iFrame' onsubmit='startUpload();'>
            Entry: <input type='text' id='entry' />
            Stop: <input type='text' id='stop' />
            Final: <input type='text' id='final' />
            Chart: <input type='file' id='chart' />
            <input type='submit' value='Add' />
        </form>

    As you can see, it calls 'addResult.php' within the iframe 'results_iFrame'. The JavaScript is just for animation purposes and to tell me when things are finished. addResult.php has this code in it (along with processing the other inputs):

        $upload_dir = "../img/";
        $chart_loc = $upload_dir . basename($_FILES['chart']['name']);
        move_uploaded_file($_FILES['chart']['tmp_name'], $chart_loc);
        print_r($_FILES);

    It uses the 'chart' input from the form and tries to upload it. I use the print_r() function to display some information on $_FILES, but the array is empty, which makes this fail. What could I be doing wrong?
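
    One thing that stands out: the inputs only have id attributes, and PHP populates $_POST/$_FILES from name attributes, so $_FILES['chart'] can never be set by this form. A corrected sketch of the form:

        <form action='addResult.php' method='post' enctype='multipart/form-data'
              target='results_iFrame' onsubmit='startUpload();'>
            Entry: <input type='text' id='entry' name='entry' />
            Stop: <input type='text' id='stop' name='stop' />
            Final: <input type='text' id='final' name='final' />
            Chart: <input type='file' id='chart' name='chart' />  <!-- name is what $_FILES keys on -->
            <input type='submit' value='Add' />
        </form>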

    Read the article

  • Is there a reason why SSIS significantly slows down after a few minutes?

    - by Mark
    I'm running a fairly substantial SSIS package against SQL 2008, and I'm getting the same results both in my dev environment (Win7 x64 + SQL x64 Developer) and the production environment (Server 2008 x64 + SQL Std x64). The symptom is that initial data loading screams along at between 50K and 500K records per second, but after a few minutes the speed drops off dramatically and eventually crawls embarrassingly slowly. The database is in the Simple recovery model, the target tables are empty, and all of the prerequisites for minimally logged bulk inserts are being met. The data flow is a simple load from a RAW input file to a schema-matched table (i.e. no complex transforms of data, no sorting, no lookups, no SCDs, etc.). The problem has the following qualities and resiliences: it persists no matter what the target table is; RAM usage is lowish (45%) - there's plenty of spare RAM available for SSIS buffers or SQL Server to use; Perfmon shows buffers are not spooling, disk response times are normal, and disk availability is high; CPU usage is low (hovering around 25%, shared between sqlserver.exe and DtsDebugHost.exe); disk activity is primarily on TempDB.mdf, but I/O is very low (< 600 KB/s); and both the OLE DB destination and the SQL Server destination exhibit this problem. To sum it up, I would expect either disk, CPU or RAM to be exhausted before the package slows down, but instead it's as if the SSIS package is taking an afternoon nap. SQL Server remains responsive to other queries, and I can't find any performance counters or logged events that betray the cause of the problem. I'll gratefully reward any reasonable answers / suggestions.

    Read the article

  • How to systematically generate images from data?

    - by adamvickers
    I work for a performing arts nonprofit. We have seating charts for each of the theaters we work with; each seating chart shows the number of sections, the shape of each section, and the number of rows in each section. We'd like to create dynamic seating charts based on this info, and we'd like them to look/feel kinda like this: http://www.fansnap.com/tickets/177754-on. But the tricky part is that we'd like to be able to store all the info about each theater (the section names, the shape/size of each section, and the number of rows in each section) as data, and then build a system that reads this data and uses it to create a dynamic map. I'm a life-long web developer, but I don't have any experience with a difficult graphics problem like this. I realize it's a complex problem and I don't expect anyone to give me a complete answer here, but I would love direction on where I should be looking for more info. Is what I'm describing possible? Does this sort of technique have a name? Where can I learn more about how to accomplish this? What software should I use? Any info would be helpful.

    Read the article

  • ignoring elements order in xml validation against xsd

    - by segolas
    Hi, I'm processing an email and saving some headers inside an XML document. I also need to validate the document against an XML schema. As the subject suggests, I need to validate while ignoring the order of the elements but, as far as I have read, this seems to be impossible. Am I correct? If I put the headers in an <xsd:sequence>, the order obviously matters. If I use <xsd:all>, the order is ignored, but for some strange reason this implies that the elements must occur at least once. My XML is something like this:

        <headers>
            <subject>bla bla bla</subject>
            <recipient>[email protected]</recipient>
            <recipient>rcp02domain.com</recipient>
            <recipient>[email protected]</recipient>
        </headers>

    but I think the final document should be valid even if the subject and recipient elements are swapped. Is there really nothing to be done?
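
    In XSD 1.0, <xsd:all> also caps every child element at maxOccurs="1", so the repeated <recipient> rules it out regardless of the minOccurs issue. The usual workaround for "any order, any count" is a repeated <xsd:choice>, at the cost of not constraining how often each element appears; a sketch:

        <xsd:element name="headers">
          <xsd:complexType>
            <!-- Any mix of subject/recipient elements, in any order -->
            <xsd:choice minOccurs="0" maxOccurs="unbounded">
              <xsd:element name="subject" type="xsd:string"/>
              <xsd:element name="recipient" type="xsd:string"/>
            </xsd:choice>
          </xsd:complexType>
        </xsd:element>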

    Read the article

  • Is there a programmatic way to transform a sequence of image files into a PDF?

    - by Salim Fadhley
    I have a sequence of JPG images; each scan is already cropped to the exact size of one page. They are sequential pages of a valuable and out-of-print book. The publishing application requires that these pages be submitted as a single PDF file. I could take each of these images and just paste them into a word processor (e.g. OpenOffice); unfortunately, the problem is that it's a very big book and I've got quite a few of these books to get through, so it would obviously be time-consuming. This is volunteer work! My second idea was to use LaTeX: I could make a very simple document that consists of nothing more than a series of in-line image includes. I'm sure that this approach could be made to work; it's just a little on the complex side for something which seems like a very simple job. It occurred to me that there must be a simpler way - so any suggestions? I'm on Ubuntu 9.10, and my primary programming language is Python, but if the solution is super-simple I'd happily adopt any technology that works.
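
    Since Python is on the table, a minimal sketch with the Pillow imaging library (the maintained fork of PIL), which can write a multi-page PDF directly; the filenames and page count are placeholders:

        from PIL import Image

        # Sorted page scans, e.g. page-001.jpg ... page-350.jpg
        pages = ["page-%03d.jpg" % n for n in range(1, 351)]

        images = [Image.open(p).convert("RGB") for p in pages]  # PDF pages need RGB
        images[0].save("book.pdf", save_all=True, append_images=images[1:])

    Note that this loads every page into memory at once, so a very large book may be worth processing in batches and merging afterwards.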

    Read the article

  • Help improve this Javascript code?

    - by Galilyou
    Hello SO, in short: I'm dealing with Telerik's RadTreeView, and I want to check all the child nodes when the user checks a parent node. Simple enough! OK, here's my code handling the OnClientNodeChecked event of the TreeView:

        function UpdateAllChildren(nodes, checked) {
            var i;
            for (i = 0; i < nodes.get_count(); i++) {
                var currentNode = nodes.getNode(i);
                currentNode.set_checked(checked);
                alert("now processing: " + currentNode.get_text());
                if (currentNode.get_nodes().get_count() > 0) {
                    UpdateAllChildren(currentNode.get_nodes(), checked);
                }
            }
        }

        function ClientNodeChecked(sender, eventArgs) {
            var node = eventArgs.get_node();
            UpdateAllChildren(node.get_nodes(), node.get_checked());
        }

    And here's the TreeView's markup:

        <telerik:RadTreeView ID="RadTreeView1" runat="server" CheckBoxes="True"
            OnClientNodeChecked="ClientNodeChecked"></telerik:RadTreeView>

    The tree contains quite a lot of nodes, and this is causing my target browser (ehm, that's IE7) to really slow down while running it. Furthermore, IE7 displays an error message asking me to stop the page from running scripts, as it might make my computer unresponsive (yeah, scary enough). So what do you guys propose to optimize this code? Thanks in advance
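
    Two quick wins worth trying (a sketch; verify the method names against your Telerik version): drop the alert(), which forces a modal repaint for every node, and batch the updates between trackChanges()/commitChanges() so the control redraws once instead of once per node.

        function ClientNodeChecked(sender, eventArgs) {
            var node = eventArgs.get_node();
            sender.trackChanges();                 // suspend per-node updates
            UpdateAllChildren(node.get_nodes(), node.get_checked());
            sender.commitChanges();                // apply everything in one pass
        }

        function UpdateAllChildren(nodes, checked) {
            for (var i = 0; i < nodes.get_count(); i++) {
                var currentNode = nodes.getNode(i);
                currentNode.set_checked(checked);  // no alert() here
                if (currentNode.get_nodes().get_count() > 0) {
                    UpdateAllChildren(currentNode.get_nodes(), checked);
                }
            }
        }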

    Read the article

  • Queries within queries: Is there a better way?

    - by mririgo
    As I build bigger, more advanced web applications, I find myself writing extremely long and complex queries. I tend to write queries within queries a lot, because I feel that making one call to the database from PHP is better than making several and correlating the data. However, anyone who knows anything about SQL knows about JOINs. Personally, I've used a JOIN or two before, but quickly stopped when I found that using subqueries felt easier and quicker to write and maintain. Commonly, I'll write subqueries that may themselves contain one or more subqueries against related tables. Consider this example:

        SELECT
            (SELECT username FROM users WHERE records.user_id = user_id) AS username,
            (SELECT last_name || ', ' || first_name FROM users WHERE records.user_id = user_id) AS name,
            in_timestamp,
            out_timestamp
        FROM records
        ORDER BY in_timestamp

    More rarely, I'll use subqueries in the WHERE clause. Consider this example:

        SELECT
            user_id,
            (SELECT name
             FROM organizations
             WHERE (SELECT organization FROM locations WHERE records.location = location_id) = organization_id) AS organization_name
        FROM records
        ORDER BY in_timestamp

    In these two cases, would I see any sort of improvement if I rewrote the queries using a JOIN? As more of a blanket question, what are the advantages/disadvantages of using subqueries versus a JOIN? Is one way more correct or accepted than the other?
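
    For reference, the first query rewritten with a JOIN, which fetches the user row once instead of running two correlated subqueries per record; a LEFT JOIN is used because it matches the scalar-subquery behavior of returning NULLs when no user matches, where an inner JOIN would drop those rows:

        SELECT u.username,
               u.last_name || ', ' || u.first_name AS name,
               r.in_timestamp,
               r.out_timestamp
        FROM records r
        LEFT JOIN users u ON u.user_id = r.user_id
        ORDER BY r.in_timestamp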

    Read the article

  • How do I update a webpage with the progress of a server-side task?

    - by Jim B
    Hi everyone, I'm working on a web project that takes the results from a survey-type application and runs them through a bunch of calculations to come up with some recommended suggestions for the user. Now, this calculation might take a minute or so, so I'd like to be able to give the user some update on its progress. Obviously, the quick and dirty solution would be to put up a message along the lines of "Please wait while we calculate your recommendations" with a spinning-gear-type graphic (or whatever, you get the point) and, once the task completes, redirect to the results page. However, I'd like to do something a little more flashy: maybe a progress bar, and even a prompt telling the user what's going on in the background. For example, give them a progress bar with some text that says "Now processing suggestion 3 of 15: Multi-Vitamin". Any suggestions on how I could set this up? One way I'm thinking of doing it is to write the progress of the calculation method to the HttpContext, and put up an update panel and timer that would show/refresh this info. I've also considered building a web service/method and then polling it at some interval. Has anybody done something similar to this before? What worked for you? Thanks again! ~Jim
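
    The polling variant is only a sketch away (the endpoint name and JSON shape here are invented): the server task writes its progress somewhere shared (session, cache, database), a small handler reports it, and the page polls on a timer.

        // Poll a hypothetical progress endpoint once a second and update the UI.
        // Assumes the server task records {"step":N,"total":M,"label":"..."} somewhere
        // a progress.ashx handler can read.
        function pollProgress() {
            var xhr = new XMLHttpRequest();
            xhr.open("GET", "progress.ashx", true);
            xhr.onload = function () {
                var p = JSON.parse(xhr.responseText);
                document.getElementById("status").innerText =
                    "Now processing suggestion " + p.step + " of " + p.total + ": " + p.label;
                if (p.step < p.total) {
                    setTimeout(pollProgress, 1000);     // keep polling until done
                } else {
                    window.location = "results.aspx";   // hypothetical results page
                }
            };
            xhr.send();
        }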

    Read the article

  • Performing a SVD on tweets. Memory problem

    - by plotti
    I have generated a huge CSV file as the output from my POS tagging and stemming. It looks like this:

                  word1  word2  word3  ...  word14400
        person1     1      2      0           1
        person2     0      0      1           0
        ...
        person650

    It contains the word counts for each person; in this way I get a characteristic vector for each person. I want to run an SVD on this beast, but it seems the matrix is too big to be held in memory to perform the operation. My question is: should I reduce the column count by removing words which have a column sum of, for example, 1, meaning they have been used only once? Do I bias the data too much with this approach? I tried the RapidMiner approach of loading the CSV into a database and then reading it in sequentially in batches for processing, as RapidMiner proposes, but MySQL can't store that many columns in a table. If I transpose the data and then re-transpose it on import, that also takes ages... So in general I am asking for advice on how to perform an SVD on such a corpus.
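
    A 650 x 14,400 count matrix is mostly zeros, so a sparse truncated SVD sidesteps the memory wall entirely; a sketch with SciPy (the tiny matrix is a stand-in for the real data):

        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.linalg import svds

        # Toy stand-in for the person x word count matrix (650 x 14400 in the question).
        counts = np.array([[1, 2, 0, 1],
                           [0, 0, 1, 0],
                           [3, 0, 0, 2]], dtype=float)
        X = csr_matrix(counts)    # zeros take no memory in sparse form

        # Truncated SVD: only the k largest singular values/vectors are computed.
        U, s, Vt = svds(X, k=2)
        print(s)                  # singular values, smallest first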

    Read the article

  • Use properties or methods to expose business rules in C#?

    - by Val
    I'm writing a class to encapsulate some business rules, each of which is represented by a boolean value. The class will be used in processing an InfoPath form, so the rules get the current program state by looking up values in a global XML data structure using XPath operations. What's the best (most idiomatic) way to expose these rules to callers - properties or public methods?

    Call using properties:

        Rules rules = new Rules();
        if (rules.ProjectRequiresApproval) {
            // get approval
        } else {
            // skip approval
        }

    Call using methods:

        Rules rules = new Rules();
        if (rules.ProjectRequiresApproval()) {
            // get approval
        } else {
            // skip approval
        }

    Rules class exposing rules as properties:

        public class Rules
        {
            private int _amount;
            private int threshold = 100;

            public Rules()
            {
                _amount = someExpensiveXpathOperation;
            }

            // rule property
            public bool ProjectRequiresApproval
            {
                get { return _amount < threshold; }
            }
        }

    Rules class exposing rules as methods:

        public class Rules
        {
            private int _amount;
            private int threshold = 100;

            public Rules()
            {
                _amount = someExpensiveXpathOperation;
            }

            // rule method
            public bool ProjectRequiresApproval()
            {
                return _amount < threshold;
            }
        }

    What are the pros and cons of one over the other?

    Read the article
