Search Results

Search found 17013 results on 681 pages for 'hard coding'.

Page 592/681 | < Previous Page | 588 589 590 591 592 593 594 595 596 597 598 599  | Next Page >

  • Let multiple highcharts charts appear automatically from mysql data

    - by martini1993
    I have the following problem: I want to make multiple Highcharts web charts appear automatically based on the data in the database. Say we have the following table (an example; the real table has many more rows):

        Year | Month | ID | Name User   | Wins | Losses
        -----|-------|----|-------------|------|-------
        2013 |   1   | 21 | Tony Stark  |   3  |  12
        2013 |   1   | 52 | Bruce Wayne |   5  |   4
        2013 |   1   | 76 | Clark Kent  |   9  |   5

    And I have the following query:

        SELECT a.year AS year1, a.month AS month1, a.id AS id,
               a.name AS nameuser, a.wins AS wins, a.losses AS losses
        FROM Sales a
        WHERE a.month = 1 AND a.year = YEAR(NOW())

    With this, it is very easy to hard-code a single chart with Highcharts. But what I want is one web chart per user. So instead of a single chart with all the users in it (http://jsfiddle.net/CWSb6/), I want multiple charts, one per user, next to each other (http://jsfiddle.net/DReMD/, but laid out side by side). The charts have to be generated automatically with PHP and MySQL, so that if a new user starts this month and is saved in the database, the page automatically displays the new user with the related web chart. I find this very hard to accomplish and need some help getting pointed in the right direction. Many thanks in advance!
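
    A minimal sketch of the grouping approach, shown in Python for brevity (illustrative only: it assumes a DB-API-style connection object, and the rendering of each config into its own <div> is left out; the loop structure translates directly to PHP):

        from collections import defaultdict

        def charts_per_user(conn):
            rows = conn.execute(
                "SELECT id, name, wins, losses FROM Sales "
                "WHERE month = 1 AND year = YEAR(NOW())"
            )
            by_user = defaultdict(list)
            for user_id, name, wins, losses in rows:
                by_user[(user_id, name)].append((wins, losses))
            # one Highcharts config per user; each renders into its own div,
            # so a new user in the table automatically gets a new chart
            configs = []
            for (user_id, name), results in by_user.items():
                configs.append({
                    "chart": {"renderTo": "chart-%d" % user_id},
                    "title": {"text": name},
                    "series": [
                        {"name": "Wins", "data": [w for w, _ in results]},
                        {"name": "Losses", "data": [l for _, l in results]},
                    ],
                })
            return configs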

  • Intellisense on custom types in Iron Python

    - by Anish Patel
    Hi everybody, I'm just starting to play around with IronPython and am having a hard time using it with custom types created in C#. I can get IronPython to load assemblies of C# classes, but I'm struggling without the help of IntelliSense. If I have a class defined in C# as below, how can I make IronPython see the methods/properties that are available on it?

        public class Person
        {
            public string Name { get; set; }
            public int Age { get; set; }
            public double Weight { get; set; }
            public double Height { get; set; }

            public double CalculateBMI()
            {
                return Weight / Math.Pow(Height, 2);
            }
        }

    In IronPython I'd instantiate a Person object as follows:

        newPerson = Person()
        newPerson.Name = 'John'
        newPerson.Age = 25
        newPerson.Weight = 75
        newPerson.Height = 1.70
        newPerson.CalculateBMI()

    The thing that is annoying me is that I want to be able to say newPerson = Person() and then see all the methods and properties associated with the Person object whenever I type newPerson. Does anyone have any ideas whether this can be done?
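
    For reference, loading the assembly and poking at the type from IronPython looks something like the sketch below (the assembly name "People" is an assumption; dir() gives a runtime substitute for IntelliSense when the editor offers none):

        import clr
        clr.AddReference("People")       # hypothetical assembly containing Person
        from People import Person

        newPerson = Person()
        newPerson.Name = 'John'
        newPerson.Weight = 75.0
        newPerson.Height = 1.70
        print(newPerson.CalculateBMI())  # ~25.95

        # list the public members at runtime, even without editor support
        print([m for m in dir(newPerson) if not m.startswith('_')])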

  • Cocoa/AppleScript move file

    - by bogdan
    I have a list of file paths and a destination path. I need something (AppleScript, Cocoa) that will move the files from one location to another. I first tried the following AppleScript, just to see what happens:

        set the_folder to (choose folder)
        tell application "Finder"
            move selection to the_folder
        end tell

    The problem is that it just blindly tries to move a file, nothing like the way Finder actually moves files (i.e. if a file with that name already exists, the AppleScript just throws an error, while Finder would ask whether you want to replace the file). The solution I came up with involved NSFileManager. I won't post the code because it's quite long, but basically I check whether the file already exists before trying to move it, and if it does, an NSAlert with Replace/Cancel buttons appears. I have 2 remaining problems:

    1. Authorization: if you try to do something to files where you don't have access, Finder would ask you to authorize. My code just fails.
    2. Moving to external drives: when you move a file to a different drive, NSFileManager copies the file and then deletes the original. The problem is that NSFileManager doesn't provide anything I could use to display a progress indicator of what's happening during the copy.

    Is there anything I could use that moves files without these problems? The way I see it, I'm pretty much stuck with checking whether the files are writable by the current user and authorizing NSFileManager if not (from my understanding of the Authorization Services, this will be quite hard to implement). Oh, and I would also need to check whether the destination is on the same drive and, if not, implement something with FSCopyObjectAsync so that it shows a progress indicator... Thanks!

  • Appropriate SQL Server Permissions for Developers

    - by BJ Safdie
    After a couple of Google searches and a quick look at questions here, I cannot seem to find what I thought would be a cookbook answer for SQL Server permissions. As I often see in small shops, most developers here were using an admin account for SQL Server while developing. I want to set up roles and permissions that I can assign to developers so that we can get our jobs done, but also do so with the minimum permissions required. Can anyone offer advice on what SQL Server permissions to assign?

    Components:
    - SQL Server 2008
    - SQL Server Reporting Services (SSRS) 2008
    - SQL Server Integration Services (SSIS) 2008

    Platforms:
    - Production
    - Staging/QA
    - Development/Integration

    We are running "Mixed Mode" security because of some legacy apps and networks, but are moving to Windows Auth. I am not sure if that really affects the role setup. I plan to set up Developer access to the Production and Staging/QA databases as read-only; however, I still want developers to retain the ability to run profiling. We need deployment accounts with higher privilege levels, and we are currently trying to figure out exactly what privileges we need for SSIS package deployments. Within the Development server, developers need broad privileges, but I am not sure that just making them all admins is really the best choice. It's hard to believe that no one has published a decent example script that sets up these kinds of roles with a good set of appropriate permissions for developers and deployers. We can probably figure this all out by locking things down and then adding permissions as we discover the need, but that will be way too big a PITA for everyone. Can anyone point me to, or provide, a good exemplar for permissions for these kinds of roles on these kinds of platforms?

  • Java Client .class File Protection

    - by Zac
    I am in the requirements phase of building a JEE application that will most likely run on a GlassFish/JBoss backend (doesn't matter for now). I know I shouldn't be thinking about architecture at requirements time, but one can't help but start to imagine how the components would all snap together :-) Here are some hard, non-flexible requirements on the client side:

    1. The client application will be a Swing box.
    2. The client is free to download, but will use a subscription model (thus requiring a login mechanism with server-side authentication/authorization, etc.).
    3. Yes, Java is the best platform solution for the problem at hand, for reasons outside the scope of this post.
    4. The client-side .class files need safeguarding against decompiling.

    That last (4th) requirement is the basis of this post. I'm not really worried about someone actually decompiling and getting at my source code: in the end, it's just Swing controls driven by some lightweight business logic. I'm worried about a scenario where someone decompiles my code, modifies it to exploit/attack the server, re-compiles, and fires it up. I've envisioned all sorts of nasty solutions, but didn't know if this was a common problem with a common solution for JEE developers. Any thoughts? Not interested in "code obfuscation" techniques! Thanks for any input!

  • File Storage for Web Applications: Filesystem vs DB vs NoSQL engines

    - by El Yobo
    I have a web application that stores a lot of user-generated files. Currently these are all stored on the server filesystem, which has several downsides for me:

    - When we move "folders" (as defined by our application) we also have to move the files on disk (although this is more due to strange design decisions on the part of the original developers than a requirement of storing things on the filesystem).
    - It's hard to write tests for filesystem actions; I have a mock filesystem class that logs actions like move and delete without performing them, which more or less does the job, but I don't have 100% confidence in the tests.
    - I will be adding some other jobs which need to access the files from another service to perform additional tasks (e.g. indexing in Solr, generating thumbnails, movie format conversion), so I need to get at the files remotely, and doing this over network shares seems dodgy.
    - Dealing with permissions on the filesystem has sometimes given us problems in the past, although now that we've moved to a pure Linux environment this should be less of an issue.

    What are the downsides of storing files as BLOBs in MySQL? I guess that it would massively increase the database size and reduce the effectiveness of caches, but are there other problems? Do the same problems exist with NoSQL systems like Cassandra? Does anyone have any other suggestions that might be appropriate?

  • Sell me Distributed revision control

    - by ring bearer
    I know there are thousands of similar topics floating around; I've read at least 5 threads here on SO. But why am I still not convinced about DVCS? I have only the following questions (note that I am selfishly worried only about Java projects):

    1. What is the advantage or value of committing locally? What, really? All modern IDEs let you keep track of your changes, and if required you can restore a particular change; they even have a feature to label your changes/versions at the IDE level.
    2. What if I crash my hard drive? Where did my local repository go? (So how is that cool compared to checking in to a central repo?)
    3. Working offline or on an airplane: what is the big deal? In order to build a release with my changes, I must eventually connect to the central repository. Until then it does not matter how I track my changes locally.
    4. OK, Linus Torvalds gives his life to Git and hates everything else. Is that enough to blindly sing its praises? Linus lives in a different world compared to the offshore developers on my mid-sized project.

    Pitch me!

  • JavaScript Exception/Error Handling Not Working

    - by Seán Hayes
    This might be a little hard to follow. I've got a function inside an object:

        f_openFRHandler: function(input) {
            console.debug('f_openFRHandler');
            try {
                //throw 'foo';
                DragDrop.FileChanged(input);
                //foxyface.window.close();
            } catch(e) {
                console.error(e);
                jQuery('#foxyface_open_errors').append('<div>Max local storage limit reached, unable to store new images in your browser. Please remove some images and try again.</div>');
            }
        },

    Inside the try block it calls:

        this.FileChanged = function(input) {
            // FileUploadManager.addFileInput(input);
            console.debug(input);
            var files = input.files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                if (!file.type.match(/image.*/)) continue;
                var reader = new FileReader();
                reader.onload = (function(f, isLast) {
                    return function(e) {
                        if (files.length == 1) {
                            LocalStorageManager.addImage(f.name, e.target.result, false, true);
                            LocalStorageManager.loadCurrentImage();
                            //foxyface.window.close();
                        } else {
                            FileUploadManager.addFileData(f, e.target.result); // add multiple files to list
                            if (isLast) setTimeout(function() { LocalStorageManager.loadCurrentImage() }, 100);
                        }
                    };
                })(file, i == files.length - 1);
                reader.readAsDataURL(file);
            }
            return true;
        };

    LocalStorageManager.addImage calls:

        this.setItem = function(data) {
            localStorage.setItem('ImageStore', $.json_encode(data));
        };

    localStorage.setItem throws an error if too much local storage has been used. I want to catch that error in f_openFRHandler (first code sample), but it's being sent to the error console instead of the catch block. I tried the following code in my Firebug console to make sure I'm not crazy, and it works as expected despite many levels of function nesting:

        try {
            (function() {
                (function() {
                    throw 'foo'
                })()
            })()
        } catch(e) {
            console.debug(e)
        }

    Any ideas?
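
    The reason the Firebug test behaves differently is that reader.onload fires asynchronously, after f_openFRHandler's try block has already returned; a try/catch only sees exceptions raised synchronously inside it. A sketch of the same effect (Python, hypothetical names, with a timer thread standing in for the FileReader callback):

        import threading
        import time

        def quota_error():
            # stands in for localStorage.setItem blowing up later, in the callback
            raise RuntimeError("quota exceeded")

        def register_callback():
            try:
                threading.Timer(0.1, quota_error).start()  # like assigning reader.onload
            except RuntimeError as e:
                print("caught:", e)  # never runs: the try exits before the callback fires

        register_callback()
        time.sleep(0.5)  # the error surfaces on the timer thread, outside any try above

    Which suggests moving the try/catch inside the onload handler, around the LocalStorageManager.addImage call, so the quota error is caught where it is actually raised.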

  • Looking for fast, minimal, preferably free disc cloning software [closed]

    - by Dave
    We have to test our application's installation and functionality on many Windows operating system versions and languages (XP, Vista, Win7; English, Spanish, Portuguese, etc.; 32-bit and 64-bit). While we can do much of this in virtual machines, we have noticed that VMs sometimes hide problems or raise false bugs, so we need to do "bare metal" OS installation for much of our testing. I have been using Acronis True Image for the past year and am not impressed. It often gives random errors which require a reboot, and it is really slow. For example, when trying to restore an image, it goes through a "Locking partition" cycle about three times (once after you click OK on each step of the wizard), each of which can take 5 minutes to complete. This all happens BEFORE it actually starts the image copy, which is sometimes quick (3-5 minutes), sometimes long (hours). The sizes of all our images are roughly the same, so that is not related. So, anyway, I'm looking to switch to something else. I only need very basic functionality: creating images of entire discs, then restoring those images onto the exact same hard drive at a later date. That's it. I'm not opposed to paying for a good piece of software, but if there is something free out there that does the job well, that would be a preference. The OS on which the imaging software would run is Windows Vista, but bootable media (into a Linux flavor) would be fine also, as long as it's quick to use and reliable. Recommendations? (Also, moderators, if this should be a CW, I'll be happy to mark it as such; unclear about the rules there.)

  • How do I update a progress bar in Cocoa during a long running loop?

    - by Nic
    Hi, I've got a while loop that runs for many seconds, and that's why I want to update a progress bar (NSProgressIndicator) during that process, but it updates only once, after the loop has finished. The same happens if I want to update a label's text, by the way. I believe my loop prevents other parts of the application from running. There must be another technique. Does this have to do with threads or something? Am I on the right track? Can someone please give me a simple example of how to "optimize" my application? My application is a Cocoa application (Xcode 3.2.1) with these two methods in my Example_AppDelegate.m:

        // This method runs when a start button is clicked.
        - (IBAction)startIt:(id)sender {
            [progressbar setDoubleValue:0.0];
            [progressbar startAnimation:sender];
            running = YES; // this is an instance variable
            int i = 0;
            while (running) {
                if (i++ >= processAmount) { // processAmount is something like 1000000
                    running = NO;
                    continue;
                }
                // Update progress bar
                double progr = (double)i / (double)processAmount;
                NSLog(@"progr: %f", progr); // Logs values between 0.0 and 1.0
                [progressbar setDoubleValue:progr];
                [progressbar needsDisplay]; // Do I need this?
                // Do some more hard work here...
            }
        }

        // This method runs when a stop button is clicked, but as long
        // as -startIt is busy, a click on the stop button does nothing.
        - (IBAction)stopIt:(id)sender {
            NSLog(@"Stop it!");
            running = NO;
            [progressbar stopAnimation:sender];
        }

    I'm really new to Objective-C, Cocoa and applications with a UI. Thank you very much for any helpful answer.

  • JTextField vs JComboBox behaviour in JTable

    - by Ash
    Okay, this is a hard one to explain, but I'll try my best. I have a JTextField and a JComboBox in a JTable, whose getCellEditor method has been overridden as follows:

        public TableCellEditor getCellEditor(int row, int column) {
            if (column == 3) {
                // m_table is the JTable
                if (m_table.getSelectedRowCount() == 1) {
                    JComboBox choices = new JComboBox();
                    choices.setEditable(true);
                    choices.addItem(new String("item 1"));
                    return new DefaultCellEditor(choices);
                }
            }
            return super.getCellEditor(row, column);
        }

    Here are the behavioral differences (NOTE that from this point on, when I say JTextField or JComboBox, I mean the CELL in the JTable containing that component):

    1. When I click once on a JTextField, the cell is highlighted. Double-clicking brings up the caret and I can input text. Whereas, with a JComboBox, a single click brings up the caret to input text, as well as the combo drop-down button.
    2. When I tab or use the arrow keys to navigate to a JTextField and then start typing, the characters I type automatically get entered into the cell. Whereas, when I navigate to a JComboBox the same way and then start typing, nothing happens apart from the combo drop-down button appearing. None of the characters I type get entered unless I hit F2 first.

    So here's my question: what do I need to do to have JComboBoxes behave exactly like JTextFields in the two instances described above? Please do not ask why I'm doing what I'm doing or suggest alternatives (it's the way it is and I need to do it this way), and yes, I've read the API for all components in question... the problem is, it's a Swing API. Thanks in advance, Ash

  • .net real time stream processing - needed huge and fast RAM buffer

    - by mack369
    The application I'm developing communicates with a digital audio device which is capable of sending 24 different voice streams at the same time. The device is connected via USB, using an FTDI device (serial port emulator) and the D2XX drivers (the basic COM driver is too slow to handle a transfer of 4.5 Mbit). Basically the application consists of 3 threads:

    1. Main thread: GUI, control, etc.
    2. Bus reader: in this thread data is continuously read from the device and saved to a file buffer (there is no logic in this thread).
    3. Data interpreter: this thread reads the data from the file buffer, converts it to samples, does simple sample processing and saves the samples to separate wav files.

    The reason I used a file buffer is that I wanted to be sure I wouldn't lose any samples. The application doesn't record all the time, so I chose this solution because it was safe. The application works fine, except that the buffered wave file generator is pretty slow: for 24 parallel records of 1 minute, it takes about 4 minutes to complete the recording. I'm pretty sure that eliminating the hard drive from this process will increase the speed considerably. The second problem is that the file buffer gets really heavy for long recordings and I can't clean it up until the end of data processing (that would slow down the process even more). For a RAM buffer I need at least 1 GB to make it work properly. What is the best way to allocate such a big amount of memory in .NET? I'm going to use this memory from 2 threads, so a fast synchronization mechanism is needed. I'm thinking about a cyclic buffer: one big array, where the bus reader saves the data and the data interpreter reads it. What do you think about it?

    [edit] For buffering I'm currently using the BinaryReader and BinaryWriter classes over a file.
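
    The cyclic-buffer idea can be sketched as follows (Python, for illustration only; a .NET equivalent would be one large preallocated byte[] guarded by a lock/Monitor, allocated once up front): the reader and writer share one fixed array, and a condition variable blocks whichever side would overrun the other.

        import threading

        class RingBuffer:
            """Fixed-size circular byte buffer shared by one writer and one reader.
            Assumes each chunk written or read is smaller than the buffer itself."""

            def __init__(self, size):
                self.buf = bytearray(size)   # one big preallocated block
                self.size = size
                self.head = 0                # next write position
                self.tail = 0                # next read position
                self.count = 0               # bytes currently stored
                self.cond = threading.Condition()

            def write(self, data):           # called by the bus-reader thread
                with self.cond:
                    while self.count + len(data) > self.size:
                        self.cond.wait()     # block until the interpreter catches up
                    for b in data:
                        self.buf[self.head] = b
                        self.head = (self.head + 1) % self.size
                    self.count += len(data)
                    self.cond.notify_all()

            def read(self, n):               # called by the data-interpreter thread
                with self.cond:
                    while self.count < n:
                        self.cond.wait()     # block until enough data has arrived
                    out = bytearray(n)
                    for i in range(n):
                        out[i] = self.buf[self.tail]
                        self.tail = (self.tail + 1) % self.size
                    self.count -= n
                    self.cond.notify_all()
                    return bytes(out)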

  • Client Web Browser Behavior When Handling 301 Redirect

    - by Jon Swanson
    The RFC seems to suggest that the client should permanently cache the response (http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html):

        10.3.2 301 Moved Permanently

        The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs. Clients with link editing capabilities ought to automatically re-link references to the Request-URI to one or more of the new references returned by the server, where possible. This response is cacheable unless indicated otherwise. The new permanent URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s). If the 301 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued.

        Note: When automatically redirecting a POST request after receiving a 301 status code, some existing HTTP/1.0 user agents will erroneously change it into a GET request.

    I'm having a hard time finding concrete browser documentation for any major browser that states how they handle these. I've started digging through the source code of Firefox, but quickly got lost. Is the following scenario true for which (if any) browsers, and is there definitive documentation for either Firefox or IE that states as much?

    First time around:
    1. User enters a link to Site A, or clicks on a link directed at Site A.
    2. Browser interprets the link at Site A; first time, no cache. Sends GET to Site A.
    3. Site A responds with a 301 redirect to Site B.
    4. Browser sends GET to Site B.

    Any subsequent time around:
    1. User clicks on a link directed at Site A.
    2. Browser sees that, due to a past 301 redirect, Site A should now be Site B.
    3. Without initiating any request whatsoever at Site A, the browser initiates GET at Site B.

  • SVN Serve, Missing a Directory

    - by Ryan Smith
    I'm sure this is an asinine question, and I blame myself for not fully understanding how the svnserve process works. I have an SVN repo, but it needed to be moved to a server within a client's cloud. I did this a while back and ran into the issue of the svnserve.exe process not being pointed at the right directory. I now have svnserve.exe running as a Windows service and pointing at the right directory; there are two other repos there that are being served fine from the same directory. I copied out the new repository just like I did with the others, but I'm getting the error "No repository found". I thought that svnserve just looked at that directory and served out the repositories that were there, but I have had a hard time finding more information about that. I thought it was a Windows permission problem, but I set the whole folder to Full Control for EVERYONE, so that's not it. I feel horrible that I didn't fully understand this problem the first time I fought it, but it's late on a Sunday night and clients are yelling. Does anyone know what I'm missing? Thanks.

    EDIT: It's specific to the repository. I tested the same process with some of the other repos we have on our server, and when I copied them up they worked just as expected. This bug is breaking me, and I wish I could provide more details, but that's all I know. I'm going to try an SVN dump instead of an xcopy and see how that goes. I'll let you know.

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9 MB on disk, 10 MB of indexes according to pgAdmin. The problem is that inserting them, by whatever method, literally takes ages: up to 3 minutes of 100% disk-busy time. That's not something you want on a production site. It doesn't matter whether the inserts were in a transaction, or issued via plain INSERT, multi-row INSERT, COPY FROM or even INSERT INTO t1 SELECT * FROM t2. After noticing this wasn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20 MB on disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys. Oh, disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referred tables, as 3 MB/sec * 180 s is way more data than the 20 MB this new table takes on disk. There was no WAL for the 180 s case; I was testing in psql directly (in Django, add ~50% overhead for WAL logging). I tried @commit_on_success, same slowness; I had even implemented multi-row insert and COPY FROM with psycopg2. That's another weird thing: how can 10 MB worth of inserts generate ten 16 MB log segments?

    Table layout: id serial primary key, a bunch of int32 columns, and 3 foreign keys to:
    - a small table, 198 rows, 16 kB on disk
    - a large table, 1.2M rows, 59 MB data + 89 MB index on disk
    - a large table, 2.2M rows, 198 MB data + 210 MB index

    So, am I doomed to either drop the foreign keys manually, or use the table in a very un-Django way by defining and saving bla_id x3 and skipping models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.

  • Making a sqlite file stay existent between runs of the program

    - by Cocorico
    Hi! I'm having a problem with some SQLite code for an iPhone program in Xcode. I was opening my database like this:

        int result = sqlite3_open("stealtown.db", &database);

    which is how they had it in a book I was following while writing the program. But that way of opening a database only works in the simulator, not on the device. So I finally figured out I need to do this:

        NSString *file = [[NSBundle mainBundle] pathForResource:@"stealtown" ofType:@"db"];
        int result = sqlite3_open([file UTF8String], &database);

    And that works on the device, EXCEPT for one thing: each time you launch the program, it starts as if you had never created the database, and when you insert an entry into the table, it's the ONLY entry in that table. When I used the first code in the simulator, I could open my program 6 times, each time adding 1 entry to a table, and at the end I had 6 entries in that table. With the second code, I do exactly the same thing, but each time there is only 1 entry in the table. Am I explaining this okay? I hope so; it's hard sometimes for me. Does anyone know why this would be?

  • Object addSubview only works in viewDidLoad

    - by DecodingSand
    Hi, I'm new to iPhone dev and need some help with adding subviews. I have a reusable object that I made that is stored in separate .h, .m and xib files. I would like to use this object in my main project's view controller. I have included the header, and the assignment of the object generates no errors. I am able to load the object into my main project, but can only do things with it inside my viewDidLoad method. I intend to have a few of these objects on my screen and am looking for a solution that is more robust than just hard-wiring up multiple copies of the shape object. As soon as I try to access the object outside of viewDidLoad, it produces a "variable unknown - first use in this function" error. Here is my viewDidLoad method:

        shapeViewController *shapeView = [[shapeViewController alloc] initWithNibName:@"shapeViewController" bundle:nil];
        [self.view addSubview:shapeView.view]; // This is the problem line

        // This code works and changes the display on the shape object,
        // but the same code outside of viewDidLoad generates the error.
        [shapeView updateDisplay:@"123456"];

    So to sum up, everything works except when I try to access the shapeView object in the rest of the methods. Thanks in advance.

  • How can I visualise a "broken" hierarchical dataset?

    I have a reasonably large data table structured something like this:

        StaffNo  Grade  Direct  Boss2  Boss3  Boss4  Boss5  Boss6
        -------  -----  ------  -----  -----  -----  -----  -----
        10001    1      10002   10002  10057  10094  10043  10099
        10002    2      10057   NULL   10057  10094  10043  10099
        10003    1      10004   10004  10057  10094  10043  10099
        10004    2      10057   NULL   10057  10094  10043  10099
        10057    3      10094   NULL   NULL   10094  10043  10099
        ...

    i.e. a unique ID, the person's level (grade) in the hierarchy, the ID of their direct boss, and the IDs of the supervisors above (the 2, 3, 4, etc. refer to the boss at that particular grade). The system relies on a strict hierarchy: if you are my boss (parent), then your boss must be my grandparent. Unfortunately this rule is not enforced within the data model, and the data ultimately comes from other systems which don't even know about the rule, let alone observe it. So you and I may share the same boss, but our bosses' bosses won't be the same. Note:

    - I cannot change the data model.
    - I cannot fix the data at source.

    So (for the moment) I have to fix the data once it's in place. Once a fortnight someone will do something which breaks the model, and I'll need to modify the procs slightly to resolve it. Not ideal, but I'm stuck with this for the next six months. Anyway, specific queries are easy to produce, but I find it hard to keep track of the bigger picture. The application which sits on this runs without complaint regardless, but navigating around the system is becoming extraordinarily confusing. So my question is: can anyone recommend a tool (or technique) for generating some kind of "broken tree" diagram in this sort of circumstance? I don't want something that will fix things for me, or attempt statistical analysis, but at least something that will give a visual indication of how broken it is at any one time.

    Note: At the moment this is in a SQL Server database, but I'm open to ideas utilising C#, Perl or Python.
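
    One lightweight technique, sketched in Python (the row shape below is an assumption; adapt the check to however the Direct/Boss2..Boss6 columns are read out of SQL Server): walk every employee, test the parent/grandparent rule against the parent's own row, and write a Graphviz DOT file in which rule-breaking edges are coloured red, so dot renders the "broken tree" at a glance.

        def check_and_emit(rows, out_path="hierarchy.dot"):
            # rows: dict staff_no -> {"direct": boss_id, "chain": [boss2, ..., boss6]}
            lines = ["digraph hierarchy {", "  rankdir=BT;"]
            for staff, rec in rows.items():
                direct = rec["direct"]
                if direct is None:
                    continue  # top of the tree
                boss = rows.get(direct)
                # strict-hierarchy rule: my boss's own boss must be one of
                # my recorded grandparents; flag the edge in red if not
                ok = (boss is None or boss["direct"] is None
                      or boss["direct"] in rec["chain"])
                colour = "black" if ok else "red"
                lines.append("  %s -> %s [color=%s];" % (staff, direct, colour))
            lines.append("}")
            with open(out_path, "w") as f:
                f.write("\n".join(lines))
            # render with: dot -Tpng hierarchy.dot -o hierarchy.png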

  • Multiple Concurrent Postbacks when using UpdatePanels

    - by d4nt
    Here's an example app that I built to demonstrate my problem: a single aspx page with the following on it:

        <form id="form1" runat="server">
            <asp:ScriptManager runat="server" />
            <asp:Button runat="server" ID="btnGo" Text="Go" OnClick="btnGo_Click" />
            <asp:UpdatePanel runat="server">
                <ContentTemplate>
                    <asp:TextBox runat="server" ID="txtVal1" />
                </ContentTemplate>
            </asp:UpdatePanel>
        </form>

    Then, in the code-behind, we have the following:

        protected void btnGo_Click(object sender, EventArgs e)
        {
            Thread.Sleep(5000);
            Debug.WriteLine(string.Format("{0}: {1}", DateTime.Now.ToString("HH:mm:ss.fffffff"), txtVal1.Text));
            txtVal1.Text = "";
        }

    If you run this and click on the "Go" button multiple times, you will see multiple debug statements in the Output window showing that multiple requests have been processed. This appears to contradict the documented behaviour of update panels (i.e. if you make a request while one is processing, the first request gets terminated and the current one is processed). Anyway, the point is I want to fix it. The obvious option would be to use JavaScript to disable the button after the first press, but that strikes me as hard to maintain: we potentially have the same issue on a lot of screens, and it could be easily broken if someone renames a button. Do you have any suggestions? Perhaps there is something I could do in BeginRequest in Global.asax to detect a duplicate request? Is there some setting or feature on the UpdatePanel to stop it doing this, or maybe something in the AjaxControlToolkit that will prevent it?

  • Play framework does not return page and static content

    - by Anton
    I'm using the Play framework in production for one of my web projects. From time to time Play does not render the main page or does not return some of the static content files. In Firebug, loading of the site gets stuck at the beginning while serving the home page; in Fiddler, two static resources simply fail to load. This issue is hard to reproduce: it happens about 1 time in 15, and I have to delete the cached data and reload the page (pressing CTRL-F5 in Firefox). The issue can be reproduced in most browsers. Initially I thought there was something wrong with the hosting provider, but I have changed hosting providers and the issue has not gone away. The version of Play is 1.2.2, running as a standalone server. I'm not sure, but perhaps deploying Play to Jetty/Tomcat/Resin would help. I'm also thinking about rewriting the application on another stack that is well known to me (J2EE, Spring, whatever). I have no idea how to debug and resolve this issue. Any clue? Has anyone faced the same issue with Play before?

  • How should I generate the partitions / pairs for the Chinese Postman problem?

    - by Simucal
    I'm working on a program for class that involves solving the Chinese Postman problem. Our assignment only requires us to write a program to solve it for a hard-coded graph, but I'm attempting to solve it for the general case on my own. The part that is giving me trouble is generating the partitions of pairings for the odd vertices. For example, if I had the following labelled odd vertices in a graph:

        1 2 3 4 5 6

    I need to find all the possible pairings/partitions I can make with these vertices. I've figured out I'll have i partitions, given:

        n = number of odd vertices
        k = n / 2
        i = ((2k)(2k-1)(2k-2)...(k+1)) / 2^k

    So, given the 6 odd vertices above, we know we need to generate i = 15 partitions. The 15 partitions would look like:

        (1 2) (3 4) (5 6)
        (1 2) (3 5) (4 6)
        (1 2) (3 6) (4 5)
        ...
        (1 6) ...

    Then, for each partition, I take each pair, find the shortest distance between its two vertices, and sum those distances for the partition. The partition with the smallest total distance between its pairs is selected, and I then double all the edges along the shortest paths between the odd vertices in the selected partition. These represent the edges the postman will have to walk twice. At first I thought I had worked out an appropriate algorithm for generating these partitions/pairs, but it is flawed: I found it isn't a simple permutation/combination problem. Does anyone who has studied this problem before have any tips that can help point me in the right direction for generating these partitions?
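
    The standard construction, sketched in Python: fix the smallest remaining vertex, pair it with each other remaining vertex in turn, and recurse on what is left. This avoids the duplicate orderings that a plain permutation approach produces and yields exactly (2k-1)(2k-3)...(1) partitions, 15 for six vertices; each yielded partition can then be scored by summing the shortest-path distances of its pairs.

        def pairings(vertices):
            """Yield every way to split an even-sized list into unordered pairs."""
            if not vertices:
                yield []
                return
            first, rest = vertices[0], vertices[1:]
            for i, partner in enumerate(rest):
                remaining = rest[:i] + rest[i + 1:]
                for sub in pairings(remaining):
                    yield [(first, partner)] + sub

        print(sum(1 for _ in pairings([1, 2, 3, 4, 5, 6])))  # 15, i.e. 5 * 3 * 1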

  • How to update application files using patching?

    - by Marek
    I am not interested in any auto-update solution, such as ClickOnce or the MS Updater Block. For anyone feeling the urge to ask why not: I am already using these and there is nothing wrong with them; I would just like to learn about any efficient alternatives. I would like to publish patches: small differences that modify existing files of the deployment with the smallest possible delta. Not only code needs to be patched, but also resource files. Patching the running code can be accomplished by maintaining two separate, synchronized copies of the deployment (no on-the-fly changes to the running executable are required). The application itself can be xcopy-deployed (to avoid MSI auto-correcting the modified files or breaking ClickOnce signatures). I would like to learn how to handle different versions of patches. For example, a patch is issued that fixes one error, and later another patch fixes another error in the same file; users may have any combination of these installed when a third patch arrives. With text files this may be easy to implement, but how about executable files? (Native Win32 code vs. .NET: any difference?) If the first problem is too hard to solve or unsolvable for executables, I would like to at least learn whether there is a solution that implements simple patching with serial revisions: in order to install revision 5, the user must have all previous revisions installed, to ensure the validity of the deployment. Are there any existing solutions that accomplish this? NOTE: There are a few questions on SO that may seem like duplicates, but none with a good answer. This question is about the Windows platform, preferably .NET.
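
    For the serial-revision variant, one possible scheme can be sketched as follows (Python, purely illustrative; all names are hypothetical): each revision ships a manifest of expected base hashes, and the updater refuses any patch whose base files don't match the current deployment, which forces revisions to be applied in order and guarantees validity.

        import hashlib
        import pathlib

        def file_hash(path):
            return hashlib.sha256(path.read_bytes()).hexdigest()

        def applies_cleanly(deploy_dir, patch):
            """patch: {"revision": 5, "files": {relpath: {"base": sha256, "data": bytes}}}
            True only if every target file matches the hash the patch was built
            against, which holds only when revisions 1..4 are already applied."""
            root = pathlib.Path(deploy_dir)
            return all(
                (root / rel).exists() and file_hash(root / rel) == entry["base"]
                for rel, entry in patch["files"].items()
            )

        def apply_patch(deploy_dir, patch):
            if not applies_cleanly(deploy_dir, patch):
                raise RuntimeError("deployment is not at the revision this patch expects")
            root = pathlib.Path(deploy_dir)
            for rel, entry in patch["files"].items():
                # whole-file replace for simplicity; a real delta tool
                # (bsdiff/VCDIFF) would apply a binary diff here instead
                (root / rel).write_bytes(entry["data"])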

  • Core principles, rules, and habits for CS students

    - by Asad Butt
    No doubt there is a lot to read on blogs, in books, and on Stack Overflow, but can we identify some guidelines for CS students to use while studying? For me these are:

    - Finish your course books early and read 4-5 times more material relative to your course work.
    - Programming is one of the fastest-evolving professions; follow blogs on a daily basis for the latest updates, news, and technologies.
    - Instead of relying on assignments and exams, do at least one extra, non-graded, small to medium-sized project for every programming course.
    - Fight hard for internships or work placements, even if they are unpaid, since 3 months of work > 1 year at college.
    - Practice everything, every possible and impossible way. Try doing every bit of your assignments and projects yourself, i.e. fight for every inch.
    - Rely on documentation as the first source for help and samples; Google and online forums are the last resort.
    - Participate often in online communities and forums to learn the best possible approach for every solution to your problem (after doing your bit).
    - Make testing one of your habits, as it is becoming more important every day in programming.
    - Make writing one of your habits: write something productive once or twice a week and publish it.

  • Is it possible to group records belonging to an entity in dbunit?

    - by Joshua
    Our JPA entity model auto-generates primary key identifiers for the user and user_address tables. Would it be possible to group these entities (given below) via DbUnit, so that I need to provide neither the primary key nor the foreign key reference from user_address.user_id? It is getting very hard to maintain these keys; I would prefer to group the parent record 'user' with its child records 'user_address' so that DbUnit can associate them automatically by looking at the entity metadata. Is this achievable?

        <user id="1" first_name="Josh" creation_date="2009-07-11 15:45:28"/>
        <user_address id="1" user_id="1" address="Main St" city="Los Angeles"/>

    I would prefer something like this:

        <!-- First user -->
        <user first_name="Josh" creation_date="2009-07-11 15:45:28"/>
        <user_address address="Main St" city="Los Angeles"/>

        <!-- Second user -->
        <user first_name="Mary" creation_date="2009-07-11 15:45:28"/>
        <user_address address="Taylors St" city="San Jose"/>

  • Practical refactoring using unit tests

    - by awhite
    Having just read the first four chapters of Refactoring: Improving the Design of Existing Code, I embarked on my first refactoring and almost immediately came to a roadblock. It stems from the requirement that before you begin refactoring, you should put unit tests around the legacy code. That allows you to be sure your refactoring didn't change what the original code did (only how it did it). So my first question is this: how do I unit-test a method in legacy code? How can I put a unit test around a 500-line (if I'm lucky) method that doesn't do just one task? It seems to me that I would have to refactor my legacy code just to make it unit-testable. Does anyone have any experience refactoring using unit tests? And, if so, do you have any practical examples you can share with me?

    My second question is somewhat hard to explain. Here's an example: I want to refactor a legacy method that populates an object from a database record. Wouldn't I have to write a unit test that compares an object retrieved using the old method with an object retrieved using my refactored method? Otherwise, how would I know that my refactored method produces the same results as the old method? If that is true, then how long do I leave the old deprecated method in the source code? Do I just whack it after I test a few different records? Or do I need to keep it around for a while in case I encounter a bug in my refactored code?

    Lastly, since a couple of people have asked: the legacy code was originally written in VB6 and then ported to VB.NET with minimal architecture changes.
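
    The comparison idea in the second question is essentially a characterization test. A minimal sketch (Python's unittest for illustration; load_customer_old and load_customer_new are hypothetical stand-ins for the legacy and refactored loaders): run both implementations over a set of representative records and assert they agree, then delete the old method once the new one has direct tests of its own.

        import unittest

        # hypothetical stand-ins for the legacy and refactored record loaders
        from customer_loader import load_customer_old, load_customer_new

        REPRESENTATIVE_IDS = [1, 42, 1007]  # records chosen to cover the edge cases

        class CharacterizationTest(unittest.TestCase):
            def test_new_loader_matches_old(self):
                for record_id in REPRESENTATIVE_IDS:
                    old = load_customer_old(record_id)
                    new = load_customer_new(record_id)
                    # compare attribute dicts so a failure names the offending field
                    self.assertEqual(vars(old), vars(new),
                                     "mismatch for record %d" % record_id)

        if __name__ == "__main__":
            unittest.main()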
