Search Results

Search found 9545 results on 382 pages for 'least privilege'.


  • PHP-FPM stops responding and dies [migrated]

    - by user12361
    I'm running Drupal 6 with Nginx 1.5.1 and PHP-FPM (PHP 5.3.26) on a 1GB single-core VPS with 3GB of swap space on SSD storage. I just switched from shared hosting to this unmanaged VPS because my site was getting too heavy, so I'm still learning the ropes. Traffic is moderately high; I don't monitor it closely, but Google AdSense usually records close to 30K page views/day. At any given moment I usually have 50 to 80 authenticated users logged in and a few hundred more anonymous users hitting the Boost static HTML cache.

    The problem I'm having is that PHP-FPM frequently stops responding, resulting in Nginx 502 or 504 errors. I swear I have read every page on the internet about this issue, which seems fairly common, and I've tried endless combinations of configuration, but I can't find a good solution. After restarting Nginx and PHP-FPM, the site runs really fast for a while, and then without warning it simply stops responding. I get a white screen while the browser waits on the server, and after about 30 seconds to a minute it throws an Nginx 502 or 504 error. Sometimes it runs well for 2 minutes, sometimes 5 minutes, sometimes 5 hours, but it always ends up hanging. When I find the server in this state, there is still plenty of free memory (500MB or more) and no major CPU usage, the control and worker PHP-FPM processes are still present, and the server is still pingable and usable via SSH. A reload of PHP-FPM via the init script revives it again. The hangups don't seem to correspond to the amount of traffic, because I observed this behavior consistently while testing this configuration on a development VPS with no traffic at all.

    I've been constantly tweaking the settings, but I can't definitively eliminate the problem. I set Nginx workers to just 1. In the PHP-FPM config I have tried all three process managers: "dynamic" is definitely the least reliable, consistently hanging after only a few minutes; "static" has also been unreliable and unpredictable; the least buggy has been "ondemand", but even that fails me, sometimes after as much as 12 to 24 hours. And I can't leave the server unattended, because PHP-FPM dies and never comes back on its own. I have tried pm.max_children values from as low as 3 to as high as 50 without much difference (it's currently 10), and the same goes for the spare-servers values. I have also set pm.max_requests anywhere from 30 to unlimited, and it doesn't seem to matter. According to the logs, the PHP-FPM processes are not exiting with SIGSEGV or SIGBUS, but with SIGTERM. I get a lot of lines like:

        WARNING: [pool www] child 3739, script '/var/www/drupal6/index.php' (request: "GET /index.php") execution timed out (38.739494 sec), terminating
        WARNING: [pool www] child 3738 exited on signal 15 (SIGTERM) after 50.004380 seconds from start

    I actually found several articles that recommend doing a graceful reload of PHP-FPM via cron every few minutes or hours to circumvent this issue. So that's what I did: "/etc/init.d/php-fpm reload" every 5 minutes. So far it's keeping the lights on, but it feels like a dreadful hack. Is PHP-FPM really that unreliable? Is there anything else I can do? Thanks a lot!
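    For reference, a minimal sketch of the pool settings under discussion (values are illustrative, not a tested fix; note that the emergency_restart_* directives belong in the global php-fpm.conf section rather than the pool file):

        ; pool file, e.g. /etc/php-fpm.d/www.conf (sketch)
        pm = ondemand
        pm.max_children = 10
        pm.process_idle_timeout = 10s
        pm.max_requests = 500
        ; kill any request that runs longer than 5 minutes
        request_terminate_timeout = 300s

        ; global php-fpm.conf (sketch): respawn workers that keep dying
        emergency_restart_threshold = 5
        emergency_restart_interval = 1m

    And the cron workaround described above, as a crontab entry:

        */5 * * * * /etc/init.d/php-fpm reload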

    Read the article

  • ADO and Two Way Storage Tiering

    - by Andy-Oracle
    We get asked the following question about Automatic Data Optimization (ADO) storage tiering quite a bit: can you tier back to the original location if the data gets hot again? The answer is yes, but not with standard Automatic Data Optimization policies, at least not reliably. That's not how ADO is meant to operate. ADO is meant to mirror a traditional view of Information Lifecycle Management (ILM), where data is very volatile when first created, becomes less active or cool, and eventually ceases to be accessed at all (i.e. cold).

    I think the reason this question gets asked is that customers realize many of their business processes are cyclical, and the thinking goes that segments used only during month-end or year-end cycles could sit on lower-cost storage when not in use. Unfortunately this doesn't fit very well with the ADO storage tiering model. ADO storage tiering is based on the amount of free and used space in the source tablespace. Two parameters control this behavior, TBS_PERCENT_USED and TBS_PERCENT_FREE. When the space used in the tablespace exceeds the TBS_PERCENT_USED value, segments specified in storage tiering clauses can be moved until the percentage of free space reaches the TBS_PERCENT_FREE value. It is worth mentioning that no checks are made for available space in the target tablespace.

    Now, it is certainly possible to create custom functions to control storage tiering, but this can get complicated. The biggest problem is ensuring that there is enough space to move the segment back to tier 1 storage, assuming that's the goal. This isn't as much of a problem when moving from tier 1 to tier 2 storage, because there is typically more tier 2 storage available; at least that's the premise, since it is supposed to be less costly, lower-performing, higher-capacity storage. In either case, though, if there isn't enough space the operation fails. In the case of a customized function, the question becomes: do you attempt to free the space so the move can be made, or do you just stop and return false so that the move cannot take place? This is really the crux of the issue. Once you cross into this territory you're really going to have to implement two-way hierarchical storage, and the whole point of ADO was to provide automatic storage tiering. If you really want two-way storage tiering, you're probably better off using heat map and/or business access requirements and building your own hierarchical storage management infrastructure.
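    For concreteness, a short SQL sketch of the mechanics described above. The table and tablespace names are hypothetical; the threshold procedure and constants are from Oracle 12c's DBMS_ILM_ADMIN package, with 85 and 25 as the documented defaults:

        -- One-way tiering policy of the kind described above
        ALTER TABLE sales ILM ADD POLICY TIER TO low_cost_ts;

        -- The tablespace-fullness thresholds that drive tiering
        EXEC DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_USED, 85);
        EXEC DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_FREE, 25);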

    Read the article

  • Profiling Startup Of VS2012 – JustTrace Profiler

    - by Alois Kraus
    JustTrace is made by Telerik, which is mainly known for its collection of UI controls. The current version (2012.3.1127.0) includes a performance and a memory profiler, costs 614€, and is currently on sale for 306€ as a special offer, with one year of free upgrades included. The uneven € figures are calculated from the 799€ list price and the 50% discount. The UI is already in Metro style and simple to use. Multi-process profiling, attach, and method recording filters are not supported. It looks like JustTrace is, like Ants, a Just My Code profiler. For cases where you do not have the PDBs, or where you want to dig deeper into the BCL code, you will not get far. After getting the profile data you get a plain list in the All Methods grid with hit count and own time. The method list for all methods is also suspiciously short, which is a clear sign that you will not get far when analyzing foreign code.

    But at least a memory profiler is also included. For this I have to choose "Memory Profiler" as the Profiling Type in the first window to check the memory consumption of VS. There are some interesting numbers to see, but I really miss YourKit's thread stack window: when much memory is allocated and CPU consumption is high, how am I supposed to get a clue about which places I should look at? The Snapshot summary gives a rough overview, which is OK for a first impression. Next is Assemblies. This gives you a list of all loaded assemblies. Not terribly useful.

    The By Type view gives you exactly what it is supposed to. You have to keep in mind that this list is filtered by the types you checked in the Assemblies list. The By Type instance list only shows types from assemblies which do not originate from Microsoft; by default, mscorlib and System are not checked, which is why my By Type window looked nearly empty the first time. The idea behind this feature is to show only your instances, because you are ultimately responsible for the overall memory consumption. I am not sure I like this feature, because by default it hides too much; I do want to see at least how many strings and arrays are allocated, and a simple namespace filter would also do it, in my opinion. Now you can examine all string instances and look at who in the object graph keeps a reference on them. That is nice, but YourKit has the big plus that you can also look into the string contents. I am also not sure how cycles in the graph are visualized, and what will happen if you have thousands of objects referencing yours.

    That's pretty much it about JustTrace. It can help the average developer pinpoint performance and memory issues by looking just at his own code and instances. Showing more would not help, because the sheer amount of information would overwhelm him, and you need a pretty good understanding of how the GC and the CLR work. When you have a performance issue on a customer machine, it is sometimes very helpful to be able to bring a profiler onto the machine (no PDBs, ...) and get a full snapshot of all processes involved in the problematic use case. For these more advanced use cases, JustTrace is certainly the wrong tool. Next: SpeedTrace

    Read the article

  • Software and/(x)or Hardware Projects for Pre-School Kids

    - by haylem
    I offered to participate at my kid's pre-school for various activities (yes, I'm crazy like that), and one of them is to help the kids discover extra-curricular hobbies (big word for a pre-school, but for lack of a better one... :)), which may or may not relate to a professional activity. At first I thought it wouldn't be easy to have pre-schoolers relate to programming or the internal workings of a computer system in general (and I'm more used to teaching middle-school to university-level students), but then I thought there must be a way. So I'm trying to figure out ways to introduce very young kids (3yo) to computer systems in a fun and preferably educational way. Of course, I don't expect them to start smashing the stack for fun and profit right away (or at least not voluntarily, though I could use the occasion for some toddler tests...), but I'm confident there must be ways to get them interested in: using the systems, becoming curious about understanding what they do, and interacting with the systems to modify them. I guess this setting is not really relevant after all; it's pretty much the same as if you were aiming to achieve this for your own kids at home.

    Ideas. Considering we're talking about 3yo pre-schoolers here, and that at this age some kids are already quite confident using a mouse (some even a keyboard, if not for typing, at least to press some buttons they've come to associate with actions) while others have not yet had any interaction with computers of any kind, it needs to be: rather basic; demonstrated and played with in less than 5 or 10 minutes; doable in groups or alone; and scalable and extendable in complexity to accommodate their varying abilities. The obvious options are: basic smallish games to play with; interactive systems like LOGO, Kojo, Squeak and clones (possibly even simpler than that); or things like Lego systems. I guess it is something to reflect on at both the software and the hardware levels: it could be done with a desktop or laptop machine, a tablet, a smartphone (or a crap-phone, for that matter, as long as you can modify it), or you could even get down to building something from scratch (Raspberry Pi and Arduino being popular options at the moment). It can probably be in the form of games, funny visualizations (which are pretty much games) w/ Prototype, or virtual worlds to explore. I also thought on the spot (and I hope this won't offend anyone) that some approaches to teaching pets could work: reward systems, haptic feedback and such things could quickly point a kid in the right direction to understanding how things work (I'm not suggesting to shock the kids!).

    Hmm, Is There an Actual Question in There? What type of systems do you think might be a good fit, both in terms of hardware and software? Have you seen such systems, or do you have anything in mind to work on? Are you aware of research in this domain with tangible results? Any input is welcome. It's not that I don't see options: there are tons, but I have a harder time pinpointing a concrete and definite type of project/activity, so I figure some of you have valuable ideas or existing ones.

    Note: I am not advocating that every kid should learn to program, be interested in computer systems, or that all of them in a class would even care enough to follow such an introduction with more than a blank stare. I don't buy into the "everybody would benefit from learning to program" thing. It wouldn't hurt, but it's not necessary in any way. But if I can walk out of there with a few of them having smiled using the thing (or heck, cried because others took it away from them), that'd be good enough.

    Related questions I've seen that seem to complement what I'm looking for, but not exactly for the same age groups or with the same goals: Teaching Programming to Kids; Recommendations for teaching kids math concepts & skills for programming?

    Read the article

  • How to restore/change Alt+Tab behaviour/ram usage and a few other things after Ubuntu upgrade from 11.04 to 11.10?

    - by fiktor
    I use Ubuntu for programming. I recently upgraded from 11.04 to 11.10. There are some things I don't like in the new version of the Unity desktop interface. I don't actually know whether it is hard to restore the previous behavior or not, and if it is not, where I should look to do that. I know a bit of programming, but I really don't know much about Linux settings. I used to have 3-6 terminal windows open and switch between them with Alt+Tab and Shift+Alt+Tab. I liked half-transparent terminal windows, since with them I could open a web page with some instructions in Firefox, press Alt+Tab and type commands in a console window, while still being able to recognize the text on the web page under it. Now I have problems with my usual work style because of the following.

    List of "negative" changes:
    1. Alt+Tab shows just one icon for all console windows. If I wait some time it does show all windows, but I don't like to wait; I prefer to remember the order of the windows and press Alt+Tab as many times as I need to switch to the right one.
    2. Alt+Shift+Tab to switch in reverse order doesn't work now.
    3. Console windows are not transparent any more. Also, when I don't wait and switch to the grouped icon, it shows all console windows together, so even if they were transparent I wouldn't be able to see anything below them (I can read something only from the window directly under the current one, not a few levels under).
    4. With a few console windows running in Unity I had 740Mb used on Ubuntu 11.04, but I have 1050Mb now. The question is how to get back to 750 or below. I really need my memory, since I use my computer to work with 1512Mb of data and I try to save every 10Mb possible (if it doesn't take too much of the machine's time and, more importantly, mine).
    5. When I press the Super key I get a field to type the name of the program I want to run. But now it sometimes shows this field and yet nothing happens when I try to type; probably the focus is not on the right field.

    I don't really mean to restore exactly the same behavior, but I want to make my work in Ubuntu 11.10 efficient (at least as efficient as in Ubuntu 11.04). I would be happy if there are ways to accomplish that.

    What have I tried: I installed CompizConfig Settings Manager and read this question. However, enabling "Static Application Switcher" makes Alt+Tab crazy: after enabling it, it warns about key-binding conflicts with the "Ubuntu Unity Plugin"; Alt+Tab switching doesn't change, but Shift+Alt+Tab now works and shows all windows; and memory usage increases. I tried turning off the Ubuntu Unity Plugin, but that doesn't seem to be the right thing to do, since it appears to turn off all menus, a lot of keystrokes and the app launcher, which usually activates with the Super key. I found that window transparency can be enabled by the "Opacity, Brightness and Saturation" plugin under Accessibility, but I don't know whether enabling it is the right thing to do (at least it increases memory usage).

    Update: everything solved but #3: see my own answer below. I have made a separate question about issue #3 (transparency).

    Read the article

  • WebLogic stuck thread protection

    - by doublep
    By default WebLogic kills stuck threads after 600 seconds (10 minutes); this is controlled by the StuckThreadMaxTime parameter. However, I cannot find more details on how exactly "stuckness" is defined. Specifically: At what point does the countdown begin? Request processing start? The last wait()-like method? Something else? And does this apply only to request-processing threads or to all threads? I.e., can a request-processing thread "escape" this protection by spawning a worker thread for a long task? In particular, can it delegate response writing to such a worker without the countdown applying? My use case is the download of huge files through a permission system. Since a user needs to be authenticated and needs permission to view a file, I cannot (or at least don't know how to) leave this to a simple HTTP server, e.g. Apache. And because the files can be huge, a download could (at least in theory) take longer than that limit.
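    One commonly suggested direction, sketched here as an assumption rather than a verified answer: assign the download servlet to its own work manager and exempt that work manager from stuck-thread handling. The element names below are from WebLogic 10.3-era descriptors; verify them against your version's weblogic.xml schema:

        <!-- weblogic.xml (sketch) -->
        <work-manager>
          <name>download-wm</name>
          <!-- don't count threads in this work manager as stuck -->
          <ignore-stuck-threads>true</ignore-stuck-threads>
        </work-manager>
        <!-- route this web app's requests to that work manager -->
        <wl-dispatch-policy>download-wm</wl-dispatch-policy>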

    Read the article

  • AndroMDA maven code generation and JPA Annotations

    - by ArsenioM
    I am using the AndroMDA plugin for Maven to generate code from a UML diagram made in MagicDraw. When the code is generated, AndroMDA produces the JPA annotations for the persistence layer. I think that at generation time AndroMDA uses naming strategies to determine the table and column names for the database. I want to determine how AndroMDA derives these JPA annotations, because I need to display the database names based on the UML entity and attribute names. I was wondering whether there is an AndroMDA API I could use to do this by giving it the UML diagram, or, at least, a way to learn the naming strategies AndroMDA uses to achieve it. At generation time AndroMDA produces the JPA annotations for the entities, attributes, etc. written into my Java classes under a series of rules that live in AndroMDA's EJB3 cartridge (the database is then created from those JPA annotations). I want to create a program that returns the same table and attribute names written in the JPA annotations, given the .xml file of a project's UML diagram. I was hoping I could take advantage of the EJB3 cartridge for that, either through an AndroMDA API that does this (if it exists) or, at least, by implementing the same rules the EJB3 cartridge uses. To be more illustrative: if my UML model has an entity called "CompanyGroup", AndroMDA generates the following class definition:

        @javax.persistence.Entity
        @javax.persistence.Table(name = "COMPANY_GR")
        public class CompanyGroup implements java.io.Serializable, Comparable<CompanyGroup>

    This is just an example (not a real case), but nevertheless, the way AndroMDA translates "CompanyGroup" to "COMPANY_GR" has to be specified somewhere. Hope this explanation is clear enough. Thanks.
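    For illustration, a rough Java approximation of the naming rule such cartridges typically apply: split camelCase on upper-case boundaries, join with underscores, upper-case, and truncate to a configured maximum SQL name length. This is a sketch of the general pattern, not AndroMDA's actual implementation; the real rules (and the configurable maximum-name-length property) live in the EJB3 cartridge's metafacades, and the cartridge may abbreviate rather than hard-truncate:

        // Sketch only: approximates, does not reproduce, the cartridge's naming.
        public final class SqlNames {
            public static String toSqlName(String modelName, int maxLength) {
                StringBuilder sb = new StringBuilder();
                for (char c : modelName.toCharArray()) {
                    // start a new word at each upper-case boundary
                    if (Character.isUpperCase(c) && sb.length() > 0) {
                        sb.append('_');
                    }
                    sb.append(Character.toUpperCase(c));
                }
                String name = sb.toString();
                // hard truncation; the real cartridge may abbreviate instead
                return name.length() <= maxLength ? name : name.substring(0, maxLength);
            }

            public static void main(String[] args) {
                System.out.println(toSqlName("CompanyGroup", 30)); // COMPANY_GROUP
                System.out.println(toSqlName("CompanyGroup", 10)); // COMPANY_GR
            }
        }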

    Read the article

  • VS2010 - How to automatically stop compile on first compile error

    - by Ben Robbins
    {rant}First I'd like to say that this IS NOT A DUPLICATE. I asked this question previously, but it got closed as a duplicate when it isn't. This question is SPECIFIC to VS 2010, and the answers to the so-called duplicate work in VS 2008 but not in VS 2010 (at least not for me or anyone I know). So before you go closing something as a duplicate, how about you read the question carefully, try the answer for yourself, and see if it actually works. Apologies for the rant, but there is no obvious way to contact the SO police that closed the issue or to get it reopened.{/rant}

    At work we have a C# solution with over 80 projects. In VS 2008 we use a macro to stop the compile as soon as a project in the solution fails to build (see this question for several options for VS 2005 & VS 2008: http://stackoverflow.com/questions/134796/how-to-automatically-stop-visual-c-build-at-first-compile-error). Is it possible to do the same in VS 2010? What we have found is that in VS 2010 the macros don't work (at least I couldn't get them to work), as it appears that the environment events don't fire in VS 2010. The default behaviour is to continue as far as possible and display a list of errors in the error window. I'm happy for it to stop either as soon as an error is encountered (file level) or as soon as a project fails to build (project level).

    Answers for VS 2010 only, please. If the macros do work, then a detailed explanation of how to configure them for VS 2010 would be appreciated. Thanks.

    Read the article

  • Any task-control algorithms programming practices?

    - by NumberFour
    Hi, I was just wondering whether there's any field concerned with task-control programming (or at least that's what I call it). For a better explanation of task control, consider the following scenario: an application (master thread) waits for a command, which might be a particular action or a set of actions the application should perform. When a command is received, the master thread creates a task (i.e. spawns an independent thread which actually does the action) and adds a record to its task list, thus keeping track of the time of execution, thread handle, task priority, etc. The master thread waits for any other incoming commands while taking care of all the tasks: e.g. it kills tasks running too long, prioritizes tasks with higher priorities, kills a task at the request of another task, limits the number of currently running tasks, allows task scheduling, cleans up finished tasks (threads), and so on. The model is pretty similar to what we can see in an OS dealing with running processes. Are there any good practices for programming such task models, or is there theoretical work done in this field? Maybe my question is too generalized, but at least I wanted to know whether there is any experience working with such models or a better approach. Thanks for any answers.
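    For what it's worth, a minimal Java sketch of this master/task model built on java.util.concurrent. All names are illustrative, and a real implementation would keep (handle, deadline) records and poll them from a watchdog rather than block per task as done here for brevity:

        import java.util.concurrent.*;

        public class TaskMaster {
            private final ExecutorService pool = Executors.newFixedThreadPool(8);

            // Run one command as an independent task; cancel it if it overruns.
            public void runCommand(Runnable action, long timeoutSeconds) {
                Future<?> handle = pool.submit(action);   // the "task list" record
                try {
                    handle.get(timeoutSeconds, TimeUnit.SECONDS);  // blocking for brevity
                } catch (TimeoutException e) {
                    handle.cancel(true);   // interrupt the task that ran too long
                } catch (Exception e) {
                    handle.cancel(true);   // InterruptedException / ExecutionException
                }
            }

            public void shutdown() { pool.shutdownNow(); }
        }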

    Read the article

  • JSON-RPC and Json-rpc service discovery specifications

    - by Artyom
    Hello, I'm going to implement a JSON-RPC web service and I need specifications for this. So far I have found only one resource that can be called a real specification: JSON-RPC 1.0, http://json-rpc.org/wiki/specification. There is also the proposal for JSON-RPC 2.0: http://groups.google.com/group/json-rpc/web/json-rpc-2-0 (why is it on Google Groups?). However, I've seen that JavaScript frameworks like Dojo actively use the JSON-RPC SMD (Service Mapping Description) proposal, but that requires the JSON Schema specification, whose reference redirects to an incorrect URL. So far I have found the following: http://tools.ietf.org/html/draft-zyp-json-schema-02, and it is still a draft... Can anybody point me to some actual specifications, at least something official and up to date? It looks like implementing JSON-RPC 1.0 as-is may not be enough, at least for frameworks like Dojo. Or am I wrong? Questions:
    1. Would an implementation of the JSON-RPC 1.0 specification be enough to provide a JSON-RPC service for most modern clients, and how many clients are there (if any) that actually support capabilities beyond JSON-RPC 1.0 (SMD, Schema, 2.0)? It looks like JSON-RPC 1.0 is the only one that has an official specification (and not a draft).
    2. If I should implement SMD, or it is recommended, can somebody point to the official, most recent specifications of JSON Schema and Service Mapping Description, or are the links I found really "the specifications"?
    3. Are the JSON-RPC 2.0, SMD and JSON Schema drafts stable enough to implement?
    Note: do not suggest existing JSON-RPC service implementations. Anybody?
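    For concreteness, the JSON-RPC 1.0 wire format from the specification linked above is a single request object answered by a single response object (1.0 also specifies notifications, which carry a null id and get no response):

        --> {"method": "echo", "params": ["Hello JSON-RPC"], "id": 1}
        <-- {"result": "Hello JSON-RPC", "error": null, "id": 1}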

    Read the article

  • Full JSON-RPC specifications

    - by Artyom
    Hello, I'm going to implement a JSON-RPC web service and I need specifications for this. So far I have found only one resource that can be called a real specification: JSON-RPC 1.0, http://json-rpc.org/wiki/specification. There is also the proposal for JSON-RPC 2.0: http://groups.google.com/group/json-rpc/web/json-rpc-2-0 (why is it on Google Groups?). However, I've seen that JavaScript frameworks like Dojo actively use the JSON-RPC SMD (Service Mapping Description) proposal, but that requires the JSON Schema specification, whose reference redirects to an incorrect URL. So far I have found the following: http://tools.ietf.org/html/draft-zyp-json-schema-02, and it is still a draft... Can anybody point me to some actual specifications, at least something official and up to date? It looks like implementing JSON-RPC 1.0 and 2.0 would not be enough, at least for frameworks like Dojo. Or am I wrong? Questions:
    1. Is it enough to implement the JSON-RPC 1.0 specification and the 2.0 draft to be on the safe side; would this work for most JSON-RPC clients?
    2. If I should implement SMD, or it is recommended, can somebody point to the official specifications of JSON Schema and Service Mapping Description, or are the links I found really "the specifications"?
    Note: do not suggest existing JSON-RPC service implementations.

    Read the article

  • WCF selfhosted service, installer class and netsh

    - by jeho
    I have a self-hosted WCF service application which I want to deploy as an MSI installer package. The endpoint uses HTTP port 8888. In order to start the service under Windows 2008 after installation, I have to either run the program as administrator or edit the HTTP settings with netsh: "netsh http add urlacl url=http://+:8888/ user=\Everyone". I want to edit the HTTP settings from my installer class, so I call the following method from the Install() method:

        public void ModifyHttpSettings()
        {
            string parameter = @"http add urlacl url=http://+:8888/ user=\Everyone";
            System.Diagnostics.ProcessStartInfo psi =
                new System.Diagnostics.ProcessStartInfo("netsh", parameter);
            psi.Verb = "runas";
            psi.RedirectStandardOutput = false;
            psi.CreateNoWindow = true;
            psi.WindowStyle = System.Diagnostics.ProcessWindowStyle.Hidden;
            psi.UseShellExecute = false;
            System.Diagnostics.Process.Start(psi);
        }

    This method works for English versions of Windows, but not for localized versions (the group Everyone has a different name in localized versions). I have also tried using Environment.UserName to allow access at least for the currently logged-on user, but this does not work either, because the installer class is run by the MSI service, which runs as the SYSTEM user. Hence Environment.UserName returns SYSTEM, and that is not what I want. Is there a way to grant access to all users (or at least to the currently logged-on user) for my self-hosted WCF service from an MSI installer class?
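    One locale-independent approach, sketched here as an assumption rather than a verified answer: resolve the Everyone group from its well-known SID (S-1-1-0) and pass the translated name to netsh:

        using System.Security.Principal;

        // Sketch: translate the well-known "World" SID to the localized
        // account name, e.g. "Everyone" on English systems, "Jeder" on German.
        static string EveryoneAccountName()
        {
            return new SecurityIdentifier(WellKnownSidType.WorldSid, null)
                .Translate(typeof(NTAccount)).ToString();
        }

        // Then build the netsh argument with the translated name:
        // string parameter = string.Format(
        //     @"http add urlacl url=http://+:8888/ user=""{0}""",
        //     EveryoneAccountName());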

    Read the article

  • Do the ‘up to date’ guarantees for values of Java's final fields extend to indirect references?

    - by mattbh
    The Java language spec defines the semantics of final fields in section 17.5: "The usage model for final fields is a simple one. Set the final fields for an object in that object's constructor. Do not write a reference to the object being constructed in a place where another thread can see it before the object's constructor is finished. If this is followed, then when the object is seen by another thread, that thread will always see the correctly constructed version of that object's final fields. It will also see versions of any object or array referenced by those final fields that are at least as up-to-date as the final fields are."

    My question is: does the 'up-to-date' guarantee extend to the contents of nested arrays and nested objects? An example scenario:
    1. Thread A constructs a HashMap of ArrayLists, then assigns the HashMap to the final field 'myFinal' in an instance of class 'MyClass'.
    2. Thread B sees a (non-synchronized) reference to the MyClass instance, reads 'myFinal', and accesses and reads the contents of one of the ArrayLists.

    In this scenario, are the members of the ArrayList as seen by Thread B guaranteed to be at least as up to date as they were when MyClass's constructor completed? I'm looking for clarification of the semantics of the Java Memory Model and language spec, rather than alternative solutions like synchronization. My dream answer would be a yes or no, with a reference to the relevant text.
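    For concreteness, a minimal sketch of the scenario (class and field names as in the question; the construction code is illustrative):

        import java.util.*;

        class MyClass {
            final HashMap<String, ArrayList<String>> myFinal;  // set once, in the constructor

            MyClass() {
                HashMap<String, ArrayList<String>> m =
                    new HashMap<String, ArrayList<String>>();
                m.put("key", new ArrayList<String>(Arrays.asList("a", "b")));
                myFinal = m;
                // JLS 17.5: the writes above happen before the freeze of myFinal;
                // the question is whether reads reached *through* myFinal
                // (map -> list -> elements) inherit the up-to-date guarantee.
            }
        }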

    Read the article

  • Mixing NIO with IO

    - by Steffen Heil
    Hi. Usually you have a single bound TCP port and several connections on it; at least there are usually more connections than bound ports. My case is different: I want to bind a lot of ports and usually have no (or at least very few) connections. So I want to use NIO to accept the incoming connections. However, I need to pass the accepted connections to the existing JSch SSH library. That requires IO sockets instead of NIO sockets, and it spawns one (or two) thread(s) per connection, but that's fine for me. Now, I thought that the following lines would deliver the very same result:

        Socket a = serverSocketChannel.accept().socket();

        Socket b = serverSocketChannel.socket().accept();

        SocketChannel channel = serverSocketChannel.accept();
        channel.configureBlocking(true);
        Socket c = channel.socket();

        Socket d = serverSocket.accept();

    However, the getInputStream() and getOutputStream() methods of the returned sockets seem to behave differently. Only if the socket was accepted using the last call can JSch work with it. In the first three cases it fails (and I am sorry: I don't know why). So is there a way to convert such a socket? Regards, Steffen

    Read the article

  • Jasper Reports and iReport issue

    - by William
    I am having an issue with JasperReports I cannot solve. I am using Eclipse, OpenReports 3.2 and iReport 3.7. The issue is that the report does nothing. When I preview the report in iReport I at least get a "Document has no pages" message, but when I try to open it using OpenReports it doesn't do anything: I get the OpenReports header and the copyright message but nothing between them. I was able to track it down to line 150 in ReportRunAction.java in OpenReports. That line is:

        jasperPrint = jasperEngine.fillReport(reportInput);

    At least that is the line the page dies on; I can't swear that the issue isn't that parameter. Through looking around, all I have been able to find is something about how the report needs to be compiled with the same version of jasperreports.jar that OpenReports uses. I have no idea how to tell if/what version of JasperReports is being bundled into the .jasper file, though. Is that my problem? If so, how do I tell/set the version of the jar that gets bundled? If not: help!

    Read the article

  • iPhone JSON object releasing itself?

    - by MidnightLightning
    I'm using the JSON Framework add-on for iPhone's Objective-C to catch a JSON object that's an array of dictionary-style objects via HTTP. Here's my connectionDidFinishLoading function:

        - (void)connectionDidFinishLoading:(NSURLConnection *)connection {
            [connection release];
            NSString *responseString = [[NSString alloc] initWithData:responseData
                                                             encoding:NSUTF8StringEncoding];
            [loadingIndicator stopAnimating];
            NSArray *responseArray = [responseString JSONValue]; // Grab the JSON array of dictionaries
            NSLog(@"Response Array: %@", responseArray);
            if ([responseArray respondsToSelector:@selector(count)]) {
                NSLog(@"Returned %@ items", [responseArray count]);
            }
            [responseArray release];
            [responseString release];
        }

    The issue is that the code is throwing an EXC_BAD_ACCESS error on the second NSLog line. The EXC_BAD_ACCESS error, I think, indicates that the variable got released from memory, but the first NSLog command works just fine (and shows that the data is all there); it seems that only calling the count message causes the error, yet the respondsToSelector call at least thinks that responseArray should be able to respond to that message. When running with the debugger, it crashes on that second line, but the stack shows that the responseArray object is still defined and has 12 objects in it (so the debugger at least is able to get an accurate count of the contents of that variable). Is this a problem with the JSON framework's creation of that NSArray, or is there something wrong with my code?

    Read the article

  • ASP MVC Ajax Controller pattern?

    - by Kevin Won
    My MVC app tends to have a lot of AJAX calls (via jQuery.get()). It's sort of bugging me that my controller is littered with many tiny methods that get called via AJAX. It seems to me to break the MVC pattern a bit: the controller is now acting more as a data-access component than a URI router. I refactored so that my 'true' controller for a page performs only standard routing responses (returning ActionResult objects). So a call to /home/ will obviously kick up the HomeController class, which will respond in the canonical controller fashion by returning a plain-jane View. I then moved my AJAX stuff into a new controller class whose name I'm prefacing with 'Ajax'. So, for example, my page might have three different sections of functionality (say shopping cart or user account), and I have an AJAX controller for each of these (AjaxCartController, AjaxAccountController). There is really nothing different about moving the AJAX calls into their own class; it's just to keep things cleaner. On the client side, the jQuery would then use this new controller thusly:

        // jquery pseudocode: call to the specific controller that just handles ajax calls
        $.get('AjaxAccount/Details'....

    (1) Is there a better pattern in MVC for responding to AJAX calls? (2) It seems to me that the MVC model is a bit leaky when it comes to AJAX; it's not really 'controlling' anything, it just happens to be the best and least painful way of handling AJAX calls (or am I ignorant)? In other words, the 'Controller' abstraction doesn't seem to play nice with AJAX (at least from a patterns perspective). Is there something I'm missing?
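    A sketch of what one of those AJAX-only controllers might look like (ASP.NET MVC 2-style; the controller and action names follow the question's example, the action body is hypothetical):

        public class AjaxAccountController : Controller
        {
            // GET /AjaxAccount/Details
            public JsonResult Details()
            {
                var details = new { Name = "sample", Balance = 0m }; // stand-in data
                return Json(details, JsonRequestBehavior.AllowGet);
            }
        }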

    Read the article

  • Transition from 2D to 3D later in game development

    - by Axarydax
    Hi, I'd like to work on a game, but to prototype it rapidly I'd like to keep it as simple as possible. So I'd do everything in top-down 2D with GDI+ and WinForms (hey, I like them!), so I can concentrate on the logic and architecture of the game itself. I'm thinking about having the whole game logic (server) in one assembly, where the WinForms app would be a client to that game; if/when the time is right, I'd write a 3D client. I am tempted to use XNA, but I haven't really looked into it, so I don't know whether getting up to speed would take too much time. I really don't want to spend much time on anything other than the game logic, at least while I have the inspiration, but then I wouldn't have to abandon everything and move to a new platform when transitioning from 2D to 3D. Another idea is just to get over it and learn XNA/Unity/SDL/something, at least to the level where I could make the same 2D version as I could in GDI+, so I won't have to worry about switching frameworks any more. Let's just say the game is the kind where you watch a dude from behind, run around the game world and interact with objects, so the bird's-eye perspective could be doable for now. Thanks.

    Read the article

  • Getting browser to make an AJAX call ASAP, while page is still loading

    - by Chris
    I'm looking for tips on how to get the browser to kick off an AJAX call as soon as possible while processing a web page, ideally before the page has fully downloaded. Here's my approximate motivation: I have a web page that takes, say, 5 seconds to load, and it calls a web service that takes, say, 10 seconds to respond. If loading the page and calling the web service happened sequentially, the user would have to wait 15 seconds to see all the information. However, if I can get the web service call started before the 5-second page load completes, at least some of the work happens in parallel; ideally, as much of it as possible. My initial theory was that I should place the AJAX-calling JavaScript as high as possible in the page's HTML source (being mindful to put it after the jquery.js include, because I'm making the call using jQuery's ajax), and to not wrap the AJAX call in a jQuery ready event handler (I mention this because ready events are popular in a lot of jQuery example code). However, the AJAX call still doesn't seem to get kicked off as early as I'm hoping (at least as judged by the Google Chrome "Timeline" feature), so I'm wondering what other considerations apply. One thing that might be detrimental: the AJAX call goes back to the same web server that serves the original page, so I might be in danger of hitting the browser's limit on the number of HTTP connections to that one machine (the HTML page loads a number of images, CSS files, etc.).
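    A minimal sketch of the layout being described (the service URL is hypothetical, and the .done() deferred syntax assumes jQuery 1.5+; older versions would pass a callback argument to $.get instead):

        <script src="jquery.js"></script>
        <script>
          // Not wrapped in $(document).ready(): this runs as soon as the
          // parser reaches it, so the request starts while the rest of
          // the page is still downloading.
          window.earlyCall = $.get('/slow-web-service');
        </script>
        <!-- ... rest of the page: images, CSS, markup ... -->
        <script>
          window.earlyCall.done(function (data) {
            // render once both the page and the service response are ready
          });
        </script>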

    Read the article

  • To (monkey)patch or not to (monkey)patch, that is the question

    - by gsakkis
    I was talking to a colleague about one rather unexpected/undesired behavior of a package we use. Although there is an easy fix (or at least a workaround) on our end without any apparent side effect, he strongly suggested extending the relevant code by hard-patching it and posting the patch upstream, hopefully to be accepted at some point in the future. In fact, we maintain patches against specific versions of several packages that are applied automatically on each new build. The main argument is that this is the right thing to do, as opposed to an "ugly" workaround or a fragile monkey patch. On the other hand, I favor practicality over purity, and my general rule of thumb is "no patch" > "monkey patch" > "hard patch", at least for anything other than a (critical) bug fix. So I'm wondering whether there is a consensus on when it's better to hard-patch, monkey-patch, or just try to work around a third-party package that doesn't do exactly what one would like. Does it mainly depend on the reason for the patch (e.g. fixing a bug, modifying behavior, adding a missing feature), on the given package (size, complexity, maturity, developer responsiveness), on something else, or are there no general rules, so one should decide on a case-by-case basis?

    Read the article

  • jquery form validation: validation script specified externally

    - by Abu Hamzah
    I have jQuery form validation in the master page and it works fine; I got it working from this article: http://www.dotnetcurry.com/ShowArticle.aspx?ID=310. My question is: if I move the validation script to an external file and add a reference to it in my page, it stops working and says "object expected". Here is what I have done. In my content page (I am using a master page, ASP.NET):

        <script src="myform_validation.js" type="text/javascript"></script>
        <script type="text/javascript">
            $(document).ready(function() {
                ValidateMe(this);
            });
        </script>

    Below is the external .js file:

        function ValidateMe() {
            $("#aspnetForm").validate({
                rules: {
                    <%=TextBox1.UniqueID %>: {
                        maxlength: 1,
                        //minlength: 12,
                        required: true
                    },
                    <%=TextBox2.UniqueID %>: {
                        minlength: 12,
                        required: true
                    },
                    <%=TextBox3.UniqueID %>: {
                        minlength: 12,
                        required: true
                    }
                },
                messages: {
                    <%=TextBox1.UniqueID %>: {
                        required: "Enter your firstname",
                        minlength: jQuery.format("Enter at least {0} characters")
                    },
                    <%=TextBox2.UniqueID %>: {
                        required: "Please enter a valid email address",
                        minlength: "Please enter a valid email address"
                    },
                    <%=TextBox3.UniqueID %>: {
                        required: "Enter your firstname",
                        minlength: jQuery.format("Enter at least {0} characters")
                    }
                },
                success: function(label) {
                    // set &nbsp; as text for IE
                    label.html("&nbsp;").addClass("checked");
                }
            });
        }

    Read the article

  • Help a C# developer understand: What is a monad?

    - by Charlie Flowers
    There is a lot of talk about monads these days. I have read a few articles / blog posts, but I can't go far enough with their examples to fully grasp the concept. The reason is that monads are a functional language concept, and thus the examples are in languages I haven't worked with (since I haven't used a functional language in depth). I can't grasp the syntax deeply enough to follow the articles fully ... but I can tell there's something worth understanding there. However, I know C# pretty well, including lambda expressions and other functional features. I know C# only has a subset of functional features, and so maybe monads can't be expressed in C#. However, surely it is possible to convey the concept? At least I hope so. Maybe you can present a C# example as a foundation, and then describe what a C# developer would wish he could do from there but can't because the language lacks functional programming features. This would be fantastic, because it would convey the intent and benefits of monads. So here's my question: What is the best explanation you can give of monads to a C# 3 developer? Thanks! (EDIT: By the way, I know there are at least 3 "what is a monad" questions already on SO. However, I face the same problem with them ... so this question is needed imo, because of the C#-developer focus. Thanks.)
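    For readers landing here, a minimal sketch of the kind of answer being requested: a Maybe/Option type in C# 3 terms, where Return wraps a value and Bind chains computations that may produce nothing. The names and design are illustrative, not a canonical definition:

        using System;

        public class Maybe<T>
        {
            public readonly bool HasValue;
            public readonly T Value;
            private Maybe(bool hasValue, T value) { HasValue = hasValue; Value = value; }

            public static Maybe<T> Return(T value) { return new Maybe<T>(true, value); }
            public static Maybe<T> Nothing() { return new Maybe<T>(false, default(T)); }

            // Bind runs the next step only if there is a value; a Nothing
            // anywhere in the chain short-circuits the rest.
            public Maybe<U> Bind<U>(Func<T, Maybe<U>> f)
            {
                return HasValue ? f(Value) : Maybe<U>.Nothing();
            }
        }

        // Usage: Maybe<int>.Return(2).Bind(x => Maybe<int>.Return(x * 3))
        // yields a Maybe holding 6; replace either step with Nothing() and
        // the whole chain yields Nothing, without any null checks.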

    Read the article

  • cfchart ignores my scalefrom value

    - by Monte Chan
    Hi all, I have the following code in my page (the style variable holds a custom style):

        <cfchart chartheight="450" chartwidth="550" gridlines="9"
                 yaxistitle="Score" scalefrom="20" scaleto="100"
                 style="#style#" format="png">
            <cfchartseries query="variables.chart_query" type="scatter"
                           seriescolor="##000000" itemcolumn="MyItem"
                           valuecolumn="MyScore"/>
        </cfchart>

    Before I begin, please look at http://www.monteandjanicechan.com/chart_good.jpg. This is how I want my report to come out. On the x-axis there will always be three items, as long as at least one of them has values. If an item does not have any values (i.e. 2010), there is no marker in the chart. The problem occurs only when just one item has values: see http://www.monteandjanicechan.com/chart_bad.jpg. As you can see, 2008 and 2010 have no values, and the y-axis is now scaled from 0 to 100. I have tried giving one of the items (e.g. 2008) a value of 0 or something off the chart; it then scales according to that off-the-chart value and the 2009 value. In short, I have to have at least two items with values between 20 and 100 for cfchart to scale from 20 to 100. My question is: how can I correct this so that cfchart ALWAYS scales from 20 to 100? I am running CF9. Thanks in advance, Monte

    Read the article

  • Viewstate in a .ashx Handler?

    - by Matt Dawdy
    I've got a handler (list.ashx, for example) with a method that retrieves a large dataset and then grabs only the records that will be shown on any given "page" of data. We allow users to sort these results. So on any given page run, I will be retrieving a dataset that I just fetched a few seconds or minutes ago, but reordering it, or showing the next page of data, etc. My point is that my dataset really hasn't changed. Normally the dataset would be stuck into the viewstate of a page, but since I'm using a handler I don't have that convenience; at least I don't think so. So, what is a common way to store the viewstate associated with a user's current page when using a handler? Is there a way to take the dataset, encode it somehow, send it back to the user, and then on the next call have it passed back and rehydrate a dataset from those bits? I don't think Session would be a good place to store it, since we might have 1000 users all viewing different datasets of different data, and that could bring the server to its knees. At least I think so. Does anyone have experience with this kind of situation, and can you give me any advice?

    Read the article

  • Should I move big data blobs in JSON or in separate binary connection?

    - by Amagrammer
    QUESTION: Is it better to send large data blobs in JSON for simplicity, or to send them as binary data over a separate connection? If the former, can you offer tips on how to optimize the JSON to minimize size? If the latter, is it worth logically connecting the JSON data to the binary data using an identifier that appears in both, e.g. as "data": "<unique identifier>" in the JSON, with the first bytes of the data blob being <unique identifier>?

    CONTEXT: My iPhone application needs to receive JSON data over the 3G network. This means that I need to think seriously about efficiency of data transfer, as well as the load on the CPU. Most of the data transfers will be relatively small packets of text data, for which JSON is a natural format and for which there is no point in worrying much about efficiency. However, some of the most critical transfers will be big blobs of binary data: definitely at least 100 kilobytes, and possibly closer to 1 megabyte as customers accumulate a longer history with the product. (Note: I will be caching what I can on the iPhone itself, but the data still has to be transferred at least once.) It is NOT streaming data. I will probably use a third-party JSON SDK; the one I am using during development is here. Thanks

    Read the article
