Search Results

Search found 2585 results on 104 pages for 'forensic analysis'.


  • How many users are sufficient to create a heavy load for a web application?

    - by galymzhan
    I have a web application which has been suffering from high load in recent days. The application runs on a single server with an 8-core Intel CPU and 4 GB of RAM. Software: Drupal 5 (Apache 2, PHP 5, MySQL 5) running on Debian. After reaching 500 authenticated and 200 anonymous simultaneous users, the application's performance degrades drastically, up to total failure. The biggest load comes from authenticated users, whose activity causes inserts/updates/deletes on the database. I think MySQL is the bottleneck. Is it normal to slow down at this number of users?

    EDIT: I forgot to mention that I did some profiling. I ran top and htop, and they showed that all memory was being used by MySQL! After some time MySQL starts to perform terribly slowly, the site goes down, and we have to restart/stop Apache to reduce the load. The administrators said there were about 200 active MySQL connections at that moment. The worst part is that we need to solve this ASAP, and I can't do deep profiling or code refactoring, so I'm considering two options: first, my tables are MyISAM, and I have heard MyISAM uses table-level locking, which is very slow. Is that right, and could I switch to InnoDB without worry? Second, what if I move MySQL to a dedicated machine with a lot of RAM?

  • Google Maps API keys to be set webserver-wide (as env var? inside Apache?)

    - by ~knb
    I have a web site with many virtual hosts, each registered with several domain names (ending in .org, .de): site1.mysite.de, site2.mysite.org. I also have different templating systems, based on several programming languages (Perl and PHP), in use on the web server. The Google Maps API requires a unique API key for each vhost. I want something like a server-wide variable $goomapkey that I can read from inside my code. In PHP I currently have a kludgy case-analysis solution like this inside my PHP-based CMS:

      $domain = substr($_SERVER['SERVER_NAME'], -3);
      if (".de" == $domain) {
          // site1.de
          $gookey = "ABQIAAAA1Js...";
      } elseif ("dev" == substr($_SERVER['SERVER_NAME'], 0, 3)) {
          // dev.mysite.org
          $gookey = "ABQIAAAA1JsSb...";
      } else {
          // www.mysite.org
          $gookey = "ABQIAAAA1JsS...";
          // TODO: add more keys for each virtual host, for my.machinename.de,
          // IP-address-based URLs, ...
      }

    A non-ideal solution, because it is PHP-only, I still have to set it in several HTML templates inside the CMS, and there are too many cases. I want the Google Maps API key to be set by the Apache web server, which examines the request early in the request cycle, before any PHP page template code is constructed and evaluated. Is an environment variable a good solution? Which technology should be used to set the $goomapkey variable? I'd prefer a mod_perl2 Apache request handler, but its documentation is confusing (many API changes in the past). Which Apache module could I use? Is there a built-in Apache module that does the same thing?
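    For illustration, one possible Apache-level shape for this (a sketch, not from the original post; it assumes mod_env is enabled, and the vhost names and key values are placeholders) is a per-vhost SetEnv directive:

      <VirtualHost *:80>
          ServerName site1.mysite.de
          # each vhost carries its own key
          SetEnv GOOMAPKEY ABQIAAAA1Js...
      </VirtualHost>

      <VirtualHost *:80>
          ServerName www.mysite.org
          SetEnv GOOMAPKEY ABQIAAAA1JsS...
      </VirtualHost>

    PHP would then see the value in $_SERVER['GOOMAPKEY'] (or via getenv()), and for mod_perl the analogous PerlSetEnv directive exposes it in %ENV, so the per-vhost case analysis disappears from the templates.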

  • Communication between lexer and parser

    - by FredOverflow
    Every time I write a simple lexer and parser, I stumble upon the same question: how should the lexer and the parser communicate? I see four different approaches:

    1. The lexer eagerly converts the entire input string into a vector of tokens. Once this is done, the vector is fed to the parser, which converts it into a tree. This is by far the simplest solution to implement, but since all tokens are stored in memory, it wastes a lot of space.

    2. Each time the lexer finds a token, it invokes a function on the parser, passing the current token. In my experience, this only works if the parser can naturally be implemented as a state machine, as LALR parsers can. By contrast, I don't think it would work at all for recursive descent parsers.

    3. Each time the parser needs a token, it asks the lexer for the next one. This is very easy to implement in C# thanks to the yield keyword, but quite hard in C++, which doesn't have it.

    4. The lexer and parser communicate through an asynchronous queue. This is commonly known as "producer/consumer", and it should simplify the communication between the lexer and the parser a lot. Does it also outperform the other solutions on multicore machines? Or is lexing too trivial?

    Is my analysis sound? Are there other approaches I haven't thought of? What is used in real-world compilers? It would be really cool if compiler writers like Eric Lippert could shed some light on this issue.
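    For illustration, approach 3 is also natural in any language with generators; a minimal Python sketch (the token set and grammar are invented for the example):

      import re

      TOKEN = re.compile(r"\s*(?:(\d+)|(\S))")

      def lex(text):
          # Lazy lexer: yields one token at a time, like C#'s yield return.
          for match in TOKEN.finditer(text):
              number, op = match.groups()
              yield ("NUM", int(number)) if number else ("OP", op)
          yield ("EOF", None)

      class Parser:
          # Recursive-descent parser that pulls tokens on demand (approach 3).
          def __init__(self, tokens):
              self.tokens = tokens
              self.current = next(tokens)

          def advance(self):
              self.current = next(self.tokens)

          def parse_expr(self):  # expr := NUM (OP NUM)*, flat left-associative toy
              kind, value = self.current
              assert kind == "NUM"
              node = value
              self.advance()
              while self.current[0] == "OP":
                  op = self.current[1]
                  self.advance()
                  kind, rhs = self.current
                  assert kind == "NUM"
                  node = (op, node, rhs)
                  self.advance()
              return node

      print(Parser(lex("1 + 2 * 3")).parse_expr())   # ('*', ('+', 1, 2), 3)

    Only one token is alive at a time, so it has the memory profile of approach 3 without any explicit coroutine machinery.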

  • Organising UI code in .NET forms

    - by sb3700
    Hi. I'm someone who has taught myself programming and hasn't had any formal training in .NET. A while back I started C# in order to develop a GUI program to control sensors, and the project has blossomed. I was just wondering how best to organise the code, particularly the UI code, in my forms. My forms currently are a mess, or at least seem a mess to me:

    - I have a constructor which initialises all the parameters and creates events.
    - I have a giant State property, controlled by a States enum, which updates the Enabled state of all my form controls as users progress through the application (i.e. disconnected, connected, setup, scanning).
    - I have 3-10 private variables accessed through properties, some of which have side effects that change the values of form elements.
    - I have a lot of "UpdateXXX" functions to handle UI elements that depend on other UI elements (e.g. if a sensor is changed, then change the baud rate drop-down list). They are separated into regions.
    - I have a lot of events calling these Update functions.
    - I have a background worker which does all the scanning and analysis.

    My problem is that this seems like a mess, particularly the State property, and is getting unmaintainable. Also, my application logic and UI code are in the same file and to some degree intermingled, which seems wrong and means I need to do a lot of scrolling to find what I need. How do you structure your .NET forms? Thanks

  • What are salesforce.com and Apex like as an application development platform?

    - by mhollers
    I have recently discovered that salesforce.com is much more than an online CRM, after coming across a Morrison's case study in which they developed a works management application. I've been trying it out with a view to recreating our own works management system on the platform. My background is in Microsoft and .NET, and the obvious first choice would be ASP.NET. However, I'm really the only one with .NET experience (my manager has a more legacy Synergy programming background), I am self-taught, and I am looking at evaluating other RAD options (e.g. Ironspeed). The nature of the business is, in the main, 2-5 concurrent construction-type contracts that run for 3-5 years each, each requiring 15-50 system users. Traditionally we have used our character-based works management system for everything and tweaked it for each contract. The Salesforce licensing model on the face of it suits this sort of flexibility, but I'm worried about the development flexibility/learning curve and all the issues that surround lock-in. There doesn't seem to be much neutral, sober analysis of the platform on the web that isn't Salesforce's own material/blogs. Has anyone any experience of developing an application on Salesforce as compared to the more "traditional" .NET route?

  • Better language or checking tool?

    - by rwallace
    This is primarily aimed at programmers who use unmanaged languages like C and C++ in preference to managed languages, forgoing some forms of error checking to obtain benefits like the ability to work in extremely resource-constrained systems or the last increment of performance, though I would also be interested in answers from those who use managed languages. Which of the following would be of most value?

    1. A language that would optionally compile to CLR byte code or to machine code via C, and would provide things like optional array bounds checking, more support for memory management in environments where you can't use garbage collection, and faster compile times than typical C++ projects. (Think e.g. Ada or Eiffel with Python syntax.)

    2. A tool that would take existing C code and perform static analysis to look for things like potential null pointer dereferences and array overflows. (Think e.g. an open-source equivalent to Coverity.)

    3. Something else I haven't thought of.

    Or, put another way, when you're using C-family languages, is the top of your wish list more expressiveness, better error checking, or something else? The reason I'm asking is that I have a design and prototype parser for #1, and an outline design for #2, and I'm wondering which would be the better use of resources to work on after my current project is up and running; but I think the answers may be useful for other tools programmers also. (As usual with questions of this nature, if the answer you would give is already there, please upvote it.)
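    As a toy illustration of the kind of check a #2-style tool performs (a deliberately naive, flow-insensitive sketch; it analyzes Python source with the standard ast module rather than C, but the shape of the analysis is the same):

      import ast

      SOURCE = """
      def handler(msg):
          conn = None
          conn.send(msg)
      """

      def check_none_deref(source):
          # Warn when a name is assigned None and later used as `name.attr`
          # before being reassigned. No control-flow analysis: a toy only.
          tree = ast.parse(source)
          for func in (n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)):
              maybe_none = set()
              for stmt in func.body:
                  if isinstance(stmt, ast.Assign) and isinstance(stmt.targets[0], ast.Name):
                      name = stmt.targets[0].id
                      if isinstance(stmt.value, ast.Constant) and stmt.value.value is None:
                          maybe_none.add(name)
                      else:
                          maybe_none.discard(name)
                  for node in ast.walk(stmt):
                      if (isinstance(node, ast.Attribute)
                              and isinstance(node.value, ast.Name)
                              and node.value.id in maybe_none):
                          print("line %d: %s.%s may dereference None"
                                % (node.lineno, node.value.id, node.attr))

      check_none_deref(SOURCE)   # -> line 4: conn.send may dereference None

    A production tool like Coverity does the same thing with real data-flow and path analysis; the sketch only shows where the warnings come from.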

  • Can you do this with Hudson?

    - by damian
    I want to create a Hudson job that takes an id as a parameter and uses that id to compute the SVN repository path. Where I work, you have an SVN path for every issue that you resolve, and then all the issues are joined into a single SVN path. What I want to do is run static code analysis on the individual issues, so I think maybe I could have an Ant build.xml that I use for every issue and parametrize the job with the issue id. I have tried to achieve that, but the parameter is not substituted into the SVN path. I have tried with #issueId, %issueId%, ${issueId} and ${env.issueId} without success. It fails with errors like:

      Location 'http://svn-path:8181/svn/devSet/issues/${env.chuid}' does not exist
      Checking out a fresh workspace because C:\Documents and Settings\dnoseda\.hudson\jobs\test\workspace\${env.chuid} doesn't exist
      Checking out http://svn-path:8181/svn/devSet/issues/${env.chuid}
      ERROR: Failed to check out http://svn-path:8181/svn/devSet/issues/${env.chuid}
      org.tmatesoft.svn.core.SVNException: svn: '/svn/!svn/bc/46190/devSet/issues/$%7Benv.chuid%7D' path not found: 404 Not Found (http://svn-path:8181)
          at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:64)
          at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:51)
          at

    I am starting to think that I cannot do what I want. Do you know how I can set up the configuration to achieve this? Thanks for any help.

    Edit: The section of the job configuration where I want to use this parameter is this:

      <scm class="hudson.scm.SubversionSCM">
        <locations>
          <hudson.scm.SubversionSCM_-ModuleLocation>
            <remote>http://svn-path:8181/svn/devSet/issues/${env.issueid}</remote>
          </hudson.scm.SubversionSCM_-ModuleLocation>
        </locations>

  • Unknown ListView Behavior

    - by st0le
    I'm currently making an SMS application in Android. The following is a code snippet from my inbox ListActivity. I have requested a cursor from the ContentResolver and used a custom adapter to add custom views to the list. In the custom view I've got two TextViews (tvFullBody, tvBody): tvFullBody contains the full SMS text while tvBody contains a short preview (35 characters). The tvFullBody visibility is set to GONE by default. My idea is, when the user clicks on a list item, tvBody should disappear (GONE) and tvFullBody should become visible (VISIBLE). On clicking again, it should revert to its original state.

      // isExpanded is a BitSet of size = no of list items...
      // keeps track of which items are expanded and which are not
      @Override
      protected void onListItemClick(ListView l, View v, int position, long id) {
          if (isExpanded.get(position)) {
              v.findViewById(R.id.tvFullBody).setVisibility(View.GONE);
              v.findViewById(R.id.tvBody).setVisibility(View.VISIBLE);
          } else {
              v.findViewById(R.id.tvFullBody).setVisibility(View.VISIBLE);
              v.findViewById(R.id.tvBody).setVisibility(View.GONE);
          }
          isExpanded.flip(position);
          super.onListItemClick(l, v, position, id);
      }

    The code works as it is supposed to :) except for an undesired side effect: every 10th (or so) list item also gets "toggled". E.g. if I expand the 1st list item, the 11th and 21st list items are also expanded... Although they remain off screen, on scrolling you get to see the undesired "expansion". By my novice analysis, I'm guessing ListView keeps track of the 10 or so list items that are currently visible and, upon scrolling, "reuses" those same views, which is causing this problem (I didn't check the Android source code yet). I'd be grateful for any suggestion on how I should tackle this! :) I'm open to alternative methods as well... Thanks in advance! :)

  • Looking for: nosql (redis/mongodb) based event logging for Django

    - by Parand
    I'm looking for a flexible event-logging platform for Django that stores both pre-defined (username, IP address) and non-pre-defined (generated as needed by any piece of code) events. I'm currently doing some of this with log files, but that ends up requiring various analysis scripts and the data lands in a DB anyway, so I'm considering throwing it immediately into a NoSQL store such as MongoDB or Redis. The idea is to be able to easily query, for example, which IP address the user most commonly comes from, whether the user has ever performed some action, the outcome of a specific event, etc. Is there something that already does this? If not, I'm thinking of this: the "event" is a dictionary attached to the request object. Middleware fills in various pieces (username, IP, SQL timing), and code fills in the rest as needed. After the request is served, a post-request hook drops the event into MongoDB/Redis, normalizing various fields (e.g. incrementing the username:ip address counter) and dropping the rest in as-is. Words of wisdom / pointers to code that does some or all of this would be appreciated.
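    A minimal sketch of that design (assuming MongoDB via pymongo and new-style Django middleware; the database/collection names are invented):

      from pymongo import MongoClient

      events = MongoClient()["eventlog"]["events"]   # assumed DB/collection names

      class EventLogMiddleware:
          # Attach an event dict to the request; persist it after the response.
          def __init__(self, get_response):
              self.get_response = get_response

          def __call__(self, request):
              # pre-defined fields filled in by middleware
              request.event = {
                  "path": request.path,
                  "ip": request.META.get("REMOTE_ADDR"),
              }
              # assumes Django's auth middleware has already run
              if request.user.is_authenticated:
                  request.event["username"] = request.user.username

              response = self.get_response(request)  # views add ad-hoc keys

              # post-request hook: drop the event into the store as-is
              events.insert_one(request.event)
              return response

    Because MongoDB documents are schemaless, the non-pre-defined keys need no declaration, and queries like "most common IP per user" become simple aggregations.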

  • complex regular expression task

    - by Don Don
    Hi, what regular expressions do I need to extract the section titles in a text file? In the following sample text, I'd like to extract "Communication and Leadership", "1.Self-Knowledge", "2. Humility", and "(3) Clear Thinking". Many thanks.

      Communication and Leadership

      True leaders understand that, rather than forcing their followers into a preconceived mold, their job is to motivate and organize followers to collectively accomplish goals that are in everyone's interests. The ability to communicate this to co-workers and followers is critical to the effectiveness of leadership.

      1.Self-Knowledge

      Superior leaders are able to devote their skills and energies to leadership of a group because they have worked through personal issues to the point where they know themselves thoroughly. A high level of self-knowledge is a prerequisite to effective communication skills, because the things that you communicate as a leader are coming from within.

      2. Humility

      This subversion of personal preference requires a certain level of humility. Although popular definitions of leaders do not always see them as humble, the most effective leaders actually are. This humility may not be expressed in self-effacement, but in a total commitment to the goals of the organization. Humility requires an understanding of one's own relative unimportance in comparison to larger systems.

      (3) Clear Thinking

      Clarity of thinking translates into clarity of communication. A leader whose goals or personal analysis is muddled will tend to deliver unclear or ambiguous directions to followers, leading to confusion and dissatisfaction. A leader with a clear mind who is not ambivalent about her purposes will communicate what needs to be done in a straightforward and unmistakable manner.
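    One heuristic that works on this sample (a sketch; it assumes headings are short lines with an optional "1.", "2. " or "(3) " marker and no sentence punctuation, which real files may violate):

      import re

      heading = re.compile(
          r"^(?:\(?\d+\)?\s*[.)]?\s*)?"        # optional "1.", "2. ", "(3) " marker
          r"[A-Z][A-Za-z][A-Za-z \-]{1,58}$",  # short, title-like remainder
          re.MULTILINE,
      )

      text = open("sample.txt").read()          # assumed file name
      for title in heading.findall(text):
          print(title)
      # -> Communication and Leadership
      #    1.Self-Knowledge
      #    2. Humility
      #    (3) Clear Thinking

    Body sentences fail the match because they are long and contain commas and periods, which the character class excludes.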

  • Concept: Information Into Memory Location.

    - by Richeve S. Bebedor
    I am having trouble conceptualizing an algorithm to transform any information or data into an appropriate memory location in any data structure I devise. To give you an idea: I have a JPanel object instance, and I created another Container-type object instance of some subtype (note this is in Java, because I love this language), then I collected those instances into a data structure that is not specific to those instances but applicable to any type of object. Now my procedure for fetching that data again is to extract the object-specific features, similar in category for all objects in that data structure, and transform them into an integer memory location (as specifically as possible), or into whatever type of data suits this transformation. Then I can access that memory location without further sorting or O(n) searching (which I think is preferable, but I wanted to do it my own way XD). The data structure can be of any type: binary tree, linked list, array or set (and the like XD). What is important is that I don't need successive comparison and analysis of data just to locate information in big structures.

    To give you a technical idea: I have an array DS that contains a JLabel object instance with the specific name "HelloWorld", but DS also contains a multitude of other object types. Now this JLabel object is located in the array at index [124324] (and reaching that location with any kind of searching algorithm is conceivably slow, not least because the data structure used was an array; please disregard the efficiency of the data structure to be used, I just want to explain the concept XD). I want to map "HelloWorld" to 124324 using a conceptual function applicable to all data types, so that I can do a direct lookup, DS[extractLocation("HelloWorld")], to get that JLabel instance. I know this may sound crazy, but I want to test my concept of a non-sorting, feature-extracting search algorithm for any data structure, and my main problem is how to transform the information to be stored into the memory location where it was stored.
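    What this describes is essentially a hash function feeding a hash table: extractLocation is the hash. A minimal Python sketch (made-up fixed capacity, collisions ignored for brevity):

      CAPACITY = 1 << 16            # made-up table size
      slots = [None] * CAPACITY

      def extract_location(key):
          # transform the feature (here, a name) into an index, no searching
          return hash(key) % CAPACITY

      def put(key, value):
          slots[extract_location(key)] = (key, value)   # real tables must handle collisions

      def get(key):
          entry = slots[extract_location(key)]
          return entry[1] if entry is not None and entry[0] == key else None

      put("HelloWorld", "the JLabel instance")
      print(get("HelloWorld"))      # direct lookup: DS[extractLocation("HelloWorld")]

    In Java terms, this is the contract behind Object.hashCode and java.util.HashMap, so the concept is sound; the hard parts are collision handling and choosing features that hash well.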

  • What is an appropriate way to separate lifecycle events in the logging system?

    - by Hanno Fietz
    I have an application with many different parts. It runs on OSGi, so there are the bundle lifecycles, and there are a number of message processors and plugin components that can all die, be started and stopped, have their setup changed, etc. I want a way to get a good picture of the current system status: which components are up, which have problems, how long they have been running, and so on. I think that logging, especially in combination with custom appenders (I'm using log4j), is a good part of the solution and helps ad-hoc analysis as well as live monitoring. Normally I would classify lifecycle events as INFO level, but what I really want is to have them separate from whatever else is going on at INFO. I could create my own level, LIFECYCLE. The lifecycle events happen in various different areas and at various levels in the application hierarchy, and they happen in the same areas as other events that I want to separate them from. I could introduce some common lifecycle management and use that to distinguish the events from others; for instance, all components that have a lifecycle could implement a particular interface, and I would log by its name. Are there good examples of how this is done elsewhere? What are the considerations?
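    For illustration, here is the custom-level idea in Python's standard logging, as an analogy to a log4j custom level plus a dedicated appender (the level number 25 and the names are invented):

      import logging

      LIFECYCLE = 25                                  # between INFO (20) and WARNING (30)
      logging.addLevelName(LIFECYCLE, "LIFECYCLE")

      lifecycle_handler = logging.StreamHandler()     # stands in for a dedicated appender
      lifecycle_handler.addFilter(lambda rec: rec.levelno == LIFECYCLE)

      logger = logging.getLogger("app")
      logger.setLevel(logging.DEBUG)
      logger.addHandler(lifecycle_handler)

      logger.log(LIFECYCLE, "component %s started", "message-processor")
      logger.info("ordinary INFO event")              # filtered out of the lifecycle view

    log4j allows the analogous setup: a custom Level plus a filter on the appender, so lifecycle events land in their own destination regardless of which part of the hierarchy emits them.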

  • short-cutting equality checking in F#?

    - by John Clements
    In F#, the equality operator (=) is generally extensional rather than intensional. That's great! Unfortunately, it appears to me that F# does not use pointer equality to short-cut these extensional comparisons. For instance, this code:

      type Z = MT | NMT of Z ref

      // create a Z:
      let a = ref MT
      // make it point to itself:
      a := NMT a
      // check to see whether it's equal to itself:
      printf "a = a: %A\n" (a = a)

    ... gives me a big fat segmentation fault[*], despite the fact that 'a' and 'a' both evaluate to the same reference. That's not so great. Other functional languages (e.g. PLT Scheme) get this right, using pointer comparisons conservatively to return 'true' when equality can be determined from a pointer comparison. So: I'll accept the fact that F#'s equality operator doesn't use short-cutting; is there some way to perform an intensional (pointer-based) equality check? The (==) operator is not defined on my types, and I'd love it if someone could tell me that it's available somehow. Or tell me that I'm wrong in my analysis of the situation: I'd love that, too...

    [*] That would probably be a stack overflow on Windows; there are things about Mono that I'm not that fond of...
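    (Side note: in .NET, System.Object.ReferenceEquals performs a pure reference comparison, which may be the intensional check being asked for. For comparison, the conservative identity short-cut the poster wants is what CPython's == does per element; a sketch of the same self-referential experiment in Python:)

      a = []
      a.append(a)        # a self-referential structure, like NMT pointing to itself

      print(a == a)      # True: each element comparison short-circuits on identity

      b = []
      b.append(b)
      try:
          print(a == b)  # no identity short-cut applies, so comparison recurses
      except RecursionError:
          print("RecursionError")   # the analogue of the F# stack overflow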

  • Ways to access a 32bit DLL from a 64bit exe

    - by bufferz
    I have a project that must be compiled and run in 64-bit mode. Unfortunately, I am required to call a DLL that is only available in 32-bit mode, so there's no way I can house everything in a single Visual Studio project. I am working to find the best way to wrap the 32-bit DLL in its own exe/service and issue remote (although on the same machine) calls to that exe/service from my 64-bit app. My OS is Win7 Pro 64-bit. The required calls to this 32-bit process number several dozen per second, but with low data volume. This is a realtime image analysis application, so response time is critical despite the low volume: lots of sending/receiving of single primitives. Ideally, I would host a WCF service to house this DLL, but on a 64-bit OS one cannot force the service to run as x86! (Source.) That is really unfortunate, since I timed function calls to the WCF service at only 4 ms on my machine. I have experimented with named pipes in .NET and found them to be 40-50 times slower than WCF (unusable for me). Any other options or suggestions for the best way to approach my puzzle?

  • Is the Subversion 'stack' a realistic alternative to Team Foundation Server?

    - by Robert S.
    I'm evaluating Microsoft Team Foundation Server for my customer, who currently uses Visual SourceSafe and nothing else. They have explicitly expressed a desire to implement a more rigid and process-driven environment, as their application is in production and they have future releases to consider. The particular areas I'm trying to cover are:

    - Configuration management (e.g., source control)
    - Change management (workflow and documentation for change requests, tasks)
    - Release management (builds and deployments)
    - Incident and problem management (issues and bugs)
    - Document management (similar to source control, but available via web)
    - Code analysis constraints on check-ins
    - A testing framework
    - Reporting
    - Visual Studio 2008 integration

    TFS does all of these things quite well, but it's expensive and complex to maintain, and the inexpensive Workgroup edition doesn't scale. We don't get TFS as part of our MSDN subscription. Those problems can be overcome, but before I tell my customer to go the TFS route, which in itself isn't a terrible thing, I wanted to evaluate the alternatives. I know Subversion is often suggested for its configuration management/source control, but what about the other areas? Would a combination of Subversion/NUnit/Wiki/CruiseControl/NAnt/something else satisfy all of these requirements? What tools do I need to include in my evaluation? Or should I just bite the bullet and go with TFS, since we're already invested in the Microsoft stack?

  • JSF getter methods called BEFORE beforePhase fires

    - by Bill Leeper
    I got a recommendation to put all data lookups in the beforePhase for a given page. However, now that I am doing some deeper analysis, it appears that some getter methods are being called before beforePhase fires. It became very obvious when I added support for a URL parameter and started getting NPEs on objects that are initialized in the beforePhase call. Any thoughts? Have I set something up wrong? I have this in my JSP page (only the 5th line in the JSP file, right after the taglibs):

      <f:view beforePhase="#{someController.beforePhaseSummary}">

    Here is the code in the beforePhaseSummary method:

      public void beforePhaseSummary(PhaseEvent event) {
          logger.debug("Fired Before Phase Summary: " + event.getPhaseId());
          if (event.getPhaseId() == PhaseId.RENDER_RESPONSE) {
              HttpServletRequest request = (HttpServletRequest)
                  FacesContext.getCurrentInstance().getExternalContext().getRequest();
              if (request.getParameter("application_id") != null) {
                  loadApplication(Long.parseLong(request.getParameter("application_id")));
              }
              /* Do data fetches here */
          }
      }

    The logging above indicates that an event is fired, the servlet request is used to capture the URL parameters, and the data fetches gather data. However, the logging output is below:

      2010-04-23 13:44:46,968 [http-8080-4] DEBUG ...SomeController 61 - Get Permit
      2010-04-23 13:44:46,968 [http-8080-4] DEBUG ...SomeController 107 - Getting UnsubmittedCount
      2010-04-23 13:44:46,984 [http-8080-4] DEBUG ...SomeController 61 - Get Permit
      2010-04-23 13:44:47,031 [http-8080-4] DEBUG ...SomeController 133 - Fired Before Phase Summary: RENDER_RESPONSE(6)

    The logs indicate two calls to the getPermit method and one to getUnsubmittedCount before beforePhase is fired.

  • Faster or more memory-efficient solution in Python for this Codejam problem.

    - by jeroen.vangoey
    I tried my hand at this Google Code Jam Africa problem (the contest is already finished; I just did it to improve my programming skills).

    The Problem: You are hosting a party with G guests and notice that there is an odd number of guests! When planning the party you deliberately invited only couples and gave each couple a unique number C on their invitation. You would like to single out whoever came alone by asking all of the guests for their invitation numbers.

    The Input: The first line of input gives the number of cases, N. N test cases follow. For each test case there will be:

    - One line containing the value G, the number of guests.
    - One line containing a space-separated list of G integers. Each integer C indicates the invitation code of a guest.

    Output: For each test case, output one line containing "Case #x: " followed by the number C of the guest who is alone.

    The Limits: 1 <= N <= 50; 0 < C <= 2147483647. Small dataset: 3 <= G < 100. Large dataset: 3 <= G < 1000.

    Sample Input:

      3
      3
      1 2147483647 2147483647
      5
      3 4 7 4 3
      5
      2 10 2 10 5

    Sample Output:

      Case #1: 1
      Case #2: 7
      Case #3: 5

    This is the solution that I came up with:

      with open('A-large-practice.in') as f:
          lines = f.readlines()

      with open('A-large-practice.out', 'w') as output:
          N = int(lines[0])
          for testcase, i in enumerate(range(1, 2*N, 2)):
              G = int(lines[i])
              for guest in range(G):
                  codes = map(int, lines[i+1].split(' '))
                  alone = (c for c in codes if codes.count(c) == 1)
              output.write("Case #%d: %d\n" % (testcase + 1, alone.next()))

    It runs in 12 seconds on my machine with the large input. Now, my question is: can this solution be improved in Python to run in a shorter time or use less memory? The analysis of the problem gives some pointers on how to do this in Java and C++, but I can't translate those solutions back to Python.
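    One standard trick for exactly this problem (well known, though not taken from the contest analysis): XOR all the codes together. Paired values cancel, leaving the lone guest's code in O(G) time and O(1) extra memory:

      import functools
      import operator

      def lone_guest(codes):
          # x ^ x == 0 and x ^ 0 == x, so every couple cancels itself out
          return functools.reduce(operator.xor, codes)

      assert lone_guest([1, 2147483647, 2147483647]) == 1
      assert lone_guest([3, 4, 7, 4, 3]) == 7
      assert lone_guest([2, 10, 2, 10, 5]) == 5

    By contrast, calling codes.count(c) while scanning is O(G) per element and O(G^2) overall, and the inner for-guest loop in the posted code repeats even that work G times, which is where the 12 seconds go.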

  • Should core application configuration be stored in the database, and if so what should be done to se

    - by Rl
    I'm writing an application around a lot of hierarchical data. Currently the hierarchy is fixed, but it's likely that new items will be added to it in the future (please let them be leaves). My current application and database design is fairly generic, and nothing dealing with specific nodes in the hierarchy is hardcoded, with the exception of validation and lookup functions written to retrieve external data from each node's particular database. This pleases me from a design point of view, but I'm nervous at the realization that the entire application rests on a handful of records in the database. I'm also frustrated that I have to enforce certain aspects of data integrity with database triggers rather than with foreign key constraints (an example is where several different nodes in the hierarchy have their own proprietary IDs and I store them in a single column which, when coupled with the node ID, can be used to locate the foreign data). I'm starting to wonder whether it would have been appropriate to simply hardcode these known nodes into the system so that it would be more "type safe" and less generic. How does one know when something should be hardcoded and when it should be a configuration item? Is it just a cost-benefit analysis of clarity/safety now vs. less work later, or am I missing some metric I should be using to determine whether or not this is appropriate? The steps I'm taking to protect these valuable configurations are to add triggers that prevent updates/deletes, and the database user that this application uses will only be able to manipulate data through stored procedures. What else can I do?

  • Retain numerical precision in an R data frame?

    - by David
    When I create a data frame from numeric vectors, R seems to truncate the value below the precision that I require in my analysis: data.frame(x=0.99999996) returns 1 (see update 1). I am stuck when fitting spline(x,y) and two of the x values are set to 1 due to rounding while y changes. I could hack around this, but I would prefer to use a standard solution if available.

    Example: here is an example data set:

      d <- data.frame(x = c(0.668732936336141, 0.95351462456867, 0.994620622127435,
                            0.999602102672081, 0.999987126195509, 0.999999955814133,
                            0.999999999999966),
                      y = c(38.3026509783688, 11.5895099585560, 10.0443344234229,
                            9.86152339768516, 9.84461434575695, 9.81648333804257,
                            9.83306725758297))

    The following solution works, but I would prefer something that is less subjective:

      plot(d$x, d$y, ylim=c(0,50))
      lines(spline(d$x, d$y), col='grey')                       # bad fit
      lines(spline(d[-c(4:6),]$x, d[-c(4:6),]$y), col='red')    # reasonable fit

    Update 1: Since posting this question, I realize that R will print 1 even though the data frame still contains the original value, e.g.

      > dput(data.frame(x=0.99999999996))

    returns

      structure(list(x = 0.99999999996), .Names = "x", row.names = c(NA, -1L),
                class = "data.frame")

    Update 2: After using dput to post this example data set, and some pointers from Dirk, I can see that the problem is not in the truncation of the x values but in the limits of the numerical errors in the model that I have used to calculate y. This justifies dropping a few of the equivalent data points (as in the example red line).

  • parsing python to csv

    - by user185955
    I'm trying to download some game stats to do some analysis; the only problem is that the data isn't 100% consistent from season to season. I grab the JSON file from the site, then want to save it to a CSV whose first line contains the heading for each column, the heading being essentially the key from the Python data type.

      #!/usr/bin/env python
      import requests
      import json
      import csv

      base_url = 'http://www.afl.com.au/api/cfs/afl/'
      token_url = base_url + 'WMCTok'
      player_url = base_url + 'matchItems/round'

      def printPretty(data):
          print(json.dumps(data, sort_keys=True, indent=2, separators=(',', ': ')))

      session = requests.Session()  # a session makes it simple to use the token across requests
      token = session.post(token_url).json()['token']  # get the token
      session.headers.update({'X-media-mis-token': token})  # set the token

      Season = 2014
      Roundno = 4
      if Roundno < 10:
          strRoundno = '0' + str(Roundno)
      else:
          strRoundno = str(Roundno)

      # get some data (could easily be a for loop; might want to put in a delay
      # using sleep so that you don't get IP blocked)
      data = session.get(player_url + '/CD_R' + str(Season) + '014' + strRoundno)

      # print everything
      printPretty(data.json())

      with open('stats_game_test.csv', 'w', newline='') as csvfile:
          spamwriter = csv.writer(csvfile, delimiter="'", quotechar='|', quoting=csv.QUOTE_ALL)
          for profile in data.json()['items']:
              spamwriter.writerow(['%s' % (profile)])

      # for key in data.json().keys():
      #     print("key: %s , value: %s" % (key, data.json()[key]))

    The above code grabs the JSON and writes it to a CSV, but it puts the key in each individual cell next to the value (e.g. 'venueId': 'CD_V190'); the key needs to be just across the first row as a heading. It gives me a CSV file with data in the cells like this:

      Column A                    Column B
      'tempInCelsius': 17.0       'totalScore': 32
      'tempInCelsius': 16.0       'totalScore': 28

    What I want is the data like this:

      tempInCelsius    totalScore
      17               32
      16               28

    As I mentioned up top, the data isn't always consistent, so if I define which fields to grab with spamwriter.writerow([profile['tempInCelsius'], profile['totalScore']]) then it will error out on certain data grabs. This is why I'm now trying this method of just grabbing everything regardless of what data is there.
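    For the header-row requirement, csv.DictWriter fits this shape of data: collect the union of keys across all rows, write them once as the heading, and let restval fill the gaps when a season lacks a field (a sketch; nested values such as dicts would still need flattening first):

      import csv

      rows = data.json()['items']                     # list of dicts with varying keys
      fieldnames = sorted(set().union(*(row.keys() for row in rows)))

      with open('stats_game_test.csv', 'w', newline='') as csvfile:
          writer = csv.DictWriter(csvfile, fieldnames=fieldnames, restval='')
          writer.writeheader()                        # keys become the first line
          writer.writerows(rows)                      # values line up under their key

    Because fieldnames is computed from the data itself, season-to-season inconsistency just produces empty cells instead of KeyErrors.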

  • What are the steps for SQL optimization and changes without affecting the live system?

    - by Space Cracker
    We have a big portal that was built using SharePoint 2007, ASP.NET 3.5 and SQL Server 2005. Many developers have worked on it since 01/2008, and we are now doing a huge analysis of the current SQL databases [not the SharePoint DB] in order to optimize and enhance them. The main DB has about 330 tables and 1720 stored procedures (SPs) created from 01/2008 till now. Many table and column names are very long, and we want to shorten them. We found SP names written in 25 different formats :( , some of them very complex, and many SP parameters need to be renamed too. One of the biggest tables is the registered-user table; it will be split into more than one table for optimization, and many column names will be changed. I searched for a way to rename tables and columns and found the SQL Refactor tool, but I am still trying it out. My questions:

    1. Is SQL Refactor the best tool for renaming, or is there another one?
    2. If I want to do it manually, are there any references or best practices for that?
    3. How can I make such changes in a fast and stable way? I am looking for recommendations and case studies, if they exist.

  • Utility for indexing a directory?

    - by achacha
    Here is what I am trying to do: I have a directory (with sub-directories) of source files, and I need to index them so I can find files fast (find-as-I-type) and open them for comparison/analysis. I don't want it to scan the content, just a filename index for quick lookup. I do this when trying to determine whether a class exists in a given tree (we maintain directory trees for each release, and they have a lot of files), and sometimes I want to quickly check files to see how something was implemented, etc. Most of these directories are on remote servers (sometimes on the other side of the world) or on a VM (which is on a server far away), so I only want to read the directory trees once, which is why running find every time is way too slow, and doing 'find . foo.txt' and then searching the output is a bit tedious. It's kind of like how "Find Resource" works in Eclipse after it indexes all files, but it's a bit of a chore to import/remove directories into Eclipse every time, and Eclipse is also very slow when dealing with remote volumes. Any suggestions are appreciated :)
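    A sketch of the read-once index in Python (one os.walk pass over the possibly remote tree, after which substring lookups are instant and local; the mount point is an assumption):

      import os

      def build_index(root):
          # one pass over the tree; file names only, no content
          index = []
          for dirpath, _dirnames, filenames in os.walk(root):
              for name in filenames:
                  index.append((name.lower(), os.path.join(dirpath, name)))
          return index

      def find(index, fragment):
          # find-as-you-type style lookup against the in-memory list
          fragment = fragment.lower()
          return [path for name, path in index if fragment in name]

      index = build_index("/mnt/release-tree")        # assumed mount point
      print(find(index, "foo.txt"))

    For faster-than-linear lookup the list could be sorted for prefix bisection, but even a linear scan over a few hundred thousand names is near-instant compared to re-walking a remote tree.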

  • Python: How do you find the CPU consumption for a piece of code?

    - by Yugal Jindle
    Background: I have a Django application. It works and responds pretty well under low load, but under high load, like 100 users/sec, it consumes 100% CPU and then slows down due to lack of CPU.

    Problem: profiling the application gives me the time taken by functions. This time increases under high load, and the time consumed may be due to complex calculation or to waiting for the CPU. So, how do I find the CPU cycles consumed by a piece of code? Reducing the CPU consumption will improve the response time. I might have written extremely efficient code and need to add more CPU power, OR I might have some stupid code occupying the CPU and causing the slowdown. Any help is appreciated!

    Update: I am using JMeter to profile my webapp; it gives me a throughput of 2 requests/sec [100 users]. I get an average time of 36 seconds on 100 requests vs. 1.25 sec on 1 request.

    More info / configuration:

    - Nginx + uWSGI with 4 workers
    - No database used; responses come from a REST API
    - On the 1st hit the response of the REST API gets cached, so it doesn't make a difference
    - Using ujson for JSON parsing

    Curious to know: Python/Django is used by so many orgs for so many big sites, so there must be some high-end debug / memory-CPU analysis tools. All those I found were casual snippets of code that perform profiling.
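    For illustration, the split between "computing" and "waiting" can be seen by measuring CPU time and wall-clock time side by side with the standard library (time.process_time counts CPU used by this process only; a sketch):

      import time

      def measure(func, *args, **kwargs):
          wall0 = time.perf_counter()
          cpu0 = time.process_time()
          result = func(*args, **kwargs)
          wall = time.perf_counter() - wall0
          cpu = time.process_time() - cpu0
          # wall >> cpu means the code was waiting (I/O, locks, CPU starvation);
          # wall ~= cpu means it was genuinely burning CPU
          print("wall %.3fs  cpu %.3fs" % (wall, cpu))
          return result

      measure(sum, range(10**7))      # CPU-bound: wall ~= cpu
      measure(time.sleep, 0.5)        # waiting: wall >> cpu

    Run under load, a growing gap between the two numbers points at CPU starvation rather than inefficient code.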

  • Embedded Development Board

    - by ALF3130
    I'm new to the embedded development world and am looking to get my very first board. After some research, I realized that there aren't many choices with FPUs. This is important in my project, as I'm going to be doing quite a bit of floating-point computation. I found the Mini2440, which seems to run on the ARM920T core. This particular unit is perfect for my needs (decent price, all the right I/O ports, and a touch screen to boot), but it seems that it doesn't have an FPU. I don't know how big a penalty I'd be paying for FP emulation, so I'm unsure whether to pull the trigger on this one. That said:

    1. Can someone please confirm whether this product (Mini2440) has an FPU or not?
    2. My project will do image capture and analysis. Does anyone have experience running things like OpenMP on such platforms?
    3. Please suggest any other similar boards in the <= $200 price range that have an FPU.

    This world is new to me. Any other advice or things I should be aware of is much appreciated.

  • What is a good solution to log the deletion of a row in MySQL?

    - by hobodave
    Background: I am currently logging the deletion of rows from my tickets table at the application level. When a user deletes a ticket, the following SQL is executed:

      INSERT INTO alert_log (user_id, priority, priorityName, timestamp, message)
      VALUES (9, 4, 'WARN', NOW(), "TICKET: David A. deleted ticket #6 from Foo");

    Please do not offer schema suggestions for the alert_log table. Fields:

    - user_id - user id of the logged-in user performing the deletion
    - priority - always 4
    - priorityName - always 'WARN'
    - timestamp - always NOW()
    - message - format: "[NAMESPACE]: [FullName] deleted ticket #[TicketId] from [CompanyName]", where NAMESPACE is always TICKET, FullName is the full name of the user identified by user_id above, TicketId is the primary key ID of the ticket being deleted, and CompanyName comes from the ticket's company via tickets.company_id

    Situation/questions: Obviously this solution does not work if a ticket is deleted manually from the MySQL command-line client; however, now I need it to. The issues I'm having are as follows:

    1. Should I use a PROCEDURE, FUNCTION, or TRIGGER? Analysis: I don't think a TRIGGER will work, because I can't pass parameters to it, and it would also fire when my application deletes the row. PROCEDURE or FUNCTION: not sure. Should I return the number of deleted rows? If so, that would require a FUNCTION, right?
    2. How should I account for the absence of a logged-in user? Possibilities: using either a PROC or FUNC, require the invoker to pass in a valid user_id; require the user to pass in a string with the name; use CURRENT_USER (meh); or hard-code the FullName to just be "Database Administrator". Could the name be an optional parameter? I'm rather green when it comes to sprocs.
    3. Assuming I went with the PROC/FUNC approach, is it possible to outright restrict regular DELETE calls to this table, yet still allow users to call this PROC/FUNC to do the deletion for them?

    Ideally the solution is usable by my application as well, so that my code stays DRY.
