Search Results

Search found 11362 results on 455 pages for 'big o analysis'.


  • Workflow for statistical analysis and report writing

    - by ws
    Does anyone have any wisdom on workflows for data analysis related to custom report writing? The use case is basically this: a client commissions a report that uses data analysis, e.g. a population estimate and related maps for a water district. Then:

    1. The analyst downloads some data.
    2. The analyst munges the data and saves the result (e.g. adding a column for population per unit, or subsetting the data based on district boundaries).
    3. The analyst analyzes the data created in (2), gets close to her goal, but sees that she needs more data and so goes back to (1).
    4. Rinse and repeat until the tables and graphics meet QA/QC and satisfy the client.
    5. Write the report incorporating the tables and graphics.
    6. Next year, the happy client comes back and wants an update. This should be as simple as updating the upstream data with a new download (e.g. getting the building permits from the last year) and pressing a "RECALCULATE" button, unless specifications change.

    At the moment, I just start a directory and ad-hoc it the best I can. I would like a more systematic approach, so I am hoping someone has figured this out. I use a mix of spreadsheets, SQL, ArcGIS, R, and Unix tools. Thanks!

    PS: Below is a basic Makefile that tracks dependencies between various intermediate datasets (".RData" suffix) and scripts (".R" suffix). Make uses timestamps to check dependencies, so if you touch ss07por.csv, it will see that this file is newer than all the files/targets that depend on it and will execute the given scripts in order to update them accordingly. This is still a work in progress, including a step for loading into a SQL database and a step for a templating language like Sweave. Note that Make relies on tabs in its syntax, so read the manual before cutting and pasting. Enjoy and give feedback!

    http://www.gnu.org/software/make/manual/html_node/index.html#Top

        R=/home/wsprague/R-2.9.2/bin/R

        persondata.RData : ImportData.R ../../DATA/ss07por.csv Functions.R
                $R --slave -f ImportData.R

        persondata.Munged.RData : MungeData.R persondata.RData Functions.R
                $R --slave -f MungeData.R

        report.txt: TabulateAndGraph.R persondata.Munged.RData Functions.R
                $R --slave -f TabulateAndGraph.R > report.txt
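
    As a usage note on the Makefile above, next year's "RECALCULATE button" is then just a shell session like this (assuming the file layout shown; the touch stands in for overwriting the csv with a fresh download):

        # refresh the raw data, then rebuild only what is stale
        touch ../../DATA/ss07por.csv
        make report.txt     # re-runs Import, Munge, Tabulate in dependency order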


  • Algorithm for sentence analysis and tokenization

    - by Andrea Nagar
    I need to analyze a document and compile statistics on how many times each sequence of words is used (so the analysis is not of single words but of batches of recurring words). I have read that compression algorithms do something similar to what I want: they create dictionaries of blocks of text along with a piece of information reporting each block's frequency. It should be something similar to http://www.codeproject.com/KB/recipes/Patterns.aspx Do you have anything written in C#?
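
    A minimal sketch of the idea in C#: slide a fixed-size window over the token stream and count each n-word sequence in a dictionary. The regex tokenizer and the window size n are assumptions for illustration, not part of the question:

        using System;
        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        class NGramCounter
        {
            // Count every n-word sequence in the text (tokenizer: a simple \w+ regex).
            static Dictionary<string, int> CountNGrams(string text, int n)
            {
                var tokens = new List<string>();
                foreach (Match m in Regex.Matches(text.ToLowerInvariant(), @"\w+"))
                    tokens.Add(m.Value);

                var counts = new Dictionary<string, int>();
                for (int i = 0; i + n <= tokens.Count; i++)
                {
                    string gram = string.Join(" ", tokens.GetRange(i, n).ToArray());
                    int c;
                    counts.TryGetValue(gram, out c);
                    counts[gram] = c + 1;
                }
                return counts;
            }

            static void Main()
            {
                foreach (var kv in CountNGrams("the quick fox and the quick dog", 2))
                    Console.WriteLine("{0}: {1}", kv.Key, kv.Value);  // "the quick: 2", ...
            }
        }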


  • C# code analysis - VS 2005

    - by anon
    I have a C# user control project that causes an intermittent .NET runtime error (a generic error), and I am wondering whether there is a code analysis tool that I can point at my .sln file that would tell me what may be causing the error.


  • Big square ads appear in lower right corner of both IE and Chrome

    - by BrianK
    In both IE and Chrome, large ads appear in the lower right corner of the browser window. Sometimes they look reputable, like ones for Microsoft, but sometimes they are big flashing boxes that say "You have won". Right now I am looking at "Need to lose 30 lbs?" I ran Microsoft Security Essentials and it didn't find anything. I then ran Windows Defender Offline (boot from CD). WDO found five things, including a browser hijack that caused the wrong page to appear after clicking a link. It reported that it cleaned successfully, after which I ran a quick scan to confirm. After rebooting I still see the ads. Do I still have an infection? Any other tools to try? What about ComboFix? Thanks. Update: Here's a screenshot - on superuser


  • Why is LaTeX so big?

    - by putmatrix
    On Windows, MiKTeX comes on a DVD. It's several times bigger than a typical Linux distribution. This makes it impossible to carry LaTeX on a memory stick the way I do with much other useful software. Why is it so big? I thought it was just a language or system, but I've never seen any programming language with gigabytes of libraries. It's just that there's a bad feeling when your LaTeX distribution takes up four gigabytes of space when you expect it to be more like, say, 200 MB.


  • Handouts exported from PowerPoint to Word are too big :(

    - by nickjohn
    EDITED: I am using PowerPoint lectures, and I want to mail-merge speaker data into the respective lecture. That is not possible with .ppt as far as I know, so I have to convert these lectures to handouts using the PowerPoint option "Publish > MS Word handouts" and then use Word's mail merge. This is good, since it keeps the comments/notes added to the slides in the handouts as well. But the exported handouts in Word remain actual slides and retain a link to the original .ppt rather than simply being exported as images, so the file size gets very big: a 10 MB ppt = a 212 MB doc = an 88 MB docx. Is there any option to export handouts from PowerPoint to Word as images? I can't simply save them as PNGs from PowerPoint, since that would not include the comments data. Thanks


  • Big Excel File Freezing/Running Slowly

    - by ktm5124
    Hi, my co-worker has a very large Excel file (over 7 MB) that suffers from the problems of (A) running slowly, (B) taking forever to open/save/close, and (C) freezing the computer, requiring a restart. I set the calculations to Manual, and I repaired the file, but the file didn't change in size and it is still having these problems. My questions are: (1) Is there any way around this problem, or is Excel just bad at handling ~7 MB files? (2) Would upgrading RAM make a big difference? (3) It's possible that we can't afford to spend the money on a RAM upgrade. Are there any other ways of mitigating the problem? Thanks.


  • Passing big multi-dimensional array to function in C

    - by kirbuchi
    Hi, I'm having trouble passing a big array to a function in C. I declare:

        int image[height][width][3]={};

    where height and width can be as big as 1500. And when I call:

        foo((void *)image,height,width);

    which is declared as follows:

        int *foo(const int *inputImage, int h, int w);

    I get a segmentation fault. What's strange is that with height=1200 and width=290 there's no problem, but with height=1200 and width=291 I get the mentioned error. At 4 bytes per integer, with both height and width at 1500 (the absolute worst case), the array size would be 27 MB, which IMO isn't that big, and it shouldn't really matter anyway because I'm only passing a pointer to the first element of the array. Any advice?
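
    The size does matter here: the array is a local with automatic storage, so those ~27 MB land on the stack, which is typically only 1-8 MB. A sketch of the usual fix, moving the pixels to the heap (the stub foo exists only to keep the example self-contained):

        #include <stdio.h>
        #include <stdlib.h>

        int *foo(const int *inputImage, int h, int w)   /* stub for illustration */
        {
            (void)inputImage; (void)h; (void)w;
            return NULL;
        }

        int main(void)
        {
            int height = 1500, width = 1500;

            /* C99 pointer-to-VLA: heap storage, but image[i][j][k]
               indexing still works and the compiler does the strides. */
            int (*image)[width][3] = malloc(sizeof *image * height);
            if (image == NULL) {
                perror("malloc");
                return 1;
            }

            image[0][0][0] = 42;                 /* use it exactly as before */
            foo((const int *)image, height, width);

            free(image);
            return 0;
        }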


  • Fastest way to find the sum of digits of big numbers

    - by dada
    I have some big numbers (again) and I need to find whether the sum of the digits of each number is even. I tried this: finding the sum of the digits with a while loop and then checking whether that sum % 2 equals 0. It works, but it's too slow for big numbers: I am given intervals of numbers, and with an input like 1999999 19999999999 my program cannot complete within the time limit, which is 0.1 sec. What should I do? Is there a faster way to do this? EDIT: The input 1999999 19999999999 means it will start with 1999999 and check all the numbers as described above until 19999999999, and because we are talking about big numbers (< 2^30) my program isn't up to it.
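
    One way to get each query down to O(log n), sketched in C: in any block 10q..10q+9 the digit sums are s(q)+0..s(q)+9, so exactly 5 of the 10 numbers have an even digit sum, and only the final partial block needs its digit sum computed explicitly. This illustrates the counting trick and is not code from the original post:

        #include <stdio.h>

        /* Count how many x in [0, n] have an even digit sum, in O(log n). */
        long long evens_upto(long long n)
        {
            if (n < 0) return 0;
            long long q = n / 10, r = n % 10;
            long long count = 5 * q;              /* the full blocks of ten */

            int sq = 0;                           /* digit sum of the prefix q */
            for (long long t = q; t > 0; t /= 10) sq += t % 10;

            for (int i = 0; i <= (int)r; i++)     /* the partial last block */
                if ((sq + i) % 2 == 0) count++;
            return count;
        }

        int main(void)
        {
            long long a = 1999999LL, b = 19999999999LL;  /* interval from the question */
            printf("%lld\n", evens_upto(b) - evens_upto(a - 1));
            return 0;
        }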


  • big size of database-log SQL-Server 2008

    - by t.kehl
    I have a database running under Microsoft SQL Server 2008. I have noticed that the log of the database (the ldf file) has grown very large: the database file (mdf) has a size of 630 MB, while the log file has a size of 12 GB. I am wondering what the reason for this can be. Is there a tool that lets me look inside the log to see what is causing this? And what can I do to prevent the log from growing this big?
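
    A 630 MB mdf next to a 12 GB ldf usually means the database is in the FULL recovery model with no transaction-log backups, so the log never gets truncated. A hedged T-SQL sketch for checking and addressing that ("MyDb", the logical file name, and the backup path are placeholders):

        -- How full is each log, and which recovery model is each database in?
        DBCC SQLPERF(LOGSPACE);
        SELECT name, recovery_model_desc FROM sys.databases;

        -- Option 1: if you don't need point-in-time restore, SIMPLE recovery
        -- lets the log truncate itself at each checkpoint.
        ALTER DATABASE MyDb SET RECOVERY SIMPLE;
        DBCC SHRINKFILE (MyDb_log, 1024);   -- one-time shrink to ~1 GB

        -- Option 2: stay in FULL, but schedule regular log backups;
        -- that is what actually allows log reuse in that model.
        BACKUP LOG MyDb TO DISK = N'D:\backups\MyDb.trn';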


  • Is catching general exceptions really a bad thing?

    - by Bob Horn
    I typically agree with most code analysis warnings, and I try to adhere to them. However, I'm having a harder time with this one: CA1031: Do not catch general exception types. I understand the rationale for this rule. But, in practice, if I want to take the same action regardless of the exception thrown, why would I handle each one specifically? Furthermore, if I handle specific exceptions, what if the code I'm calling changes to throw a new exception in the future? Now I have to change my code to handle that new exception, whereas if I simply catch Exception, my code doesn't have to change. For example, if Foo calls Bar, and Foo needs to stop processing regardless of the type of exception thrown by Bar, is there any advantage in being specific about the type of exception I'm catching?
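
    For context, the usual compromise between CA1031 and the "same action regardless" requirement looks something like the sketch below: catch broadly at the point where Foo must stop, but log and rethrow anything you didn't consciously decide to absorb, so programming errors aren't swallowed. The exception types chosen here are illustrative only:

        using System;
        using System.IO;

        class Program
        {
            static void Bar() { throw new IOException("disk unplugged"); }

            static void Foo()
            {
                try
                {
                    Bar();
                }
                catch (Exception ex)            // broad catch at the boundary...
                {
                    Console.Error.WriteLine("stopping: " + ex.Message);
                    if (!(ex is IOException || ex is TimeoutException))
                        throw;                  // ...but rethrow what we can't handle
                }
            }

            static void Main() { Foo(); }
        }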


  • Why use sealed instead of static on a class?

    - by sq33G
    Our system has several utility classes. Some people on our team use (A) a class with all-static methods and a private constructor. Others (the juniors) use (B) a class with all-static methods only. On code analysis, both (A) and (B) raise warning CA1052, which recommends marking the class as sealed. Included in the MSDN documentation is the following advice: "If you are targeting .NET Framework 2.0 or earlier, a better approach is to mark the type as static." Why does this make any sense? I would have thought the opposite; AFAIK, prior to 2.0 there was no way to mark a class as static.
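
    For reference, the two shapes side by side, as a sketch with invented names; on .NET 2.0+ the static keyword makes the compiler enforce what the sealed-plus-private-constructor idiom could only enforce by convention:

        // .NET 2.0 and later: the compiler forbids instances,
        // inheritance, and instance members outright.
        public static class MathUtil
        {
            public static int Square(int x) { return x * x; }
        }

        // .NET 1.x idiom: sealed blocks inheritance, and the
        // private constructor blocks instantiation.
        public sealed class LegacyMathUtil
        {
            private LegacyMathUtil() { }
            public static int Square(int x) { return x * x; }
        }

        class Demo
        {
            static void Main()
            {
                System.Console.WriteLine(MathUtil.Square(7));        // 49
                System.Console.WriteLine(LegacyMathUtil.Square(7));  // 49
            }
        }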


  • Development methodology for single web developer?

    - by CaseTA
    I'm a web developer who mostly works with the LAMP stack when it comes to my own projects. Most of the time I just start coding on a project, fixing bugs and adding features as I go along. Often I'll try to use an existing solution such as Wordpress or Drupal. Now that I'm thinking of creating my own web application with businesses as the target group, I feel there's a need for proper analysis and design. If you could recommend methodologies and literature that are lightweight enough for a one-person project yet solid enough to handle requirements, user interfaces, security, etc., I would be grateful.


  • big speed difference on a network link with and without VPN tunnel

    - by xirtyllo
    Scenario: we have a network link between two offices. The link is provided by a third-party company through a VLAN on their network, but to us it is totally transparent, as if we had a simple Ethernet cable going from one location to the other. We have one router at each side of the link, with 3 VPN tunnels between the two. The test: when I test the speed of the network link with the routers in place, with one laptop directly connected to the router on each side, I consistently get ~30-35 Mbps. But if I take out the routers and test the link by connecting the laptops directly to the Ethernet cable at each side, I consistently get ~85-88 Mbps. That is quite a big performance hit, and I would tend to think that the VPN tunnels are responsible for the slowdown. Is it normal that this configuration (two routers with three VPN tunnels between them) takes away so much bandwidth? More info: the encryption algorithm used for the VPN tunnels is AES128. The router models are Zyxel USG200 and Zyxel USG1000, and their CPU, memory, and storage use is well within normal limits. The nominal bandwidth of the network link is 100 Mbps. The link passes through the third-party company's network (the building between our two offices) as a VLAN, but the VLAN is completely transparent to us (no configuration required on our side; just like one single cable from end to end). Unfortunately (or maybe fortunately) I cannot directly test different router configurations, as I'm not the person in charge of them.


  • Archive software for big files and fast index

    - by AkiRoss
    I'm currently using tar for archiving some files. The problem is: the archives are pretty big, contain a lot of data, and tar is very slow when listing and extracting. I often need to extract single files or folders from an archive, but I don't currently have an external index of its files. So, is there an alternative for Linux that allows me to build uncompressed archive files, preserving the file attributes AND having a fast access list table? I'm talking about archives of 10 to 100 GB, and it's pretty impractical to wait several minutes to access a single file. Anyway, any trick to solve this problem is welcome (but single archives are non-optional, so no rsync or similar). Thanks in advance! EDIT: I'm not compressing the archives, and even so I think tar is too slow. To be precise about "slow", I'd like the following: listing the archive's contents should take time linear in the number of files inside the archive, but with a very small constant (e.g. if a list of all the files is included at the head of the archive, it could be very fast); extraction of a target file/directory should (filesystem permitting) take time linear in the target's size (e.g. if I'm extracting a 2 MB PDF from a 40 GB directory, I'd really like it to take less than a few minutes, if not seconds). Of course, this is just my idea and not a requirement. I guess such performance could be achievable if the archive contained an index of all the files with their respective offsets, and that index were well organized (e.g. a tree structure).


  • Split big Apache log to folder structure

    - by Dough
    I just changed my Apache log behavior because it was giving me very BIG files... So I now use cronolog to split my logs, e.g. to log/httpd/2012/11/access_2012.11.30.log, using the pattern %Y/%m/access_%Y.%m.%d.log. I now want to split my old 42 GB file into the same structure but really don't know how to do that efficiently. I tried some simple commands with cat, egrep, and awk, but really don't know how to handle all that in a more powerful script. Here is what the log looks like:

        x.x.237.134 - - [08/Apr/2011:14:43:09 +0200] "GET...
        x.x.50.15 - - [08/Apr/2011:14:43:09 +0200] "GET...
        [...]
        x.x.254.19 - - [28/Feb/2012:15:24:48 +0100] "GET...

    So for each line I need to get the year %Y (e.g. 2012), the month %m (e.g. 11), and the day %d, and then push out the entire line to %Y/%m/access_%Y.%m.%d.log. Can someone give me clues to get that working? Thanks a lot for your interest.
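
    A sketch of one way to do it with awk, since the date sits in a fixed field of the common log format shown above (the file name big_access.log and the 2011-2012 year range are assumptions taken from the sample lines):

        # awk's ">" redirection won't create directories, so build the tree first
        mkdir -p 20{11,12}/{01..12}

        awk '{
            # $4 looks like "[08/Apr/2011:14:43:09"; split it on "[", "/" and ":"
            split($4, a, "[[/:]")             # a[2]=day, a[3]=month name, a[4]=year
            m   = index("JanFebMarAprMayJunJulAugSepOctNovDec", a[3])
            mm  = sprintf("%02d", (m + 2) / 3)
            out = a[4] "/" mm "/access_" a[4] "." mm "." a[2] ".log"
            if (out != prev) { close(prev); prev = out }   # avoid fd exhaustion
            print >> out
        }' big_access.log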


  • big cpu load on vmware server / linux

    - by dezfafara
    Hi, I currently have a VMware Server 2.x host running 4 virtual machines on a Linux system. Today, on my physical server, I saw an enormous load average. This is the "top" output of the server, showing my 4 virtual guests:

        top - 11:02:02 up 194 days, 23:09, 5 users, load average: 18.78, 12.05, 13.55
        Tasks: 113 total, 4 running, 109 sleeping, 0 stopped, 0 zombie
        Cpu0 : 71.6%us, 19.0%sy, 0.0%ni,  8.8%id, 0.0%wa, 0.3%hi, 0.3%si, 0.0%st
        Cpu1 : 74.3%us, 10.4%sy, 0.0%ni, 15.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu2 : 72.5%us, 17.6%sy, 0.0%ni,  9.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu3 : 79.5%us,  4.6%sy, 0.0%ni, 16.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:   8178884k total,  8129980k used,    48904k free,   134904k buffers
        Swap: 10490436k total,      148k used, 10490288k free,  6129728k cached

        PID   USER PR NI  VIRT  RES  SHR S %CPU %MEM TIME+     COMMAND
        7312  root  6 -10 1149m 921m 559m R   97 11.5 107947:09 vmware-vmx
        6995  root  6 -10  779m 687m 317m R   92  8.6 107374:31 vmware-vmx
        6693  root  6 -10  880m 659m 409m S   85  8.3  76947:33 vmware-vmx
        12937 root  6 -10  960m 719m 523m S   75  9.0  67219:49 vmware-vmx

    The four vmware-vmx processes are my virtual guests; they usually sit at 5-15% CPU. I don't understand why I have had this big problem for the last few days. This is the "top" output inside a virtual guest that the host shows at ~95% CPU:

        top - 11:23:15 up 194 days, 23:13, 4 users, load average: 0.25, 0.47, 0.59
        Tasks: 92 total, 2 running, 90 sleeping, 0 stopped, 0 zombie
        Cpu(s): 1.4%us, 7.7%sy, 0.0%ni, 90.5%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:  382296k total, 369732k used,  12564k free, 145156k buffers
        Swap: 979924k total,  13956k used, 965968k free,  86988k cached

        PID  USER PR NI  VIRT  RES SHR S %CPU %MEM TIME+    COMMAND
        3691 root 20  0 23948 1148 960 S 13.0  0.3 15339:23 vmware-guestd
        3840 root 20  0 19880  584 512 S  7.7  0.2  1729:17 hald-addon-stor

    From the inside, this virtual guest's state looks fine... If anyone has any ideas... Thanks


  • Window too big to fit the screen!

    - by syockit
    I'm using Windows 7 on an 8.9" monitor with a 1280x768 screen resolution. Using the might of arithmetic, I can determine that my dpi (actually ppi) should be 167. Win7 is really helpful in that it doesn't have to restart to apply new dpi settings, unlike its predecessors (though I'd rather it applied them straight away). The problem with small monitors in Windows is that when you come across a window too big to fit the screen, you can't move its title bar far above the top edge. In the X window managers I used in the past, you could Alt-drag a window anywhere you wanted, but in Windows, even if you press Alt+Space and select Move, it automatically pushes the window back until the title bar is visible. I'm looking for a solution that either: allows me to move a window freely without regard to title-bar visibility, or attaches a scrollbar to an existing window, or (EDIT) creates virtual desktops that allow me to span a window over 2 desktops, or (EDIT 2) allows me to set a larger virtual resolution and then pan and scan. EDIT 3: I found some programs that might do some of the above: (1) AltDrag allows me to drag and resize windows using Alt and the left/right mouse buttons. Neat! The best solution so far. (2) GiMeSpace Desktop Extender is supposed to allow me to scroll the desktop. It didn't work. The other, newer version, GiMeSpace Ultimate Taskbar, worked, but it destroys my Superbar, replacing it with its map.


  • a question on using Microsoft Excel to do data analysis

    - by Gemma
    Hello everyone. My mates and I have been given an Excel spreadsheet which contains data from the Census report. The columns correspond to "Year" (from 2000 to 2010), the rows to "City/Town", and each cell is simply the population of a town in a particular year. We want to do some analysis, like "what town had its first population gain in 10 years?", etc. My question is: can we do this in Excel, or do we need to export the data to a database (SQL) and then do the programming? Thanks in advance.


  • uploading via http post (multipart/form-data) silently fails with big files

    - by matteo
    When uploading multipart/form-data forms via an HTTP POST request to my Apache web server, very big files (e.g. 30 MB) are silently discarded. On the server side it looks as if the attached file was received with a size of 0 bytes; on the client side it looks like it was uploaded successfully (it takes the expected long time to upload and the browser gives no error message). On the server, nothing is logged to the error log. An entry is logged to the access log as if everything were OK (a POST request and a 200 OK response). These uploads are posted to a PHP script. In the PHP script, if I print_r $_FILES, I see the following information for the relevant file:

        [file5] => Array
            (
                [name] => MOV023.3gp
                [type] => video/3gpp
                [tmp_name] => /tmp/phpgOdvYQ
                [error] => 0
                [size] => 0
            )

    Note both [error] = 0 (which should mean no error) and [size] = 0 (as if the file were empty). My PHP script runs fine and receives all the rest of the data except these files. move_uploaded_file succeeds on these files and actually copies them as 0-byte files. I've already changed the PHP directives upload_max_filesize to 50M and post_max_size to 200M, so neither the single file nor the request exceeds any size limit. max_execution_time is not relevant, because the time taken to transfer the data does not count; and I've increased max_input_time to 1000 seconds, though this shouldn't be necessary, since that is the time taken to parse the input data, not the time taken to upload it. Is there any Apache configuration, prior to PHP, that could be causing these files to be discarded even before PHP executes? Some limit on size or upload time? I've read about a default 300-second timeout limit, but that should apply to the time the connection is idle, not the time spent actually transferring data, right? Needless to say, uploads under exactly identical conditions (including file format, client, and everything else) except a smaller file size work seamlessly, so the issue is clearly related to the file or request size, or to the time it takes to send it.
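
    Two Apache-side caps worth ruling out, since both act before PHP ever sees the request body; a sketch for httpd.conf or the vhost (the numeric values are examples, not recommendations):

        # 0 (the default) means unlimited; a stale cap here silently
        # truncates request bodies larger than the limit.
        LimitRequestBody 0

        # If mod_security is loaded, it enforces its own body-size caps:
        <IfModule mod_security2.c>
            SecRequestBodyLimit        52428800   # 50 MB for file uploads
            SecRequestBodyNoFilesLimit 1048576    # 1 MB for non-file fields
        </IfModule>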


  • Best ORM, Simple data Structures, Strong Query analysis.

    - by sayth
    What is the best ORM/db combination for simple data structures? That is, data that contains names as identifiers, and locations, but whose main interaction will be numerical data for times (sports durations) and currency-related data. I initially want to create a sports database that will take names and statistics. Secondarily, I plan to start on an investment and stock analysis db. Which ORM suits storing many numerical types and has strong query functions? I am really not biased toward any db engine (I'd most likely use sqlite or mongo), so any suggestions for the best embedded (non-networked) db to suit said ORM are appreciated.


  • Converting float values from big endian to little endian

    - by Bobby
    Is it possible to convert floats from big to little endian? I have a value from a PowerPC (big-endian) platform that I am sending via TCP to a Windows process (little-endian). The value is a float, but when I memcpy the value into a Win32 float type and then call _byteswap_ulong on that value, I always get 0.0000. What am I doing wrong?
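
    The usual cause: passing the float itself to _byteswap_ulong converts its numeric value to an integer instead of reinterpreting its bytes. A portable sketch of the bit-level swap (the sample wire bytes are just for demonstration):

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        /* Swap a float's four bytes by going through a uint32_t; memcpy
           reinterprets the bit pattern without violating aliasing rules. */
        static float swap_float(float in)
        {
            uint32_t bits;
            memcpy(&bits, &in, sizeof bits);
            bits = (bits >> 24) | ((bits >> 8) & 0x0000FF00u)
                 | ((bits << 8) & 0x00FF0000u) | (bits << 24);
            float out;
            memcpy(&out, &bits, sizeof out);
            return out;
        }

        int main(void)
        {
            /* 1.0f sent big-endian arrives as the bytes 3F 80 00 00 */
            unsigned char wire[4] = { 0x3F, 0x80, 0x00, 0x00 };
            float received;
            memcpy(&received, wire, sizeof received);

            printf("as-is: %g  swapped: %g\n", received, swap_float(received));
            return 0;
        }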


  • Matlab: Analysis of signal

    - by Mateusz
    Hi, I have a problem with this task: for a free route, perform frequency analysis and give the parameters of each signal component: the time of the beginning and end of each component; the beginning and ending frequency; the amplitude (in the time domain) at the beginning and end of each component; and the level of noise in dB. Assume that the parameters of each component, like amplitude and frequency, change linearly in time. The sampling frequency is 1000 Hz. For example, I have a signal like this:

        Nx=64;
        fs=1000;
        t=1/fs*(0:Nx-1);
        %==========================
        A1=1; A2=4;
        f1=500; f2=1000;
        x1=A1*cos(2*pi*f1*t);
        x2=A2*sin(2*pi*f2*t);
        %==========================
        x=x1+x2;
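
    A possible starting point, assuming the Signal Processing Toolbox: a short-time Fourier transform shows how each component's frequency and amplitude evolve over time, which is what the linear-in-time assumption calls for. The window and overlap sizes below are arbitrary, and note that with fs = 1000 Hz the f2 = 1000 Hz sine samples to all zeros (sin(2*pi*n) = 0):

        Nx = 64; fs = 1000;
        t  = 1/fs*(0:Nx-1);
        x  = 1*cos(2*pi*500*t) + 4*sin(2*pi*1000*t);   % the test signal above

        % STFT view: time on x, frequency on y, magnitude as color
        spectrogram(x, hann(16), 12, 128, fs, 'yaxis');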


  • What threading analysis tools do you recommend?

    - by glutz78
    My primary IDE is Visual Studio 2005, and I have a large C/C++ project. I'm interested in which thread-analysis tools are recommended. By that I mean a tool, static or dynamic, to help find race conditions, deadlocks, and the like. So far I've casually researched the following: (1) Intel Thread Checker: I don't believe it ties into VS 2005? (2) Valgrind/Helgrind: free. (3) Coverity: a costly tool, if I understand correctly. Does anyone have experience with any of these, or with others? I'd much appreciate any advice. Thank you.

