Search Results

Search found 60391 results on 2416 pages for 'data generation'.


  • Looking for recommendations on OCR problem - tabular numeric data

    - by ldigas
    I have 20 pages of experiment measurement data which I need to digitize. The results are in tabular form, scanned at 600 dpi resolution, and as far as scans go, they came out pretty clean and readable. For an example of how it looks, see here (but beware: it is a rather big scan, about 5 MB; no problem for any broadband connection, but dial-ups should approach with caution!) ... and I need it finished by Sunday afternoon (:-o) <-- smiley in a state of panic (then why didn't you start sooner?) ... yea, yeah ... I know ... but it came up late, and I didn't think I was going to need this data as well. So, I'm looking for recommendations. I don't have much experience with OCR programs beyond scanning a page or two of plain text, and I have no wish to test every OCR program out there, so this isn't a "name your OCR favourite". What I'm looking for is advice from someone who has done something like this, and his/her experience of the best way to go about it. I need the data in txt form, but since it will have to be checked (by plotting it and simply watching whether some points "jump out"), I'll probably enter it into Excel at first.
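
    One possible starting point, purely as a sketch rather than a recommendation: the open-source Tesseract engine driven from Python via the pytesseract wrapper, with recognition restricted to numeric characters so stray marks on the scan are not misread as letters. The file name and character whitelist below are illustrative assumptions.

        import pytesseract  # assumes Tesseract and the pytesseract wrapper are installed
        from PIL import Image

        # "--psm 6" treats the page as a uniform block of text, which tends to
        # suit tabular layouts; the whitelist keeps the output to digits only.
        config = "--psm 6 -c tessedit_char_whitelist=0123456789.-"

        text = pytesseract.image_to_string(Image.open("scan_page_01.png"), config=config)

        # Dump the raw text for eyeballing and import into Excel.
        with open("scan_page_01.txt", "w") as out:
            out.write(text)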

    Read the article

  • changing filesystem format from xfs to ext4 without losing data

    - by A.Rashad
    I have a fresh Lucid Lynx (Ubuntu 10.04) running on a laptop, where I defined the filesystems as: mount point / on ext4 (46 GB), mount point /home on jfs (63 GB), and 3 GB of swap. I left the machine overnight to do some task, without the AC power supply. The next morning I found it on standby; the task was completed, but the filesystem was not reachable - it gave me an I/O error. It seems there is a problem with jfs and standby. Anyway, to avoid any hassle, I want to move this mount point from jfs to ext4. Can I do this without losing data and without having to place the data in a temporary location until the conversion is done? Sorry to mention it, but I recall back in the Windows days we could change FAT16 to FAT32, or FAT32 to NTFS, without losing the data. I hope something similar is available on Linux. Update: The /home filesystem was xfs, not jfs, and there seems to be a bug with this filesystem; for some reason I had to re-install the OS twice, until I ended up with ext4 for the entire /. However, as a conclusion, it seems there is no way to make such a conversion.

    Read the article

  • Western Digital My Book: Can't access the data on the drive

    - by Bryan Denny
    My girlfriend has an external hard drive by Western Digital called a My Book. When the external drive is connected, it does not show up as an accessible disk drive on the computer. However, it shows up fine in Device Manager. I can also see it in Disk Management, but the volume is not mapped to a drive letter, nor can I change the drive letter: it only gives me access to Delete Volume. I would rather not lose the data on the drive if possible. What can I do from here to get to the data? Things I've tried/know: I uninstalled the drivers and re-installed them. The device does the same thing when attached to either her Win7 laptop or my Win8 laptop. I don't think there's an issue with the HDD itself: no clicking noises, etc. I ran Western Digital Data LifeGuard Diagnostics (DLGDIAG) and the SMART status was a "PASS"; all of the SMART disk information looked fine. I haven't had the time to run the full diagnostic tests yet, but I do not believe it's a mechanical issue. The hard drive is inside an enclosure, and I have not attempted to pry the drive out yet. How can I get Windows to properly detect this drive?

    Read the article

  • Divide pivot table data by an arbitrary column in another table

    - by rsavu
    Hello all, I have this data from a pivot table, where the 1st column is the country name, the 2nd and 3rd columns (P1, P2) are the tickets introduced in the system, and the 4th column is the total (disregard the values - the total is not accurate):

        Country 1:  10, 69
        Country 2:  36, 2, 92
        Country 3:  21, 24, 100
        Country 4:  22, 77
        Country 5:  13, 79
        Country 6:  12, 1, 48
        Country 7:  14, 29
        Country 8:  22, 1, 46
        Country 9:  4, 1, 31
        Country 10: 16, 7, 120
        Country 11: 25, 2, 114
        Country 12: 8, 11, 68
        Country 13: 5, 27
        Country 14: 11, 3, 23
        Country 15: 6, 19
        Country 16: 33, 79

    Additionally, I have another table (Country, P1, P2), where the data represents the number of users of the application in each country:

        Country 1:  2, 3
        Country 2:  2, 2
        Country 3:  0, 2
        Country 4:  0, 3
        Country 5:  1, 1
        Country 6:  2, 2
        Country 7:  1, 2
        Country 8:  3, 3
        Country 9:  1, 4
        Country 10: 2, 1
        Country 11: 4, 2
        Country 12: 2, 1
        Country 13: 3, 2
        Country 14: 3, 3
        Country 15: 1, 2
        Country 16: 2, 2

    I want to be able to show the number of tickets submitted divided by the number of users in each country. Any ideas how to do that? Thank you very much, Razvan

    Read the article

  • SQL Error (1064) when importing data from SQL file

    - by mejpark
    I have a MySQL database which was originally set up with the default latin1 character set and latin1_swedish_ci collation. I was using the database like this for some time, until I noticed strange characters on my production web site, which is powered by a database exported from my development machine. At that point, I changed the default character set of the database and tables to utf8 and the collation to utf8_unicode_ci, converted the latin1 data inside each table to utf8 (using the 'convert data' option) and exported the database as a single SQL file using HeidiSQL. When the resulting SQL file is opened in Notepad++, several characters are rendered incorrectly. For example, en dashes (-) are displayed as – and e with accent (é) is displayed as é. I changed the encoding of the file from ANSI to UTF-8 (using the encoding menu option in Notepad++) and the offending characters are rendered correctly. I saved the new utf8-encoded SQL file and attempted to import the contents into the MySQL database on my production server. The import process fails with the following error:

        /* SQL Error (1064): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '?# --------------------------------------------------------
        # Host: ' at line 1 */
        /* Error with snippets directory: The specified path was not found */

    The head of the SQL file:

        # --------------------------------------------------------
        # Host:                         127.0.0.1
        # Server version:               5.1.33-community
        # Server OS:                    Win32
        # HeidiSQL version:             6.0.0.3773
        # Date/time:                    2011-04-20 09:48:36
        # --------------------------------------------------------

    It chokes on the first line of the file, which is commented out. Why is this happening? I didn't have a problem loading data from SQL files until I changed the character set and collation of the database. I came up with an ugly workaround to this problem by performing the following steps: export the database as a single SQL file using HeidiSQL; open the resulting file in Notepad++ and convert it from ANSI to UTF-8 encoding; create a new empty file in Notepad++, paste in the UTF-8 content and save the file normally. What am I missing here?
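
    For what it's worth, the Notepad++ round trip in that workaround can be scripted. One common cause of a stray character in front of the first '#' comment is a UTF-8 byte-order mark, which some editors prepend when converting to UTF-8; a minimal sketch (file names are illustrative) that rewrites a dump as BOM-free UTF-8:

        import codecs

        # Read the dump, dropping a UTF-8 byte-order mark if one is present
        # ("utf-8-sig" decodes correctly with or without a BOM).
        with codecs.open("dump_utf8.sql", "r", encoding="utf-8-sig") as src:
            sql = src.read()

        # Write it back as plain UTF-8 with no BOM.
        with codecs.open("dump_utf8_nobom.sql", "w", encoding="utf-8") as dst:
            dst.write(sql)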

    Read the article

  • How To Replace Laptop HDD Without Losing Data?

    - by Ishan
    Hello, I recently went to the Dell service center and they tell me the HDD is faulty and needs to be replaced. I have a Studio 1457 laptop with a 500 GB HDD and don't want to lose the data (purchased in May 2010, still under warranty). I have searched a bit and I think it may be best to use disk imaging software for this task. However, I don't know of a good program. I have the following steps in mind: Get a 1 TB external HDD. Make an image of the existing 500 GB HDD and store it on the external disk. Install the new HDD, install a brand new Windows copy and then install the imaging software on it. Using the same software I used to make the image, restore the old HDD image onto the new one. However, I have some questions in mind. First, is this possible? Second, I live in a country where piracy is a big issue, and I am sure the support executive who comes to change the HDD will have a pirated copy. But I have genuine Windows 7 Pro and don't want to lose it. Now, Dell does not supply any OS disks, so I can't install it on the new HDD! If I follow the above steps, which version of Windows 7 will be retained? The one in the image (authentic) or the one on the new HDD (pirated)? I am ready to purchase good software for this task, and my budget is $50-60. Since the laptop is under warranty, the new HDD will be free. One last thing: I have created a Windows migration file whose size is 70 GB. Can it be used to move from Windows 7 Pro to Windows 7 Pro? (In case I get a genuine copy of Windows 7!) Any other method to save all the data? Thanks in advance.

    Read the article

  • Find slow network nodes between two data centers

    - by 2called-chaos
    I've got a problem syncing a large amount of data between two data centers. Both machines have a gigabit connection and are not fully occupied, but the fastest I am able to get is something between 6 and 10 Mbit - not acceptable! Yesterday I ran some traceroutes which indicated huge load on a LEVEL3 router, but the problem has existed for weeks now and the high response time is gone (20 ms instead of 300 ms). How can I trace this to find the actual slow node? I thought about a traceroute with bigger packets, but will this work? In addition, this problem might not be related to one of our servers, as there are much higher transmission rates to other servers and clients. Actually, office -> server is faster than server <-> server! Any idea is appreciated ;) Update: We actually use rsync over ssh to copy the files. As encryption tends to have more bottlenecks, I tried a plain HTTP request, but unfortunately it is just as slow. We have an SLA with one of the data centers. They said they have already tried to change the routing, because they say it is related to a cheap network that the traffic gets routed through. It is true that it routes through a "cheapnet", but only the other way around: our direction goes through LEVEL3 and the other way goes through lambdanet (which they say is not a good network). If I got it right (I'm intermediate at networking), they simulated a longer path to force routing through LEVEL3, and they announce LEVEL3 in the AS path. I basically want to know whether they're right or whether they're just trying to shirk their responsibility. The thing is that the problem exists in both directions (while taking different routes), so I think it is the responsibility of our hoster. And honestly, I don't believe there is a DC-to-DC connection which can only handle 600 KB/s - 1.5 MB/s for weeks! The question is how to detect WHERE this bottleneck is.
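
    One way to separate protocol overhead from raw path capacity is a plain TCP throughput probe between the two machines. Below is a minimal sketch (port and transfer size are arbitrary choices): run it with no arguments on the receiving DC and with the receiver's host name on the sending DC. Running it while watching mtr toward the same host can show whether the ceiling coincides with loss at a specific hop.

        import socket, sys, time

        CHUNK = 64 * 1024          # send/receive buffer size
        TOTAL = 512 * 1024 * 1024  # bytes to send (512 MB)
        PORT = 5201                # arbitrary test port

        def receiver():
            srv = socket.socket()
            srv.bind(("0.0.0.0", PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            received = 0
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            print("received %.1f MB" % (received / 1e6))

        def sender(host):
            sock = socket.create_connection((host, PORT))
            payload = b"\0" * CHUNK
            start, sent = time.time(), 0
            while sent < TOTAL:
                sock.sendall(payload)
                sent += CHUNK
            sock.close()
            print("%.1f Mbit/s" % (sent * 8 / (time.time() - start) / 1e6))

        if __name__ == "__main__":
            receiver() if len(sys.argv) == 1 else sender(sys.argv[1])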

    Read the article

  • Syncing two sheets, while being able to hide different data

    - by Joshua
    I'm pretty new to Excel, so please bear with me. I have created a spreadsheet to organize gear by serial number and by who has it. This list gets updated multiple times daily, as gear shuffles regularly. I have gear that is assigned and unassigned. On the main sheet I have all the data, organized the way I want it. What I'm trying to do is duplicate this sheet, so that both sheets automatically keep the same data at all times, but on the first sheet I can hide all the unassigned gear, view only the assigned gear, and then be able to narrow it down into groups using the hide function heavily. On the second sheet I want to be able to hide all of the assigned gear, plus all the columns of gear that have no unassigned items. The end result will be that as gear is moved between individuals or is unassigned entirely, I make the adjustment on one sheet and the data stays the same on both, but the way I view that same data is different on each. If I'm making no sense, just let me know and I'll try to explain more clearly. Thanks

    Read the article

  • SSH to remote host (edgemarc 4200 or 4500 series routers) and pull arp data

    - by MaQleod
    I've been trying to think of a method to do this for days, but have not come up with anything yet. Ideally, this is what I'm looking to do: from a Windows XP machine, I need to open an SSH connection to a remote host, send the arp command, and pull the text results of the command back for use on the client. I will need to parse this data and preferably produce a 2D array of IPs and MAC addresses. There will be no shared keys; this is all done with a username and password that will always be different. They will need to be fed into the command via variables pulled from a database by an AutoIt script, based on the WAN IP of the remote host. Now, the actual parsing of the data and creation of the array will be easy if I can just get the text of the ARP table. Is there any way to SSH to a remote host, run a command and return the data from that command to the client in a batch script or Perl script? (It is OK if it writes the text to a file; I can read it out of the file later. I just need it to get to the client.)
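
    Not batch or Perl, but as an illustration of the round trip, here is a minimal sketch using Python's paramiko SSH library. The host, credentials and the assumed "IP MAC" column layout of the router's arp output are all placeholders; the parsing would need adjusting to the EdgeMarc's real output.

        import paramiko

        def fetch_arp_table(host, username, password):
            client = paramiko.SSHClient()
            client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            client.connect(host, username=username, password=password)
            stdin, stdout, stderr = client.exec_command("arp")
            lines = stdout.read().decode("utf-8", "replace").splitlines()
            client.close()

            # Build (IP, MAC) pairs; assumes one entry per line with the IP in
            # one column and the MAC in the next, e.g. "10.0.0.1 00:11:22:33:44:55".
            table = []
            for line in lines:
                parts = line.split()
                if len(parts) >= 2 and parts[0].count(".") == 3:
                    table.append((parts[0], parts[1]))
            return table

        print(fetch_arp_table("203.0.113.10", "admin", "secret"))  # placeholder values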

    Read the article

  • Can MySQL use multiple data directories on different physical storage devices

    - by sirlark
    I am running MySQL with its data dir on a 128 GB SSD. I am dealing with large datasets (~20 GB) that are loaded and processed weekly, each stored in a separate DB for the purposes of time-point comparisons. Putting all the data into a single database is infeasible because performance on such large databases is already a problem. However, I cannot keep more than 6 datasets on the SSD at a time. Right now I am manually dumping the oldest to a much larger 2 TB spinning disk every week, and dropping the database to make space for the new one. But if I need one of the 'archived' databases (a semi-regular occurrence) I have to drop a current one (after dumping it), reload the archived one, do what I need to, then reverse the whole procedure. Is there a way to configure MySQL to use multiple data directories, say one on the SSD and one on the 2 TB spinning disk, and 'merge' them transparently? If I could do this, then archiving would no longer mean "moved out of the database entirely" but instead "moved onto the slow physical device". The time taken to run my queries against a spinning disk would be less than that taken to completely dump, drop, load, drop and reload two entire databases, so this is a win. I thought of using something like unionfs, but I can't think of a way to control which database gets stored on which physical drive, because it merges at the directory level (from what I understand), so I'm still stuck with needing multiple directories. Any help appreciated; thanks in advance.
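
    For what it's worth, one workaround people use, sketched below with illustrative paths: since each database is a subdirectory of the data directory, a single database can be moved to the big disk with a symlink left behind. This is an assumption-laden sketch, not supported MySQL configuration - it presumes a Linux host, mysqld stopped during the move, per-database files (MyISAM or file-per-table InnoDB, not the shared tablespace), and that AppArmor/SELinux is taught the new path.

        import os, shutil

        DATADIR = "/var/lib/mysql"       # assumed SSD data directory
        ARCHIVE = "/mnt/spinning/mysql"  # assumed directory on the 2 TB disk

        def archive_database(name):
            """Move one database's directory to the slow disk and symlink it back.

            Run only while mysqld is stopped."""
            src = os.path.join(DATADIR, name)
            dst = os.path.join(ARCHIVE, name)
            shutil.move(src, dst)  # copies across filesystems, then removes the source
            os.symlink(dst, src)   # MySQL follows the symlink transparently

        archive_database("dataset_2012_w07")  # illustrative database name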

    Read the article

  • Dropped WD External Harddisk, now it's shown as "Not initialized"

    - by Phelios
    So, the WD My Passport external hard disk was dropped, and after that the computer is unable to read it anymore. I was hoping I could just find another enclosure to test whether the hard disk is still readable, but it looks like the drive itself is not a normal SATA or PATA drive; I think it's modified, so I can't find another enclosure to try it in. In the computer I can still see the drive in Disk Management, but it's shown as uninitialized, with no size and no drive letter. I've also tried a couple of recovery tools. Some can't detect it at all; there is one (the Find and Mount software) that can detect it but shows 0 size. None of them can recover the data. WD is willing to replace it with a new one, but I still need to recover the data. Is there any way I can recover the data? UPDATE: I tried initializing it from the Windows Disk Manager, but it gives the error "The request could not be performed because of an I/O device error."

    Read the article

  • SQL SERVER - What is MDS? Master Data Services in Microsoft SQL Server 2008 R2

    What is MDS? Master Data Services helps enterprises standardize the data people rely on to make critical business decisions. With Master Data Services, IT organizations can centrally manage critical data assets company-wide and across diverse systems, enable more people to securely manage master data directly, and ensure the integrity of information [...]

    Read the article

  • Oracle 11g Data Guard over a WAN

    - by Dave LeJeune
    Hi - We are in the process of looking at using Oracle's Data Guard to replicate our 11g instance from a colo facility in Washington, DC to Chicago. To give some basics: we have approximately 25 TB of storage and a healthy transaction rate in the 1-2K/sec range. Also, because we are processing data in real time, we have a 24x7x365 requirement for processing data. We don't have any respites as far as volume goes, except for system upgrades (once every few months) where we take the system offline, but then of course experience a spike in transactions when we bring the system back online. Ideally we would want the second instance in the DG configuration semi-online in a read-only fashion for reports, etc. We evaluated DG in 10g and were not overly impressed, and research seemed to show that earlier versions had issues with replication over a WAN, but I have heard good things about the modifications the product has gone through with 11g. Can anyone confirm an instance of this size and transaction rate being replicated over a WAN, and if so, what is the general latency? Any information or experiences with a DG implementation of this size and scope would really be helpful (or larger - I also realize we are still relatively small compared to many others out there). Many thanks in advance.

    Read the article

  • Where in the filesystem should I store shared data?

    - by misterben
    Where in the unix filesystem is the conventional location to save non-user-specific data, for example data shared via nfs or ftp, or backups? I could obviously create and use any arbitrary folder (such as /home/shared, /data or /var/data), but I'm really wondering whether there are any "best" or "common" practice guidelines. The Filesystem Hierarchy Standard doesn't specify a location for shared data. For backups, I tend to use /var/backups, but since several cron jobs write to it, should it really be left for their use?

    Read the article

  • Objective C - displaying data in NSTextView

    - by Leo
    Hi, I'm having difficulty displaying data in a TextView in iPhone programming. I'm analyzing incoming audio data (from the microphone). To do that, I create an object "analyzer" from my SignalAnalyzer class, which performs analysis of the incoming data. What I would like to do is display each new piece of incoming data in a TextView in real time. So when I push a button, I create the "analyzer" object, which analyzes the incoming data. Each time there is new data, I need to display it on the screen in a TextView. My problem is that I'm getting an error because (I think) I'm trying to send a message to the parent class (the one taking care of displaying stuff in my TextView: it has a TextView instance variable linked in Interface Builder). What should I do to tell my parent class what it needs to display? Or how should I design my classes so that something is displayed automatically? Thank you for your help. PS: Here is my error:

        2010-04-19 14:59:39.360 MyApp[1421:5003] void WebThreadLockFromAnyThread(), 0x14a890: Obtaining the web lock from a thread other than the main thread or the web thread. UIKit should not be called from a secondary thread.
        2010-04-19 14:59:39.369 MyApp[1421:5003] bool _WebTryThreadLock(bool), 0x14a890: Tried to obtain the web lock from a thread other than the main thread or the web thread. This may be a result of calling to UIKit from a secondary thread. Crashing now...
        Program received signal: "EXC_BAD_ACCESS".

    Read the article

  • C++ smart pointers: sharing pointers vs. sharing data

    - by Eli Bendersky
    In this insightful article, one of the Qt programmers tries to explain the different kinds of smart pointers Qt implements. In the beginning, he makes a distinction between sharing data and sharing the pointers themselves:

        First, let's get one thing straight: there's a difference between sharing pointers and sharing data. When you share pointers, the value of the pointer and its lifetime is protected by the smart pointer class. In other words, the pointer is the invariant. However, the object that the pointer is pointing to is completely outside its control. We don't know if the object is copiable or not, if it's assignable or not. Now, sharing of data involves the smart pointer class knowing something about the data being shared. In fact, the whole point is that the data is being shared and we don't care how. The fact that pointers are being used to share the data is irrelevant at this point. For example, you don't really care how Qt tool classes are implicitly shared, do you? What matters to you is that they are shared (thus reducing memory consumption) and that they work as if they weren't.

    Frankly, I just don't understand this explanation. There was a clarification plea in the article comments, but I didn't find the author's explanation sufficient. If you do understand this, please explain. What is this distinction, and how do other shared pointer classes (i.e. from Boost or the new C++ standard) fit into this taxonomy? Thanks in advance
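
    As a toy illustration of the distinction (in Python purely as pseudocode - Python names already behave like shared pointers, and CowList is a made-up stand-in for a Qt-style implicitly shared container):

        import copy

        # Sharing the pointer: two names refer to one object. The machinery
        # only guarantees the object stays alive; whatever you do through one
        # name is fully visible through the other.
        a = [1, 2, 3]
        b = a
        b.append(4)
        assert a == [1, 2, 3, 4]

        # Sharing the data (implicit sharing / copy-on-write): readers share
        # one buffer, but a writer detaches its own copy first, so the class
        # behaves as if every copy were independent.
        class CowList:
            def __init__(self, items):
                self._data = items
                self._shared = False

            def clone(self):
                other = CowList(self._data)   # no copy yet - share the buffer
                self._shared = other._shared = True
                return other

            def append(self, item):
                if self._shared:              # detach before the first write
                    self._data = copy.copy(self._data)
                    self._shared = False
                self._data.append(item)

        x = CowList([1, 2, 3])
        y = x.clone()
        y.append(4)                           # y detaches; x is untouched
        assert x._data == [1, 2, 3]

    Roughly, shared_ptr corresponds to the first half, while Qt's implicitly shared value classes correspond to the second.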

    Read the article

  • Using Audio Queue Services to play PCM data over a socket connection

    - by Rohan
    I'm writing a remote desktop client for the iPhone and I'm trying to implement audio redirection. The client is connected to the server over a socket connection, and the server sends 32K chunks of PCM data at a time. I'm trying to use AQS to play the data, and it plays the first two seconds (one buffer's worth). However, since the next chunk of data hasn't come in over the socket yet, the next AudioQueueBuffer is empty. When the data comes in, I fill the next available buffer with the data and enqueue it with AudioQueueEnqueueBuffer. However, it never plays these buffers. Does the queue stop playing if there are no buffers in the queue, even if you later add a buffer? Here's the relevant part of the code:

        void wave_out_write(STREAM s, uint16 tick, uint8 index)
        {
            if (items_in_queue == NUM_BUFFERS) {
                return;
            }
            if (!playState.busy) {
                OSStatus status;
                status = AudioQueueNewOutput(&playState.dataFormat, AudioOutputCallback,
                                             &playState, CFRunLoopGetCurrent(), NULL, 0,
                                             &playState.queue);
                if (status == 0) {
                    for (int i = 0; i < NUM_BUFFERS; i++) {
                        AudioQueueAllocateBuffer(playState.queue, 40000, &playState.buffers[i]);
                    }
                    AudioQueueAddPropertyListener(playState.queue, kAudioQueueProperty_IsRunning,
                                                  MyAudioQueuePropertyListenerProc, &playState);
                    status = AudioQueueStart(playState.queue, NULL);
                    if (status == 0) {
                        playState.busy = True;
                    } else {
                        return;
                    }
                } else {
                    return;
                }
            }

            playState.buffers[queue_hi]->mAudioDataByteSize = s->size;
            memcpy(playState.buffers[queue_hi]->mAudioData, s->data, s->size);
            AudioQueueEnqueueBuffer(playState.queue, playState.buffers[queue_hi], 0, 0);

            queue_hi++;
            queue_hi = queue_hi % NUM_BUFFERS;
            items_in_queue++;
        }

        void AudioOutputCallback(void* inUserData, AudioQueueRef outAQ, AudioQueueBufferRef outBuffer)
        {
            PlayState *playState = (PlayState *)inUserData;
            items_in_queue--;
        }

    Thanks!

    Read the article

  • Using jstl tags in a dynamically created div

    - by George
    I want to be able to show some data based on criteria the user enters in a text field. I can easily take this data, process the form post, and show the data on another page. However, I want to be able to do it all on the same page: they click the button, and a new div shows up with the information. This doesn't seem too complicated, but I want to use JSTL tags to format the data, like:

        <c:forEach items="${model.data}" var="d">
          <tr>
            <td><fmt:formatDate type="date" dateStyle="short" timeStyle="default" value="${d.reportDate}" /></td>
            <td><c:out value="${d.cardType}"/></td>
          </tr>
        </c:forEach>

    If JSTL tags are processed when the page loads, can I use them in this new div? Can I update it via a javascript function (using Prototype) to display the proper data? Will I be able to do the same thing if they change the criteria and click the submit button again?

    Read the article

  • WCF for a shared data access

    - by Audrius
    Hi all, I have a little experience with WCF and would like to get your opinion/suggestion on how the following problem can be solved: A web service needs to be accessible from multiple clients simultaneously, and the service needs to return a result from a shared data set. The concrete project I'm working on has to store a list of IP addresses/ranges. This list will be queried by a bunch of web servers for validation purposes, and we're speaking of a couple of thousand or more queries per minute. My initial draft approach was to use a Windows service as a WCF host, with the service contract implemented by a class that is decorated with ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple) and that has a list object and custom locking for accessing it. So basically I have a WCF service singleton with a list = shared data - multiple clients. What I do not like about it is that the data and communication layers are merged into one, and performance-wise this doesn't feel "right". What I really, really want is a Windows service running an instance of an IP-list-holding container class object, a second service running the WCF service contract implementation, and a way for the latter to query the former in a nice way with minimal blocking. Using another WCF channel would not really take me far from the initial draft implementation, or would it? What approach would you take? The project is still in a very early stage, so a complete design re-do is not out of the question. All ideas are appreciated. Thanks! UPDATE: The data set will be changed dynamically. The web service will have a separate method to add an IP or IP range, and on top of that there will be a scheduled task that triggers data cleanup every 10-15 minutes according to some rules. UPDATE 2: a separate benchmark project will be kicked off that should use MSSQL as a data backend (instead of an in-memory list).

    Read the article

  • Sending data through POST request from a node.js server to a node.js server

    - by Masiar
    I'm trying to send data through a POST request from a node.js server to another node.js server. What I do in the "client" node.js is the following:

        var options = {
          host: 'my.url',
          port: 80,
          path: '/login',
          method: 'POST'
        };

        var req = http.request(options, function(res){
          console.log('status: ' + res.statusCode);
          console.log('headers: ' + JSON.stringify(res.headers));
          res.setEncoding('utf8');
          res.on('data', function(chunk){
            console.log("body: " + chunk);
          });
        });

        req.on('error', function(e) {
          console.log('problem with request: ' + e.message);
        });

        // write data to request body
        req.write('data\n');
        req.write('data\n');
        req.end();

    This chunk is taken more or less from the node.js website, so it should be correct. The only thing I don't see is how to include the username and password in the options variable to actually log in. This is how I deal with the data in the server node.js (I use express):

        app.post('/login', function(req, res){
          var user = {};
          user.username = req.body.username;
          user.password = req.body.password;
          ...
        });

    How can I add those username and password fields to the options variable to have it log in? Thanks

    Read the article

  • ASIHTTPRequest POST splits up header + data?

    - by chris.o.
    Hi, I am using ASIHTTPRequest to POST data to a remote server on iPhone 4.2.1. When I make the following POST request to our server, I get a 400 response (I removed the IP address):

        NSString *dataString = @"data1=00&data2=00&data3=00";
        ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:[NSURL URLWithString:[NSString stringWithFormat:<ipremoved>]]];
        [request appendPostData:[dataString dataUsingEncoding:NSUTF8StringEncoding]];
        [request setRequestMethod:@"POST"];
        [request addRequestHeader:@"User-Agent" value:@"iphone app"];
        [request addRequestHeader:@"Content-Type" value:@"application/octet-stream"];
        request.delegate = self;
        [request startAsynchronous];

    When I send the same data using curl, I receive a 200 response:

        curl -H "User-Agent: iphone app" -H "Accept:" -H "Content-Type:application/octet-stream" --data-ascii "data1=00&data2=00&data3=00" --location <ipremoved> -v

    My colleague is stating that, in the failure case, the ASIHTTPRequest requires two socket reads: one for the header and one for the data. Apparently the server is not presently equipped to parse this correctly, so I am trying to work around it. If I set up a proxy between the iPhone and my Mac and run Paros (to see the packets), the problem goes away: Paros combines the header and data so that they are acquired by the server in a single socket read. I've tried a few things suggested in other posts, including disabling persistent connections, but I am not having any luck. I've also tried an ASIHTTPFormRequest, but the server does not like the generated data format. Any suggestions would be appreciated. Thanks.

    Read the article

  • Trend analysis using iterative value increments

    - by Dave Jarvis
    We have configured iReport to generate a graph in which the real data points are in blue and the trend line is in green. The problems include: too many data points for the trend line, and the trend line does not follow a Bezier curve (spline). The source of the problem is the incrementer class. The incrementer is provided with the data points iteratively; there does not appear to be a way to get at the whole set of data. The code that calculates the trend line looks as follows:

        import java.math.BigDecimal;
        import net.sf.jasperreports.engine.fill.*;

        /**
         * Used by an iReport variable to increment its average.
         */
        public class MovingAverageIncrementer implements JRIncrementer {
            private BigDecimal average;
            private int incr = 0;

            /**
             * Instantiated by the MovingAverageIncrementerFactory class.
             */
            public MovingAverageIncrementer() {
            }

            /**
             * Returns the newly incremented value, which is calculated by averaging
             * the previous value from the previous call to this method.
             *
             * @param jrFillVariable Unused.
             * @param object New data point to average.
             * @param abstractValueProvider Unused.
             * @return The newly incremented value.
             */
            public Object increment( JRFillVariable jrFillVariable, Object object,
                    AbstractValueProvider abstractValueProvider ) {
                BigDecimal value = new BigDecimal( ( ( Number )object ).doubleValue() );

                // Average every 10 data points
                //
                if( incr % 10 == 0 ) {
                    setAverage( ( value.add( getAverage() ).doubleValue() / 2.0 ) );
                }

                incr++;

                return getAverage();
            }

            /**
             * Changes the value that is the moving average.
             * @param average The new moving average value.
             */
            private void setAverage( BigDecimal average ) {
                this.average = average;
            }

            /**
             * Returns the current moving average.
             * @return Value used for plotting on a report.
             */
            protected BigDecimal getAverage() {
                if( this.average == null ) {
                    this.average = new BigDecimal( 0 );
                }
                return this.average;
            }

            /** Helper method. */
            private void setAverage( double d ) {
                setAverage( new BigDecimal( d ) );
            }
        }

    How would you create a smoother and more accurate representation of the trend line?
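
    For contrast, the usual fixes are a bounded sliding window or an exponentially weighted average, rather than halving the new value against whatever the running average happens to be. A language-agnostic sketch of both (in Python, with made-up data points):

        from collections import deque

        def windowed_average(points, width=10):
            """Simple moving average: mean of the last `width` points."""
            window, out = deque(maxlen=width), []
            for p in points:
                window.append(p)
                out.append(sum(window) / len(window))
            return out

        def exponential_average(points, alpha=0.2):
            """Exponential moving average: smaller alpha gives a smoother line."""
            avg, out = None, []
            for p in points:
                avg = p if avg is None else alpha * p + (1 - alpha) * avg
                out.append(avg)
            return out

        data = [3, 4, 9, 2, 8, 7, 1, 6, 5, 9]   # hypothetical data points
        print(windowed_average(data, width=3))
        print(exponential_average(data, alpha=0.3))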

    Read the article

  • search data from FileReader in Java

    - by maya
    Hi, I'm new to Java. How can I read and search data from a text file and then display the data in a TextArea or JTable? For example, I have a text file containing data and I need to display this data in the textarea after I click a button. I have used FileReader (t1, t2 and tp are attributes in the file):

        import java.io.FileReader;
        import java.io.IOException;
        import java.util.Scanner;

        String t1, t2, tp;
        Ffile f1 = new Ffile();
        FileReader fin = new FileReader("test2.txt");
        Scanner src = new Scanner(fin);
        while (src.hasNext()) {
            t1 = src.next();
            textarea.setText(t1);
            t2 = src.next();
            textarea.setText(t2);
            tp = src.next();
            textarea.setText(tp);
            f1.insert(t1, t2, tp);
        }
        fin.close();

    I have also used an input stream:

        DataInputStream dis = null;
        String dbRecord = null;
        try {
            File f = new File("text2.text");
            FileInputStream fis = new FileInputStream(f);
            BufferedInputStream bis = new BufferedInputStream(fis);
            dis = new DataInputStream(bis);
            while ((dbRecord = dis.readLine()) != null) {
                StringTokenizer st = new StringTokenizer(dbRecord, ":");
                String t1 = st.nextToken();
                String t2 = st.nextToken();
                String tp = st.nextToken();
                textarea.setText(textarea.getText() + t1);
                textarea.setText(textarea.getText() + t2);
                textarea.setText(textarea.getText() + tp);
            }
        } catch (IOException e) {
            // catch io errors from FileInputStream or readLine()
            System.out.println("Uh oh, got an IOException error: " + e.getMessage());
        } finally {
        }

    but neither of them works, so please can anyone help me. I want to know how to read data and also search it in the file, and I need to display the data in the textarea. Thanks in advance.

    Read the article

  • Data in two databases, eager spool resulting in query

    - by Valkyrie
    I have two databases in SQL2k5: one that holds a large amount of static data (SQL Database 1) (never updated but frequently inserted into) and one that holds relational data (SQL Database 2) related to the static data. They're separated mainly because of corporate guidelines and business requirements: assume for the following problem that combining them is not practical. There are places in SQLDB2 where PKs in SQLDB1 are referenced; triggers control the referential integrity, since cross-database relationships are troublesome in SQL Server. BUT, because of the large amount of data in SQLDB1, I'm getting eager spools on queries that join from the Id in SQLDB2 that references the data in SQLDB1. (With me so far? Maybe an example will help:)

        SELECT t.Id, t.Name, t2.Company
        FROM SQLDB1.table t
        INNER JOIN SQLDB2.table t2 ON t.Id = t2.FKId

    This query results in an eager spool that's 84% of the load of the query; the table in SQLDB1 has 35M rows, so it's completely choking this query. I can't create a view on the table in SQLDB1 and use that as my FK/index; it doesn't want me to create a constraint based on a view. Anyone have any idea how I can fix this huge bottleneck? (Short of putting the static data in the first db: believe me, I've argued that one until I'm blue in the face, to no avail.) Thanks! valkyrie. Edit: I also can't create an indexed view, because you can't put schemabinding on a view that references a table outside the database where the view resides. Dang it.

    Read the article
