Search Results

Search found 2906 results on 117 pages for 'reporting'.

Page 82/117

  • How to match a regular expression like (%i1) in Python pexpect

    - by mike
    I want to use Maxima from Python using pexpect. Whenever Maxima starts, it prints a banner of this form: $ maxima Maxima 5.27.0 http://maxima.sourceforge.net using Lisp SBCL 1.0.57-1.fc17 Distributed under the GNU Public License. See the file COPYING. Dedicated to the memory of William Schelter. The function bug_report() provides bug reporting information. (%i1) I would like to start up pexpect like so: import pexpect cmd = 'maxima' child = pexpect.spawn(cmd) child.expect (' match all that stuff up to and including (%i1)') child.sendline ('integrate(sin(x),x)') child.expect( match (%o1 ) ) print child.before How do I match the starting banner up to the prompt (%i1)? Also, Maxima increments the (%i1) numbers by one as the session goes along, so the next expect would be: child.expect ('match (%i2)') child.sendline ('integrate(sin(x),x)') child.expect( match (%o2 ) ) print child.before How do I match the (incrementing) integers?
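
    One way this is usually handled, sketched below (a hedged example, assuming a reasonably recent pexpect; the integrate command is taken from the question): expect() accepts a regular expression, so a pattern using \d+ matches (%i1), (%i2) and every later prompt without any per-iteration changes.

        # Minimal sketch: swallow the banner up to the first prompt, then reuse
        # the same regex for every later prompt, since \d+ covers the number.
        import pexpect

        PROMPT = r'\(%i\d+\) '          # matches "(%i1) ", "(%i2) ", ...

        child = pexpect.spawn('maxima', encoding='utf-8', timeout=30)
        child.expect(PROMPT)            # banner plus "(%i1) " consumed here

        child.sendline('integrate(sin(x),x);')
        child.expect(PROMPT)            # blocks until the next prompt, "(%i2) "
        print(child.before)             # echoed input plus the "(%o1) ..." line

        child.sendline('quit();')
        child.expect(pexpect.EOF)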

    Read the article

  • Looking for some thoughts on an image printing app

    - by Alex
    Hey All, I'm looking for thoughts/advice. I have an upcoming project (all .NET) that will require the following: pull data once a day from an online service provider based on certain criteria; save the data locally for reference and reporting. The data that's pulled will be used to create gift cards. So after the data is loaded, a process will run to generate "virtual cards" and send them to a network printer. Once printed, the system will update the local data, recording a successful or failed print. My initial thought was to create a Windows service to pull the data...but then I couldn't decide how I was going to put a "virtual card" together and get it to print. Then I considered doing it as a WPF app. I figure that will give me access to the graphics and printing ability. Maybe neither of these is the right direction....Any ideas or thoughts would be greatly appreciated. Alex

    Read the article

  • Rendering PDFs from a database inside MVC views?

    - by Mohammad Sepahvand
    I was wondering if it's possible to do this without using 3rd party components in MVC 3. (I am open to free components though.) There are a couple of links out there, but they seem to be mostly concerned with reporting, and the other code samples that do claim to do this sort of thing don't seem to compile. I'm not having any trouble saving and retrieving the PDFs to and from my database, but when I return the PDF as a File or a FileStreamResult the user is prompted with a download. A more desirable approach would be to actually render the PDFs inside the browser. I've had a look at iTextSharp; it does the job to an extent, but it's not a complete solution. For example, it will display the PDF inside the view if and only if the client has Adobe Reader installed, otherwise it prompts for a download. So technically, I'm mostly looking for a PDF viewer. Any ideas?

    Read the article

  • Excel VBA: Sum invoice by client id with copying result to new worksheet

    - by Melkior
    Hi, I have a strange reporting problem: I have numerous clients with different issued invoices, and trouble starts when there are both negative and positive invoices. Column A holds the client's unique ID, column B the invoice number and column C the invoice amount:

        A          | B          | C
        0010019991 | 1800149471 | 162.00
        0010019991 | 1800136388 | 162.00
        0010019991 | 1600008004 | -36.00
        0010021791 | 1800132148 | 162.00
        0010021791 | 1800145436 | 162.00
        0010021791 | 1600007737 | -12.00
        0014066147 | 1800119068 | 1,684.80
        0014066147 | 1800123702 | 1,684.80
        0014066147 | 1600007980 | -1,300.80
        0014066147 | 1600007719 | -1,286.40

    I need to remove the rows with negative invoices in such a way that their amounts are netted against the client's remaining positive invoices, so that the final result would look like:

        A          | B          | C
        0010019991 | 1800149471 | 126.00
        0010019991 | 1800136388 | 162.00
        0010021791 | 1800132148 | 150.00
        0010021791 | 1800145436 | 162.00
        0014066147 | 1800123702 | 782.40

    Read the article

  • Bad temperature sensors on Foxconn motherboard?

    - by Gawain
    I have a system with a Foxconn V400 series motherboard and AMD Athlon 3000+ processor. Ever since I got it a few years ago the fans (particularly the CPU fan) have been really loud. So recently I installed SpeedFan to see why they were running so fast. SpeedFan reported the CPU temperature to be 32C, and one motherboard sensor at about 26C. But the other two motherboard sensors were reporting 78C and 64C respectively. Naturally the fans were both maxed out because of this, with the CPU fan at 5800rpm and the case fan at 2400rpm. I opened the case and everything inside was literally cool to the touch, with the exception of the CPU heatsink which was slightly warm, but nowhere near 78C. It seems like the temperature sensors are either defective or being read incorrectly. Is there some way I can decrease my fan noise without risking damage to my processor? Some way to ignore those two temp sensors? Any help would be greatly appreciated.

    Read the article

  • Data Integration/EAI Project Lessons Learned

    - by Greg Harman
    Have you worked on a significant data or application integration project? I'm interested in hearing what worked for you and what didn't and how that affected the project both during and after implementation (i.e. during ongoing operation, maintenance and expansion). In addition to these lessons learned, please describe the project by including a quick overview of: The data sources and targets. Specifics are not necessary, but I'd like to know general technology categories e.g. RDBMS table, application accessed via a proprietary socket protocol, web service, reporting tool. The overall architecture of the project as related to data flows. Different human roles in the project (was this all done by one engineer? Did it include analysts with a particular expertise?) Any third-party products utilized, commercial or open source.

    Read the article

  • Hard to append a table with many records into another without generating duplicates

    - by Bill Mudry
    I may seem to be a bit wordy at first, but I hope it will make it easier for all of you to understand what I am doing in the first place. I have an uncommon but enjoyable activity of collecting as many species of wood from around the world as I can (over 2,900 so far). Ok, that is the real world. Meanwhile I have spent over 8 years compiling over 5.8 MB of text data on all the woods of the world. That got so large that learning some basic PHP and MySQL was most welcome so I could build a new database-driven home for all this research. I am still slow at it but getting there. The original premise was to find evidence of as many species of wood in the world as I can. The more names identified, the more successful the project. I have named the project TAXA for ease of conversation (short for Taxonomy). You are most welcome to take a look at what I have so far at www.prowebcanada.com/taxa. It is 95% dynamically driven. So far I am reporting about 6,500 botanical wood names and, as said above, the more I can report, the more successful the project is. I have a file of all the woods in the second largest wood collection in the world, the Tervuren wood collection in the Netherlands, with over 11,300 wood names even after cleaning out all duplicates. That is almost twice the number I am reporting now, so porting all the new wood names from Tervuren to the 'species' table where I keep the reported data would be a major and desirable advancement in the project. At one point I was able to add all the Tervuren records to the species table, but over 3,000 duplicates also formed. They were not in the Tervuren file in the first place but represent the same wood names common to both files. It is common sense that there would be woods common to both that, when merged, would create new duplicates. At one point, and with the help of others from another forum, I may very well have finally got the proper SQL statement. When I ran it, though, the system said (semi-amusingly at first) that it had gone away! After looking up on the Net what could have done this, one reason is that the MySQL timeout lapses, probably because of the large size of the files I am running. I am running this on a rented account on GoDaddy, so I cannot go about trying to adjust any config file. For safety, I copied the tervuren.sql file as tervuren_target.sql and the species.sql file as species_master.sql to use as working files, just to make sure I protect the original files from destruction or damage. Later I can rename species_master back to just species.sql once I am happy all worked well. The species file has about 18 columns in it, but only 5 columns match the columns in the Tervuren file (name for name, and collation also). The rest of the columns are just along for the ride, so to speak. The common key in both is the 'species_name' column. I am not sure it is at all proper to call one a primary key and the other a foreign key, since there really is no relational connection between them. One is just more data for the other and can disappear afterwards, never to be referred to by the working code in the application. I have been very surprised and flabbergasted at how hard it can be to append records from one large table into another (with the same column names plus others) without generating NEW duplicates in the first place.
    Watch out thinking that a SELECT DISTINCT statement may do the job, because absolutely NO records in the species table must get destroyed in the process, and there is no way (well, that I know of) to tell the DISTINCT clause this. Yes, the original 'species' table has duplicates in it even before all this but, trust me, they have to be removed the long hard way, manually record by record, or I will lose precious information. It is more important to just make sure no NEW duplicates form while bringing the new names in tervuren_target.species_name into species.species_name. I am hoping and thinking that a straight SQL solution should work, except for that nasty timeout. How do I get past that? Could it mean that I may have to turn to a PHP plus SQL method? Or would I have to break up the Tervuren file into a few smaller ones and run them independently (hope not)? So far, what seems like it should be easy has proven to be unexpectedly tricky. I appreciate any help you can give, but start from the assumption that this may be harder to do right than it may seem on the surface. By the way, I am running a quad-core, 64-bit system with Windows 7, so at least I have some fairly hefty power on the client end. I have a direct ethernet cable feeding a cable connection to the Internet. Once I get an algorithm and code working for this, I also have many other lists to process that could make the 'species' table grow even more. It could be equivalent to (ahem) lighting a rocket under my project (especially compared to doing this record by record manually)! This is my first time in this forum, so I do not know how I will receive any replies. Do I have to come back here periodically, or are replies emailed out also? It would be great if you CC'd copies to me at billmudry at rogers.com :-) Many thanks for your patience and help, Bill Mudry, Mississauga, Ontario, Canada (next to Toronto).
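
    For what it's worth, here is a rough sketch of the usual non-duplicating append, written as one server-side statement (the table and column names follow the post; the pymysql connection details and the choice to insert only the name column are illustrative assumptions, not part of the original):

        # Sketch only: copy over names from tervuren_target that are not already
        # in species_master, leaving every existing species row untouched.
        import pymysql

        conn = pymysql.connect(host='localhost', user='taxa_user',
                               password='secret', database='taxa')
        try:
            with conn.cursor() as cur:
                cur.execute("""
                    INSERT INTO species_master (species_name)
                    SELECT t.species_name
                    FROM tervuren_target AS t
                    LEFT JOIN species_master AS s
                           ON s.species_name = t.species_name
                    WHERE s.species_name IS NULL
                """)
            conn.commit()
        finally:
            conn.close()

    Because the join and insert run entirely on the server, nothing is pulled over the wire; if the shared-hosting timeout still bites, the same statement can be run in slices (for example adding WHERE t.species_name LIKE 'A%', then 'B%', and so on) so each run stays short.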

    Read the article

  • If you have an application localized in pt-br and pt-pt, what language should you choose if the system reports only pt?

    - by Sorin Sbarnea
    If you have an application localized in pt-br and pt-pt, what language should you choose if the system reports only the pt code (generic Portuguese)? This question is independent of the nature of the application: desktop, mobile or browser based. Let's assume you are not able to get region information from another source and you have to choose one language as the default. The question applies to more cases as well, including: pt-pt and pt-br; en-us and en-gb; fr-fr and fr-ca; zh-cn, zh-tw, .... In fact, in this last case I know that zh can be used as the predominant language for Simplified Chinese, where the full code is zh-hans; for zh-tw, zh-hant-tw, zh-hk and zh-mo the proper (canonical) code should be zh-hant. In fact the question can be extended to: how do I determine the predominant language for a specified meta-language? I need a solution that covers at least Portuguese, English and French.
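
    Which variant "wins" is ultimately a product decision, but the mechanism is usually just a lookup table of predominant variants consulted when only the bare language code comes back. A small sketch (the specific regional choices below are assumptions for illustration):

        # Illustrative mapping only -- the "predominant" region is a judgement call.
        PREDOMINANT = {
            'pt': 'pt-br',      # assumption: larger speaker base
            'en': 'en-us',
            'fr': 'fr-fr',
            'zh': 'zh-cn',      # bare "zh" is commonly treated as Simplified (zh-hans)
        }

        def pick_locale(reported, available):
            """Resolve a reported code such as 'pt' against the locales we ship."""
            reported = reported.lower()
            if reported in available:
                return reported
            fallback = PREDOMINANT.get(reported.split('-')[0])
            if fallback in available:
                return fallback
            return sorted(available)[0]     # last resort: any shipped locale

        print(pick_locale('pt', {'pt-br', 'pt-pt'}))   # -> 'pt-br' under these assumptions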

    Read the article

  • Texture allocations being doubled in iPhone OpenGL ES

    - by Kyle
    The couple of lines below are called 15 times during initialization. The tx-size is reported as 512 every time, so this will allocate a 1 MB image in memory 15 times, for a total of 15 MB used. However, I noticed Instruments is reporting a total of 31 allocations! (15*2)+1 glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tx-size, tx-size, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData); free(spriteData); Likewise, in another area of my program that allocates six 256x256x4 (256 kB) textures, I see 13 sitting there: (6*2)+1. Anyone know what's going on here? It seems like awful memory management, and I really hope it's my fault. Just to let everyone know, I'm on the simulator.

    Read the article

  • PCAP Web Service Usage Logging for Dummies

    - by nick
    I've been assigned the task (for work) of working with PCAP for the first time in my life. I've read through the tutorials and have hacked together a really simple capture program which, it turns out, isn't that hard. However, making use of the data is more difficult. My goal is to log incoming and outgoing web service requests. Are there libraries (C or C++) that stitch together the packets from PCAP that would make reporting on this simple? Barring that, is there something short of reading all of the RFCs from soup to nuts that will allow me to have an "ah-ha!" moment (all of the tutorials seem to stop at the raw packet level, which isn't useful for me)? It looks like Perl has a library that may do this, and I may eventually attempt to reverse-engineer it from Perl. Nota bene: Web server logs aren't acceptable here, as I will be intercepting on a routing device. If I had access to those I'd be done and happy...I don't.
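
    Not the C/C++ route asked about, but to give a sense of what the stitched-together result looks like, here is a rough Python sketch using the dpkt library against a saved capture file (file name assumed). It only catches requests that fit in a single TCP segment, so real web-service traffic would still need proper stream reassembly:

        # Rough sketch: walk a pcap file and print the HTTP request lines found
        # in individual TCP segments. No reassembly; Ethernet/IPv4 assumed.
        import dpkt

        with open('capture.pcap', 'rb') as f:
            for ts, buf in dpkt.pcap.Reader(f):
                eth = dpkt.ethernet.Ethernet(buf)
                ip = eth.data
                if not isinstance(ip, dpkt.ip.IP):
                    continue
                tcp = ip.data
                if not isinstance(tcp, dpkt.tcp.TCP) or not tcp.data:
                    continue
                try:
                    req = dpkt.http.Request(tcp.data)
                except dpkt.dpkt.UnpackError:
                    continue                     # not (a complete) HTTP request
                print(ts, req.method, req.uri)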

    Read the article

  • Subversion and project management web-based super tool. Like Team Foundation Server, but not TFS.

    - by Rob Stevenson-Leggett
    Hi, We're currently looking at an IT upgrade and I'm after recommendations for a tool which can do some or all of the following:
        SVN management (authz, web viewer, commit log, diff)
        Create template projects (1 click, e.g. create me a microsite with this name in SVN and give these people access)
        Reporting on code churn and time spent on tasks, on a per-project basis
        User story management
    Basically like Team Foundation Server, but one that integrates with SVN properly (the reason being that we have a wide range of skill sets and not everyone can use a TFS client). Is there a combination of Trac plugins plus something that can create Trac instances (a la Dreamhost's admin panel) that can achieve this? On a side note, does anyone have any experience of version-controlling designer-type files, e.g. PSDs, Illustrator files? Any advice at all appreciated. Cheers, Rob

    Read the article

  • MySQL Column Value Pivot

    - by manyxcxi
    I have a MySQL InnoDB table laid out like so: id (int), run_id (int), element_name (varchar), value (text), line_order, column_order:

        `MyDB`.`MyTable` (
            `id` bigint(20) NOT NULL,
            `run_id` int(11) NOT NULL,
            `element_name` varchar(255) NOT NULL,
            `value` text,
            `line_order` int(11) default NULL,
            `column_order` int(11) default NULL )

    It is used to store data generated by a Java program that used to output this in CSV format, hence the line_order and column_order. Let's say I have 2 entries (matching the table description): 1,1,'ELEMENT 1','A',0,0 and 2,1,'ELEMENT 2','B',0,1. I want to pivot this data in a view for reporting so that it looks more like the CSV would, where the output would look like this:

        | ELEMENT 1 | ELEMENT 2 |
        |     A     |     B     |

    The data coming in is extremely dynamic; it can be in any order, can be any of over 900 different elements, and the value could be anything. The Run ID ties them all together, and the line and column order basically tell me in what order the user wants that data to come back.
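
    A hedged sketch of the usual workaround for a dynamic pivot: build a conditional-aggregation query at run time from the distinct element names (MySQL has no native PIVOT). The demo uses sqlite3 only so it runs standalone; the generated SELECT has the same shape you would put behind a MySQL view or prepared statement:

        # Build one MAX(CASE WHEN ...) column per distinct element_name, then
        # group by run_id so each run becomes a single wide row.
        import sqlite3

        conn = sqlite3.connect(':memory:')
        conn.executescript("""
            CREATE TABLE MyTable (id INTEGER, run_id INTEGER, element_name TEXT,
                                  value TEXT, line_order INTEGER, column_order INTEGER);
            INSERT INTO MyTable VALUES (1, 1, 'ELEMENT 1', 'A', 0, 0),
                                       (2, 1, 'ELEMENT 2', 'B', 0, 1);
        """)

        names = [r[0] for r in conn.execute(
            "SELECT DISTINCT element_name FROM MyTable ORDER BY element_name")]
        # The element names come from the table itself here; in production they
        # should still be escaped or whitelisted before being pasted into SQL.
        cols = ", ".join(
            "MAX(CASE WHEN element_name = '{0}' THEN value END) AS \"{0}\"".format(n)
            for n in names)
        sql = "SELECT run_id, {0} FROM MyTable GROUP BY run_id".format(cols)

        for row in conn.execute(sql):
            print(row)        # (1, 'A', 'B'): one row per run, one column per element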

    Read the article

  • WCF Windows service permissions problem

    - by Elad
    I have created a WCF service and hosted it as a Windows Service. To install the project I created an installation project (as described here). The tutorial says to set the serviceProcessInstaller1 Account property in ProjectInstaller.cs to Network Service. With this setting the service did not start on the server; when I tried to start the process manually, it immediately returned to the stopped state. After I changed the Account to LocalSystem, the service worked properly. My questions are: Any ideas why it won't work with the Network Service account? What are the security implications of running the service under the LocalSystem account? This server is used locally on the intranet as a reporting server for other servers.

    Read the article

  • Crystal Reports Legends

    - by AWinters
    Is there a way to force a Bar Chart legend in Crystal Reports 11.5 to display its objects in a particular order? For example, say I am reporting on the consumption of "Bananas" and "Apples" by state. The Bar Chart should display the percentage of people who eat these fruits by county (Percent Bar Chart). The "Apples" percentage always displays on top of the bar chart and the "Bananas" on the bottom, and the legend for this graph also displays the "Apple" color first, then the "Banana" color. However, if the "Banana" percentage is 0%, the legend displays the "Banana" color first. This creates an inconsistent report (with plenty of complaints). I would like the "Banana" color to always display second in the legend. Hope I didn't confuse anyone; any ideas would be helpful.

    Read the article

  • Separating weakly linked database schemas

    - by jldugger
    I've been tasked with revisiting a database schema we designed and use internally for various ticketing and reporting systems. Currently there are about 40 tables in one Oracle database schema supporting perhaps six webapps. However, there's one unifying relationship amongst them all: a rooms table describing the rooms. Room name, purpose and other data are thrown into this shared table for each app. My initial idea was to pull each of these applications into a separate database and perform joins between a given database and the rooms database. But I've discovered this solution prevents foreign key constraints in SQL Server 2005. It seems silly to duplicate one table for each app and keep those multiple copies synchronized. Should I just leave everything in one large DB, or is there something else I can do to separate the tables without losing FK constraints?

    Read the article

  • Counting fields based on group in Crystal Reports

    - by hatem gamil
    Hi all, I want to ask a question about Crystal Reports in VS 2008. Let's say I have a report with this data:

        customer_ID Customer_Name Order_amoont Order_Date
        (#group1 VipCustomer)
        1 xyz 3 1/1/2010
        2 abc 4 2/2/2010
        5 sds 21 3/12/2009
        (#Group2 NormalCustomer)
        3 tyt 2 3/3/2010
        4 ha 4 21/3/2009

    I want to display only records where the Order_Date year is 2010, so I went to the Section Expert and added a condition in the suppress formula, Year(order_Date)=2010, and I get that result. The question is how to count how many VIP customers ordered in 2010 only and how many normal customers ordered in 2010 only; then I want the total number of both types of customers to be displayed, to get a report like this:

        customer_ID Customer_Name Order_amoont Order_Date
        (#group1 VipCustomer)
        1 xyz 3 1/1/2010
        2 abc 4 2/2/2010
        subtotal 2
        (#Group2 NormalCustomer)
        3 tyt 2 3/3/2010
        subtotal 1
        total 3

    Thanks

    Read the article

  • How do I normalise this database design?

    - by Ian Roke
    I am creating a rowing reporting and statistics system for a client, and at the moment I have a structure similar to the following:

        | ID | Team     | Coaches    | Rowers     | Event     | Position | Time     |
        | 18 | TeamName | CoachName1 | RowerName1 | EventName | 1        | 01:32:34 |
        |    |          | CoachName2 | RowerName2 |           |          |          |
        |    |          |            | RowerName3 |           |          |          |
        |    |          |            | RowerName4 |           |          |          |

    This is an example row of data, but I would like to expand this out to a Rowers table, a Coaches table and so on; I just don't know how best to link those back to the Entries table, which is what this is. Has anybody got any words of wisdom they could share with me? Update: a Team can have any number of Coaches and Rowers, a Rower can be in many Teams (Team A, B, C etc.) and a Team can have many Coaches.
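
    One possible shape for the normalisation, sketched as DDL (sqlite3 here only so the snippet runs standalone; the table and column names are illustrative, not the client's): Coaches and Rowers get their own tables, junction tables carry the many-to-many links back to Teams, and the Entries table keeps just the keys plus the result.

        # Sketch of a normalised layout matching the rules in the update:
        # many coaches/rowers per team, and a rower may belong to many teams.
        import sqlite3

        conn = sqlite3.connect(':memory:')
        conn.executescript("""
            CREATE TABLE teams   (team_id  INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE rowers  (rower_id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE coaches (coach_id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE events  (event_id INTEGER PRIMARY KEY, name TEXT);

            -- junction tables resolve the many-to-many relationships
            CREATE TABLE team_rowers  (team_id  INTEGER REFERENCES teams,
                                       rower_id INTEGER REFERENCES rowers,
                                       PRIMARY KEY (team_id, rower_id));
            CREATE TABLE team_coaches (team_id  INTEGER REFERENCES teams,
                                       coach_id INTEGER REFERENCES coaches,
                                       PRIMARY KEY (team_id, coach_id));

            -- an entry is one team racing one event, with its result
            CREATE TABLE entries (entry_id INTEGER PRIMARY KEY,
                                  team_id  INTEGER REFERENCES teams,
                                  event_id INTEGER REFERENCES events,
                                  position INTEGER,
                                  time     TEXT);
        """)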

    Read the article

  • How to control Time zone formatting in System.Xml.Serialization or during application execution?

    - by Beal
    I'm developing a C# .NET application that runs on a system located in the Central Time Zone. The application gets information from a third party using an API they provide; I used the WSDL to generate the code that my application accesses the API with. Their reporting API allows you to define a start date and end date for the report. These are C# DateTime fields and XSD:dateTime. When I set the start and end dates and allow the API to create the SOAP messages, the dates don't always include a time zone unless I set the date fields using the ToLocalTime method; however, that method creates the DateTime fields in the Central Time Zone (CST), and I need to have them created in the Pacific Time Zone (PST). If I set my machine time to PST all is good...but of course that causes other time issues. What methods can I use to control the formatting of the DateTime? Alternatively, is there an application setting in C# that allows time zone control?

    Read the article

  • wxWidgets errors occur after upgrading gDEBugger

    - by acekiller
    Hi all. Today I upgraded gDEBugger (though I don't think gDEBugger itself is at fault) to the latest version, but a problem occurs. When I try to open gDEBugger, an alert window named "wxWidgets Debug Alert" pops up, reporting "....\src\common\xpmdecod.cpp(822):assert "i==colors_cnt" failed in wxXPMDecoder::ReadData(). Call stack: [00]wxConsole....balabala....". These messages look like warnings and don't affect the subsequent work, but I am wondering why this happens. What's the root cause? I am not familiar with wxWidgets and hope the gurus here can help me resolve it.

    Read the article

  • What are some good books on software testing/quality?

    - by mjh2007
    I'm looking for a good book on software quality. It would be helpful if the book covered: The software development process (requirements, design, coding, testing, maintenance) Testing roles (who performs each step in the process) Testing methods (white box and black box) Testing levels (unit testing, integration testing, etc) Testing process (Agile, waterfall, spiral) Testing tools (simulators, fixtures, and reporting software) Testing of embedded systems The goal here is to find an easy to read book that summarizes the best practices for ensuring software quality in an embedded system. It seems most texts cover the testing of application software where it is simpler to generate automated test cases or run a debugger. A book that provided solutions for improving quality in a system where the tests must be performed manually and therefore minimized would be ideal.

    Read the article

  • ILClone on Windows 2000

    - by 00010000
    Does anyone know of any issues with the ILClone() function on Windows 2000? Is it fully supported? MSDN says it runs on Windows 2000 but I have a user reporting that my program will not run on Windows 2000 because of that function. EDIT: I was able to get a hold of a Win2K system and I can confirm the issue. Shell32.dll version installed is 5.0.3700.6705. The error message shown when running the program is: The procedure entry point ILClone could not be located in the dynamic link library SHELL32.DLL

    Read the article

  • Excel VBA: Sum invoice by client id with copying result to new worksheet

    - by Melkior
    Hi, I have a strange reporting problem: I have numerous clients with different issued invoices, and trouble starts when there are both negative and positive invoices. Column A holds the client's unique ID, column B the invoice number, column C the invoice amount and column D the invoice date:

        A          | B          | C         | D
        0010019991 | 1800149471 | 162.00    | 2010-03-12
        0010019991 | 1800136388 | 162.00    | 2010-02-12
        0010019991 | 1600008004 | -36.00    | 2010-03-15
        0010021791 | 1800132148 | 162.00    | 2010-03-12
        0010021791 | 1800145436 | 162.00    | 2010-02-12
        0010021791 | 1600007737 | -12.00    | 2010-03-15
        0014066147 | 1800119068 | 1,684.80  | 2010-03-12
        0014066147 | 1800123702 | 1,684.80  | 2010-02-12
        0014066147 | 1600007980 | -1,300.80 | 2010-02-15
        0014066147 | 1600007719 | -1,286.40 | 2010-03-15

    I need to remove the rows with negative invoices in such a way that their amounts are netted against the client's remaining positive invoices, so that the final result would look like:

        A          | B          | C      | D
        0010019991 | 1800149471 | 126.00 | 2010-03-12
        0010019991 | 1800136388 | 162.00 | 2010-02-12
        0010021791 | 1800132148 | 150.00 | 2010-03-12
        0010021791 | 1800145436 | 162.00 | 2010-02-12
        0014066147 | 1800123702 | 782.40 | 2010-02-12

    Read the article

  • Minimum needs to Deploy SQL Server Integration Services 2008

    - by Tim
    Hi, I would like to run SSIS 2008 packages on a server that does not have SQL Server 2008 installed on it. I have a simple package to test the concept, but it fails to execute. The return code is 9020, which I have not seen listed as a return code elsewhere. I have copied the following files to the test server that does not have SQL Server 2008 installed on it: SelfContainedSample.dtsConfig, Package.dtsx, DTExec.exe. I am attempting to run the package using a batch file. The lines in the batch file that run the package are:

        "%dtexecloc%\dtexec.exe" /FILE "%packagefolder%\Package.dtsx" /CONFIGFILE "%configfolder%\SelfContainedSample.dtsConfig" /CHECKPOINTING OFF /REPORTING E %logfile%
        set rc=%errorlevel%

    I am wondering if there are other requirements that need to be satisfied to run an SSIS 2008 package on a server that does not have SQL Server 2008 on it. The .NET runtime? The SSIS 2008 runtime? Please share your advice if you have a solution or have run into this issue before. Thanks, Tim

    Read the article

  • How can I get JavaDoc into a JunitReport?

    - by benklaasen
    Hi - I'm a tester, with some Java and plenty of bash coding experience. My team is building an automated functional test harness using JUnit 4 and Ant. Testers write automated tests in Java and use JavaDoc to document these tests. We're using Ant's JunitReport task to generate our test result reports, and this works superbly for reporting. What we're missing, however, is a way to combine those JavaDoc free-text descriptions of what a test does with the JunitReport results. My question is: what's involved in getting the JavaDoc into the JunitReport output? I'd like to be able to inject the JavaDoc for a given test method into the JunitReport at the level of each method result. Regards, Ben

    Read the article

  • Alternative databases to use when putting IIS Logs into a database using LogParser

    - by Robin Day
    We have run some scripts that use LogParser to dump our IIS logs into a SQL Server database. We can then query this to get simple stats on hits, usage etc. It's also good when linked to error log databases and performance counter databases to compare usage with errors, etc. Having implemented this for just one system, after only 2-3 weeks we already have a 5 GB database with around 10 million records. This is making any queries against the database quite slow and will no doubt cause storage issues if we continue to log as we are. Can anyone suggest any alternative databases that we could use for this data that would be more efficient for such logs? I'd be particularly interested in any experience of Google's BigTable or Amazon's SimpleDB. Is either of these suitable for reporting queries? COUNTs, GROUP BYs, PIVOTs?

    Read the article
