Search Results

Search found 13757 results on 551 pages for 'decimal format'.


  • Dealing with SerializationExceptions in C#

    - by Tony
    I get a "SerializationException: (...) is not marked as serializable" error in the following code: [Serializable] public class Wind { public MyText text; public Size MSize; public Point MLocation; public int MNumber; /.../ } [Serializable] public class MyText { public string MString; public Font MFont; public StringFormat StrFormat; public float MySize; public Color FColor, SColor, TColor; public bool IsRequest; public decimal MWide; /.../ } and the List to be serialized: List<Wind> MyList = new List<Wind>(); This code snippet: FileStream FS = new FileStream(AppDomain.CurrentDomain.BaseDirectory + "Sticks.dat", FileMode.Create); BinaryFormatter BF = new BinaryFormatter(); BF.Serialize(FS, MyList); FS.Close(); throws an exception: System.Runtime.Serialization.SerializationException was unhandled Message="Type 'System.Drawing.StringFormat' in Assembly 'System.Drawing, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' is not marked as serializable." How do I solve this problem?
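
    One possible direction, given that System.Drawing.StringFormat is not marked [Serializable]: exclude that field from binary serialization and persist only the serializable settings needed to rebuild it. A minimal sketch (only the StringFormat part of MyText is shown; the extra field names are illustrative, not from the original post):

      [Serializable]
      public class MyText
      {
          // BinaryFormatter serializes fields; StringFormat cannot be serialized, so skip it.
          [NonSerialized]
          private StringFormat strFormat;

          // Persist the serializable pieces needed to rebuild the StringFormat.
          public StringFormatFlags FormatFlags;
          public StringAlignment Alignment;

          public StringFormat StrFormat
          {
              get
              {
                  if (strFormat == null)
                      strFormat = new StringFormat(FormatFlags) { Alignment = Alignment };
                  return strFormat;
              }
              set
              {
                  strFormat = value;
                  if (value != null)
                  {
                      FormatFlags = value.FormatFlags;
                      Alignment = value.Alignment;
                  }
              }
          }

          // MString, MFont, the colors, etc. stay as in the original class.
      }

    An alternative is to implement ISerializable on MyText and write out the same settings by hand.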

    Read the article

  • Webservice error

    - by Parth
    I am new to web services. I have created a basic stock-market web service: I have successfully created the server script for it and placed it on my server, and I have also created a client script and accessed it through the same server. Is that valid? Can both files be accessed from the same server, or do I have to place them on different servers? If so, why? If not, why do I get a blank page? I am using the NuSOAP library for the web service. When I use my client script from my local machine I get these errors: "Deprecated: Assigning the return value of new by reference is deprecated in D:\wamp\www\pranav_test\nusoap\lib\nusoap.php on line 6506 Fatal error: Class 'soapclient' not found in D:\wamp\www\pranav_test\stockclient.php on line 3" stockserver.php at the server: <?php function getStockQuote($symbol) { mysql_connect('localhost','root','******'); mysql_select_db('pranav_demo'); $query = "SELECT stock_price FROM stockprices " . "WHERE stock_symbol = '$symbol'"; $result = mysql_query($query); $row = mysql_fetch_assoc($result); return $row['stock_price']; } require('nusoap/lib/nusoap.php'); $server = new soap_server(); $server->configureWSDL('stockserver', 'urn:stockquote'); $server->register("getStockQuote", array('symbol' => 'xsd:string'), array('return' => 'xsd:decimal'), 'urn:stockquote', 'urn:stockquote#getStockQuote'); $HTTP_RAW_POST_DATA = isset($HTTP_RAW_POST_DATA) ? $HTTP_RAW_POST_DATA : ''; $server->service($HTTP_RAW_POST_DATA); ?> stockclient.php: <?php require_once('nusoap/lib/nusoap.php'); $c = new soapclient('http://192.168.1.20/pranav_test/stockserver.php'); $stockprice = $c->call('getStockQuote', array('symbol' => 'ABC')); echo "The stock price for 'ABC' is $stockprice."; ?> Please help...

    Read the article

  • Cross join (pivot) with n-n table containing values

    - by Styx31
    I have 3 tables: TABLE MyColumn ( ColumnId INT NOT NULL, Label VARCHAR(80) NOT NULL, PRIMARY KEY (ColumnId) ) TABLE MyPeriod ( PeriodId CHAR(6) NOT NULL, -- format yyyyMM Label VARCHAR(80) NOT NULL, PRIMARY KEY (PeriodId) ) TABLE MyValue ( ColumnId INT NOT NULL, PeriodId CHAR(6) NOT NULL, Amount DECIMAL(8, 4) NOT NULL, PRIMARY KEY (ColumnId, PeriodId), FOREIGN KEY (ColumnId) REFERENCES MyColumn(ColumnId), FOREIGN KEY (PeriodId) REFERENCES MyPeriod(PeriodId) ) MyValue's rows are only created when a real value is provided. I want my results in tabular form, like this: Column | Month 1 | Month 2 | Month 4 | Month 5 | Potatoes | 25.00 | 5.00 | 1.60 | NULL | Apples | 2.00 | 1.50 | NULL | NULL | I have successfully created a cross join: SELECT MyColumn.Label AS [Column], MyPeriod.Label AS [Period], ISNULL(MyValue.Amount, 0) AS [Value] FROM MyColumn CROSS JOIN MyPeriod LEFT OUTER JOIN MyValue ON (MyValue.ColumnId = MyColumn.ColumnId AND MyValue.PeriodId = MyPeriod.PeriodId) Or, in LINQ: from p in MyPeriods from c in MyColumns join v in MyValues on new { c.ColumnId, p.PeriodId } equals new { v.ColumnId, v.PeriodId } into values from nv in values.DefaultIfEmpty() select new { Column = c.Label, Period = p.Label, Value = nv.Amount } And I have seen how to create a pivot in LINQ (here or here), assuming MyDatas is a view with the result of the previous query: from c in MyDatas group c by c.Column into line select new { Column = line.Key, Month1 = line.Where(l => l.Period == "Month 1").Sum(l => l.Value), Month2 = line.Where(l => l.Period == "Month 2").Sum(l => l.Value), Month3 = line.Where(l => l.Period == "Month 3").Sum(l => l.Value), Month4 = line.Where(l => l.Period == "Month 4").Sum(l => l.Value) } But I want to find a way to create a result set where, if possible, the Month1, ... properties are dynamic. Note: a solution which results in an n+1 query: from c in MyDatas group c by c.Column into line select new { Column = line.Key, Months = from l in line group l by l.Period into period select new { Period = period.Key, Amount = period.Sum(l => l.Value) } }
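
    If the set of periods is not known at compile time, one option is to give up on fixed Month1/Month2 properties and project each line into a dictionary keyed by the period label. A hedged C# sketch building on the grouped query above (MyDatas and the member names are the ones used in the post):

      var pivot =
          from d in MyDatas
          group d by d.Column into line
          select new
          {
              Column = line.Key,
              // One entry per period that actually has data; absent periods simply have no key.
              Amounts = line.GroupBy(l => l.Period)
                            .ToDictionary(g => g.Key, g => g.Sum(l => l.Value))
          };

      foreach (var row in pivot)
      {
          // row.Amounts["Month 1"], row.Amounts["Month 2"], ...
          // Use TryGetValue (or fall back to 0) for periods with no value.
      }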

    Read the article

  • Storing statistics of multiple data types in SQL Server 2008

    - by Mike
    I am creating a statistics module in SQL Server 2008 that allows users to save data in any number of formats (date, int, decimal, percent, etc...). Currently I am using a single table to store these values as type varchar, with an extra field to denote the datatype that it should be. When I display the value, I use that datatype field to format it. I use sprocs to calculate the data for reporting, and the datatype field to convert to the appropriate datatype for the appropriate calculations. This approach works, but I don't like storing all kinds of data in a varchar field. The only alternative that I can see is to have separate tables for each datatype I want to store, and save the record information to the appropriate table based on datatype. To retrieve, I run a case statement to join the appropriate table and get the data. This seems to solve the problem, but it also seems like a lot of work for ... what gain? I'm wondering if I'm missing something here. Is there a better way to do this? Thanks in advance!
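
    For the display side described above, the datatype field can drive all of the formatting in one place; a rough sketch of that pattern in C# (the type names and format strings are illustrative, not taken from the module):

      static string FormatStatistic(string rawValue, string dataType)
      {
          switch (dataType)
          {
              case "int":
                  return int.Parse(rawValue).ToString("N0");
              case "decimal":
                  return decimal.Parse(rawValue).ToString("N2");
              case "percent":
                  return decimal.Parse(rawValue).ToString("P1");
              case "date":
                  return DateTime.Parse(rawValue).ToShortDateString();
              default:
                  return rawValue;   // plain text
          }
      }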

    Read the article

  • SSRS - Oracle DB, Passing Date parameter

    - by davidl98
    I am using SSRS with an Oracle database. I need to prompt the user, when running the report, to enter a date for the report. What is the best way to add the parameter to my SSRS report? I am having trouble finding the right date format. Under the "Report Parameters" menu, I have set up the report parameter using the DateTime data type, but I keep getting this error: "ORA-01843: Not a Valid Month". Thank you for your help. Select a.OPR_Name, a.OPR, a.Trans_Desc, a.Trans_Start_Date, Cast(a.S_Date as date) as S_Date, Sum(a.Duration) as T From ( Select US_F.OPR_Name, ITH_F.OPR, ITH_F.ITH_RID, ITH_F.TRANSACT, Transact.DESC_1 as Trans_Desc, To_CHAR(ITH_F.Start_Time,'DD-Mon-YY') as Trans_Start_Date, To_CHAR(ITH_F.Start_Time,'MM/DD/YYYY') as S_Date, Substr(To_CHAR(ITH_F.Start_Time,'HH24:MI'),1,6) as Start_Time, To_CHAR(ITH_F.End_Time,'DD-Mon-YY') as Trans_End_Date, Substr(To_CHAR(ITH_F.End_Time,'HH24:MI'),1,6) as End_Time, Cast(Case When To_CHAR(ITH_F.Start_Time,'DD-Mon-YY') = To_CHAR(ITH_F.End_Time,'DD-Mon-YY') Then (((To_CHAR(ITH_F.End_Time,'SSSSS') - To_CHAR(ITH_F.Start_Time,'SSSSS')) / 60))/60 Else ((86399 - (To_CHAR(ITH_F.Start_Time,'SSSSS')) + To_CHAR(ITH_F.End_Time,'SSSSS'))/60)/60 End as Decimal(3,1)) as Duration from Elite_76_W1.ITH_F Left Join Elite_76_W1.Transact on Transact.Transact = ITH_F.Transact Left Join Elite_76_W1.US_F on US_F.OPR = ITH_F.OPR Where ITH_F.TRANSACT not in ('ASN','QC','LGOT') ) a Where a.S_Date = @Event_Date Having Sum(a.Duration) < 0 Group By a.OPR_Name, a.OPR, a.Trans_Desc, a.Trans_Start_Date, a.S_Date Order by a.OPR_Name

    Read the article

  • SQL Server insert with XML parameter - empty string not converting to null for numeric

    - by Mayo
    I have a stored procedure that takes an XML parameter and inserts the "Entity" nodes as records into a table. This works fine unless one of the numeric fields has a value of empty string in the XML. Then it throws an "error converting data type nvarchar to numeric" error. Is there a way for me to tell SQL to convert empty string to null for those numeric fields in the code below? -- @importData XML <- stored procedure param DECLARE @l_index INT EXECUTE sp_xml_preparedocument @l_index OUTPUT, @importData INSERT INTO dbo.myTable ( [field1] ,[field2] ,[field3] ) SELECT [field1] ,[field2] ,[field3] FROM OPENXML(@l_index, 'Entities/Entity', 1) WITH ( field1 int 'field1' ,field2 varchar(40) 'field2' ,field3 decimal(15, 2) 'field3' ) EXECUTE sp_xml_removedocument @l_index EDIT: And if it helps, sample XML. Error is thrown unless I comment out field3 in the code above or provide a value in field3 below. <?xml version="1.0" encoding="utf-16"?> <Entities> <Entity> <field1>2435</field1> <field2>843257-3242</field2> <field3 /> </Entity> </Entities>
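
    One client-side workaround (a sketch of one possible approach, assuming the XML is built in C#): strip the empty numeric elements before passing the document in, since a column that OPENXML cannot find in the row normally comes back as NULL instead of failing the conversion.

      using System.Linq;
      using System.Xml.Linq;

      static string RemoveEmptyNumericElements(string xml)
      {
          var doc = XDocument.Parse(xml);

          // Drop elements such as <field3 /> so OPENXML maps the column to NULL
          // instead of trying to convert an empty string to decimal.
          doc.Descendants("field3")
             .Where(e => string.IsNullOrEmpty(e.Value))
             .Remove();

          return doc.ToString();
      }

    Another common route, staying in T-SQL, is to read field3 as varchar in the WITH clause and convert it in the SELECT with NULLIF(field3, '').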

    Read the article

  • JAXB, BigDecimal or double?

    - by Alex
    I am working on different web services, and I always use WSDL first. For a type like: <xsd:simpleType name="CurrencyFormatTyp"> <xsd:restriction base="xsd:decimal"> <xsd:totalDigits value="13"/> <xsd:fractionDigits value="2"/> <xsd:minInclusive value="0.01"/> </xsd:restriction> </xsd:simpleType> JAXB generates the Java binding type BigDecimal (as mentioned in the JAXB specification). When I then do some simple arithmetic operations with values of type double (which are stored in a database and mapped via Hibernate to double) I run into trouble: <ns5:charge>0.200000000000000011102230246251565404236316680908203125</ns5:charge> <ns5:addcharge>0.0360000000000000042188474935755948536098003387451171875</ns5:addcharge> <ns5:tax>0.047199999999999998900879205621095024980604648590087890625</ns5:tax> <ns5:totalextax>0.2360000000000000153210777398271602578461170196533203125</ns5:totalextax> What would be the right way? Convert all my values to double (a JAXB binding from BigDecimal to double), or map double to BigDecimal in Hibernate, and do all my arithmetic operations in one type?

    Read the article

  • C# Textbox validation should only accept integer values, but allows letters as well

    - by sonny5
    if (textBox1.Text != "") // this forces user to enter something { // next line is supposed to allow only 0-9 to be entered but should block all... // ...characters and should block a backspace and a decimal point from being entered.... // ...but it is also allowing characters to be typed in textBox1 if(!IsNumberInRange(KeyCode,48,57) && KeyCode!=8 && KeyCode!=46) // 46 is a "." { e.Handled=true; } else { e.Handled=false; } if (KeyCode == 13) // enter key { TBI1 = System.Convert.ToInt32(var1); // converts to an int Console.WriteLine("TBI1 (var1 INT)= {0}", var1); Console.WriteLine("TBI1= {0}", TBI1); } if (KeyCode == 46) { MessageBox.Show("Only digits...no dots please!"); e.Handled = !char.IsDigit(e.KeyChar) && !char.IsControl(e.KeyChar); } } else { Console.WriteLine("Cannot be empty!"); } // If I remove the outer if statement and skip checking for an empty string, then // it prevents letters from being entered in the textbox. I need to do both, prevent an // empty textbox AND prevent letters from being entered. // thanks, Sonny5
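
    For comparison, a stripped-down KeyPress handler only has to answer one question per keystroke, which tends to be easier to reason about; a sketch (textBox1 as in the post; the button handler and its name are illustrative, with the empty-text check moved to the point where the value is actually used):

      private void textBox1_KeyPress(object sender, KeyPressEventArgs e)
      {
          // Allow digits and control characters (backspace, enter); block everything else,
          // including '.' so no decimal point can be typed.
          e.Handled = !char.IsDigit(e.KeyChar) && !char.IsControl(e.KeyChar);
      }

      private void okButton_Click(object sender, EventArgs e)
      {
          if (textBox1.Text == "")
          {
              MessageBox.Show("Cannot be empty!");
              return;
          }

          int value = int.Parse(textBox1.Text);   // only digits can be present here
          Console.WriteLine("value = {0}", value);
      }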

    Read the article

  • C# UserControl Validation

    - by Barry
    Hi, I have a UserControl with a tab control containing three tabs. Within the tabs are multiple controls - DateTimePickers, textboxes, comboboxes. There is also a Save button which, when clicked, calls this.ValidateChildren(ValidationConstraints.Enabled). Now, I click save and a genuine validation error occurs. I correct the error and then click save again - validation errors occur on comboboxes on a different tab. If I navigate to this tab and click save, everything works fine. How can this be? I haven't changed any values in the comboboxes, so how can they fail validation and then pass validation? The comboboxes are bound to a dataset with their SelectedValue and Text set. I just don't understand what is happening here. This behaviour also occurs for some textboxes. The validation rule is that they have to be a decimal - the default value is zero, which is allowed. The same thing happens: they fail validation the first time, I make no changes, click save again and they pass validation. If you need any further information please let me know. Thanks, Barry
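
    For reference, the decimal rule described above is usually a Validating handler along these lines (a sketch; the control and ErrorProvider names are illustrative):

      private void priceTextBox_Validating(object sender, CancelEventArgs e)
      {
          decimal value;

          // Zero (the default) is allowed; anything that does not parse as a decimal fails.
          if (!decimal.TryParse(priceTextBox.Text, out value))
          {
              e.Cancel = true;
              errorProvider1.SetError(priceTextBox, "Please enter a decimal value.");
          }
          else
          {
              errorProvider1.SetError(priceTextBox, "");
          }
      }

    One thing sometimes worth checking with ValidateChildren across tabs is whether such a handler reads the control's current Text or a bound value, since data bindings on tab pages that have never been displayed may not have been pushed into the controls yet.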

    Read the article

  • Numeric Data Entry in WPF

    - by Matt Hamilton
    How are you handling the entry of numeric values in WPF applications? Without a NumericUpDown control, I've been using a TextBox and handling its PreviewKeyDown event with the code below, but it's pretty ugly. Has anyone found a more graceful way to get numeric data from the user without relying on a third-party control? private void NumericEditPreviewKeyDown(object sender, KeyEventArgs e) { bool isNumPadNumeric = (e.Key >= Key.NumPad0 && e.Key <= Key.NumPad9) || e.Key == Key.Decimal; bool isNumeric = (e.Key >= Key.D0 && e.Key <= Key.D9) || e.Key == Key.OemPeriod; if ((isNumeric || isNumPadNumeric) && Keyboard.Modifiers != ModifierKeys.None) { e.Handled = true; return; } bool isControl = ((Keyboard.Modifiers != ModifierKeys.None && Keyboard.Modifiers != ModifierKeys.Shift) || e.Key == Key.Back || e.Key == Key.Delete || e.Key == Key.Insert || e.Key == Key.Down || e.Key == Key.Left || e.Key == Key.Right || e.Key == Key.Up || e.Key == Key.Tab || e.Key == Key.PageDown || e.Key == Key.PageUp || e.Key == Key.Enter || e.Key == Key.Return || e.Key == Key.Escape || e.Key == Key.Home || e.Key == Key.End); e.Handled = !isControl && !isNumeric && !isNumPadNumeric; }
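
    A commonly used alternative that avoids enumerating keys is to handle PreviewTextInput (plus a pasting handler) and test the incoming text instead; a sketch along those lines, digits only, with no decimal-separator handling (assumes a using directive for System.Linq; the handler names are illustrative):

      private void NumericEditPreviewTextInput(object sender, TextCompositionEventArgs e)
      {
          // e.Text contains only the newly typed characters; reject anything that is not a digit.
          e.Handled = !e.Text.All(char.IsDigit);
      }

      private void NumericEditPasting(object sender, DataObjectPastingEventArgs e)
      {
          var text = e.DataObject.GetData(typeof(string)) as string;
          if (text == null || !text.All(char.IsDigit))
              e.CancelCommand();
      }

      // Wired up with:
      //   box.PreviewTextInput += NumericEditPreviewTextInput;
      //   DataObject.AddPastingHandler(box, NumericEditPasting);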

    Read the article

  • How do I repopulate the view model in ASP.NET MVC 2 after a validation error?

    - by Keltex
    I'm using ASP.NET MVC 2 and here's the issue. My View Model looks something like this. It includes some fields which are edited by the user and others which are used for display purposes. Here's a simple version public class MyModel { public decimal Price { get; set; } // for view purpose only [Required(ErrorMessage="Name Required")] public string Name { get; set; } } The controller looks something like this: public ActionResult Start(MyModel rec) { if (ModelState.IsValid) { Repository.SaveModel(rec); return RedirectToAction("NextPage"); } else { // validation error return View(rec); } } The issue is when there's a validation error and I call View(rec), I'm not sure the best way to populate my view model with the values that are displayed only. The old way of doing it, where I pass in a form collection, I would do something like this: public ActionResult Start(FormCollection collection) { var rec = Repository.LoadModel(); UpdateModel(rec); if (ModelState.IsValid) { Repository.SaveModel(rec); return RedirectToAction("NextPage"); } else { // validation error return View(rec); } } But doing this, I get an error on UpdateModel(rec): The model of type 'MyModel' could not be updated. Any ideas?
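
    One pattern that sidesteps UpdateModel here is to keep the strongly typed action, and on a validation failure copy the display-only values back from the repository before re-rendering; a sketch using the post's own Repository and the simplified model (Price standing in for the display-only fields):

      public ActionResult Start(MyModel rec)
      {
          if (ModelState.IsValid)
          {
              Repository.SaveModel(rec);
              return RedirectToAction("NextPage");
          }

          // Validation failed: rec only carries the posted (editable) fields,
          // so reload the stored record and restore the display-only values.
          var stored = Repository.LoadModel();
          rec.Price = stored.Price;

          return View(rec);
      }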

    Read the article

  • Database design for invoices, invoice lines & revisions

    - by FreshCode
    I'm designing the 2nd major iteration of a relational database for a franchise's CRM (with lots of refactoring) and I need help on the best database design practices for storing job invoices and invoice lines with a strong audit trail of any changes made to each invoice. Current schema Invoices Table InvoiceId (int) // Primary key JobId (int) StatusId (tinyint) // Pending, Paid or Deleted UserId (int) // auditing user Reference (nvarchar(256)) // unique natural string key with invoice number Date (datetime) Comments (nvarchar(MAX)) InvoiceLines Table LineId (int) // Primary key InvoiceId (int) // related to Invoices above Quantity (decimal(9,4)) Title (nvarchar(512)) Comment (nvarchar(512)) UnitPrice (smallmoney) Revision schema InvoiceRevisions Table RevisionId (int) // Primary key InvoiceId (int) JobId (int) StatusId (tinyint) // Pending, Paid or Deleted UserId (int) // auditing user Reference (nvarchar(256)) // unique natural string key with invoice number Date (datetime) Total (smallmoney) Schema design considerations 1. Is it sensible to store an invoice's Paid or Pending status? All payments received for an invoice are stored in a Payments table (eg. Cash, Credit Card, Cheque, Bank Deposit). Is it meaningful to store a "Paid" status in the Invoices table if all the income related to a given job's invoices can be inferred from the Payments table? 2. How to keep track of invoice line item revisions? I can track revisions to an invoice by storing status changes along with the invoice total and the auditing user in an invoice revision table (see InvoiceRevisions above), but keeping track of an invoice line revision table feels hard to maintain. Thoughts? 3. Tax How should I incorporate sales tax (or 14% VAT in SA) when storing invoice data?

    Read the article

  • Problem with XML encoding of database contents with Latin characters

    - by user89691
    I have an ASP/Access database that contains strings in various European languages. The database was populated earlier by agents in the respective countries. It contains entries with accented and other special characters, as you would expect. If I open the database with MS Access these characters show up fine. For example, the German equivalent of "Open" shows as "Öffnen" (hopefully you can see an "O" with 2 dots above it!). I have ASP code that reads the database and returns records in XML. The text is passed to XMLEncode to construct the XML, but that only seems to deal with the 5 specials like "<", "&", etc. If I dump the XML the accented characters are unchanged. <English>Open</English> <German>Öffnen</German> If I look at the raw packets with Wireshark I see that the "Ö" byte is hex D6, which appears to be its Unicode code point and ISO 8859-1 value. The problem starts when I try to parse the XML in client-side JS. I get: "An invalid character was found in text content" from IE. FF and Chrome happily accept the XML without hiccup, but the browser shows the "Ö" character as a diamond with a question mark inside. http://www.validome.org/xml/validate/ reports "encoding error." http://www.w3schools.com/dom/dom_validate.asp thinks it is fine. The XML is UTF-8 encoded. What do I need to do to have IE accept my XML without complaint? What do I need to do to have browsers display the stuff correctly?

    Read the article

  • Use Math class to calculate

    - by LC
    Write a Java program that calculates the surface area of a cylinder, C = 2πr² + 2πrh, where r is the radius and h is the height of the cylinder. Allow the user to enter a radius and a height. Format the output to three decimal places. Use the constant PI and the method pow() from the Math class. This is what I've done so far, but it will not run. Can anyone help me? import Java.util Scanner; public class Ex5 { public static void main (String[] args) { Scanner input.new Scanner(System.in); double radius,height,area; System.out.print("Please enter the radius of the Cylinder"); radius = input.nextDouble(); System.out.print("Please enter the height of the Cylinder"); height = input.nextDouble(); area = (4*Math.PI*radius)+2*Math.PIMath.POW(radius,2); System.out.println("The surface of the cylinder" + area); } }

    Read the article

  • codingBat separateThousands using regex (and unit testing how-to)

    - by polygenelubricants
    This question is a combination of regex practice and unit testing practice. Regex part I authored this problem separateThousands for personal practice: Given a number as a string, introduce commas to separate thousands. The number may contain an optional minus sign, and an optional decimal part. There will not be any superfluous leading zeroes. Here's my solution: String separateThousands(String s) { return s.replaceAll( String.format("(?:%s)|(?:%s)", "(?<=\\G\\d{3})(?=\\d)", "(?<=^-?\\d{1,3})(?=(?:\\d{3})+(?!\\d))" ), "," ); } The way it works is that it classifies two types of commas, the first, and the rest. In the above regex, the rest subpattern actually appears before the first. A match will always be zero-length, which will be replaceAll with ",". The rest basically looks behind to see if there was a match followed by 3 digits, and looks ahead to see if there's a digit. It's some sort of a chain reaction mechanism triggered by the previous match. The first basically looks behind for ^ anchor, followed by an optional minus sign, and between 1 to 3 digits. The rest of the string from that point must match triplets of digits, followed by a nondigit (which could either be $ or \.). My question for this part is: Can this regex be simplified? Can it be optimized further? Ordering rest before first is deliberate, since first is only needed once No capturing group Unit testing part As I've mentioned, I'm the author of this problem, so I'm also the one responsible for coming up with testcases for them. Here they are: INPUT, OUTPUT "1000", "1,000" "-12345", "-12,345" "-1234567890.1234567890", "-1,234,567,890.1234567890" "123.456", "123.456" ".666666", ".666666" "0", "0" "123456789", "123,456,789" "1234.5678", "1,234.5678" "-55555.55555", "-55,555.55555" "0.123456789", "0.123456789" "123456.789", "123,456.789" I haven't had much experience with industrial-strength unit testing, so I'm wondering if others can comment whether this is a good coverage, whether I've missed anything important, etc (I can always add more tests if there's a scenario I've missed).

    Read the article

  • How to preprocess text to do OCR error correction

    - by eaglefarm
    Here is what I'm trying to accomplish: I need to get several large text files from a computer that is not networked and has no output other than a printer. I tried printing the text, then scanning the printout with OCR to recover the text on another computer, but the OCR gets lots of errors (1 vs l, o vs 0, O vs D, etc). To solve this I am thinking of writing a program to process (annotate?) the text file before printing it, so that the errors can be corrected from the text output of the OCR program. For example, for 1 (number one) vs l (letter L), I could change the text by inserting \nnn after characters that are frequently wrong in the OCR results, so that "sample" becomes "sampl\108e". Then I can write another program to examine the file, look for \nnn, check the character before the \nnn (where nnn is the ASCII code in decimal) and fix it if necessary. Of course the program will have to recognize that the \nnn may have errors too, but at least it knows that the nnn are digits and can easily correct them. I think I would add a CRC on each line so that any line that isn't corrected perfectly can be flagged as having a problem. Has anyone done anything like this? If there is an existing way of doing this I'd rather not reinvent the wheel. Any suggestions for an annotation format that would help solve this problem would be helpful too.
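
    A rough sketch of the annotation step described above, in C# (the set of frequently confused characters is illustrative, the \nnn format is the one from the post, and the per-line CRC is left out):

      using System.Text;

      static string AnnotateLine(string line)
      {
          // Characters OCR frequently confuses (1/l, 0/o/O, O/D, ...).
          const string risky = "1l0oOD";

          var sb = new StringBuilder();
          foreach (char c in line)
          {
              sb.Append(c);
              if (risky.IndexOf(c) >= 0)
                  sb.Append('\\').Append((int)c);   // e.g. "sample" -> "sampl\108e"
          }
          return sb.ToString();
      }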

    Read the article

  • MySQL latin1 Turkish data and Delphi 2010 UTF-8

    - by sabri.arslan
    Hello, I have tables with latin1_general_ci collation that contain Turkish character values, and I can use this data from Delphi 7 + Zeos with no problem. I want to upgrade my Delphi to the 2010 version, but Zeos is too slow as far as I can tell, so I want to use an ODBC+ADO or dbExpress solution. The dbExpress solution works fine: it displays my data as entered and writes it back as entered, without any change to the column charset. But dbExpress has problems as far as I can see; for example, SELECT * FROM a table which has column types varchar, decimal, int, tinyint and text gives AV errors on XP systems, while Vista and 7 do not give any error and work fine (not fully tested). The ADO solution (dbGo) works fine but it does not show my data as entered; it wants everything to be UTF. I don't want to convert my data to UTF before testing everything. How can I see my data as entered, write UTF on the client side, and store latin1 (as Zeos or dbExpress do)? I have tried many other options, e.g. MySQL-side collation and charset parameters. Sorry for my bad English; I hope someone understands me. Thanks.

    Read the article

  • Simple, fast SQL queries for flat files.

    - by plinehan
    Does anyone know of any tools to provide simple, fast queries of flat files using a SQL-like declarative query language? I'd rather not pay the overhead of loading the file into a DB since the input data is typically thrown out almost immediately after the query is run. Consider the data file, "animals.txt": dog 15 cat 20 dog 10 cat 30 dog 5 cat 40 Suppose I want to extract the highest value for each unique animal. I would like to write something like: cat animals.txt | foo "select $1, max(convert($2 using decimal)) group by $1" I can get nearly the same result using sort: cat animals.txt | sort -t " " -k1,1 -k2,2nr And I can always drop into awk from there, but this all feels a bit awkward (couldn't resist) when a SQL-like language would seem to solve the problem so cleanly. I've considered writing a wrapper for SQLite that would automatically create a table based on the input data, and I've looked into using Hive in single-processor mode, but I can't help but feel this problem has been solved before. Am I missing something? Is this functionality already implemented by another standard tool? Halp!

    Read the article

  • Using stdint.h and ANSI printf?

    - by nn
    Hi, I'm writing a bignum library, and I want to use efficient data types to represent the digits. Particularly integer for the digit, and long (if strictly double the size of the integer) for intermediate representations when adding and multiplying. I will be using some C99 functionality, but trying to conform to ANSI C. Currently I have the following in my bignum library: #include <stdint.h> #if defined(__LP64__) || defined(__amd64) || defined(__x86_64) || defined(__amd64__) || defined(__amd64__) || defined(_LP64) typedef uint64_t u_w; typedef uint32_t u_hw; #define BIGNUM_DIGITS 2048 #define U_HW_BITS 16 #define U_W_BITS 32 #define U_HW_MAX UINT32_MAX #define U_HW_MIN UINT32_MIN #define U_W_MAX UINT64_MAX #define U_W_MIN UINT64_MIN #else typedef uint32_t u_w; typedef uint16_t u_hw; #define BIGNUM_DIGITS 4096 #define U_HW_BITS 16 #define U_W_BITS 32 #define U_HW_MAX UINT16_MAX #define U_HW_MIN UINT16_MIN #define U_W_MAX UINT32_MAX #define U_W_MIN UINT32_MIN #endif typedef struct bn { int sign; int n_digits; // #digits should exclude carry (digits = limbs) int carry; u_hw tab[BIGNUM_DIGITS]; } bn; As I haven't written a procedure to write the bignum in decimal, I have to analyze the intermediate array and printf the values of each digit. However I don't know which conversion specifier to use with printf. Preferably I would like to write to the terminal the digit encoded in hexadecimal. The underlying issue is, that I want two data types, one that is twice as long as the other, and further use them with printf using standard conversion specifiers. It would be ideal if int is 32bits and long is 64bits but I don't know how to guarantee this using a preprocessor, and when it comes time to use functions such as printf that solely rely on the standard types I no longer know what to use.

    Read the article

  • Is there a way to effect user defined data types in MySQL?

    - by Dancrumb
    I have a database which stores (among other things), the following pieces of information: Hardware IDs BIGINTs Storage Capacities BIGINTs Hardware Names VARCHARs World Wide Port Names VARCHARs I'd like to be able to capture a more refined definition of these datatypes. For instance, the hardware IDs have no numerical significance, so I don't care how they are formatted when displayed. The Storage Capacities, however, are cardinal numbers and, at a user's request, I'd like to present them with thousands and decimal separators, e.g. 123,456.789. Thus, I'd like to refine BIGINT into, say ID_NUMBER and CARDINAL. The same with Hardware Names, which are simple text and WWPNs, which are hexstrings, e.g. 24:68:AC:E0. Thus, I'd like to refine VARCHAR into ENGLISH_WORD and HEXSTRING. The specific datatypes I made up are just for illustrative purposes. I'd like to keep all this information in one place and I'm wondering if anybody knows of a good way to hold this all in my MySQL table definitions. I could use the Comment field of the table definition, but that smells fishy to me. One approach would be to define the data structure elsewhere and use that definition to generate my CREATE TABLEs, but that would be a major rework of the code that I currently have, so I'm looking for alternatives. Any suggestions? The application language in use is Perl, if that helps.

    Read the article

  • How to implement Excel Solver functionality in C#?

    - by Vic
    Hi, I have an application in C#, I need to do some optimization calculations, like Excel Solver Add-in does, one option is certainly to write my own solver implementation, but I'm kind of short of time, so I'm looking into libraries that already exist that can help me with this. I've been trying the Microsoft Solver Foundation, which seems pretty neat and cool, the problem is that it doesn't seem to work with the kind of calculations that I need to do. At the end of this question I'm adding the information about the calculations I need to perform and optimize. So basically my question is if any of you know of any other library that I can use for this purpose, or any tutorial that can help to do my own solver, or any idea that gives me a lead to solve this issue. Thanks. Additional Info: This is the data I need to calculate: I have 7 variables, lets call them var1, var2,...,var7 The constraints for these variables are: All of them need to be 0 <= varn <= 0.5 (where n is the number of the variable) The sum of all the variables should be equal to 1 The objective is to maximize the target formula, which in Excel looks like this: (MMULT(TRANSPOSE(L26:L32),M14:M20)) / (SQRT(MMULT(MMULT(TRANSPOSE(L26:L32),M4:S10),L26:L32))) The range that you see in this formula, L26:L32, is actually the range with the variables from above, var1, var2,..., varn. M14:M20 and M4:S10 are ranges with data that I get from different sources, there are more likely decimal values. As I said before, I was using Microsoft Solver Foundation, I modeled pretty much everything with it, I created functions that handle the operations of the target formula, but when I tried to solve the model it always fail, I think it is because of the complexity of the operations. In any case, I just wanted to show these data so you can have an idea about the kind of calculations that I need to implement.

    Read the article

  • Matlab - Bisection, Newton, Secant: finding roots?

    - by i z
    Hello, and thanks in advance for your help! Here's my problem: I have 2 functions, f1(x)=14.*x*exp(x-2)-12.*exp(x-2)-7.*x.^3+20.*x.^2-26.*x+12 and f2(x)=54.*x.^6+45.*x.^5-102.*x.^4-69.*x.^3+35.*x.^2+16.*x-4. Make the graph for those 2, the first one in [0,3] and the 2nd one in [-2,2]. Find the 3 roots with an accuracy of 6 decimal digits using a) bisection, b) Newton, c) secant. For each root, find the number of iterations that have been made. For Newton-Raphson, find which roots have quadratic convergence and which don't. What do the roots without quadratic convergence (in Newton's method) have in common? Why? Excuse me if I ask silly things, but I'm asked to do this with no Matlab courses and I'm trying to learn it myself. There are many issues I have with this exercise. Questions: 1. I only see 2 roots in the graph for the f1 function and 4-5 (?) roots for the f2 function, not 3 roots as the exercise says. Here are the 2 graphs: http://postimage.org/image/cltihi9kh/ http://postimage.org/image/gsn4sg97f/ Am I wrong? Do both have only 3 roots in [0,3] and [-2,2]? 2. Concerning Newton's method, how am I supposed to check which roots have quadratic convergence and which do not? 3. Accuracy means tolerance e=10^(-6), right?

    Read the article

  • Writing re-entrant lexer with Flex

    - by Viet
    I'm newbie to flex. I'm trying to write a simple re-entrant lexer/scanner with flex. The lexer definition goes below. I get stuck with compilation errors as shown below (yyg issue): reentrant.l: /* Definitions */ digit [0-9] letter [a-zA-Z] alphanum [a-zA-Z0-9] identifier [a-zA-Z_][a-zA-Z0-9_]+ integer [0-9]+ natural [0-9]*[1-9][0-9]* decimal ([0-9]+\.|\.[0-9]+|[0-9]+\.[0-9]+) %{ #include <stdio.h> #define ECHO fwrite(yytext, yyleng, 1, yyout) int totalNums = 0; %} %option reentrant %option prefix="simpleit_" %% ^(.*)\r?\n printf("%d\t%s", yylineno++, yytext); %% /* Routines */ int yywrap(yyscan_t yyscanner) { return 1; } int main(int argc, char* argv[]) { yyscan_t yyscanner; if(argc < 2) { printf("Usage: %s fileName\n", argv[0]); return -1; } yyin = fopen(argv[1], "rb"); yylex(yyscanner); return 0; } Compilation errors: vietlq@mylappie:~/Desktop/parsers/reentrant$ gcc lex.simpleit_.c reentrant.l: In function ‘main’: reentrant.l:44: error: ‘yyg’ undeclared (first use in this function) reentrant.l:44: error: (Each undeclared identifier is reported only once reentrant.l:44: error: for each function it appears in.)

    Read the article

  • tikz: set appropriate x value for a node

    - by basweber
    This question resulted from the question here. I want to produce a curly brace which spans some lines of text. The problem is that I have to align the x coordinate manually, which is not a clean solution. Currently I use \begin{frame}{Example} \begin{itemize} \item The long Issue 1 \tikz[remember picture] \node[coordinate,yshift=0.7em] (n1) {}; \\ spanning 2 lines \item Issue 2 \tikz[remember picture] \node[coordinate, xshift=1.597cm] (n2) {}; \item Issue 3 \end{itemize} \visible<2->{ \begin{tikzpicture}[overlay,remember picture] \draw[thick,decorate,decoration={brace,amplitude=5pt}] (n1) -- (n2) node[midway, right=4pt] {One and two are cool}; \end{tikzpicture} } % end visible \end{frame} which produces the desired result. The unsatisfying thing is that I had to figure out the xshift value of 1.597cm by trial and error (more or less); without the xshift argument the brace is not aligned correctly. I guess there is an elegant way to avoid the explicit xshift value. The best way would IMHO be to calculate the maximum x value of the two nodes and use that (as already suggested by Geoff). But it would already be very handy to be able to explicitly define the absolute x values of both nodes while keeping their current y values. This would avoid the fiddly procedure of adjusting the third decimal place to ensure that the brace looks vertical.

    Read the article

  • C# JSON serialization

    - by Bridget the Midget
    I'm trying out the HighStock library for creating stock charts. To fill the chart with data, their example specifies this source. The first parameter is Unix time in milliseconds and the second parameter is the stock closing price. I don't know if this is valid JSON, but I would argue that the following would be a more appropriate way of writing JSON: [{"Closing":63.15000,"Date":1262559600000},{"Closing":64.75000,"Date":1262646000000}, ... I guess that I have no other option than to adapt to HighStock's syntax. I could solve this by looping and adding the correct syntax to a string, but that seems rudimentary. Would it be wiser to serialize C# objects to create my JSON, and if that's the case, how can I reach the syntax specified in the example? Let's just say this is my C# object: public class Quote { public double Date { get; set; } public decimal Closing { get; set; } } Am I making it unnecessarily complex? Should I just format a JSON string?
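
    If the [time, value] pair format has to be produced, one option that avoids hand-building the string property by property is to emit each Quote as a two-element array; a sketch using only the base class library (invariant culture so the decimal separator is always a dot):

      using System.Collections.Generic;
      using System.Globalization;
      using System.Linq;

      static string ToHighStockJson(IEnumerable<Quote> quotes)
      {
          var pairs = quotes.Select(q => string.Format(
              CultureInfo.InvariantCulture, "[{0},{1}]", q.Date, q.Closing));

          return "[" + string.Join(",", pairs.ToArray()) + "]";
      }

    Alternatively, a JSON serializer such as Json.NET will produce the same shape from quotes.Select(q => new object[] { q.Date, q.Closing }).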

    Read the article
