Search Results

Search found 89673 results on 3587 pages for 'code conversion'.


  • Android Java: Way to effectively pause system time while debugging?

    - by TheMaster42
    In my project, I call nanoTime and use that to get a deltaTime, which I pass to my entities and animations. However, while debugging (for example, stepping through my code), the system time on my phone is happily chugging along, so it's impossible to look at, say, two sequential frames of data in the debugger: by the time I'm done looking at the first frame, the system time has moved ahead by seconds or even minutes. Is there a programming practice or method to pause the system clock (or a way for my code to intercept and fake my deltaTime) whenever I pause execution from the debugger?

    Additional information: I'm using Eclipse Classic with the ADT plugin and a Samsung SII, coding in Java. My code invoking nanoTime: http://pastebin.com/0ZciyBtN (I do all display via a Canvas object, with 2D sprites and animations).
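
    A common workaround (not from the question; an assumption about the usual fix) is to clamp the measured delta to a maximum plausible frame time, so a debugger pause collapses into one bounded step instead of a multi-second jump. A minimal sketch, written in C# like the other examples on this page (the same two-line clamp works in Java):

        using System.Diagnostics;

        // Sketch: clamp the frame delta so a breakpoint pause cannot
        // produce a huge time step. MaxDeltaSeconds is a hypothetical
        // tuning value; pick something near your worst-case frame budget.
        class FrameClock
        {
            const double MaxDeltaSeconds = 0.1;
            long lastTicks = Stopwatch.GetTimestamp();

            public double NextDelta()
            {
                long now = Stopwatch.GetTimestamp();
                double delta = (now - lastTicks) / (double)Stopwatch.Frequency;
                lastTicks = now;
                // A long stall (breakpoint, GC, app switch) becomes one
                // capped frame rather than seconds of simulated time.
                return delta > MaxDeltaSeconds ? MaxDeltaSeconds : delta;
            }
        }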

  • VB 2010 LOGIN 3-TIMES LOOP [migrated]

    - by stargaze07
    How do I put a loop on my login code so that the program ends if the user inputs a wrong username/password for the third time? At this point I'm having a hard time writing the loop. This is my login code in VB 2010:

        Private Sub btnLogIn_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnLogIn.Click
            Me.Refresh()
            Dim login = Me.TblUserTableAdapter1.UsernamePasswordString(txtUser.Text, txtPass.Text)
            If login Is Nothing Then
                MessageBox.Show("Incorrect login details", "Error", MessageBoxButtons.OK, MessageBoxIcon.Error)
            Else
                Dim ok As DialogResult
                ok = MessageBox.Show("Login Successful", "Dantiña's Catering Maintenance System", MessageBoxButtons.OK, MessageBoxIcon.Information)
                MMenu.Show()
                MMenu.lblName.Text = "Welcome " & Me.txtUser.Text & " !"
                If txtPass.Text <> "admin" Then
                    MMenu.Button1.Enabled = False
                Else
                    MMenu.Button1.Enabled = True
                End If
                ProdMaintenance.GroupBox1.Visible = True
                MMenu.Button2.Enabled = True
                MMenu.Button3.Enabled = True
                MMenu.Button4.Enabled = True
                Me.Refresh()
                Me.Hide()
            End If
        End Sub
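
    The missing piece is usually just a form-level attempt counter that ends the application on the third failure (an assumption about the intended behavior, not code from the question). A sketch in C#, kept consistent with the other examples on this page; it translates line for line to VB:

        using System.Windows.Forms;

        public partial class LoginForm : Form
        {
            // Hypothetical names: count failures across clicks and quit
            // the program once the limit is reached.
            private int failedAttempts = 0;
            private const int MaxAttempts = 3;

            private void HandleFailedLogin()
            {
                failedAttempts++;
                MessageBox.Show("Incorrect login details", "Error");
                if (failedAttempts >= MaxAttempts)
                {
                    Application.Exit(); // third wrong try ends the program
                }
            }
        }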

  • jQuery Mobile Frame Forwarding [on hold]

    - by Nizam
    I have a site that does a standard forward to another site (a 301 redirect). On the redirected site, I detect whether the device is mobile using the following code:

        if (/Android|webOS|iPhone|iPad|iPod|BlackBerry/i.test(navigator.userAgent)) {
            window.location.replace("Mobile/Login/Login.aspx")
        } else {
            window.location.replace("Apps/Login/Login.aspx")
        }

    It works, and jQuery Mobile makes the site fit the device very well. To do so, I use the following code in the ASPX page:

        <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1" />

    The problem is that I want to use a frame forward instead of a standard forward (it has a lot of advantages), but then the site no longer fits the device, and even the icon I chose for my page is no longer well defined. The code I use to set the page icon is:

        <link rel="apple-touch-icon" href="../../Apps/Imagens/Icone.png" />

    My site is hosted by Mochahost. My question is: is there anything I can do to make this work?

  • Is backing up a MySQL database in GIT a good idea?

    - by wobbily_col
    I am trying to improve the backup situation for my application. I have a Django application and a MySQL database. I read an article suggesting backing up the database in Git. On the one hand I like it, as it will keep a copy of the data and the code in sync. But Git is designed for code, not for data, so it will do a lot of unnecessary extra work diffing the MySQL dump on every commit. If I compress the file before storing it, will Git still diff the files? (The dump file is currently 100 MB uncompressed, 5.7 MB when bzipped.) Edit: the code and database schema definitions are already in Git; it is really the data I am concerned about backing up now.

  • Application of LGPL license on a simple algorithm

    - by georgesl
    The "scope" of the GNU license is troubling me : I know it has been answered many times ( here, here, ... ) but shouldn't we take into consideration the complexity and originality of a code before using GPL license ? I explain : I'm working on a pet project using the DTW algorithm that I have written in C using the pseudo-code given on the wikipedia page . At one point I decided to change it for a C++ implementation ( just for hone my c++ skill ) . After doing so, I've looked for an existing implementation on the web, to compare the "cleanliness" of it, and I found this one : Vectored DTW implementation, which is part of limproved, a C++ library licensed under GPL v3 . Personnally, I don't mind the GNU license because it is a personnal project, which will never led to any kind of commercial purpose, but I wonder if this implementation can abide a company using it to open their code ( and other FOSS permissions ). Theoretically, I think it can ( I may be wrong :p ), but the algorithm in question is so simple (and old) that it should not.

  • Exclude PHP from output from WYSIWYG in CMS

    - by bytewalls
    I'm writing a basic CMS for one of my sites and have run into an issue: some pages need to dynamically serve PHP and JS, whereas others are plain HTML. I want a setting which will allow this for the pages that need it, and which will load the ACE editor instead of a different WYSIWYG editor. The challenge here is that on the pages where I do not explicitly say there will be code, I want to reject any input that contains code. I can set it up to insert a for all pages without JS, but how can I keep PHP code from running?

  • Content of AUTHORS file

    - by user14284
    GNU recommends making an AUTHORS file listing the authors and contributors of a program. But how many "levels" of authors and contributors should the file contain? E.g., I write a program foo that actively uses some library. Should I include the authors of the library in AUTHORS? It would seem so, because the total code of foo contains code from the library. But if so, I should also include the authors of all the other libraries, including the compiler's standard libraries, the authors of the compiler and the other tools that produce the final executable code, the authors of the OS... Where should I stop?

  • Good way to extract strings to resource

    - by Bart Friederichs
    I am using Visual Studio 2010 and we just decided to get started on localizing our code. We want to use the per-form resource files in combination with a separate resource file for static strings, called strings.resx. I was wondering if there is a good way to extract static strings (we already have quite some code we need to translate) into the strings.resx file. I have tried this plugin: Resource Refactoring 2010, but it doesn't work completely: it creates the correct new resource, but the strings aren't refactored in the code. Also, the tool seems to have been abandoned by its developer. Is there a good plugin that can do this?
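
    For reference, a sketch of what the refactoring should produce once a literal has moved into strings.resx (the root name "MyApp.strings" and the key "FileNotFound" are hypothetical; Visual Studio normally generates a typed wrapper class for this, so ResourceManager is shown only to keep the sketch self-contained):

        using System.Reflection;
        using System.Resources;

        static class Localized
        {
            // Reads entries out of the embedded strings.resx at runtime.
            static readonly ResourceManager Res =
                new ResourceManager("MyApp.strings", Assembly.GetExecutingAssembly());

            public static string FileNotFound
            {
                get { return Res.GetString("FileNotFound"); }
            }
        }

        // Usage: MessageBox.Show("File not found.") becomes
        //        MessageBox.Show(Localized.FileNotFound);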

  • design for interruptable operations

    - by tpaksu
    I couldn't find a better topic, but here it is:

    1) When the user clicks a button, code starts to work.
    2) When another button is clicked, it should stop doing whatever it does and start running the second button's code.
    3) Or, with no user interaction, an electrical power-down is detected from a connected device, so our software should cancel the current event and start the power-down procedure.

    How is this design usually implemented, the "stop what you are doing" part in particular? If the answer is events and event handlers, how do you bind a condition to the event? And how do you tell the program to end its process without laddering ifs through it, like this?

        method1();
        if (powerdown) return;
        method2();
        if (powerdown) return;
        ...
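
    One common shape for this (an assumption about the environment: the sketch uses .NET's cooperative cancellation, with Task.Run from .NET 4.5) is to run the work on a background task and hand it a CancellationToken; each button click or hardware event cancels the current token and starts fresh:

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class Worker
        {
            private CancellationTokenSource cts = new CancellationTokenSource();

            // Called from a button click or a power-down notification:
            // stop whatever is running, then start the new work.
            public void Restart(Action<CancellationToken> work)
            {
                cts.Cancel();                        // signal the old task
                cts = new CancellationTokenSource(); // fresh token for new work
                CancellationToken token = cts.Token;
                Task.Run(() => work(token));
            }
        }

    The work itself still needs checkpoints (token.ThrowIfCancellationRequested() at safe points), but the token threads the stop condition through the code uniformly and composes with library calls that accept one, instead of a hand-rolled powerdown flag after every call.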

  • for vs. foreach vs. LINQ

    - by beccoblu
    When I write code in Visual Studio, ReSharper (God bless it!) often suggests changing my old-school for loop into the more compact foreach form. And often, when I accept this change, ReSharper goes a step further and suggests changing it again, into a shiny LINQ form. So I wonder: are there real advantages to these improvements? In pretty simple code execution I cannot see any speed boost (obviously), but I can see the code becoming less and less readable... So I wonder: is it worth it?
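
    For concreteness, the three shapes ReSharper steps through are equivalent; a hypothetical example (summing the even numbers in a list) in each form:

        using System.Collections.Generic;
        using System.Linq;

        static class LoopForms
        {
            static int SumEvens(IList<int> xs)
            {
                // 1) old-school for loop: explicit index bookkeeping
                int sum1 = 0;
                for (int i = 0; i < xs.Count; i++)
                    if (xs[i] % 2 == 0)
                        sum1 += xs[i];

                // 2) foreach: same iteration, no index to get wrong
                int sum2 = 0;
                foreach (int x in xs)
                    if (x % 2 == 0)
                        sum2 += x;

                // 3) LINQ: declarative; states the what, hides the how
                int sum3 = xs.Where(x => x % 2 == 0).Sum();

                return sum3; // all three values are equal
            }
        }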

  • How to solve programming problems using logic? [closed]

    - by md nth
    I know these principles:

    1) Define the constraints and operations. Constraints are the rules you can't break, given what you want (determined by the end goal); operations are the actions you can take, the "choices".
    2) Buy some time by solving an easy, solvable piece first.
    3) Halve the difficulty by dividing the project into small goals and blocks. The more blocks you create, the more hinges you have.
    4) Use analogies: reuse other code blocks, yours or other programmers', that solve a problem similar to the current one.
    5) Experiment rather than guess, by writing "predicted end" code; in other words, create a hypothesis about what will happen if you do this or that.
    6) Use your tools first; don't begin with unknown code.
    7) By making small goals you won't get frustrated, so start from the smallest problem.

    Are there other principles?

  • Duplicated objects (Customer) in the list? [migrated]

    - by gandolf
    I wrote a program that works with customer files: delete, update, store, and search. But I have a problem with the LoadAll method. The data are read from the file and then deserialized into objects, but when I save the objects into the list, the list contains repeated entries. How can I prevent the duplication in this code? The file is read with:

        var customerStr = File.ReadAllLines(address);

    The code lives in the CustomerDataAccess class of the data-access layer. The main problem is with the LoadAll method:

        public ICollection<Customer> LoadAll()
        {
            var alldata = File.ReadAllLines(address);
            List<Customer> lst = new List<Customer>();
            foreach (var s in alldata)
            {
                var objCustomer = customerSerializer.Deserialize(s);
                lst.Add(objCustomer);
            }
            return lst;
        }
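
    Worth noting: LoadAll as shown builds a fresh list on every call, so the duplicates most likely already sit in the file (for example, a Save that appends the whole collection each time). A defensive sketch that de-duplicates while loading, assuming Customer exposes some unique Id property (a hypothetical name, as are address and customerSerializer, mirroring the question):

        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        public ICollection<Customer> LoadAll()
        {
            return File.ReadAllLines(address)
                       .Select(line => customerSerializer.Deserialize(line))
                       .GroupBy(c => c.Id)      // collapse repeated records
                       .Select(g => g.First())
                       .ToList();
        }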

  • Converting a byte array to a X.509 certificate

    - by ddd
    I'm trying to port a piece of Java code into .NET that takes a Base64-encoded string, converts it to a byte array, and then uses it to make an X.509 certificate to get the modulus and exponent for RSA encryption. This is the Java code I'm trying to convert:

        byte[] externalPublicKey = Base64.decode("base 64 encoded string");
        KeyFactory keyFactory = KeyFactory.getInstance("RSA");
        EncodedKeySpec publicKeySpec = new X509EncodedKeySpec(externalPublicKey);
        Key publicKey = keyFactory.generatePublic(publicKeySpec);
        RSAPublicKey pbrtk = (java.security.interfaces.RSAPublicKey) publicKey;
        BigInteger modulus = pbrtk.getModulus();
        BigInteger pubExp = pbrtk.getPublicExponent();

    I've been trying to figure out the best way to convert this into .NET. So far, I've come up with this:

        byte[] bytes = Convert.FromBase64String("base 64 encoded string");
        X509Certificate2 x509 = new X509Certificate2(bytes);
        RSA rsa = (RSA)x509.PrivateKey;
        RSAParameters rsaParams = rsa.ExportParameters(false);
        byte[] modulus = rsaParams.Modulus;
        byte[] exponent = rsaParams.Exponent;

    Which to me looks like it should work, but it throws an exception when I use the Base64-encoded string from the Java code to generate the X509 certificate. Is Java's X.509 implementation just incompatible with .NET's, or am I doing something wrong in my conversion from Java to .NET? Or is there simply no conversion from Java to .NET in this case?
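
    My reading of the mismatch (an interpretation, not from the question): Java's X509EncodedKeySpec holds a bare SubjectPublicKeyInfo public key, not a certificate, so handing those bytes to X509Certificate2 fails, and the .NET snippet additionally asks for the PrivateKey when only a public key exists. On .NET Core 3.0 or later there is a direct equivalent; a minimal sketch:

        using System;
        using System.Security.Cryptography;

        class PublicKeyImport
        {
            static void Main()
            {
                // The same bytes the Java code feeds to X509EncodedKeySpec.
                byte[] spki = Convert.FromBase64String("base 64 encoded string");

                using (RSA rsa = RSA.Create())
                {
                    // Requires .NET Core 3.0+/.NET 5+ (an assumption about the
                    // target framework; older frameworks need a library such
                    // as BouncyCastle to parse SubjectPublicKeyInfo).
                    rsa.ImportSubjectPublicKeyInfo(spki, out _);

                    RSAParameters p = rsa.ExportParameters(false); // public only
                    byte[] modulus = p.Modulus;
                    byte[] exponent = p.Exponent;
                }
            }
        }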

  • Customizing Django form widgets? - Django

    - by RadiantHex
    Hi folks, I'm having a little problem here! I have discovered the following as the generally accepted method for customizing a Django admin field:

        from django import forms
        from django.utils.safestring import mark_safe

        class AdminImageWidget(forms.FileInput):
            """An ImageField widget for admin that shows a thumbnail."""

            def __init__(self, attrs={}):
                super(AdminImageWidget, self).__init__(attrs)

            def render(self, name, value, attrs=None):
                output = []
                if value and hasattr(value, "url"):
                    output.append(('<a target="_blank" href="%s">'
                                   '<img src="%s" style="height: 28px;" /></a> '
                                   % (value.url, value.url)))
                output.append(super(AdminImageWidget, self).render(name, value, attrs))
                return mark_safe(u''.join(output))

    I need access to other fields of the model in order to decide how to display this field. For example: suppose I am keeping track of a value, let us call it "sales", and I wish to customize how sales is displayed depending on another field, let us call it "conversion rate". I have no obvious way of accessing the conversion-rate field when overriding the sales widget! Any ideas to work around this would be highly appreciated! Thanks :)

  • How to translate CCSID 65535 in SQuirrel from a DB2 on an iseries

    - by ZS6JCE
    I am new to SQuirreL SQL. I need some help translating CCSID 65535 into ASCII or Unicode (or anything human-readable). I am using the JDBC driver per the following guide. According to IBM's website:

        What character conversion issues must my program deal with?
        The IBM i database uses EBCDIC to store text. Java uses Unicode.
        The JDBC driver handles all conversion between character sets, so
        your program should not have to worry about it.

    ...but I think they refer to CCSID 37, not 65535 (hex data). I have got the following info from my DB2 database. DSPFD gives me:

        Coded character set identifier . . . . . . :  CCSID  65535

    DSPFFD gives me:

        TXT        CHAR       3    3    41   Both
        Field text . . . . . . . . . . . . . . . :  Text
        Coded Character Set Identifier . . . . . :  65535

    But the SQuirreL query result for the TXT field is:

        5c c1 c4 c4 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40
        40 40 40 40 c1 40 7e 40 c2 40 4e 40 c3 40 40 40 40 40 40 40 40 40
        40 40 40 40 40 40

    which should be translated to something like:

        *ADD A = B + C
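
    CCSID 65535 means "no conversion" (binary), so the driver returns the raw EBCDIC bytes instead of translating them; the jt400 JDBC driver has a "translate binary" connection property aimed at exactly this case (worth verifying against your driver's documentation). To show the mapping itself, here is a sketch in C# that decodes the dump bytes with the EBCDIC code page:

        using System;
        using System.Text;

        class EbcdicDemo
        {
            static void Main()
            {
                // Bytes of the TXT column from the question, with the long
                // runs of 0x40 (EBCDIC space) trimmed for brevity.
                byte[] raw = { 0x5c, 0xc1, 0xc4, 0xc4, 0x40, 0xc1, 0x40,
                               0x7e, 0x40, 0xc2, 0x40, 0x4e, 0x40, 0xc3 };

                // On .NET Core/.NET 5+ register code-page encodings first:
                // Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);
                Encoding ebcdic = Encoding.GetEncoding(37); // CCSID 37 (EBCDIC US)

                Console.WriteLine(ebcdic.GetString(raw));   // "*ADD A = B + C"
            }
        }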

  • PartCover 2.5.3 win 7 x64

    - by user329814
    Could you tell me how you got PartCover running with VS2008 and Windows 7 x64? Based on this post (http://stackoverflow.com/questions/256287/how-do-i-run-partcover-in-x64-windows), I ran:

        c:\Program Files (x86)\Gubka Bob\PartCover .NET 2.3>CorFlags.exe PartCover.exe /32BIT+ /Force

    with the result:

        Microsoft (R) .NET Framework CorFlags Conversion Tool. Version 3.5.21022.8
        Copyright (c) Microsoft Corporation. All rights reserved.

        corflags : warning CF011 : The specified file is strong name signed. Using /Force will invalidate the signature of this image and will require the assembly to be resigned.

    and:

        c:\Program Files (x86)\NUnit 2.5.2\bin\net-2.0>CorFlags.exe nunit.exe /32BIT+ /Force

    with the result:

        Microsoft (R) .NET Framework CorFlags Conversion Tool. Version 3.5.21022.8
        Copyright (c) Microsoft Corporation. All rights reserved.

    Also, based on my discussion (http://stackoverflow.com/questions/2546340/using-partcover-2-3-with-net-4-0-runtime/2964333#2964333), I tried using the x86 version of NUnit. What I'm trying to run coverage for is the C# money sample for NUnit 2.5.2. I get the same

        System.Threading.ThreadInterruptedException --- System.Runtime.InteropServices.COMException (0x80040153): Retrieving the COM class factory for component with CLSID {FB20430E-CDC9-45D7-8453-272268002E08} failed due to the following error: 80040153

    Thank you. Edit: same thing with PartCover 2.2. My settings:

        exe file:    C:\Program Files (x86)\NUnit 2.5.2\bin\net-2.0\nunit-console-x86.exe
        working dir: c:\Program Files (x86)\NUnit 2.5.2\samples\csharp\money\
        work arg:    /config=c:\Program Files (x86)\NUnit 2.5.2\samples\csharp\money\cs-money.csproj
        rules:       +[]

  • Understanding WordProcessingML tags and avoid unnecessary tags

    - by rithanyalaxmi
    Hi, I am using the MS Word API to generate .docx files containing data fetched from a DB, applying the respective styles, fonts, symbols, etc. If the data fetched from the DB is quite large, there is a problem displaying it in the .docx file. I found that internally MS Word 2007 writes some content through tags that may not be needed to display the data. Hence I am figuring out which MS Word tags are strictly necessary when converting to an .xml file, so that I can avoid unnecessary tags and build only those needed to display the data. I am therefore planning to write my own .xml with just the needed MS Word tags, rather than generating the .xml from a .docx file. My queries are:

    1) Is it right that MS Word generates some tags which may not be needed during the conversion of .docx to document.xml, and that this makes it heavy? If so, what are those tags, so that I can avoid them when writing my own .xml file?

    2) Please send links for understanding MS Word tags and their purposes: which tags are needed and which are not?

    3) Is my approach of writing a new .xml similar to document.xml (the .docx conversion) a worthy way forward, so that I can build the .xml with only the tags I need and improve the performance of the data display?

    Please shed some light on this; thanks in advance.

    Thanks, Rithu
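
    For orientation, the smallest well-formed document.xml is tiny; most of what Word 2007 emits beyond this (rsid attributes, compatibility settings, latent style definitions) is optional bookkeeping rather than content. A minimal sketch:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <w:document xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main">
          <w:body>
            <w:p>
              <w:r>
                <w:t>Hello, Word.</w:t>
              </w:r>
            </w:p>
          </w:body>
        </w:document>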

  • How to Watch Youtube Videos on PSP with iMoviesoft FLV Converter

    - by user312417
    Have you been frustrated that you cannot watch YouTube videos anytime, anywhere? It is boring on the way between work and home, and it would be nice to enjoy YouTube videos on a PSP on the bus. This article describes how to convert YouTube videos for a PSP player, taking "Alice in Wonderland" as an example, using iMoviesoft FLV Converter to convert it to a PSP video file. iMoviesoft FLV Converter is an FLV converter which can convert FLV and YouTube videos to almost any video format, with good conversion speed and quality, such as converting FLV to MP4, FLV to AVI, FLV to WMV, FLV to MPEG, etc. Furthermore, it can also convert video files to some popular audio formats, such as WMA, MP3, M4A, and AAC. You can convert FLV and YouTube videos for the PSP, iPod, iPhone, Zune, and other portable video players, and after the conversion you can fully enjoy the videos on those devices. Besides, you can also use it to join videos: merge several videos into one output PSP video and enjoy them conveniently. You can also trim your favorite clips or remove black edges from the video with iMoviesoft FLV Converter. Hope this helps every video enthusiast.

  • Base class -> Derived class and vice-versa conversions in C++

    - by Ivan Nikolaev
    Hi! I have the following example code:

        #include <iostream>
        #include <string>
        using namespace std;

        class Event
        {
        public:
            string type;
            string source;
        };

        class KeyEvent : public Event
        {
        public:
            string key;
            string modifier;
        };

        class MouseEvent : public Event
        {
        public:
            string button;
            int x;
            int y;
        };

        void handleEvent(KeyEvent e)
        {
            if (e.key == "ENTER")
                cout << "Hello world! The Enter key was pressed ;)" << endl;
        }

        Event generateEvent()
        {
            KeyEvent e;
            e.type = "KEYBOARD_EVENT";
            e.source = "Keyboard0";
            e.key = "SPACEBAR";
            e.modifier = "none";
            return e;
        }

        int main()
        {
            KeyEvent e = generateEvent();
            return 0;
        }

    I can't compile it; g++ throws an error of this kind:

        main.cpp: In function 'int main()':
        main.cpp:47:29: error: conversion from 'Event' to non-scalar type 'KeyEvent' requested

    I know the error is obvious for C++ gurus, but I can't understand why I can't do the conversion from a base-class object to a derived one. Can someone suggest a solution to this problem? Thanks in advance.

  • What file format can represent an uncompressed raster image at 48 or 64 bits per pixel?

    - by finnw
    I am creating screenshots under Windows and using the LockBits function from GDI+ to extract the pixel data, which will then be written to a file. To maximise performance I am also:

    1) Using the same PixelFormat as the source bitmap, to avoid format conversion
    2) Using the ImageLockModeUserInputBuf flag to extract the pixel data into a pre-allocated buffer

    This pre-allocated buffer (pointed to by BitmapData::Scan0) is part of a memory-mapped file (to avoid copying the pixel data again). I will also be writing the code that reads the file, so I can use (or invent) any format I wish. However, I would prefer to use a well-known format that existing programs (ideally web browsers) are able to read, because that means I can visually confirm that the images are correct before writing the code for the other program (the one that reads the image). I have implemented this successfully for the PixelFormat32bppRGB format, which matches the format of a 32bpp BMP file, so if I extract the pixel data directly into the memory-mapped BMP file and prefix it with a BMP header, I get a valid BMP image file that can be opened in Paint and most browsers. Unfortunately, one of the machines I am testing on returns pixels in PixelFormat64bppPARGB format (presumably this is influenced by the video adapter driver) and there is no corresponding BMP pixel format for this. Converting to a 16, 24 or 32bpp BMP format slows the program down considerably (as well as being lossy), so I am looking for a file format that can hold this pixel format without conversion, letting me extract directly into the memory-mapped file as I did with the 32bpp format. What raster image file formats support 48bpp and/or 64bpp?

  • Figuring out the performance limitation of an ADC on a PIC microcontroller

    - by AKE
    I'm spec-ing the suitability of a microcontroller like the PIC for an analog-to-digital application; this would be preferable to using external A/D chips. To do that, I've had to run through some computations, pulling the relevant parameters from the datasheets. I'm not sure I've got it right -- would appreciate a check! Here's the simplest example: the PIC10F220 is the simplest possible PIC with an ADC. It runs at a clock speed of 8 MHz, with an instruction cycle of 0.5 us (4 clock steps per instruction). So:

        Taking Tacq    = 6.06 us  (ADC acquisition time, assuming chip temp. = 50*C) [datasheet p34]
        Taking Fosc    = 8 MHz    (clock speed)
        Taking divisor = 4        (4 clock steps per CPU instruction)
        This gives TAD = 0.5 us   (TAD = 1/(Fosc/divisor))
        Conversion time is 13*TAD [datasheet p31], i.e. 6.5 us
        ADC duration is then Tacq + 13*TAD = 12.56 us
        Assuming at least 2 instructions for load/store: another 1 us (0.5 us per instruction)
        Which would give a max sampling rate of 73.7 ksps (1/13.56 us)
        Supposing 8 more instructions for real-time processing: another 4 us
        Thus, total ADC/handling time = 17.56 us (12.56 us + 1 us + 4 us)
        So the expected upper sampling rate is 56.9 ksps (1/17.56 us)
        The Nyquist frequency for this sampling rate is therefore about 28 kHz

    If this is right, it suggests that the (theoretical) performance of this chip's A/D suits signals bandlimited to 28 kHz. Is this a correct interpretation of the information given in the data sheet? Any pointers would be much appreciated! AKE
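
    The arithmetic above checks out; restated as one formula (my summary of the question's steps, with N overhead instructions of cycle time T_cyc):

        f_{\max} = \frac{1}{T_{acq} + 13\,T_{AD} + N\,T_{cyc}}
                 = \frac{1}{6.06\,\mu\text{s} + 6.5\,\mu\text{s} + 10 \times 0.5\,\mu\text{s}}
                 = \frac{1}{17.56\,\mu\text{s}} \approx 56.9\ \text{ksps},
        \qquad f_{Nyq} = f_{\max}/2 \approx 28.5\ \text{kHz}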

  • Oracle Data Provider and casting

    - by mrjoltcola
    I use Oracle's specific data provider, not the Microsoft provider that is being discontinued. The thing I've found about ODP.NET is how picky it is with data types. Where JDBC and other ADO providers just convert and make things work, ODP.NET will throw an invalid cast exception unless you get it exactly right. Consider this code:

        String strSQL = "SELECT DOCUMENT_SEQ.NEXTVAL FROM DUAL";
        OracleCommand cmd = new OracleCommand(strSQL, conn);
        reader = cmd.ExecuteReader();
        if (reader != null && reader.Read())
        {
            Int64 id = reader.GetInt64(0);
            return id;
        }

    Due to ODP.NET's pickiness on conversion, this doesn't work. My usual options are:

    1) Retrieve into a Decimal and return it with a cast to an Int64 (I don't like this because Decimal is just overkill, and at least once I remember reading it was deprecated...):

        Decimal id = reader.GetDecimal(0);
        return (Int64)id;

    2) Or cast in the SQL statement to make sure it fits into Int64, like NUMBER(18):

        String strSQL = "SELECT CAST(DOCUMENT_SEQ.NEXTVAL AS NUMBER(18)) FROM DUAL";

    I do (2), because I feel it's just not clean pulling a number into a .NET Decimal when my domain types are Int32 or Int64. Other providers I've used are nice (smart) enough to do the conversion on the fly. Any suggestions from the ODP.NET gurus?
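
    A third option, sketched under the assumption that a single scalar is being fetched: let ExecuteScalar return whatever numeric type the provider materializes for NUMBER (a decimal under ODP.NET) and widen it with Convert.ToInt64, which avoids the exact-type pickiness of GetInt64:

        using System;
        using Oracle.DataAccess.Client; // ODP.NET provider

        static class SequenceFetcher
        {
            // ExecuteScalar boxes the single NUMBER value; Convert.ToInt64
            // converts from whatever numeric type comes back instead of
            // demanding an exact match the way GetInt64 does.
            public static long NextId(OracleConnection conn)
            {
                using (var cmd = new OracleCommand(
                    "SELECT DOCUMENT_SEQ.NEXTVAL FROM DUAL", conn))
                {
                    return Convert.ToInt64(cmd.ExecuteScalar());
                }
            }
        }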

  • Using stdint.h and ANSI printf?

    - by nn
    Hi, I'm writing a bignum library, and I want to use efficient data types to represent the digits: specifically, an integer type for the digit, and a long type (if it is strictly double the size of the integer type) for intermediate representations when adding and multiplying. I will be using some C99 functionality, but am trying to conform to ANSI C. Currently I have the following in my bignum library:

        #include <stdint.h>

        #if defined(__LP64__) || defined(__amd64) || defined(__x86_64) || \
            defined(__amd64__) || defined(_LP64)
        typedef uint64_t u_w;
        typedef uint32_t u_hw;
        #define BIGNUM_DIGITS 2048
        #define U_HW_BITS 16
        #define U_W_BITS 32
        #define U_HW_MAX UINT32_MAX
        #define U_HW_MIN UINT32_MIN
        #define U_W_MAX UINT64_MAX
        #define U_W_MIN UINT64_MIN
        #else
        typedef uint32_t u_w;
        typedef uint16_t u_hw;
        #define BIGNUM_DIGITS 4096
        #define U_HW_BITS 16
        #define U_W_BITS 32
        #define U_HW_MAX UINT16_MAX
        #define U_HW_MIN UINT16_MIN
        #define U_W_MAX UINT32_MAX
        #define U_W_MIN UINT32_MIN
        #endif

        typedef struct bn
        {
            int sign;
            int n_digits; /* #digits should exclude carry (digits = limbs) */
            int carry;
            u_hw tab[BIGNUM_DIGITS];
        } bn;

    As I haven't written a procedure to print the bignum in decimal, I have to analyze the intermediate array and printf the values of each digit. However, I don't know which conversion specifier to use with printf. Preferably I would like to write the digit to the terminal encoded in hexadecimal. The underlying issue is that I want two data types, one twice as long as the other, and to use them with printf using standard conversion specifiers. It would be ideal if int were 32 bits and long 64 bits, but I don't know how to guarantee this using the preprocessor, and when it comes time to use functions such as printf that rely solely on the standard types, I no longer know what to use.

  • Figuring out the Nyquist performance limitation of an ADC on an example PIC microcontroller

    - by AKE
    I'm spec-ing the suitability of a dsPIC microcontroller for an analog-to-digital application; this would be preferable to using dedicated A/D chips and a separate dedicated DSP chip. To do that, I've had to run through some computations, pulling the relevant parameters from the datasheets. I'm not sure I've got it right -- would appreciate a check! (EDITED NOTE: The PIC10F220 in the example below was selected ONLY to walk through a simple example, to check that I'm interpreting Tacq, Fosc, TAD, and the divisor correctly in this sort of Nyquist analysis. The actual chips I'm considering for the design are the dsPIC33FJ128MC804 (with 16b A/D) or the dsPIC30F3014 (with 12b A/D).)

    A simple example: the PIC10F220 is the simplest possible PIC with an ADC. It runs at a clock speed of 8 MHz, with an instruction cycle of 0.5 us (4 clock steps per instruction). So:

        Taking Tacq    = 6.06 us  (ADC acquisition time, assuming chip temp. = 50*C) [datasheet p34]
        Taking Fosc    = 8 MHz    (clock speed)
        Taking divisor = 4        (4 clock steps per CPU instruction)
        This gives TAD = 0.5 us   (TAD = 1/(Fosc/divisor))
        Conversion time is 13*TAD [datasheet p31], i.e. 6.5 us
        ADC duration is then Tacq + 13*TAD = 12.56 us
        Assuming at least 2 instructions for load/store: another 1 us (0.5 us per instruction)
        Which would give a max sampling rate of 73.7 ksps (1/13.56 us)
        Supposing 8 more instructions for real-time processing: another 4 us
        Thus, total ADC/handling time = 17.56 us (12.56 us + 1 us + 4 us)
        So the expected upper sampling rate is 56.9 ksps (1/17.56 us)
        The Nyquist frequency for this sampling rate is therefore about 28 kHz

    If this is right, it suggests that the (theoretical) performance of this chip's A/D suits signals bandlimited to 28 kHz. Is this a correct interpretation of the information given in the data sheet in obtaining the Nyquist performance limit? Any opinions on the noise susceptibility of ADCs in PIC / dsPIC chips would be much appreciated! AKE

  • Error while using JSFUnit/HtmlUnit/CSSParser

    - by brianf
    We've just recently converted our project to using Maven for builds and dependency management, and after the conversion I'm getting the following exception while trying to run any JSFUnit tests in my project:

        Exception class=[java.lang.UnsupportedOperationException]
        com.gargoylesoftware.htmlunit.ScriptException: CSSRule com.steadystate.css.dom.CSSCharsetRuleImpl is not yet supported.
            at com.gargoylesoftware.htmlunit.javascript.JavaScriptEngine$HtmlUnitContextAction.run(JavaScriptEngine.java:527)
            at net.sourceforge.htmlunit.corejs.javascript.Context.call(Context.java:537)
            ...

    All the dependencies and JARs for JSFUnit were pulled with Maven using the JBoss repository (http://repository.jboss.com/maven2/). We're using the following dependencies in the project:

        jboss-jsfunit-core 1.2.0.Final
        jboss-jsfunit-richfaces 1.2.0.Final
        richfaces-ui 3.3.2.GA
        openfaces 2.0
        JSF 1.2_12
        Facelets 1.1.14

    Before the dependencies were being managed by Maven, we were able to run our JSFUnit tests just fine. I was able to semi-fix the issue by using a ss_css2.jar file that someone had tucked into our WEB-INF/lib directory (from before the Maven conversion). I'm hoping to find out if there's something else I can do to fix the dependencies in Maven, rather than resorting to managing some of the dependencies myself.
