Search Results

Search found 59975 results on 2399 pages for 'data comparison'.

Page 626/2399 | < Previous Page | 622 623 624 625 626 627 628 629 630 631 632 633  | Next Page >

  • Using SQLite transactions: I/O error

    - by james.ingham
    I currently have a client/server setup where the client sends data to the server and the server saves the data to a SQLite database file. To do this I am using transactions, which works fine on Windows 7 when I run around 30 clients (each client sending data back every 5 - 30 seconds). When using the same software on Windows XP, I can get/set data multiple times with no problems, but once I run around 20 clients I start to get Windows 'Delayed write failed' errors, which fire an exception on the server. I'm assuming this is either something to do with XP or a hardware issue on the machine running XP. Does anyone have any advice to avoid this? Or should I just catch the exception and retry saving the data? Thanks
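
    A hedged sketch of the catch-and-retry route, assuming System.Data.SQLite (the method and delegate below are illustrative placeholders, not the poster's code): wrap the whole transaction and retry it on a write failure, backing off between attempts.

        using System;
        using System.Data.SQLite;
        using System.IO;
        using System.Threading;

        // Retry the whole save transaction a few times before giving up.
        static void SaveWithRetry(string connectionString,
                                  Action<SQLiteConnection> save,
                                  int maxAttempts = 3)
        {
            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    using (var conn = new SQLiteConnection(connectionString))
                    {
                        conn.Open();
                        using (var tx = conn.BeginTransaction())
                        {
                            save(conn);        // the caller's INSERT/UPDATE commands
                            tx.Commit();
                        }
                    }
                    return;
                }
                catch (IOException)
                {
                    if (attempt >= maxAttempts) throw;
                    Thread.Sleep(500 * attempt);   // back off, then retry the transaction
                }
                catch (SQLiteException)
                {
                    if (attempt >= maxAttempts) throw;
                    Thread.Sleep(500 * attempt);
                }
            }
        }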

    Read the article

  • FoxPro to C#: What is the best method - ODBC, OLE DB, or something else?

    - by Martin Labelle
    We need to read data from FoxPro 8 with C#. I'm going to do some operations and push some of that data to a SQL Server database. We are not sure which method is best for reading the data; I have looked at OLE DB and ODBC. Requirements: (1) The export program will run each night, but my company runs 24h a day. (2) The DBFs can sometimes be huge. (3) We DON'T need to modify the data. (4) Our system, which uses FoxPro, is quite unstable: I need a way that ABSOLUTELY does not corrupt data and, ideally, does not lock the DBF files while reading. (5) Speed is a minor requirement: it must be quick, but requirement #4 is most important.
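
    A minimal sketch of the OLE DB route in C#, assuming the Visual FoxPro OLE DB provider (VFPOLEDB) is installed; the folder path, table name and the read-only Mode setting are illustrative assumptions, not details from the question.

        using System.Data;
        using System.Data.OleDb;

        static DataTable ReadDbfTable(string dbfFolder, string tableName)
        {
            // Mode=Read asks the provider for read-only access to the DBFs.
            string connStr = "Provider=VFPOLEDB.1;Data Source=" + dbfFolder + ";Mode=Read;";
            using (var conn = new OleDbConnection(connStr))
            using (var adapter = new OleDbDataAdapter("SELECT * FROM " + tableName, conn))
            {
                var table = new DataTable();
                adapter.Fill(table);   // reads rows without modifying the source data
                return table;          // ...then push to SQL Server, e.g. via SqlBulkCopy
            }
        }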

    Read the article

  • Machine Learning Algorithm for Parallel Nodes

    - by FreshCode
    I want to apply machine learning to a classification problem in a parallel environment. Several independent nodes, each with multiple on/off sensors, can communicate their sensor data with the goal of classifying an event defined by a heuristic, training data or both. Each peer will be measuring the same data from their unique perspective and will attempt to classify the result while taking into account that any neighbouring node (or its sensors or just the connection to the node) could be faulty. Nodes should function as equal peers and determine the most likely classification by communicating their results. Ultimately each node should make a decision based on their own sensor data and their peers' data. If it matters, false positives are OK (albeit undesirable) but false negatives are totally unacceptable. Given that each final classification will receive good or bad feedback, what would be an appropriate machine learning algorithm to approach this problem with if the nodes could communicate with each other to determine the most likely classification?
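
    Whatever learner each node runs locally, the peer-combination step can stay simple. A rough illustrative sketch (the weights and threshold are assumptions to be tuned from the good/bad feedback, not part of the question) of weighted voting with the decision threshold biased against false negatives:

        # Combine each peer's estimated probability of the event into one decision.
        # A low threshold prefers false positives over missed events; peers whose
        # past votes disagreed with the feedback can be down-weighted as suspect.
        def combine(peer_probs, peer_weights, threshold=0.2):
            total = sum(peer_weights)
            p = sum(w * prob for prob, w in zip(peer_probs, peer_weights)) / total
            return p >= threshold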

    Read the article

  • Access Services in SharePoint Server 2010

    - by Wayne
    Another SharePoint Server 2010 feature that cannot go unnoticed is Access Services. Access Services is a service in SharePoint Server 2010 that allows administrators to view, edit, and configure a Microsoft Access application within a web browser. Access Services settings support backup and recovery, regardless of whether there is a UI setting in Central Administration. However, backup and recovery apply only to service-level and administrative-level settings; end-user content from the Access application is not backed up as part of this process. Access Services has Windows PowerShell functionality that can be used to provision the service with settings from a previous backup; configure and manage macro and query settings; manage and configure session management; and configure all the global settings of the service.

    Key Benefits of SharePoint Server Access Services:
    - Easier access to the right tools: The enhanced, customizable Ribbon in Access 2010 makes it easy to uncover more commands so you can focus on the end product. The new Microsoft Office Backstage™ view is yet another feature that helps you analyze and document your database, share, publish, and customize your Access 2010 experience, all from one convenient location.
    - Build databases quickly and effortlessly: Out-of-the-box templates and reusable components make Access Services the fastest, simplest database solution available. Find new pre-built templates that you can start using without customization, or select templates created by your peers in the Access online community and customize them to meet your needs. Build your databases with new modular components: new Application Parts let you add a set of common Access components, such as a table and form for task management, to your database in a few simple clicks. Database navigation is now simplified; create navigation forms to make your frequently used forms and reports more accessible without writing any code or logic.
    - Create impactful forms and reports: Whether it's an inventory of your assets or a customer sales database, Access 2010 brings the innovative tools you'd expect from Microsoft Office. Easily spot trends and add emphasis to your data, quickly create coordinating database forms and reports, and bring the Web into your database.
    - Obtain a centralized landing pad for your data: Access 2010 offers easy ways to bring your data together and help increase work quality. New technologies help break down barriers so you can share and work together on your databases, making you or your team more efficient and productive.
    - Add automation and complex expressions: If you need a more robust database design, such as preventing record deletion if a specific condition is met, or if you need to create calculations to forecast your budget, Access 2010 empowers you to be your own developer. The enhanced Expression Builder greatly simplifies your expression-building experience with IntelliSense®. With the revamped Macro Designer, it's now even easier to add basic logic to your database. New data macros allow you to attach logic to your data, centralizing the logic on the table, not in the objects that update your data.

    Key features of Access Services 2010:
    - Access database content through a web browser: Newly added Access Services on Microsoft SharePoint Server 2010 enables you to make your databases available on the Web with new web databases. Users without an Access client can open web forms and reports via a browser, and changes are automatically synchronized.
    - Simplify how you access the features you need: The Ribbon, improved in Access 2010, helps you access commands even more quickly by letting you customize or create your own tabs. The new Microsoft Office Backstage view replaces the traditional File menu to provide one central, organized location for all of your document management tasks.
    - Codeless navigation: Use professional-looking, web-like navigation forms to make frequently used forms and reports more accessible without writing any code or logic.
    - Easily reuse Access items in other databases: Use Application Parts to add pre-built Access components for common tasks to your database in a few simple clicks. You can also package common database components, such as data entry forms and reports for task management, and reuse them across your organization or other databases.
    - Simplified formatting: By using Office themes you can create coordinated, professional forms and reports across your database. Simply select a familiar, great-looking Office theme, or design your own, and apply it to your database. Newly created Access objects will automatically match your chosen theme.

    Read the article

  • HTML: upload form within another form

    - by chris
    Hi! I have a little problem with an upload form within another form (call it the data form). I know it is not possible to nest one form inside another, so I would need to put the upload form after my data form. But I need the upload-form controls in the middle of my data form for visual and structural reasons. The file upload should also perform different actions than the data form. Any idea how I can place the upload form after my data form but make it appear inside it, or any other way to handle this? I am using JavaScript and PHP as well. Thanks and best wishes for 2011! br, chris

    Read the article

  • Highstock set different colors for different markers

    - by user1651804
    I am trying to develop a chart in Highstock where each marker gets an individual color. When I push my data as an array like this: series.data.push([ parseInt(Math.round(myDate.getTime())), parseInt(Math.round(myDate.getHours())) ]); the data is shown correctly, but when I try to set the color within each data object like this, the points are not shown: series.data.push([ { x: parseInt(Math.round(myDate.getTime())), y: parseInt(Math.round(myDate.getHours())), marker:{fillColor: 'red'}} ]); What am I missing? Update: When I inspect it with Firebug I can see that the series is set correctly within the chart object, so why does it not show up? :/
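
    One likely culprit is the extra array wrapped around the point object in the second snippet. A hedged sketch of pushing the point configuration directly (the property names follow the Highstock per-point options):

        // Push the point as a plain object, not an object inside an array,
        // so Highstock reads x, y and the per-point marker colour.
        series.data.push({
            x: Math.round(myDate.getTime()),
            y: myDate.getHours(),
            marker: { fillColor: 'red' }
        });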

    Read the article

  • JTable.setRowHeight prevents me from adding more rows

    - by Brent Parker
    I'm working on a pretty simple Java app in order to learn more about JTables, TableModels, and custom cell renderers. The table is a simple table with 8 columns only with text in them. When you click on an "add" button, a dialog pops up and lets you enter the data for the columns. Now to my problem. One of the columns (the last one) should allow for multiple lines of text. I'm already putting HTML into the field, but it is not wrapping. I did some research and looked into JTable#setRowHeight(). However, once I use setRowHeight, I can no longer add rows to the table. The data is put into the table model, but it does not show in the table. If I remove the setRowHeight line, then it adds data just fine. Is there another step to adding data to my data model that I'm missing? Thanks a lot!
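
    A minimal sketch, assuming a custom AbstractTableModel (the "rows" list and method names are placeholders): the step that is easy to miss is firing the insertion event so the JTable picks up the new row; per-row heights can then be set with the two-argument setRowHeight.

        // In a custom AbstractTableModel backed by a List<Object[]> named "rows":
        public void addRow(Object[] rowData) {
            rows.add(rowData);
            int row = rows.size() - 1;
            fireTableRowsInserted(row, row);   // tell the JTable a row was added
        }

        // Per-row heights (e.g. for the wrapping HTML column) can then be set with:
        //   table.setRowHeight(rowIndex, heightInPixels);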

    Read the article

  • Get correct output from UTF-8 stored in VarChar using Entity Framework or Linq2SQL?

    - by jasonpenny
    Borland StarTeam seems to store its data as UTF-8 encoded data in VarChar fields. I have an ASP.NET MVC site that returns some custom HTML reports using the StarTeam database, and I would like to find a better solution for getting the correct data for a rewrite with MVC2. I tried a few things with Encoding.GetBytes and Encoding.GetString, but I couldn't get it to work (we use mostly Delphi at work); then I figured out a T-SQL function to return an NVarChar from UTF-8 stored in a VarChar, and created new SQL views which return the data as NVarChar, but it's slow. The actual problem appears like this: “description†instead of “description”, in both SSMS and in a webpage when using Linq2SQL. Is there a way to get the proper data out of these fields using Entity Framework or Linq2SQL?
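
    A hedged sketch of undoing the mis-decode in C#, assuming the provider read the VarChar bytes as Windows-1252: re-encode the string back to bytes with that code page, then decode the bytes as UTF-8.

        using System.Text;

        static string FixUtf8StoredAsVarChar(string raw)
        {
            // Reverse the wrong decode (UTF-8 bytes read as code page 1252),
            // then apply the right one.
            byte[] utf8Bytes = Encoding.GetEncoding(1252).GetBytes(raw);
            return Encoding.UTF8.GetString(utf8Bytes);
        }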

    Read the article

  • Financial Management: Why Move to the Cloud?

    - by Kathryn Perry
    A guest post by Terrance Wampler, Vice President, Financials Product Strategy, Oracle I’ve spent my career designing and developing financial management systems, most of it at Oracle. Every single day I either meet with our customers or talk to them on the phone. The time is usually spent discussing various business challenges facing CFOs and Controllers, who are running Oracle’s Financials. Lately, we’ve been talking a lot about cloud computing and whether it makes sense for finance to go to the cloud. Here are some pros and cons that might help you make that decision. Let’s start with the benefits of cloud solutions. The first is savings. With cloud services, you pay only for those commodities that you use. That makes you feel like you're getting better value for your money. Plus, you can preserve your cash for your core business and you can get a better matching of expenses and revenues. So, at the top of the list is lower total cost of ownership. The second point has to do with optimization. With cloud services, you’ll need less IT infrastructure so you can optimize your IT resources for better-value, higher-end projects. This also leads to greater financial visibility, where there's a clear cost for the set of services or features replaced by cloud services. And, the last benefit is what I call acceleration. You can save money by speeding up the initialization and deployment of the project. You don't have to deal with IT infrastructure and you can start implementing right away. We did a quick survey of about 70 CFOs at the CFO Summit last month in New York City. We asked them why they were looking at cloud services, and not necessarily just for financials. The No. 1 response was perceived lower cost of ownership. But of course there are risks to consider. The first thing most people think about in the cloud is security and ownership of data. So, will your data really be safe? Can you meet your own privacy policy requirements? Do you really want your private financial data exposed? Do you trust the provider? Is what you see really your data? Do you own it or is it managed by someone else? Security is a big concern that comes with an emotional component. The next thing in the risk category is reliability. Is the provider proven? You’re taking what you have control over – for example, standards and policies and internal service level agreements – away from your IT department and giving it to someone else. Will you still be able to adapt to shifts in your business? Will the provider be able to grow with your business effectively? Reliability means having a provider that can give you the service infrastructure that you need. And then there’s performance, which has two components in terms of risk. Going forward, will the provider be able to scale the infrastructure or service level if you have new employees or new businesses? And second, will the price you negotiate and the rate you lock in cover additional costs and rising service fees? Another piece is cost. What happens if you don't get the service level you want? What if you end the service? What happens, if after a few years, you send the service out for bid and change service? Can you move your data? Can you move the applications? Do the integrations work? These are cost components people don’t always take into account. And, the final piece is the business case. The perception is that you can get started really quickly with cloud. It has a perceived lower cost of total ownership and it feels cool because it's cloud. 
But do you have a good business case for moving to the cloud? Your total cost of ownership is measured over three years; then you'll renew it, so your TCO really spans six years. Have you compared that to other internal services that you're offering? You might already have a product that you can run this new business or division on. In that same survey at the CFO Summit, the execs thought the biggest perceived risks were security of data, the ability to move data back, and the ability to create a business case that actually justifies the risks. So that's the list of pros and cons. Not to leave you hanging, I will do another post on how to balance these pros and cons and make the right decision for your business.

    Read the article

  • google url shortener api and jquery not working

    - by rahim
    I can't seem to get Google's new URL shortener API to work with jQuery's post method: $(document).ready(function() { $.post("https://www.googleapis.com/urlshortener/v1/url", { longUrl: "http://www.google.com/"}, function(data){ console.log("data" + data); }); $('body').ajaxError(function(e, xhr, settings, exception) { $(this).text('fail'+e); console.log(exception); }); }); All of this gives me an empty (data) response AND an empty (exception) response. Any ideas? I've also tried this with no success: $.ajax({ type: 'POST', url: "https://www.googleapis.com/urlshortener/v1/url", data: { longUrl: "http://www.google.com/"}, success: success, dataType: "jsonp" });
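
    For reference, a hedged sketch of the POST shape the API expects (a JSON body with a JSON content type). Note that jsonp cannot carry a POST, and a plain cross-domain XHR only works if the browser and endpoint allow it, so a small server-side proxy is the usual fallback; the data.id field is an assumption about the response shape.

        $.ajax({
            type: 'POST',
            url: 'https://www.googleapis.com/urlshortener/v1/url',
            contentType: 'application/json',
            data: JSON.stringify({ longUrl: 'http://www.google.com/' }),
            dataType: 'json',
            success: function (data) {
                console.log('short url: ' + data.id);   // assumed response field
            }
        });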

    Read the article

  • Mathematica - Import CSV and process columns?

    - by Casey
    I have a CSV file that is formatted like: 0.0023709,8.5752e-007,4.847e-008 and I would like to import it into Mathematica and then have each column separated into a list so I can do some math on the selected column. I know I can import the data with: Import["data.csv"] then I can separate the columns with this: StringSplit[data[[1, 1]], ","] which gives: {"0.0023709", "8.5752e-007", "4.847e-008"} The problem now is that I don't know how to get the data into individual lists and also Mathematica does not accept scientific notation in the form 8.5e-007. Any help in how to break the data into columns and format the scientific notation would be great. Thanks in advance.

    Read the article

  • SSIS web service task parsing result.

    - by dbengals
    I have an SSIS (2005) package that uses the web service task to download to a file destination. The file contains a string of XML data. After downloading, the file looks like this: <?xml version="1.0" encoding="utf-16"?> <string>--here is XML data with escaped characters--</string> My thought was that I could then use the XML Source data flow component to pull the <string> data, but when I set this up the XML Source will not read <string> as a column. It generates an XSD and everything seems normal, but no luck seeing the column. Any ideas on getting this to work? Or would there be a better way to pull the data from the file generated by the web service? Thanks.

    Read the article

  • Issue with JSON and jQuery

    - by Jason N. Gaylord
    I'm calling a web service and returning the following data in JSON format: ["OrderNumber":"12345","CustomerId":"555"] In my web service success method, I'm trying to parse both: $.ajax({ type: "POST", url: "MyService.asmx/ServiceName", data: "{}", contentType: "application/json; charset=utf-8", dataType: "json", success: function(msg) { var data = msg.d; var rtn = ""; $.each(data, function(list) { rtn = rtn + this.OrderNumber + ", " + this.CustomerId + "<br/>"; }); rtn = rtn + "<br/>" + data; $("#test").html(rtn); } }); but I'm getting a bunch of "undefined, undefined" rows followed by the correct JSON string. Any idea why? I've tried using the eval() method but that didn't help, as I got an error message saying ']' was expected.

    Read the article

  • Is OO design's strength in semantics or encapsulation?

    - by Phil H
    Object-oriented design (OOD) combines data and its methods. This, as far as I can see, achieves two great things: it provides encapsulation (so I don't care what data there is, only how I get the values I want) and semantics (it relates the data together with names, and its methods consistently use the data as originally intended). So where does OOD's strength lie? In contrast, functional programming attributes the richness to the verbs rather than the nouns, and so both encapsulation and semantics are provided by the methods rather than the data structures. I work with a system that is on the functional end of the spectrum, and continually long for the semantics and encapsulation of OO. But I can see that OO's encapsulation can be a barrier to flexible extension of an object. So at the moment, I can see the semantics as the greater strength. Or is encapsulation the key to all worthwhile code?

    Read the article

  • InnoDB Compression Improvements in MySQL 5.6

    - by Inaam Rana
    MySQL 5.6 comes with significant improvements to the compression support inside InnoDB. The enhancements we'll talk about in this piece are also a good example of community contributions: the work on them was conceived, implemented and contributed by the engineers at Facebook.

    Before we plunge into the details, let us familiarize ourselves with some of the key concepts surrounding InnoDB compression. In InnoDB, compressed pages are fixed size; supported sizes are 1, 2, 4, 8 and 16K, and the compressed page size is specified at table creation time. InnoDB uses zlib for compression. The InnoDB buffer pool will attempt to cache compressed pages like normal pages. However, whenever a page is actively used by a transaction, we'll always have the uncompressed version of the page as well; that is, a page can be in the buffer pool in compressed-only form, or in a state where both the compressed page and the uncompressed version are present, but never in uncompressed-only form. On disk we only ever have the compressed page. When both compressed and uncompressed images are present in the buffer pool they are always kept in sync, i.e. changes are applied to both atomically.

    Recompression happens when changes are made to the compressed data. In order to minimize recompressions, InnoDB maintains a modification log within a compressed page. This is the extra space available in the page after compression, and it is used to log modifications to the compressed data, thus avoiding recompressions. DELETE (and ROLLBACK of DELETE) and purge can be performed without recompressing the page. This is because the delete-mark bit and the system fields DB_TRX_ID and DB_ROLL_PTR are stored in uncompressed format on the compressed page. A record can be purged by shuffling entries in the compressed page directory. This can also be useful for updates of indexed columns, because an UPDATE of a key is mapped to INSERT+DELETE+purge. A compression failure happens when we attempt to recompress a page and it does not fit in the fixed size. In such a case, we first try to reorganize the page and attempt to recompress; if that fails as well, we split the page into two and recompress both pages.

    Now let's talk about the three major improvements that we made in MySQL 5.6.

    Logging of compressed page images: InnoDB used to log the entire compressed data on the page to the redo logs when recompression happens. This was an extra safety measure to guard against the rare case where an attempt is made to do recovery using a different zlib version from the one that was used before the crash. Because recovery is a page-level operation in InnoDB, we have to be sure that all recompression attempts succeed without causing a btree page split. However, writing entire compressed data images to the redo log files not only makes the operation heavy duty but can also adversely affect flushing activity. This happens because redo space is used in a circular fashion, and when we generate much more redo than normal we fill up the space much more quickly; in order to reuse the redo space we have to flush the corresponding dirty pages from the buffer pool. Starting with MySQL 5.6 there is a new global configuration parameter, innodb_log_compressed_pages. The default value is true, which is the same as the previous behavior. If you are sure that you are not going to attempt to recover from a crash using a different version of zlib, then you should set this parameter to false. This is a dynamic parameter.

    Compression level: You can now set the compression level that zlib should use to compress the data. The global parameter is innodb_compression_level; the default value is 6 (the zlib default) and allowed values are 1 to 9. Again the parameter is dynamic, i.e. you can change it on the fly.

    Dynamic padding to reduce compression failures: Compression failures are expensive in terms of CPU. We go through the hoops of recompress, failure, reorganize, recompress, failure and finally page split. At the same time, how often we encounter compression failures depends largely on the compressibility of the data. In MySQL 5.6, courtesy of Facebook engineers, we have an adaptive algorithm based on per-index statistics that we gather about compression operations. The idea is that if a certain index/table is experiencing too many compression failures, we should try to pack the 16K uncompressed version of the page less densely, i.e. we let some space in the 16K page go unused in the hope that the recompression won't end up in a failure. In other words, we dynamically keep adding 'pad' to the 16K page until compression failures fall within an agreeable range. It works the other way as well: we keep removing the pad if the failure rate is fairly low. To tune the padding effort, two configuration variables are exposed: innodb_compression_failure_threshold_pct (default 5, range 0-100, dynamic) is the percentage of compression operations that must fail before we start padding, with the value 0 having the special meaning of disabling padding; innodb_compression_pad_pct_max (default 50, range 0-75, dynamic) is the maximum percentage of the uncompressed data page that can be reserved as pad.
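
    Since all four parameters are dynamic, a quick illustrative session (the values are examples, not recommendations) looks like this:

        -- Stop logging full compressed page images (only safe if recovery will
        -- always use the same zlib version).
        SET GLOBAL innodb_log_compressed_pages = OFF;

        -- Trade compression ratio for CPU by lowering the zlib level.
        SET GLOBAL innodb_compression_level = 4;

        -- Tune the adaptive padding: start padding after 5% failed compressions,
        -- and never reserve more than half of the page as pad.
        SET GLOBAL innodb_compression_failure_threshold_pct = 5;
        SET GLOBAL innodb_compression_pad_pct_max = 50;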

    Read the article

  • Interaction between Java and Android

    - by Grasper
    I am currently trying to research how to use Android with an existing java based system. Basically, I need to communicate to/from an Android application. The system currently passes object data from computer to computer using ActiveMQ as the JMS provider. On one of the computers is a display which shows object data to the user. What we want to do now is use a phone (running Android) as another option to show this object data to a user with wifi/network access. Ideally we would like to have a native application on the Android that would listen to the ActiveMQ topic and publish to another Topic and read/write/display the object data, but from some research I have done, I am not sure if this is possible. What are some other ways to approach this problem? The android Phone needs to be able to send/receive data. I have been using the AndroidEmulator for testing.
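
    As a rough sketch of what the listening side looks like with the standard JMS API and the ActiveMQ client (the broker URL and topic name are placeholders; whether the ActiveMQ client libraries can actually be bundled and run inside an Android app is exactly the open question here, so treat this as the plain-Java shape of the code):

        import javax.jms.*;
        import org.apache.activemq.ActiveMQConnectionFactory;

        public class ObjectDataListener {
            public static void main(String[] args) throws JMSException {
                ConnectionFactory factory =
                        new ActiveMQConnectionFactory("tcp://broker-host:61616"); // placeholder
                Connection connection = factory.createConnection();
                connection.start();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Topic topic = session.createTopic("object.data");                 // placeholder
                MessageConsumer consumer = session.createConsumer(topic);
                consumer.setMessageListener(new MessageListener() {
                    public void onMessage(Message message) {
                        // deserialize and display / republish the object data here
                    }
                });
            }
        }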

    Read the article

  • How does the kernel raise a segmentation fault in a scenario like this?

    - by bala1486
    I have a doubt about accessing some invalid data. How will the OS cause a segmentation fault in a scenario like this? Suppose a data segment has some 100 bytes. This will be mapped and a page table entry will be created, but the page size is 4K. Consider the data segment to be aligned with the page boundary. First, consider accessing valid data within the 100 bytes; the page table entry is now in the TLB. Next, if you try to access some invalid data between byte 100 and 4K, the entry is already there in the page table, so will the access to the invalid data be allowed? Thanks, Bala

    Read the article

  • Reused UIWebView showing previous loaded content for a brief second on iPhone

    - by Roi
    In one of my apps I reuse a webview. Each time the user enters a certain view I reload cached data into the webview using the method - (void)loadData:(NSData *)data MIMEType:(NSString *)MIMEType textEncodingName:(NSString *)encodingName baseURL:(NSURL *)baseURL and I wait for the callback - (void)webViewDidFinishLoad:(UIWebView *)webView. In the meantime I hide the webview and show a 'loading' label. Only when I receive webViewDidFinishLoad do I show the webview. Many times I see the previous data that was loaded into the webview for a brief second before the new data kicks in. I already added a delay of 0.2 seconds before showing the webview, but it didn't help. Instead of solving this by adding more time to the delay, does anyone know how to solve this issue, or maybe how to clear old data from a webview without releasing and reallocating it every time?

    Read the article

  • Android serialization: ImageView

    - by embo
    I have a simple class: public class Ball2 extends ImageView implements Serializable { public Ball2(Context context) { super(context); } } Serialization works: private void saveState() throws IOException { ObjectOutputStream oos = new ObjectOutputStream(openFileOutput("data", MODE_PRIVATE)); try { Ball2 data = new Ball2(Game2.this); oos.writeObject(data); oos.flush(); } catch (Exception e) { Log.e("write error", e.getMessage(), e); } finally { oos.close(); } } But deserialization private void loadState() throws IOException { ObjectInputStream ois = new ObjectInputStream(openFileInput("data")); try { Ball2 data = (Ball2) ois.readObject(); } catch (Exception e) { Log.e("read error", e.getMessage(), e); } finally { ois.close(); } } fails with this error: 03-24 21:52:43.305: ERROR/read error(1948): java.io.InvalidClassException: android.widget.ImageView; IllegalAccessException How do I deserialize the object correctly?
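
    One way around this is not to serialize the View at all, since ImageView drags in a Context and other non-serializable state. A minimal sketch (field names are illustrative) of persisting a plain state object instead and rebuilding the ImageView from it after reading:

        import java.io.Serializable;

        // Persist only the ball's state, not the widget that draws it.
        public class Ball2State implements Serializable {
            private static final long serialVersionUID = 1L;
            public float x;
            public float y;
            public int drawableResId;   // which image the rebuilt ImageView should show
        }

        // After ois.readObject(): create new Ball2(context), call
        // setImageResource(drawableResId), then restore the position from x and y.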

    Read the article

  • What happens to date-times and booleans when using DbLinq with SQLite?

    - by DanM
    I've been thinking about using SQLite for my next project, but I'm concerned that it seems to lack proper datetime and bit data types. If I use DbLinq (or some other ORM) to generate C# classes, will the data types of the properties be "dumbed down"? Will date-time data be placed in properties of type string or double? Will boolean data be placed in properties of type int? If yes, what are the implications? I'm imagining a scenario where I need to write a whole second layer of classes with more specific data types and do a bunch of transformations and casts, but maybe it's not so bad. If you have any experience with this or a similar scenario, what are your "lessons learned"?

    Read the article

  • Connecting to multiple firebird Databases via Delphi

    - by Branden
    I am integrating a system with 2 other applications, one using a Firebird database and the other BIS (using ADO). My Delphi application uses Firebird. I need to read data from my database and insert it into both the BIS database and the other application's Firebird database. I have created separate data modules for each. Sending data to the ADO database works fine, but when writing to the other Firebird DB (with my DB still open) I get strange errors. I have managed to isolate the problem to the second Firebird DB. Small data writes seem fine. The data structures are completely different, so I am unable to use a sync tool. Is there a way to overcome this by using multithreading, or by giving each Firebird instance its own memory space?

    Read the article

  • Updating a TableView with a WebService and Saving to CoreData

    - by jcady
    I am working on a project where I have a table view that is currently updated via a web request that returns XML. I implemented -(int)numberOfRowsInTableView:(NSTableView*)tv and -(id)tableView:(NSTableView *)tv objectValueForTableColumn:(NSTableColumn*)tableColumn row:(int)row in my XML parsing class, and have the table updated with the data that is pulled down from the server. I want to save the data that is pulled down using Core Data, so that the table can be saved/loaded. Then later on application start when the web request is made, it will only add data that is not already present. (The XML is sorted by release date, so later I will check to see which release dates are not loaded up from the Core Data store, and only load newer entries.) How would I go about implementing this? I am a very new Cocoa developer, but have gone through the entire Hillegass book. Thanks so much.

    Read the article

  • How to retain headers on all pages of an exported PDF in PHP?

    - by udaya
    Hi, I am exporting data from a PHP page to PDF. When the data exceeds the page limit, the header is not repeated on the subsequent pages. The function where I call the export to PDF is: function changeDetails() { $bType = $this->input->post('textvalue'); if($bType == "pdf") { $this->load->library('table'); $this->load->plugin('to_pdf'); $data['countrytoword'] = $this->AddEditmodel1->export(); $this->table->set_heading('Country','State','Town','Name'); $out = $this->table->generate($data['countrytoword']); $html = $this->load->view('newpdf', $data, true); pdf_create($html, $cur_date); } } My view page exports the columns Name, Country, State, Town. The result I get is: page 1: Name Country State Town / udaya india Tamilnadu kovai / chandru srilanka columbo aaaaa; page 2: vivek england gggkj gjgjkj. On page 2 I don't get the headers Name, Country, State and Town.

    Read the article

  • Crystal Reports - conversion from text to decimal?

    - by 123rmn
    I have a Crystal Report which takes data from an XML template. For a particular field of the report, say 'Cost', the database stored procedure sends data to the XSD file in decimal format, but when the Crystal Report displays the data picked up from the XSD, it is rounded off. When I right-click on other data fields of the report, I can see 'Field: table1.columnname', but when I click on the 'Cost' field, it shows 'Text:'. To my understanding, this is a text field which is mapped to pick data from the XSD, and since the type is text, it gives the result as text, hence truncating the decimals. Please suggest how I can get the decimals here. P.S.: This code was created by someone else, so I have no idea what they set up at the time. I have to fix it and I have no clue about it.

    Read the article

  • Abort a slow flush to disk after write?

    - by Therealstubot
    Is there a way to abort a Python write operation in such a way that the OS doesn't feel it's necessary to flush the unwritten data to the disk? I'm writing data to a USB device, typically many megabytes. I'm using 4096 bytes as my block size on the write, but it appears that Linux caches up a bunch of data early on and writes it out to the USB device slowly. If at some point during the write my user decides to cancel, I want the app to just stop writing immediately. I can see that there's a delay between when the data stops flowing from the application and when the USB activity light stops blinking: several seconds, up to about 10 seconds typically. I find that the app is stuck in the close() call, I'm assuming waiting for the OS to finish writing the buffered data. I call flush() after every write, but that doesn't appear to have any impact on the delay. I've scoured the Python docs for an answer but have found nothing.
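
    One hedged approach is to stop the page cache from building a backlog in the first place, so a cancel takes effect almost immediately. A rough sketch using synchronous writes (slower per block, but close() then has little or nothing left to flush); the helper and its cancel callback are illustrative:

        import os

        BLOCK = 4096

        def copy_with_cancel(src_path, dst_path, cancelled):
            # O_SYNC forces each write to reach the device before returning,
            # so at most one block is pending when the user cancels.
            fd = os.open(dst_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | os.O_SYNC, 0o644)
            try:
                with open(src_path, "rb") as src:
                    while not cancelled():
                        block = src.read(BLOCK)
                        if not block:
                            break
                        os.write(fd, block)
            finally:
                os.close(fd)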

    Read the article
