Search Results

Search found 67204 results on 2689 pages for 'import and export data fr'.


  • Does software testing methodology rely on flawed data?

    - by Konrad Rudolph
    It’s a well-known fact in software engineering that the cost of fixing a bug increases exponentially the later in development that bug is discovered. This is supported by data published in Code Complete and adapted in numerous other publications. However, it turns out that this data never existed. The data cited by Code Complete apparently does not show such a cost / development time correlation, and similar published tables only showed the correlation in some special cases and a flat curve in others (i.e. no increase in cost). Is there any independent data to corroborate or refute this? And if true (i.e. if there simply is no data to support this exponentially higher cost for late discovered bugs), how does this impact software development methodology?

    Read the article

  • Why can you reference an imported module through the importing module in Python?

    - by noam
    I am trying to understand why any import can be referenced through the importing module, e.g.

        # module master.py
        import slave

    and then:

        >>> import master
        >>> print master.slave
        <module 'slave' from 'C:\Documents and Settings....'>

    What is the purpose of this feature? I can see how it can be helpful in a package's __init__.py file, but nothing else. Is it a side effect of the fact that every import is added to the module's namespace, and that the module's namespace is visible from the outside? If so, why didn't they make an exception for modules imported from other modules (e.g. not showing them as part of the importing module's namespace to other modules)?
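
    A quick way to observe this behaviour with nothing but the standard library (a minimal sketch; os really does import its platform-specific path module internally, which is exactly the master.slave situation above):

        import os
        import sys

        # os.path was imported inside the os module, yet it is reachable as an
        # attribute of os, even though we never imported it ourselves.
        print(os.path)

        # Every imported module is registered once in sys.modules; the attribute
        # on the importing module is just another reference to that same object.
        print(sys.modules[os.path.__name__] is os.path)  # True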

    Read the article

  • Is data integrity possible without normalization?

    - by shuniar
    I am working on an application that requires the storage of location information such as city, state, zip code, latitude, and longitude. I would like to ensure that:

    - Location data is accurate: "Detroit, CA" is invalid (Detroit IS NOT in California); "Detroit, MI" is valid (Detroit IS in Michigan).
    - Cities and states are spelled correctly: "California", not "Calefornia"; "Detroit", not "Detriot".
    - Cities and states are named consistently: "CA" and "Detroit" are valid; "Cali", "california", "DET", "d-town", and "The D" are not.

    Also, since city/zip data is not guaranteed to be static, updating this data in a normalized fashion could be difficult, whereas it could be implemented as a de facto location if it is denormalized. A couple of thoughts that come to mind:

    - A collection of reference tables that store a list of all states and the most common cities and zip codes, and that can grow over time. The application would search the database for an exact or similar match and recommend corrections.
    - Some sort of service to validate the location data before it is stored in the database.

    Is it possible to fulfill these requirements without normalization, and if so, should I denormalize this data?
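
    For the reference-table idea, here is a minimal sketch in Python/SQLite (table and column names are hypothetical) of how a foreign key can guarantee that a stored location points at a known city/state pair, even if other fields are kept denormalized:

        import sqlite3

        conn = sqlite3.connect(':memory:')
        conn.execute('PRAGMA foreign_keys = ON')  # SQLite needs this to enforce FKs

        # Reference table: the canonical list of known city/state pairs.
        conn.execute("""CREATE TABLE ref_city (
                            id INTEGER PRIMARY KEY,
                            city TEXT NOT NULL,
                            state TEXT NOT NULL,
                            UNIQUE (city, state))""")

        # Each stored location must point back at a known reference entry,
        # while zip/lat/lng can stay denormalized on the row itself.
        conn.execute("""CREATE TABLE location (
                            id INTEGER PRIMARY KEY,
                            ref_city_id INTEGER NOT NULL REFERENCES ref_city(id),
                            zip TEXT, lat REAL, lng REAL)""")

        conn.execute("INSERT INTO ref_city (city, state) VALUES ('Detroit', 'MI')")
        conn.execute("INSERT INTO location (ref_city_id, zip) VALUES (1, '48201')")

        # 'Detroit, CA' can never be stored: there is no matching reference row,
        # so the insert raises sqlite3.IntegrityError.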

    Read the article

  • Importing a function/class from a Python module of the same name

    - by Brendan
    I have a Python package mymodule with a sub-package utils (i.e. a subdirectory containing modules, each with a function). The functions have the same name as the file/module in which they live. I would like to be able to access the functions as follows:

        from mymodule.utils import a_function

    Strangely, however, sometimes I can import functions using the above notation, and other times I cannot, and I have not been able to work out why. (Recently, for example, I renamed a function and the file it was in, and reflected this rename in utils/__init__.py, but in one of my scripts the name no longer imported as a function; it imported as a module instead.) The utils/__init__.py reads something like:

        __all__ = ['a_function', 'b_function', ...]
        from a_function import a_function
        from b_function import b_function
        ...

    mymodule/__init__.py has no reference to utils. Ideas?
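
    The usual culprit in this layout is that the submodule and the function it contains share a name, so whichever was bound last in the package namespace wins. A minimal sketch of an __init__.py that re-exports the callables with explicit relative imports (names are the question's own):

        # mymodule/utils/__init__.py
        # Each relative import rebinds the name from the submodule of the same
        # name to the function itself. Note that any code which later runs
        # "import mymodule.utils.a_function" re-binds the package attribute
        # back to the submodule, which is one way the "sometimes a module,
        # sometimes a function" symptom can appear.
        from .a_function import a_function
        from .b_function import b_function

        __all__ = ['a_function', 'b_function']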

    Read the article

  • How to make Excel strip ALL quotes from CSV text fields

    - by Klay
    When importing a CSV file into Excel, it only strips the double-quotes from the FIRST field on the line, but leaves them on all other fields. How can I force Excel to strip the quotes from ALL strings? For instance, I have a CSV file:

        "text1", "text2", "numeric1", "numeric 2"
        "abc", "def", 123, 456
        "abc", "def", 123, 456
        "abc", "def", 123, 456
        "abc", "def", 123, 456

    I import it into Excel using Data > Import External Data > Import Data. I specify that the fields are delimited by commas, and that the text delimiter is the double-quote character. Both the data preview and the actual Excel spreadsheet columns only strip the double-quotes from the first text field; all other text fields still have quotes around them. What's really strange is that Access is able to import this data correctly (i.e. it strips the quotes from every text field). Note that this is NOT a matter of internal commas or quotes or escape characters. This happens in Excel 2003 and Excel 2007.
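
    A likely cause, given the sample: there is a space between each comma and the opening quote, and a text qualifier is only honoured when it immediately follows the delimiter (the first field on each line has no preceding delimiter, which is why it alone loses its quotes). A small Python sketch (file names are hypothetical) that rewrites the file without the stray spaces:

        import csv

        # skipinitialspace=True makes the reader treat ' "def"' (space before
        # the quote) as a quoted field rather than as literal text.
        with open('broken.csv', newline='') as src, \
             open('clean.csv', 'w', newline='') as dst:
            writer = csv.writer(dst)
            for row in csv.reader(src, skipinitialspace=True):
                writer.writerow(row)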

    Read the article

  • Importing files in Python from __init__.py

    - by Federico Builes
    Suppose I have the following structure:

        app/
          __init__.py
          foo/
            a.py
            b.py
            c.py
            __init__.py

    a.py, b.py and c.py share some common imports (logging, os, re, etc.). Is it possible to import these three or four common modules from the __init__.py file so I don't have to import them in every one of the files? Edit: my goal is to avoid having to import 5-6 modules in each file; this is not about performance.
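
    Names bound in foo/__init__.py are not automatically visible inside a.py, so each file still needs at least one import of its own; one common compromise is a single shared-imports module, sketched below (the _common name is made up):

        # foo/_common.py: gather the shared imports once
        import logging
        import os
        import re

    Then, since imported modules are ordinary attributes of _common, each file can pull them all in with one line:

        # foo/a.py
        from ._common import logging, os, re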

    Read the article

  • How to enable SSIS as a data source type in SQL Server Reporting Services (SSRS) 2008 R2

    When you create a data source in SSRS 2008 R2 (November CTP), SSIS is not listed as a data source type, so applications that already use it as a data source, or that require it, get stuck. Let's learn how to enable SSIS and get it listed again as a data source type in SSRS 2008 R2.

    Read the article

  • Transparent Data Encryption

    Transparent Data Encryption is designed to protect data by encrypting the physical files of the database, rather than the data itself. Its main purpose is to prevent unauthorized access gained by restoring the files to another server: with Transparent Data Encryption in place, that requires the original encryption certificate and master key. It was introduced in the Enterprise edition of SQL Server 2008. John Magnabosco explains it fully, and guides you through the process of setting it up.

    Read the article

  • Google I/O 2010 - Data migration in App Engine

    App Engine 201. Speaker: Matthew Blain. Learn about the App Engine bulk loader and see an example of migrating data from an external data source into the App Engine datastore, and back out. Do you have data stored in a traditional, relational DB which you'd like to upload to App Engine? This session will teach you how. For all I/O 2010 sessions, please go to code.google.com. (From: GoogleDevelopers. Duration: 44:26)

    Read the article

  • Forbes Article on Big Data and Java Embedded Technology

    - by hinkmond
    Whoa, cool! Forbes magazine has an online article about what I've been blogging about all this time: Big Data and Java Embedded Technology, tying it all together with a big bow, connecting small devices to the data center. See: Billions of Java Embedded Devices. Here's a quote: "By the end of the decade we could see tens of billions of new Internet-connected devices... with billions of Internet-connected devices generating Big Data, are the next big thing. ... That’s why Oracle has put together an ecosystem of solutions for this new, Big Data-oriented device-to-data center world: secure, powerful, and adaptable embedded Java for intelligent devices, integrated middleware..." This is the next big thing. Java SE Embedded Technology is something to watch for in the new year. Start developing for it now to get a head start...
    Hinkmond

    Read the article

  • R: How can I use apply on rows of a data.frame and get out $column_name?

    - by John
    I'm trying to access $a using the following example:

        df <- data.frame(a=c("x","x","y","y"), b=c(1,2,3,4))
        > df
          a b
        1 x 1
        2 x 2
        3 y 3
        4 y 4

        test_fun <- function (data.frame_in) {
          print(data.frame_in[1])
        }

    I can now access $a if I use an index for the first column:

        > apply(df, 1, test_fun)
          a
        "x"
          a
        "x"
          a
        "y"
          a
        "y"
        [1] "x" "x" "y" "y"

    But I cannot access column $a with the $ notation (error: "$ operator is invalid for atomic vectors"):

        test_fun_2 <- function (data.frame_in) {
          print(data.frame_in$a)
        }
        > apply(df, 1, test_fun_2)
        Error in data.frame_in$a : $ operator is invalid for atomic vectors

    Is this not possible?

    Read the article

  • Importing long numerical identifiers into Excel

    - by Niels Basjes
    I have some data in a database that uses IDs in the form of 16-digit numbers. In some situations I need to export the data in such a way that it can be manipulated in Excel, so I export the data into a file and import it into Excel. I've tried several file formats and I'm stuck. The problem I'm facing is that when Excel reads a file containing a cell that looks like a number, it treats it as a number. The catch is that (as far as I can tell) all numerical values in Excel are double-precision floating point, which has a precision of less than 16 digits. So my IDs are changed: very often the last digit is changed to a 0. So far I've only been able to convince Excel to keep the ID unchanged by breaking it myself: by adding a letter or symbol to the ID. This however means that in order to use the value again it must be "unbroken". Is there a way to create a file where I can specify that Excel must treat the value as text without changing the value? Or is there a way to let Excel treat the value as a long (64-bit integer)?
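
    One workaround that avoids "breaking" the value: emit each ID as the formula ="...", which Excel evaluates to a text cell and therefore never rounds. A minimal Python sketch (file name and IDs are made up):

        import csv

        ids = ['1234567890123456', '9876543210987654']  # hypothetical 16-digit IDs

        with open('ids.csv', 'w', newline='') as f:
            writer = csv.writer(f)
            writer.writerow(['id'])
            for i in ids:
                # ="..." imports as a formula whose result is text, so Excel
                # never coerces it to a double and the 16th digit survives.
                writer.writerow(['="{}"'.format(i)])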

    Read the article

  • Using one data source across multiple views in Kendo UI SPA

    - by user3731783
    I am trying to build a Kendo UI SPA. I have two views: view 1 (appListView) shows application details in a grid, and view 2 (activityView) has a dropdown of application names and a grid that shows the activity for the selected application. Since I load all the application details when view 1 loads, I would like to re-use those details to populate the dropdown on view 2. Please see my code below. Everything works fine, but when I go to view 2 it calls the service again to get the application details. I would like to use the existing data if it is already loaded; if the user comes to view 2 directly, then it should fetch the application data also. I am not sure what I am missing in the code.

    View markup:

        <script id="appListView" type="text/x-kendo-template">
            <h3 data-bind="html: displayName"></h3>
            <div data-role="grid"
                 data-editable="{'mode':'popup'}"
                 data-bind="source: items"
                 data-columns="[
                     {'field': 'Name'},
                     {'field': 'ContactEmail','title':'Contact Email'}
                 ]">
            </div>
        </script>

        <script id="" type="text\x-kendo-template">
            <div>
                Activity for Application&nbsp;&nbsp;
                <input name="AppName" data-role="dropdownlist"
                       data-source="appsModel.items"
                       data-text-field="Name" data-value-field="Id"
                       data-option-label="Choose an application name"
                       style="width:250px;" />
            </div>
            <div id="Activities" data-role="grid"
                 data-bind="source: items" data-auto-bind="false"
                 data-columns="[
                     {'field': 'Domain','title':'Domain'},
                     {'field': 'ActivityType','title':'Activity Type'}
                 ]">
            </div>
        </script>

    JS with data sources and view models:

        //data sources
        var applications = new kendo.data.DataSource({
            schema: { model: { id: "Id" } },
            serverFiltering: true,
            transport: {
                read: { url: '/api/App', dataType: 'json', type: 'GET' }
            }
        });

        var activities = new kendo.data.DataSource({
            schema: { model: { id: "Id" } },
            transport: {
                read: { url: '/api/Activity', dataType: 'json', type: 'GET' },
                parameterMap: function (data, type) {
                    if (type == "read") {
                        return 'appId=' + $("#AppName").val();
                    }
                }
            }
        });

        //Models
        var appsModel = kendo.observable({
            items: applications,
            displayName: 'My Applications'
        });

        var activityModel = kendo.observable({
            items: activities,
            onAppChange: function (t) {
                $("#Activities").data("kendoGrid").dataSource.read();
            },
            dispayName: 'Application Activities'
        });

        //views
        var layout = new kendo.Layout("layout-template");
        var appListView = new kendo.View("appListView", { model: appsModel });
        var activityView = new kendo.View("activityView", { model: activityModel });

    Thank you for taking the time to read this long question.

    Read the article

  • Importing Data from Google Analytics

    - by Adam Tannon
    I am planning on building a web app with many different public-facing HTTP servers, each of which will have Google Analytics (GA) installed. I'd like to create a "dashboard" app that consolidates the GA data into one screen. I've been perusing the documentation for the GA API, but I can't tell what the end result of using it is:

    - Does the GA API allow me to do exactly what I am looking for? Or...
    - Does the GA API do something entirely different (like allow me to share my data with Google+ or something else weird)?

    Since an API can be used to CRUD any kind of data, I guess I'm asking which way the GA API goes: is it for querying (reading) data from 1+ server instances, or is it for modifying data on those servers or somewhere else? Thanks in advance!
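
    For what it's worth, the reporting side of the API is read-only: a dashboard queries aggregated metrics per profile and renders them itself. A rough sketch using the google-api-python-client library against the v3 Core Reporting API (the profile ID is a placeholder and the OAuth2 setup is omitted):

        from googleapiclient.discovery import build

        # 'http' is assumed to be an OAuth2-authorized httplib2.Http object,
        # e.g. http = credentials.authorize(httplib2.Http())  (setup omitted).
        service = build('analytics', 'v3', http=http)

        # Query sessions per day for one profile; a dashboard would loop over
        # the profile IDs of all its servers and merge the results.
        result = service.data().ga().get(
            ids='ga:12345678',            # placeholder profile ID
            start_date='2014-01-01',
            end_date='today',
            metrics='ga:sessions',
            dimensions='ga:date').execute()

        for row in result.get('rows', []):
            print(row)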

    Read the article

  • Displaying a Sorted, Paged, and Filtered Grid of Data in ASP.NET MVC

    Over the past couple of months I've authored five articles on displaying a grid of data in an ASP.NET MVC application. The first article in the series focused on simply displaying data. This was followed by articles showing how to sort, page, and filter a grid of data. We then examined how to both sort and page a single grid of data. This article looks at how to add the final piece to the puzzle: we'll see how to combine sorting, paging and filtering when displaying data in a single grid. As with its predecessors, this article offers step-by-step instructions and includes a complete, working demo available for download at the end of the article. Read on to learn more! Read More >

    Read the article

  • How to implement self-hosted WCF Data Services (http://localhost:1234/myDataService.svc/...)

    - by warmcold
    I have a project that needs to implement WCF Data Services (OData) to retrieve data from a control system (a .NET Framework application). The WCF Data Service needs to be hosted by the .NET application itself (no ASP.NET and no IIS). I have seen many WCF Data Service examples recently; they are all hosted by an ASP.NET application. I have also seen self-host (console application) examples, but they are for plain WCF services, not WCF Data Services. Here is my question: is it possible to have a standalone .NET application host WCF Data Services (http://localhost:1234/mydataservice.svc/...)? If yes, can someone provide an example? Thanks.

    Read the article

  • Writing a Data Access Layer (DAL) for SQL Server

    In this tip, I am going to show you how you can create a Data Access Layer (to store, retrieve and manage data in a relational database) in ADO.NET. I will show how you can make it data-provider independent, so that you don't have to re-write your data access layer if the data storage source changes, and so that you can reuse it in other applications that you develop.

    Read the article

  • URGENT: Patches Needed to Prevent Data Corruption in Oracle Payments

    - by LuciaC
    Development is seeing a number of datafix bugs being logged related to PPR committing data in Payments (IBY) with missing corresponding payments in Payables. These bugs have been investigated and fixed; however, customers need to proactively apply the fixes to prevent data corruption. There are two root-cause patches available for this case of partial data commit. It is critical that all R12/12.1 Payments customers apply both of the following patches ASAP:

    a) Patch 11699958: R12: Error during PPR Leads to Incomplete Data Commit and Inconsistent Status (Doc ID 1338425.1)
    b) Patch 15867522: Confirmed PPR Batches Show Payment Initiated - Data Exist Only in IBY Tables (Doc ID 1506611.1)

    Read the article

  • The Latest In Master Data Management

    Today master data continues to expand while data quality becomes more important. The challenge of clean data is not new, but the stakes and complexities are higher than ever. Fortunately, Oracle has a solution: Oracle Master Data Management. Hear from Pascal Laik, VP of Oracle MDM Product Strategy, about the benefits of Master Data Management, the solutions that Oracle offers, why they are unique, and what benefits customers are deriving from Oracle MDM products. Learn about the latest product in the Oracle MDM family and where the Oracle MDM strategy is heading.

    Read the article

  • How to store and update data tables on the client side (iOS MMO)

    - by farseer2012
    Currently I'm developing an iOS MMO game with cocos2d-x. The game depends on many data tables (Excel files) given by the designers. These tables contain data like how much gold/crystal it costs to upgrade a building (barracks, laboratory, etc.). We have about 10 tables, each with about 50 rows of data. My question is: how should I store those tables on the client side, and how should I update them once they have been modified on the server side? My current plan: use SQLite to store the data on the client side; the server will parse the Excel files and send the data to the client in JSON format, and the client will parse the JSON string and save it to the SQLite file. Is there any better method? I find that some games store CSV files on the client side; how do they update those files? Could the server send a whole file directly to the client?
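
    A sketch of the server-side half of that plan in Python (file and table names are made up). Tagging each table with a version number is one common way to answer the update question: the client asks for current versions and re-downloads only the tables that changed:

        import csv
        import json

        def table_payload(name, csv_path, version):
            """Turn one designer table (exported to CSV) into the JSON
            payload the client stores in its local SQLite database."""
            with open(csv_path, newline='') as f:
                rows = list(csv.DictReader(f))
            return json.dumps({'table': name, 'version': version, 'rows': rows})

        # Hypothetical usage: the client compares 'version' against its local
        # copy and only applies payloads that are newer.
        print(table_payload('barracks_upgrade', 'barracks_upgrade.csv', 3))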

    Read the article

  • Get your picture on the screen at MIX11: Help me create a repository of sample data

    - by Laurent Bugnion
    Here is your chance to get your picture on the big screen during my MIX11 presentation in April this year. I need to create a small repository of sample data for my demos. So instead of tapping into my imagination and creating dummy users (or reusing past information I already used in other demos), I thought I would appeal to the amazing community: send me an email with the following information. I will include the first 30 users in my sample data repository and use your info in my demo.

    - First name
    - Last name
    - Date of birth
    - Picture
    - Link to Facebook profile (optional)

    Disclaimer: The data will only be running locally on my hard drive. The demos will, however, be filmed and the videos made public. By providing this information, you explicitly consent to this data being used in demos at MIX11 and possibly in following conferences. The data will only be used for demo purposes. Thanks for your help!!

    Laurent Bugnion (GalaSoft)

    Read the article

  • Using VBA to model data in Autodesk Inventor?

    - by user108478
    I have a close friend who is using a specific device that records the dimensions of an object as it is eroded and outputs the dimensional data to an Excel sheet. The object is spherical in nature but is eroded from the top and bottom, so the shape is constantly changing and a single formula for surface area and volume would not work. This is where Inventor comes in. My friend can plug the dimensional data into Inventor and it immediately returns the surface area and volume. The erosion process takes several minutes to complete and records data at very short intervals, so it would be very arduous to plug in the data thousands of times. Since Inventor supports macros and VBA, is there a way to feed the data into Inventor and output the results into another spreadsheet? Any suggestions would be appreciated.

    Read the article

  • Nightmare with relative imports: how does PEP 366 work?

    - by pygabriel
    I have a "canonical file structure" like that (I'm giving sensible names to ease the reading): mainpack/ __main__.py __init__.py - helpers/ __init__.py path.py - network/ __init__.py clientlib.py server.py - gui/ __init__.py mainwindow.py controllers.py In this structure, for example modules contained in each package may want to access the helpers utilities through relative imports in something like: # network/clientlib.py from ..helpers.path import create_dir The program is runned "as a script" using the __main__.py file in this way: python mainpack/ Trying to follow the PEP 366 I've put in __main__.py these lines: ___package___ = "mainpack" from .network.clientlib import helloclient But when running: $ python mainpack Traceback (most recent call last): File "/usr/lib/python2.6/runpy.py", line 122, in _run_module_as_main "__main__", fname, loader, pkg_name) File "/usr/lib/python2.6/runpy.py", line 34, in _run_code exec code in run_globals File "path/mainpack/__main__.py", line 2, in <module> from .network.clientlib import helloclient SystemError: Parent module 'mainpack' not loaded, cannot perform relative import What's wrong? What is the correct way to handle and effectively use relative imports? I've tried also to add the current directory to the PYTHONPATH, nothing changes.

    Read the article

  • Importing symbols from a Python package into the caller's namespace

    - by Paul C
    I have a little internal DSL written in a single Python file that has grown to the point where I would like to split the contents across a number of different directories and files. The new directory structure currently looks like this:

        dsl/
          __init__.py
          types/
            __init__.py
            type1.py
            type2.py

    and each type file contains a class (e.g. Type1). My problem is that I would like to keep the implementation of code that uses this DSL as simple as possible, something like:

        import dsl
        x = Type1()
        ...

    This means that all of the important symbols should be available directly in the user's namespace. I have tried updating the top-level __init__.py file to import the relevant symbols:

        from types.type1 import Type1
        from types.type2 import Type2
        ...
        print globals()

    The output shows that the symbols are imported correctly, but they still aren't present in the caller's code (the code that's doing the import dsl). I think the problem is that the symbols are actually being imported into the 'dsl' namespace. How can I change this so that the classes are also directly available in the caller's namespace?
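
    A note on the mechanics, as a minimal sketch: a plain import dsl never injects names into the caller's namespace; it only binds the name dsl, and the re-exported classes live as attributes of that module object. The caller has to pull the names in explicitly. (Inside dsl/__init__.py, the explicit relative form from .types.type1 import Type1 is also the more robust spelling, since a bare types clashes with the stdlib module of the same name.)

        # caller.py
        import dsl
        x = dsl.Type1()      # works: Type1 is an attribute of the dsl module

        # To use the bare name, import it explicitly; a star-import honours
        # whatever dsl.__all__ declares.
        from dsl import Type1
        x = Type1()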

    Read the article

  • AppEngine GeoPt Data Upload

    - by Eric Landry
    I'm writing a GAE app in Java and only using Python for the data upload. I'm trying to import a CSV file that looks like this:

        POSTAL_CODE_ID,PostalCode,City,Province,ProvinceCode,CityType,Latitude,Longitude
        1,A0E2Z0,Monkstown,Newfoundland,NL,D,47.150300000000001,-55.299500000000002

    I was able to import this file into my datastore by importing Latitude and Longitude as floats, but I'm having trouble figuring out how to import lat and lng as a GeoPt. Here is my loader.py file:

        import datetime
        from google.appengine.ext import db
        from google.appengine.tools import bulkloader

        class PostalCode(db.Model):
            id = db.IntegerProperty()
            postal_code = db.PostalAddressProperty()
            city = db.StringProperty()
            province = db.StringProperty()
            province_code = db.StringProperty()
            city_type = db.StringProperty()
            lat = db.FloatProperty()
            lng = db.FloatProperty()

        class PostalCodeLoader(bulkloader.Loader):
            def __init__(self):
                bulkloader.Loader.__init__(self, 'PostalCode',
                                           [('id', int),
                                            ('postal_code', str),
                                            ('city', str),
                                            ('province', str),
                                            ('province_code', str),
                                            ('city_type', str),
                                            ('lat', float),
                                            ('lng', float)
                                           ])

        loaders = [PostalCodeLoader]

    I think that the two db.FloatProperty() lines should be replaced with a db.GeoPtProperty(), but that's where my trail ends. I'm very new to Python so any help would be greatly appreciated.
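
    One possible direction, sketched below for the Python 2 bulkloader and not tested against it: each converter in the Loader's tuple list sees exactly one CSV column, so the two coordinates first need to share a column. Preprocess the CSV to merge Latitude and Longitude into a single field, then convert that field with a small lambda (the ':' separator and the location property name are made up; the model would declare location = db.GeoPtProperty() in place of the two floats):

        import csv

        from google.appengine.ext import db
        from google.appengine.tools import bulkloader

        def merge_coords(src_path, dst_path):
            """Rewrite the CSV so the last two columns become one 'lat:lng' field."""
            with open(src_path, 'rb') as src, open(dst_path, 'wb') as dst:
                writer = csv.writer(dst)
                for row in csv.reader(src):
                    writer.writerow(row[:-2] + ['%s:%s' % (row[-2], row[-1])])

        class PostalCodeLoader(bulkloader.Loader):
            def __init__(self):
                bulkloader.Loader.__init__(self, 'PostalCode',
                                           [('id', int),
                                            ('postal_code', str),
                                            ('city', str),
                                            ('province', str),
                                            ('province_code', str),
                                            ('city_type', str),
                                            # One column, one converter: 'lat:lng' -> GeoPt
                                            ('location', lambda v: db.GeoPt(*map(float, v.split(':'))))])

        loaders = [PostalCodeLoader]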

    Read the article
