Search Results

Search found 60920 results on 2437 pages for 'data professional'.

  • mysql data type confusion

    - by zen
    So this is more of a generalized question about MySQL's data types. I'd like to store a 5-digit US zip code (zip_code) properly in this example. A county has 10 different cities and 5 different zip codes:

        city   | zip code
        -------+----------
        city 0 | 33333
        city 1 | 11111
        city 2 | 22222
        city 3 | 33333
        city 4 | 44444
        city 5 | 55555
        city 6 | 33333
        city 7 | 33333
        city 8 | 44444
        city 9 | 22222

    I would typically structure a table like this as varchar(50), int(5) and not think twice about it. (1) If we wanted to ensure that this table had only one of 5 different zip codes, we should use the ENUM data type, right? Now think of a similar scenario on a much larger scale: in a state, there are five hundred cities with 418 different zip codes. (2) Should I store the 418 zip codes as an ENUM data type, or as an int and create another table to reference?
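
    A minimal sketch of option (2), the lookup-table approach, is below; the table and column names are illustrative, not from the question:

        -- Sketch: one row per distinct zip code, referenced by each city.
        CREATE TABLE zip_codes (
            id       SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            zip_code CHAR(5) NOT NULL UNIQUE   -- CHAR keeps leading zeros, unlike INT
        );

        CREATE TABLE cities (
            id          INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            name        VARCHAR(50) NOT NULL,
            zip_code_id SMALLINT UNSIGNED NOT NULL,
            FOREIGN KEY (zip_code_id) REFERENCES zip_codes (id)
        );

    Unlike ENUM, adding a 419th zip code is then just an INSERT rather than an ALTER TABLE.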

  • Can you Export/Import Flex (4) Data Services?

    - by mkraken
    Flex newb here. I'm working in Flash Builder 4 (Flex 4?), and am being asked to create the client-side data services integration 'layer' in a Flex app. There is another team working on the actual UI/presentation. Both parts must be deployed in a single SWF. If I use the Data/Services wizard to build out my service connections (and generate the ActionScript), is it possible to export these 'connections' so that they can easily be imported into another project? Or must they be defined through the wizard all over again? The other team wants to be able to see the connections appear in the new project's Data/Services inspector (IDE tab). Thanks!

  • Managing changes in memory-based data format

    - by kamziro
    So I've been using a compact data type in C++, and saving to or loading from a file involves just copying the bits of memory in and out. However, the obvious drawback of this is that if you need to add/remove elements of the data, it becomes kind of messy. There are also problems with versioning: suppose you distribute a program which uses version A of the data, then the next day you make version B of it, and later on version C. I suppose this can be solved by using something like XML or JSON, but suppose you can't do that for technical reasons. What is the best way to do this, apart from having different if cases etc. (which would be pretty ugly, I'd imagine)?
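
    One common pattern for this (a sketch only, with assumed struct and field names) is to prefix the raw data with a small header that carries a format version, and dispatch on that version when loading older files:

        // Sketch: version tag in a header, upgraded on load. Assumes the file was
        // written on a machine with the same endianness and struct padding.
        #include <cstdint>
        #include <cstdio>
        #include <stdexcept>

        struct Header   { uint32_t magic; uint32_t version; };
        struct RecordV1 { int32_t id; float value; };
        struct RecordV2 { int32_t id; float value; float weight; };  // field added in version B

        RecordV2 loadRecord(std::FILE* f) {
            Header h{};
            if (std::fread(&h, sizeof h, 1, f) != 1 || h.magic != 0x2E524543u)
                throw std::runtime_error("unrecognized file");
            RecordV2 out{};
            if (h.version == 1) {                       // old layout: read and upgrade
                RecordV1 r1{};
                if (std::fread(&r1, sizeof r1, 1, f) != 1)
                    throw std::runtime_error("truncated file");
                out = {r1.id, r1.value, 1.0f};          // default for the new field
            } else {                                    // current layout: read directly
                if (std::fread(&out, sizeof out, 1, f) != 1)
                    throw std::runtime_error("truncated file");
            }
            return out;
        }

    Old readers can at least detect files that are too new by checking the version field, instead of silently misreading them.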

  • Error in Using Dynamic Data Entities WebSite in VS2012

    - by amin behzadi
    I decided to use a Dynamic Data Entities website in VS2012. So I created the website, then added an App_Code directory, added a new .edmx to it and named it myDB.edmx. After that I uncommented the line in global.asax which registers the entity context:

        DefaultModel.RegisterContext(typeof(myDBEntities),
            new ContextConfiguration() { ScaffoldAllTables = true });

    But when I run the website this error occurs:

        The context type 'myDBEntities' is not supported.

    How can I fix it? P.S.: You know there are some differences between using L2S with a Dynamic Data L2S website and using Entity Framework with a Dynamic Data Entities website.
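
    If the .edmx code generator produced a DbContext-derived context, one workaround that is often suggested (an assumption here, not something stated in the post) is to register the underlying ObjectContext through the factory overload of RegisterContext:

        // Sketch: Dynamic Data scaffolding expects an ObjectContext. If myDBEntities
        // derives from DbContext, unwrap it via IObjectContextAdapter
        // (System.Data.Entity.Infrastructure) and register a factory instead of a type.
        DefaultModel.RegisterContext(
            () => ((IObjectContextAdapter)new myDBEntities()).ObjectContext,
            new ContextConfiguration() { ScaffoldAllTables = true });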

  • Migrating a Core Data Store from iCloud to local

    - by schmok
    I'm currently struggling with Core Data iCloud migration. I want to move a store from an iCloud ubiquity container (.nosync) to a local URL. Problem is whenever I call something like this:

        NSPersistentStore *newStore =
            [self.persistentStoreCoordinator migratePersistentStore:currentiCloudStore
                                                              toURL:localURL
                                                            options:nil
                                                           withType:NSSQLiteStoreType
                                                              error:&error];

    I get this error:

        -[NSPersistentStoreCoordinator addPersistentStoreWithType:configuration:URL:options:error:](1055):
        CoreData: Ubiquity: Error: A persistent store which has been previously added to a coordinator
        using the iCloud integration options must always be added to the coordinator with the options
        present in the options dictionary. If you wish to use the store without iCloud, migrate the data
        from the iCloud store file to a new store file in local storage.
        file://localhost/Users/sch/Library/Containers/bla/Data/Documents/tmp.sqlite.
        This will be a fatal error in a future release

    Anyone ever seen this error? Maybe I'm just missing the right migration options?
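
    One direction worth trying (a hedged sketch, not from the post; the content name below is assumed): when migrating away from iCloud, pass the ubiquity options the store was originally added with, plus the option that strips the ubiquitous metadata, instead of options:nil.

        // Sketch: migrate the iCloud store to a local file and drop its iCloud metadata.
        NSDictionary *options = @{
            NSPersistentStoreUbiquitousContentNameKey : @"MyAppStore",     // assumed name
            NSPersistentStoreRemoveUbiquitousMetadataOption : @YES         // iOS 7 / OS X 10.9 and later
        };
        NSError *error = nil;
        NSPersistentStore *newStore =
            [self.persistentStoreCoordinator migratePersistentStore:currentiCloudStore
                                                              toURL:localURL
                                                            options:options
                                                           withType:NSSQLiteStoreType
                                                              error:&error];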

  • Problems persisting Core Data structures on iPhone/iPad

    - by Rivier
    I have an iPhone/iPad app using Core Data to keep my application data. Sometimes, even though I don't get any error messages, the data is not really saved so when the app starts anew, it's all gone. This problem seems to disappear after physically rebooting the device, but otherwise it's pretty random and hard to track. Has anyone seen a similar issue? Also, it seems to happen more often in the iPhone 1st generation, less so in the 3G/3GS, and seldom in the iPad. Very strange...
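
    One thing worth ruling out (a generic sketch, not from the post): make sure the managed object context is actually saved when the app is backgrounded or terminated, since changes held only in memory are lost when the process is killed.

        // Sketch: explicit save in the app delegate; assumes a managedObjectContext property.
        // On pre-iOS 4 devices (no multitasking) do the same in applicationWillTerminate:.
        - (void)applicationDidEnterBackground:(UIApplication *)application {
            NSError *error = nil;
            if ([self.managedObjectContext hasChanges] &&
                ![self.managedObjectContext save:&error]) {
                NSLog(@"Core Data save failed: %@, %@", error, [error userInfo]);
            }
        }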

  • Program-wide data, C++

    - by bobobobo
    I'd like to make program-wide data in a C++ program. The easiest way to do it in C# is just public static members:

        // C#
        public static class DataContainer {
            public static Object data1;
            public static Object data2;
        }

    In C++ you can do the same thing. C++ global data, way #1:

        class DataContainer {
        public:
            static Object data1;
            static Object data2;
        };

        Object DataContainer::data1;
        Object DataContainer::data2;

    However there's also extern. C++ global data, way #2:

        class DataContainer {
        public:
            Object data1;
            Object data2;
        };

        extern DataContainer * dataContainer; // instantiate in .cpp file

    Which is better, or possibly another way which I haven't thought about?
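
    A third option that often comes up for this (a sketch, not from the post) is a function-local static accessor, which keeps a single global instance but avoids the undefined initialization order between translation units that plain globals have:

        // Sketch: way #3, a function-local static ("Meyers singleton").
        class DataContainer {
        public:
            Object data1;
            Object data2;
        };

        DataContainer& globalData() {
            static DataContainer instance;   // constructed on first use
            return instance;
        }

        // usage: globalData().data1 = ...;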

  • R: Converting a list of data frames into one data frame

    - by JD Long
    I have code that at one place ends up with a list of data frames which I really want to convert to a single big data frame. I got some pointers from an earlier question which was trying to do something similar but more complex. Here's an example of what I am starting with (this is grossly simplified for illustration):

        listOfDataFrames <- NULL
        for (i in 1:100) {
            listOfDataFrames[[i]] <- data.frame(a=sample(letters, 500, rep=T),
                                                b=rnorm(500), c=rnorm(500))
        }

    I am currently using this:

        df <- do.call("rbind", listOfDataFrames)

    EDIT: whoops. In my haste to implement what I had "learned" in a previous question I totally screwed up. Yes, the unlist() is just plain wrong. I'm editing that out of the question above.
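
    For long lists, two commonly used alternatives to do.call("rbind", ...) are shown below (a sketch, assuming the respective packages are installed):

        # data.table: fast row-binding, returns a data.table
        library(data.table)
        df <- rbindlist(listOfDataFrames)

        # dplyr: returns a data.frame/tibble
        library(dplyr)
        df <- bind_rows(listOfDataFrames)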

  • Performing multiple fetches in Core Data within the same view

    - by yesimarobot
    I have my CD store set up and everything is working. Once my initial fetch is performed, I need to perform several fetches based on calculations using the data from my first fetch. The examples provided by Apple are great and helped me get everything going, but I'm struggling with executing successive fetches. Any suggestions or tutorial links are appreciated.

    1. Table view loads data from the CD store.
    2. When a user taps a row, it pushes a detail view.
    3. The detail view loads details from CD.
    [THE ABOVE STEPS ARE ALL WORKING]
    4. I perform several calculations on the data fetched in the detail view.
    5. I need to then perform several other fetches based on the results of my calculations.
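
    A follow-up fetch is just another NSFetchRequest run against the same context, with its predicate built from the computed values. A minimal sketch (entity and property names assumed, not from the post):

        // Sketch: second fetch driven by values calculated from the first one.
        NSFetchRequest *followUp = [NSFetchRequest fetchRequestWithEntityName:@"Measurement"];
        followUp.predicate = [NSPredicate predicateWithFormat:@"value > %@ AND category == %@",
                                                              computedThreshold, selectedCategory];
        NSError *error = nil;
        NSArray *results = [self.managedObjectContext executeFetchRequest:followUp error:&error];
        if (results == nil) {
            NSLog(@"Follow-up fetch failed: %@", error);
        }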

  • Data Source Security Part 1

    - by Steve Felts
    I've written a couple of articles on how to store data source security credentials using the Oracle wallet. I plan to write a few articles on the various types of security available to WebLogic Server (WLS) data sources. There are more options than you might think!

    There have been several enhancements in this area in WLS 10.3.6. There are a couple more enhancements planned for release WLS 12.1.2 that I will include here for completeness. This isn't intended as a teaser: if you call your Oracle support person, you can get them now as minor patches to WLS 10.3.6.

    The current security documentation is scattered in a few places, has a few incorrect statements, and is missing a few topics. It also seems that the knowledge of how to apply some of these features isn't written down. The goal of these articles is to talk about WLS data source security in a unified way and to introduce some approaches to using the available features.

    Introduction to WebLogic Data Source Security Options

    By default, you define a single database user and password for a data source. You can store it in the data source descriptor or make use of the Oracle wallet. This is a very simple and efficient approach to security. All of the connections in the connection pool are owned by this user and there is no special processing when a connection is given out. That is, it's a homogeneous connection pool and any request can get any connection from a security perspective (there are other aspects like affinity). Regardless of the end user of the application, all connections in the pool use the same security credentials to access the DBMS. No additional information is needed when you get a connection because it's all available from the data source descriptor (or wallet):

        java.sql.Connection conn = mydatasource.getConnection();

    Note: You can enter the password as a name-value pair in the Properties field (this is not permitted for production environments) or you can enter it in the Password field of the data source descriptor. The value in the Password field overrides any password value defined in the Properties passed to the JDBC Driver when creating physical database connections. It is recommended that you use the Password attribute in place of the password property in the properties string because the Password value is encrypted in the configuration file (stored as the password-encrypted attribute in the jdbc-driver-params tag in the module file) and is hidden in the administration console. The Properties and Password fields are located on the administration console Data Source creation wizard or Data Source Configuration tab.

    The JDBC API can also be used to programmatically specify a database user name and password, as in the following:

        java.sql.Connection conn = mydatasource.getConnection("user", "password");

    According to the JDBC specification, it's supposed to take a database user and associated password, but different vendors implement this differently. WLS, by default, treats this as an application server user and password. The pair is authenticated to see if it's a valid user, and that user is used for WLS security permission checks. By default, the user is then mapped to a database user and password using the data source credential mapper, so this API sort of follows the specification but database credentials are one step removed from the application code. More details and the rationale are described later.

    While the default approach is simple, it does mean that only one database user is doing all of the work. You can't figure out who actually did the update and you can't restrict SQL operations by who is running the operation, at least at the database level. Any type of per-user logic will need to be in the application code instead of having the database do it. There are various WLS data source features that can be configured to provide some per-user information about the operations to the database.

    WebLogic Data Source Security Options

    This table describes the features available for WebLogic data sources to configure database security credentials, with a brief description of each. It also captures information about the compatibility of these features with one another.

        Feature: User authentication (default)
          Description: Default getConnection(user, password) behavior - validate the input and use the user/password in the descriptor.
          Can be used with: Set client identifier
          Can't be used with: Proxy Session, Identity pooling, Use database credentials

        Feature: Use database credentials
          Description: Instead of using the credential mapper, use the supplied user and password directly.
          Can be used with: Set client identifier, Proxy session, Identity pooling
          Can't be used with: User authentication, Multi Data Source

        Feature: Set Client Identifier
          Description: Set a client identifier property associated with the connection (Oracle and DB2 only).
          Can be used with: Everything
          Can't be used with: -

        Feature: Proxy Session
          Description: Set a light-weight proxy user associated with the connection (Oracle-only).
          Can be used with: Set client identifier, Use database credentials
          Can't be used with: Identity pooling, User authentication

        Feature: Identity pooling
          Description: Heterogeneous pool of connections owned by specified users.
          Can be used with: Set client identifier, Use database credentials
          Can't be used with: Proxy session, User authentication, Labeling, Multi-datasource, Active GridLink

    Note that all of these features are available with both XA and non-XA drivers.

    Currently, the Proxy Session and Use Database Credentials options are on the Oracle tab of the Data Source Configuration tab of the administration console (even though the Use Database Credentials feature is not just for Oracle databases - oops). The rest of the features are on the Identity tab of the Data Source Configuration tab in the administration console (plan on seeing them all in one place in the future).

    The subsequent articles will describe these features in more detail. Keep referring back to this table to see the big picture.
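
    For readers who have not used a WLS data source from application code before, a minimal sketch of the two getConnection() variants discussed above, obtained through a JNDI lookup (the JNDI name is assumed, not from the article):

        // Sketch: look up the configured data source over JNDI and get connections both ways.
        javax.naming.InitialContext ctx = new javax.naming.InitialContext();
        javax.sql.DataSource ds = (javax.sql.DataSource) ctx.lookup("jdbc/MyDataSource"); // assumed JNDI name

        java.sql.Connection pooled = ds.getConnection();                    // pool owner's credentials
        java.sql.Connection mapped = ds.getConnection("user", "password");  // WLS user, mapped via the credential mapper by default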

  • How to reduce the number of points in (x,y) data

    - by Gowtham
    I have a set of data points: (x1, y1) (x2, y2) (x3, y3) ... (xn, yn). The number of sample points can be thousands. I want to represent the same curve as accurately as possible with a minimal set of points (let's suppose 30). I want to capture as many inflection points as possible. However, I have a hard limit on the number of points allowed to represent the data. What is the best algorithm to achieve this? Is there any free software library that can help? PS: I have tried to implement relative-slope-difference-based point elimination, but this does not always result in the best possible data representation. Thanks for your time. -Gowtham
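
    One widely used algorithm for this is Ramer-Douglas-Peucker simplification, which keeps the points that deviate most from a straight-line approximation and therefore tends to preserve inflection points. A sketch follows (not from the question); to hit a hard budget of 30 points, the tolerance can be bisected until the result is small enough.

        # Sketch: Ramer-Douglas-Peucker line simplification.
        import math

        def rdp(points, epsilon):
            """Simplify a polyline given as [(x, y), ...] with tolerance epsilon."""
            if len(points) < 3:
                return list(points)
            (x1, y1), (x2, y2) = points[0], points[-1]
            dx, dy = x2 - x1, y2 - y1
            norm = math.hypot(dx, dy) or 1e-12
            # perpendicular distance of each interior point to the end-to-end chord
            dists = [abs(dy * (x - x1) - dx * (y - y1)) / norm for x, y in points[1:-1]]
            i = max(range(len(dists)), key=dists.__getitem__) + 1
            if dists[i - 1] > epsilon:
                # split at the farthest point and simplify both halves
                return rdp(points[:i + 1], epsilon)[:-1] + rdp(points[i:], epsilon)
            return [points[0], points[-1]]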

  • Database Partitioning and Multiple Data Source Considerations

    - by Jeffrey McDaniel
    With the release of P6 Reporting Database 3.0, partitioning was added as a feature to help with performance and data management. Careful investigation of requirements should be conducted prior to installation to help improve overall performance throughout the lifecycle of the data warehouse and to prevent future maintenance that would result in data loss. Before installation, try to determine how many data sources and partitions will be required, along with the ranges. In P6 Reporting Database 3.0 any adjustments outside of defaults must be made in the scripts, and changes will require new ETL runs for each data source.

    Considerations:

    1. Standard Edition or Enterprise Edition of Oracle Database. If you aren't using Oracle Enterprise Edition Database, the partitioning feature is not available. Multiple data sources are only supported on Enterprise Edition of Oracle Database.

    2. Number of data source IDs for partitioning during configuration. This setting specifies how many partitions will be allocated for tables containing data source information. It requires some evaluation prior to installation, as there are repercussions if you don't estimate correctly. For example, suppose you configured the software for only 2 data sources and the partition setting was set to 2, but along came a 3rd data source. The necessary steps to accommodate this change are as follows:

       a) By default, 3 partitions are configured in the Reporting Database scripts. Edit the create_star_tables_part.sql script located in <installation directory>\star\scripts and search for "partition". You'll see P1, P2, P3. Add additional partitions and sub-partitions for P4 and so on. These will appear in several areas. (See the P6 Reporting Database 3.0 Installation and Configuration guide for more information on this and how to adjust partition ranges.)

       b) Run starETL -r. This will recreate each table with the new partition key. The effect of this step is that all table data will be lost except for history-related tables.

       c) Run starETL for each of the 3 data sources (with the data source number, e.g. starETL.bat "-s2", as defined in the P6 Reporting Database 3.0 Installation and Configuration guide).

       The best strategy for this setting is to overestimate based on possible growth. If during implementation it is deemed that there are at least 2 data sources with possibility for growth, it is a better idea to set this setting to 4 or 5, allowing room for the future and preventing a 'start over' scenario.

    3. The number of partitions and the number of months per partition are not specific to multi-data source. These settings work in accordance with a sub-partition of larger tables with regard to time-related data, and they are dataset-specific for optimization. The number of months per partition is self-explanatory: optimally, the smaller the partition, the better the query performance, so if the dataset has an extremely large number of spread/history records, a lower number of months is optimal. Working in accordance with this setting is the number of partitions, which determines how many "buckets" will be created per the number-of-months setting.

    For example, if you kept the default of 3 partitions and selected 2 months for each partition, you would end up with:

       - 1st partition: 2 months
       - 2nd partition: 2 months
       - 3rd partition: all the remaining records

    Therefore, with regard to this setting, it is important to analyze your source database spread ranges and history settings when determining the proper number of months per partition and number of partitions to optimize performance. Also be aware that the DBA will need to monitor when these partition ranges will fill up and when additional partitions will need to be added. If you get to the final range partition and there are no additional range partitions, all data will be included in the last partition.

  • Data frame linear fit in R

    - by user1247384
    This is perhaps a simple question, but I am a n00b. Say I have a data frame with a bunch of columns. I need to call the lm function over columns 1 and 2, 1 and 3, and so on. So basically I need to loop over all columns and store the results of the fit as I build the model. The problem I am running into is that lm(df[1] ~ df[2], data = df) doesn't work. In this case df is the data frame object and df[1] is the first column. What is a good way to do this in a loop, as in, access the columns of df in an iterative fashion? Thanks.
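
    A small sketch of one way to do this (not from the question): use df[[i]] so lm() receives vectors rather than one-column data frames, or build the formula by column name.

        # Sketch: regress column 1 on each remaining column, storing the fits.
        fits <- list()
        for (i in 2:ncol(df)) {
            fits[[names(df)[i]]] <- lm(df[[1]] ~ df[[i]])
        }

        # Alternative with readable formulas, via reformulate():
        fits <- lapply(names(df)[-1], function(v)
            lm(reformulate(v, response = names(df)[1]), data = df))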

  • tsql sum data and include default values for missing data

    - by markpirvine
    Hi, I would like a query that will show a sum of columns, with a default value for missing data. For example, assume I have a table as follows:

        type_lookup:

        id | name
        ---+--------
        1  | self
        2  | manager
        3  | peer

    And a table as follows:

        data:

        id | type_lookup_id | value
        ---+----------------+------
        1  | 1              | 1
        2  | 1              | 4
        3  | 2              | 9
        4  | 2              | 1
        5  | 2              | 9
        6  | 1              | 5
        7  | 2              | 6
        8  | 1              | 2
        9  | 1              | 1

    After running the query I would like a result set as follows:

        type_lookup_id | value
        ---------------+------
        1              | 13
        2              | 25
        3              | 0

    I would like all rows in the type_lookup table to be included in the result set - even if they don't appear in the data table. Any help would be greatly appreciated, Mark
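
    A sketch of one way to write this in T-SQL (not from the question): start from type_lookup, LEFT JOIN onto data so types with no rows survive, and turn the NULL sums into 0.

        SELECT tl.id AS type_lookup_id,
               COALESCE(SUM(d.value), 0) AS value
        FROM   type_lookup AS tl
               LEFT JOIN data AS d ON d.type_lookup_id = tl.id
        GROUP  BY tl.id
        ORDER  BY tl.id;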

  • Designing a table to store EXIF data

    - by rafale
    I'm looking to get the best performance out of querying a table containing EXIF data. The queries in question will only search the EXIF data for the specified strings and return the row index on a match. With that said, would it be better to store the EXIF data in a table with separate columns for each of the tags, or would storing all of the tags in a single column as one long delimited string suit me just as well? There are around 115 EXIF tags I'll be storing, and each record would be around 1500 to 2000 chars in length if concatenated into a single string.
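
    A third layout worth weighing (a sketch, not from the question) is one row per tag per photo, which keeps individual tags searchable and indexable without 115 columns or string parsing:

        -- Sketch: key/value EXIF storage (hypothetical table and column names).
        CREATE TABLE exif_tags (
            photo_id  INT UNSIGNED  NOT NULL,
            tag_name  VARCHAR(64)   NOT NULL,
            tag_value VARCHAR(255)  NOT NULL,
            PRIMARY KEY (photo_id, tag_name),
            KEY idx_tag_search (tag_name, tag_value)
        );

        -- e.g. return the matching row identifiers for one tag value
        SELECT photo_id FROM exif_tags
        WHERE tag_name = 'Model' AND tag_value = 'NIKON D90';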

  • jquery ajax form function(data)

    - by RussP
    Can someone please tell me where I have gone wrong? Whatever I do I get the answer "no".

    jQuery to send the data to the PHP query:

        $j.post("logincheck.php", {
            username: $j('#username').attr('value'),
            password: $j('#password').attr('value'),
            rand: Math.random()
        }, function(data) {
            if (data == 'yes') { alert('yes'); }
            else { alert('no'); }
        });

    Here is the PHP query:

        if (isset($_POST['username'])):
            $username = $_POST['username'];
            $password = $_POST['password'];
            $posts = mysql_query("SELECT * FROM users WHERE username='$username'");
            $no_rows = mysql_num_rows($posts);
            if ($no_rows == 1):
                while ($row = mysql_fetch_array($posts)):
                    print 'yes';
                endwhile;
            else:
                print 'no';
                //header('location: index.php');
            endif;
        endif;

    Thanks in advance

  • What frameworks exist for data subscription and update?

    - by Timothy Pratley
    There is one server with multiple clients. The clients are viewing subsets of the server's entire data. If the data that a client is viewing changes, the client should be informed of the changes so that it displays the current data. Example: two clients are viewing a list of users in an administration screen. One client adds a new user to the list and modifies the permissions of another user. The other client sees the changes propagated to their view. In the client-side code I would like the users list to be updated by the framework itself, raising changed events such that it will be redrawn - similar to 'cells' or dataflow. I am looking specifically for a .NET or Java implementation.

  • JavaScript: how to use data but to hide it so as it cannot be reused

    - by loukote
    Hi all. I've some data that I'd like to publish on just one website, i.e. it should not be reused on other websites. The data is a set of numbers that change every day; our journalists work hard to gather it. Is there any way to hide, encrypt, etc. the data so that it cannot be reused by others, but still show it in a graph at the same time? I found an ASCII-to-HEX tool that could be used for this (http://utenti.multimania.it/ascii2hex/). I wonder if you can suggest other ways. (Even if I have to completely change the strategy.) Many thanks!

  • Storing data with a stand-alone C++ application

    - by Mike
    I work with Apache, PHP, and MySQL for web development and local applications. For the past couple of years I have slowly been learning C++ and want to build an application this summer. Specifically, I want to make a "library" application in which I can store information about the books, CDs, and records that I own. I know this type of app exists, but I want to learn C++ and this seems like a good way to go about it. Here are a few questions:

    1. Is it possible to create a stand-alone application that does not require a database for storing data?
    2. If the answer to #1 above is "yes", is it a good idea to do this for an application that could potentially need to manage a lot of data?
    3. What data-storage options would you recommend for use with a C++ application?

    Thanks!
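
    On question #1: yes, a plain file the application owns is enough for a small catalogue, and an embedded database such as SQLite is the usual step up when the data grows, since it needs no server. A minimal no-database sketch (illustrative types and file layout, not from the question):

        // Sketch: save/load a simple tab-delimited catalogue file with the standard library.
        #include <fstream>
        #include <sstream>
        #include <string>
        #include <vector>

        struct Item { std::string title; std::string artist; int year; };

        void save(const std::vector<Item>& items, const std::string& path) {
            std::ofstream out(path);
            for (const auto& it : items)
                out << it.title << '\t' << it.artist << '\t' << it.year << '\n';
        }

        std::vector<Item> load(const std::string& path) {
            std::vector<Item> items;
            std::ifstream in(path);
            std::string line;
            while (std::getline(in, line)) {
                std::istringstream row(line);
                Item it;
                std::getline(row, it.title, '\t');
                std::getline(row, it.artist, '\t');
                row >> it.year;
                items.push_back(it);
            }
            return items;
        }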

  • Best data recovery tools?

    - by Nonick
    So due to a recent act of stupidity and bravado, I uttered the words "backups! who needs backups?!" and what followed was the tragic loss of 260 GB of data. This scenario in particular requires me to recover a repartitioned hard disk, but I was wondering what tools people here use in general to recover lost data. I'm sure everyone has been there: accidentally rewriting files, resaving an old version, computer crash, hard disk death, user deletes an important document, etc. So I was thinking it might be an interesting point of discussion as to what you use to recover lost data. I apologise if this is considered irrelevant, but considering there have been a few recovery questions, I think this might be interesting.

  • Selecting data from mysql table and related data from another to join them

    - by knittledan
    I've looked at other questions and answers but still don't understand, which brings me here. I have one database and two tables, let's say table1 and table2. I'm looking to grab all the information from table1 and only one column from table2 that coincides with the correct row in table1. Example, which I know is wrong:

        SELECT table1.*, table2.time_stamp
        FROM table1, table2
        WHERE table1.ticket_id=$var AND table1.user_id = table2.user_id

    Basically, select data from table1, then use a value from the selected table to grab the related data from table2 and join them to output them as one mysql_query. I'm sure it's simple and has been asked before.

    Edit: I don't receive an error, SQL just returns nothing. The long form of this would be:

        $sqlResults = mysql_query("SELECT table1.* FROM table1 WHERE table1.ticket_id=$var");
        while ($rowResult = mysql_fetch_array($sqlResults)) {
            $userID = $rowResult['user_id'];
            $sqlResults2 = mysql_query("SELECT table2.time_stamp FROM table2 WHERE table2.user_id=$userID");
        }

    I want to combine that into one SQL statement so I don't have to hit table2 for every row table1 has.
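
    A sketch of the single-query version (not from the question; the literal ticket id stands in for the bound $var):

        SELECT t1.*, t2.time_stamp
        FROM table1 AS t1
        INNER JOIN table2 AS t2 ON t2.user_id = t1.user_id
        WHERE t1.ticket_id = 123;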

  • Ordered Data Structure that allows to efficiently remove duplicate items

    - by devoured elysium
    I need a data structure that:

    1. Must be ordered (adding elements a, b and c to an empty structure will make them be at positions 0, 1 and 2).
    2. Allows adding repeated items. That is, I can have a list with a, b, c, a, b.
    3. Allows removing all occurrences of a given item (if I do something like delete(1), it will delete all occurrences of 1 in the structure).

    I can't really pick what the best data structure could be here. I thought at first about something like a List (the problem is having an O(n) operation when removing items), but maybe I'm missing something? What about trees/heaps? Hashtables/maps? I'll have to assume I'll do as much adding as removing with this data structure. Thanks
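
    One structure that fits all three requirements (a sketch in Java, not from the question) is a doubly linked list for ordering plus a map from each value to its nodes, so removing every occurrence of a value is proportional to its count rather than a full scan:

        // Sketch: insertion-ordered list with duplicates and O(k) remove-all.
        import java.util.*;

        class OrderedMultiList<T> {
            private static final class Node<T> { T value; Node<T> prev, next; Node(T v) { value = v; } }
            private final Node<T> head = new Node<>(null), tail = new Node<>(null);
            private final Map<T, List<Node<T>>> index = new HashMap<>();

            OrderedMultiList() { head.next = tail; tail.prev = head; }

            void add(T value) {                       // append; duplicates allowed
                Node<T> n = new Node<>(value);
                n.prev = tail.prev; n.next = tail;
                tail.prev.next = n; tail.prev = n;
                index.computeIfAbsent(value, k -> new ArrayList<>()).add(n);
            }

            void removeAll(T value) {                 // unlink every occurrence of value
                List<Node<T>> nodes = index.remove(value);
                if (nodes == null) return;
                for (Node<T> n : nodes) { n.prev.next = n.next; n.next.prev = n.prev; }
            }
        }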

  • Importing data from many excel workbooks and sheets into a single workbook/table

    - by Max Rusalen
    Hi, I have 54 Excel files with three sheets each. Each sheet has a different number of data entries, but they are set out in an identical format, and I need to import the data from those sheets into a single workbook using VBA. Is there any way I can program it so I can build the loops to import the data, but without having to write in each workbook name for each loop/sheet? I think I can use the Call function, but I don't know how to make the loop code independent of the workbook name it applies to. Thank you so much in advance, Millie
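
    A sketch of one way to avoid hard-coding the names (not from the question; the folder path and destination sheet name are assumed): enumerate the files with Dir and loop over each workbook's sheets.

        ' Sketch: append every sheet of every workbook in a folder to one sheet.
        Sub ImportAll()
            Dim folder As String, fname As String
            Dim wb As Workbook, ws As Worksheet, dest As Worksheet
            Dim nextRow As Long

            folder = "C:\data\"                              ' assumed folder of the 54 files
            Set dest = ThisWorkbook.Worksheets("Combined")   ' assumed destination sheet
            fname = Dir(folder & "*.xls*")

            Do While fname <> ""
                Set wb = Workbooks.Open(folder & fname, ReadOnly:=True)
                For Each ws In wb.Worksheets
                    nextRow = dest.Cells(dest.Rows.Count, 1).End(xlUp).Row + 1
                    ws.UsedRange.Copy dest.Cells(nextRow, 1) ' copies header rows too; offset if needed
                Next ws
                wb.Close SaveChanges:=False
                fname = Dir                                  ' next file
            Loop
        End Sub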

  • Data access layer design

    - by Sam
    I have a web app and a console application accessing a database. The database has two tables (A, B), one of which (A) is specific to the web app. When writing a data access layer, what is the best way to do it? Technically, a data access layer should provide access to all the data accessible. In doing so, methods to interact with A are exposed to the console application if we have a single access layer. Does creating two access layers for the two tables in the same database make any sense? What is a good way to do it?

  • How to insert data in mysql data base table

    - by user1289538
    I am inserting data into a MySQL database table, but the data does not get inserted into the fields. I am using the following code:

        $providernpi = $_POST['ProviderNPI'];
        $patienid = $_POST['PatientID'];
        $fileurl = $_POST['FileURL'];
        $filetype = $_POST['FileTYPE'];
        $datasynid = $_POST['DataSynID'];
        $appointmentlistingsid = $_POST['AppointmentListingsID'];

        $query = ("INSERT INTO AppointmentDataSync (ProviderNPI, PatientID, FileURL, FileType, DataSyncID, AppointmentListingsID)
                   VALUES ('$providernpi', '$patientid', '$fileurl', '$filetype', '$datasynid', '$appointmentlistingid')");
        mysql_query($query, $con);
        printf("Records inserted: %d\n", mysql_affected_rows());
        echo($patienid)
        ?>
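
    For comparison, a sketch of the same insert with prepared statements (an illustration only; it assumes $con is a mysqli connection rather than the old mysql_* link used above):

        // Sketch: parameterized insert, so values need no manual quoting inside the SQL string.
        $stmt = $con->prepare(
            "INSERT INTO AppointmentDataSync
               (ProviderNPI, PatientID, FileURL, FileType, DataSyncID, AppointmentListingsID)
             VALUES (?, ?, ?, ?, ?, ?)");
        $stmt->bind_param("ssssss",
            $_POST['ProviderNPI'], $_POST['PatientID'], $_POST['FileURL'],
            $_POST['FileTYPE'], $_POST['DataSynID'], $_POST['AppointmentListingsID']);
        $stmt->execute();
        printf("Records inserted: %d\n", $stmt->affected_rows);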
