Search Results

Search found 65101 results on 2605 pages for 'big data'.

Page 48/2605

  • what is a data serialization system?

    - by Yang
    According to the Apache Avro project, "Avro is a serialization system". By calling it a data serialization system, does that mean Avro is a product or an API? Also, I am not quite sure what a data serialization system is. For now, my understanding is that it is a protocol that defines how data objects are passed over the network. Can anyone explain it in an intuitive way, so that it is easier for people with a limited distributed-computing background to understand? Thanks in advance!
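    For intuition (a rough sketch, not Avro itself): a serialization system turns an in-memory object into a byte stream that can cross a process or network boundary and be rebuilt on the other side; Avro additionally does this against a declared schema. The round trip, illustrated with Python's standard json module:

        import json

        record = {"name": "Alice", "age": 30}            # in-memory object
        payload = json.dumps(record).encode("utf-8")     # serialize: object -> bytes for the wire or a file
        restored = json.loads(payload.decode("utf-8"))   # deserialize: bytes -> an equivalent object
        assert restored == record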

    Read the article

  • Help Reading Binary Image Data from SQL Server into PHP

    - by Joe Majewski
    I cannot seem to figure out a way to read binary data from SQL Server into PHP. I am working on a project where I need to store the image directly in the SQL table, not on the file system. Currently, I have been using a query like this one to insert the image:

        INSERT INTO myTable(Document)
        SELECT * FROM OPENROWSET(BULK N'C:\image.jpg', SINGLE_BLOB) AS BLAH

    This works fine for actually inserting the image into the table, but I haven't yet figured out a way to retrieve it and get my image back. I am doing this with PHP, and ultimately will have to make a stored procedure out of it, but can anyone enlighten me on a way to take that binary data (varbinary(MAX)) and generate an image on the fly? I expected it to be as simple as using a SELECT statement and adding a content-type header indicating it was an image, but it's simply not working. Instead, the page just displays the name of the file, which I have encountered in the past and understand to be an error with the image data.
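    The general shape of the retrieval side -- select the varbinary column, then emit a content-type header followed by nothing but the raw bytes -- is sketched below in Python with pyodbc rather than PHP (the table, column, and connection details are hypothetical):

        import sys
        import pyodbc

        conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                              "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes")
        cur = conn.cursor()
        cur.execute("SELECT Document FROM myTable WHERE Id = ?", 1)
        blob = cur.fetchone()[0]   # varbinary(MAX) comes back as raw bytes

        # CGI-style output: the header, a blank line, then only the image bytes.
        # Any stray output before or after the bytes corrupts the image.
        sys.stdout.buffer.write(b"Content-Type: image/jpeg\r\n\r\n")
        sys.stdout.buffer.write(blob)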

    Read the article

  • iPhone: Best Method for Passing Data to and from a Server

    - by SAPNA
    I am developing an iPhone application that downloads data from a website. The website database is implemented in SQL and the site itself uses the classic ASP interface. I am unsure as to which method would be best for transferring data to and from the server. Both JSON and SOAP require XML processing and I'm not sure how that affects performance, or which of those two is best. What would be the best method in general for data transfer, given the server configuration we currently have? I am very new to this field and I'm a bit confused. Any help would be appreciated.

    Read the article

  • Selecting Entity Data Model Language -- Visual C# source file generated even when I select VB

    - by Nickson
    I am adding an Entity Data Model to an ASP.NET website. When I add a new item to the website and select ADO.NET Entity Data Model, I am asked for the model name and language. I go ahead and select Visual Basic as the language, the model is added, and the site compiles without any issues. However, it adds a ModelName.Designer.cs source file instead of a ModelName.Designer.vb source file. I find this strange, as it's happening with only one of my websites; my other sites have .vb designer source files for their Entity Data Models. The site still compiles without any errors, but I am afraid something is not right. Has anyone experienced this? Is this normal behavior?

    Read the article

  • AIF or Data Migration Framework [AX 2012]

    - by Tito
    I have been importing some entities into AX 2012 using AIF, consuming the web services through a C# ASP.NET application. I have already done this for Customers, Vendors, Workers, and the Chart of Accounts, and am now starting on General Journals. For some customizations I could find a workaround using the AIF Document Service Wizard: creating the DUNS number using a service for the DirDunsNumber table, and later associating the customer with the newly created DUNS number. The Products data migration will need a lot of customizations like this. This month I heard the announcement that there is a new framework (the Data Migration Framework), still in beta. I would like to know whether the Data Migration Framework would cover all of these customizations. What are the advantages of this new framework over AIF?

    Read the article

  • cleaned_data() doesn't have some of the entered data

    - by SC Ghost
    I have a simple form for a user to enter a Name (CharField), Age (IntegerField), and Sex (ChoiceField). However, the data from the Sex choice field is not showing up in my cleaned_data. Using a debugger, I can clearly see that the data is being received in the correct format, but as soon as I look at form.cleaned_data, any sign of my choice field data is gone. Any help would be greatly appreciated. Here is the relevant code:

        class InformationForm(forms.Form):
            Name = forms.CharField()
            Age = forms.IntegerField()
            Sex = forms.ChoiceField(SEX_CHOICES, required=True)

        def get_information(request, username):
            if request.method == 'GET':
                form = InformationForm()
            else:
                form = RelativeForm(request.POST)
                if form.is_valid():
                    relative_data = form.cleaned_data
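    A minimal, self-contained version of such a form (not the asker's exact app; the SEX_CHOICES tuple and the standalone settings are made up for illustration) does keep the ChoiceField value in cleaned_data, which suggests looking at how the bound form is constructed rather than at the field itself:

        import django
        from django.conf import settings

        if not settings.configured:
            settings.configure(USE_I18N=False)   # minimal standalone setup for a demo script
            django.setup()

        from django import forms

        SEX_CHOICES = (("M", "Male"), ("F", "Female"))   # hypothetical choices

        class InformationForm(forms.Form):
            name = forms.CharField()
            age = forms.IntegerField()
            sex = forms.ChoiceField(choices=SEX_CHOICES, required=True)

        form = InformationForm({"name": "Alice", "age": "30", "sex": "F"})
        if form.is_valid():
            # cleaned_data is a dict attribute (not a method); the choice value is present
            print(form.cleaned_data["sex"])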

    Read the article

  • How does jQuery .data() work?

    - by kazanaki
    My JavaScript knowledge is pretty limited. Instead of asking several JavaScript questions, I got the "message" from Stack Overflow and started using jQuery right away in order to save myself some time. However, I often do not understand the "magic" behind jQuery, and I would love to learn the details. I want to use .data() in my application. The examples are very helpful. What I do not understand, however, is WHERE these values are stored. I inspect the web page with Firebug, and as soon as .data() saves an object to a DOM element, I do not see any change in Firebug (in either the HTML or DOM tabs). I tried to look at the jQuery source, but it is very advanced for my JavaScript knowledge and I got lost. So the question is: where do the values stored by jQuery.data() actually go? Can I inspect/locate/list/debug them using a tool?
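    For intuition only (an analogy in Python, not jQuery's actual source): .data() keeps the values in a separate internal cache keyed by the element rather than writing them into the markup, which is why the HTML and DOM tabs show no change on the node. A rough sketch of that side-table pattern:

        class Element:
            """Stand-in for a DOM node; nothing is ever stored on it."""
            pass

        _data_cache = {}   # jQuery maintains a comparable internal store

        def set_data(el, key, value):
            _data_cache.setdefault(id(el), {})[key] = value

        def get_data(el, key):
            return _data_cache.get(id(el), {}).get(key)

        el = Element()
        set_data(el, "role", "menu")
        print(get_data(el, "role"))      # 'menu' -- yet inspecting el shows no change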

    Read the article

  • SQL Server 2008 - Script Data as Insert Statements from SSIS Package

    - by Brandon King
    SQL Server 2008 provides the ability to script data as INSERT statements using the Generate Scripts option in Management Studio. Is it possible to access the same functionality from within an SSIS package? Here's what I'm trying to accomplish: I have a scheduled job that nightly scripts out all the schema and data for a SQL Server 2008 database and then uses the script to create a "mirror copy" SQLCE 3.5 database. I've been using Narayana Vyas Kondreddi's sp_generate_inserts stored procedure to accomplish this, but it has problems with some data types and greater-than-4K columns (holdovers from its SQL Server 2000 days). The Script Data function looks like it could solve my problems, if only I could automate it. Any suggestions?
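    For reference, the underlying task -- turning query rows into INSERT statements outside of Management Studio -- can also be scripted directly; here is a rough sketch in Python with pyodbc (the DSN, table, and columns are hypothetical, and a real export needs fuller type and escaping rules):

        import pyodbc

        conn = pyodbc.connect("DSN=mydb")            # hypothetical ODBC data source
        cur = conn.cursor()
        cur.execute("SELECT Id, Name, Price FROM dbo.Products")
        cols = ", ".join(d[0] for d in cur.description)

        def sql_literal(value):
            # minimal literal formatting for the sketch
            if value is None:
                return "NULL"
            if isinstance(value, (int, float)):
                return str(value)
            return "N'" + str(value).replace("'", "''") + "'"

        for row in cur:
            values = ", ".join(sql_literal(v) for v in row)
            print("INSERT INTO dbo.Products ({0}) VALUES ({1});".format(cols, values))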

    Read the article

  • Excel data connection - remove header row?

    - by ekoner
    The Excel spreadsheet is connecting to SQL Server 2005 using the connection string below:

        Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=True;Initial Catalog=XXXXXX;Data Source=XXXXXX;Extended Properties="HDR=No";Use Procedure for Prepare=1;Auto Translate=True;Packet Size=4096;Workstation ID=XXXXXX;Use Encryption for Data=False;Tag with column collation when possible=False

    It then pulls data from a view into Excel. The business user wants this information without a header row, so that she can review it and then save it as a "headerless" CSV in SAGE file format. I attempted to alter the connection string by adding HDR=No, but that hasn't worked. Additionally, I can't delete the header row: deleting its content just replaces the column names with "Column 1", etc. Any ideas appreciated!
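    If the end goal is just a header-free CSV for SAGE, one option is to skip Excel and write the file straight from the view; a rough sketch in Python (the DSN and view name are hypothetical):

        import csv
        import pyodbc

        conn = pyodbc.connect("DSN=mydb")                      # hypothetical ODBC data source
        rows = conn.cursor().execute("SELECT * FROM dbo.MyView").fetchall()

        with open("export.csv", "w", newline="") as f:
            csv.writer(f).writerows(rows)                      # no header row is ever written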

    Read the article

  • jQuery.post dynamic data callback function

    - by FFish
    I have a script that requires quite a few seconds of processing, up to about a minute. The script resizes an array of images, sharpens them, and finally zips them up for the user to download. Now I need some sort of progress messages. I was thinking that with jQuery's .post() method the data in the callback function would update progressively, but that doesn't seem to work. In my example I am just using a loop to simulate my script:

        $(document).ready(function() {
            $('a.loop').click(function() {
                $.post('loop.php', {foo: "bar"}, function(data) {
                    $("div").html(data);
                });
                return false;
            });
        });

    loop.php:

        for ($i = 0; $i <= 100; $i++) {
            echo $i . "<br />";
        }
        echo "done";
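    For what it's worth, a $.post success callback only fires once, after the whole response has arrived, so the loop output above cannot appear incrementally. The usual workaround is for the long job to record its progress somewhere that a second, cheap request can poll. A rough, language-neutral sketch of that pattern in Python (the file-based progress store is an assumption for illustration):

        import json, time

        PROGRESS_FILE = "progress.json"          # hypothetical shared location

        def long_job(items):
            for i, item in enumerate(items, 1):
                time.sleep(0.1)                  # stands in for resize/sharpen/zip work
                with open(PROGRESS_FILE, "w") as f:
                    json.dump({"done": i, "total": len(items)}, f)

        def progress():
            # what a polling endpoint would return to the page
            with open(PROGRESS_FILE) as f:
                return json.load(f)

        long_job(range(20))
        print(progress())                        # {'done': 20, 'total': 20}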

    Read the article

  • WPF/.NET data access models - resource recommendations

    - by jasonk
    We're in the early design/prep phases of transferring/updating a rather large "legacy" three-tier client-server app to a new version. We're looking at WPF over WinForms, as it appears to be the direction Microsoft is pushing future development, and we'd like to maximize the life span of the app. That said, during the rewrite we'd like to make as many changes to our data access/presentation model as possible up front to improve performance. I've been doing some research along that vein, but the vast majority of the resources I've found that discuss WPF focus only on simple data-tracking apps or on the very basics of UI design/controls. The few items that even discuss data presentation are fairly elementary. Are there any books/articles/recommended reading/other resources recommended for development of large enterprise-level business apps? Any "gotchas" that should/could be avoided? General advice to minimize the time underwater?

    Read the article

  • CORE DATA objectId changes constantly

    - by mongeta
    Hello, I have some data that I export into an XML file and put on a remote FTP server. I have to identify each object with a unique attribute; it doesn't matter which one, but it must always be persistent, i.e. it can never change. I don't want to create a unique attribute, sequence, serial, etc. I'm using the objectID, but every time I use it I get a new reference. I know that before the object has been saved it has a temporary ID, but once it's saved it gets the definitive one. I'm never seeing this. When I export, I just fetch all the data and loop, and I always get a new reference:

        NSURL *objectID = [[personalDataObject objectID] URIRepresentation];
        // some of the IDs received for the SAME OBJECT (no changes made, saved, ...)
        // 61993296
        // 62194624

    Thanks, r.

    Edit: I was using %d instead of %@; now the returned data is:

        x-coredata://F46F3300-8FED-4876-B0BF-E4D2A9D80913/DataEntered/p1
        x-coredata://F46F3300-8FED-4876-B0BF-E4D2A9D80913/DataEntered/p2

    Read the article

  • mysql data type confusion

    - by zen
    So this is more of a generalized question about MySQL's data types. I'd like to store a 5-digit US zip code (zip_code) properly in this example. A county has 10 different cities and 5 different zip codes.

        city   | zip code
        -------+----------
        city 0 | 33333
        city 1 | 11111
        city 2 | 22222
        city 3 | 33333
        city 4 | 44444
        city 5 | 55555
        city 6 | 33333
        city 7 | 33333
        city 8 | 44444
        city 9 | 22222

    I would typically structure a table like this as varchar(50), int(5) and not think twice about it. (1) If we wanted to ensure that this table had only one of those 5 different zip codes, we should use the enum data type, right? Now think of a similar scenario on a much larger scale: in a state, there are five hundred cities with 418 different zip codes. (2) Should I store the 418 zip codes as an enum data type, OR as an int and create another table to reference?

    Read the article

  • Can you Export/Import Flex (4) Data Services?

    - by mkraken
    Flex newb here. I'm working in Flash Builder 4 (Flex 4?) and am being asked to create the client-side data services integration 'layer' in a Flex app. There is another team working on the actual UI/presentation. Both parts must be deployed in a single SWF. If I use the Data/Services wizard to build out my service connections (and generate the ActionScript), is it possible to export these 'connections' so that they can easily be imported into another project? Or must they be defined through the wizard all over again? The other team wants to be able to see the connections appear in the new project's Data/Services inspector (IDE tab). Thanks!

    Read the article

  • Error in Using Dynamic Data Entities WebSite in VS2012

    - by amin behzadi
    I decided to use a Dynamic Data Entities website in VS2012. I created the website, then added an App_Code directory and added a new .edmx to it, named myDB.edmx. After that I uncommented the line in Global.asax that registers the entity context:

        DefaultModel.RegisterContext(typeof(myDBEntities), new ContextConfiguration() { ScaffoldAllTables = true });

    But when I run the website, this error occurs: "The context type 'myDBEntities' is not supported." How can I fix it? P.S.: You know there are some differences between using L2S with a Dynamic Data L2S website and using Entity Framework with a Dynamic Data Entities website.

    Read the article

  • Managing changes in memory-based data format

    - by kamziro
    So I've been using a compact data type in C++, and saving it from memory or loading it from a file involves just copying the bits of memory in and out. The obvious drawback of this, however, is that if you need to add or remove elements in the data, it becomes kind of messy. There are also problems with versioning: suppose you distribute a program that uses version A of the data, then the next day you make version B of it, and later on version C. I suppose this could be solved by using something like XML or JSON, but suppose you can't do that for technical reasons. What is the best way to handle this, apart from having different if cases, etc. (which would be pretty ugly, I'd imagine)?
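    One common pattern for this situation is to tag the blob with a small header (a magic value plus a format version) so readers can branch explicitly on the version instead of guessing the layout. A rough sketch, shown here with Python's struct module for brevity rather than C++ (the field layout is made up for illustration):

        import struct

        MAGIC = b"MYFM"   # hypothetical format identifier

        def save_v2(x, y, label):
            # header: 4-byte magic + uint16 version, followed by the versioned payload
            payload = struct.pack("<ii16s", x, y, label.ljust(16).encode())
            return MAGIC + struct.pack("<H", 2) + payload

        def load(blob):
            assert blob[:4] == MAGIC, "not one of our files"
            (version,) = struct.unpack_from("<H", blob, 4)
            body = blob[6:]
            if version == 1:                  # old layout: just two ints
                x, y = struct.unpack("<ii", body)
                return {"x": x, "y": y, "label": ""}
            if version == 2:                  # new layout adds a fixed-width label
                x, y, raw = struct.unpack("<ii16s", body)
                return {"x": x, "y": y, "label": raw.decode().strip()}
            raise ValueError("unknown format version %d" % version)

        print(load(save_v2(3, 4, "demo")))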

    Read the article

  • Migrating a Core Data Store from iCloud to local

    - by schmok
    I'm currently struggling with Core Data iCloud migration. I want to move a store from an iCloud ubiquity container (.nosync) to a local URL. The problem is that whenever I call something like this:

        NSPersistentStore *newStore = [self.persistentStoreCoordinator migratePersistentStore: currentiCloudStore
                                                                                        toURL: localURL
                                                                                      options: nil
                                                                                     withType: NSSQLiteStoreType
                                                                                        error: &error];

    I get this error:

        -[NSPersistentStoreCoordinator addPersistentStoreWithType:configuration:URL:options:error:](1055): CoreData: Ubiquity: Error: A persistent store which has been previously added to a coordinator using the iCloud integration options must always be added to the coordinator with the options present in the options dictionary. If you wish to use the store without iCloud, migrate the data from the iCloud store file to a new store file in local storage. file://localhost/Users/sch/Library/Containers/bla/Data/Documents/tmp.sqlite. This will be a fatal error in a future release

    Has anyone ever seen this error? Maybe I'm just missing the right migration options?

    Read the article

  • Program-wide data, C++

    - by bobobobo
    I'd like to have program-wide data in a C++ program. The easiest way to do it in C# is just public static members:

        // C#
        public static class DataContainer {
            public static Object data1;
            public static Object data2;
        }

    In C++ you can do the same thing (global data, way #1):

        class DataContainer {
        public:
            static Object data1;
            static Object data2;
        };

        Object DataContainer::data1;
        Object DataContainer::data2;

    However, there is also extern (global data, way #2):

        class DataContainer {
        public:
            Object data1;
            Object data2;
        };

        extern DataContainer * dataContainer;  // instantiate in .cpp file

    Which is better, or is there possibly another way that I haven't thought about?

    Read the article

  • Problems persisting Core Data structures on iPhone/iPad

    - by Rivier
    I have an iPhone/iPad app that uses Core Data to keep my application data. Sometimes, even though I don't get any error messages, the data is not really saved, so when the app starts anew it's all gone. This problem seems to disappear after physically rebooting the device, but otherwise it's pretty random and hard to track down. Has anyone seen a similar issue? Also, it seems to happen more often on the first-generation iPhone, less so on the 3G/3GS, and seldom on the iPad. Very strange...

    Read the article

  • Data Source Security Part 1

    - by Steve Felts
    I've written a couple of articles on how to store data source security credentials using the Oracle wallet. I plan to write a few articles on the various types of security available to WebLogic Server (WLS) data sources. There are more options than you might think!

    There have been several enhancements in this area in WLS 10.3.6, and there are a couple more enhancements planned for release WLS 12.1.2 that I will include here for completeness. This isn't intended as a teaser: if you call your Oracle support person, you can get them now as minor patches to WLS 10.3.6.

    The current security documentation is scattered in a few places, has a few incorrect statements, and is missing a few topics. It also seems that the knowledge of how to apply some of these features isn't written down. The goal of these articles is to talk about WLS data source security in a unified way and to introduce some approaches to using the available features.

    Introduction to WebLogic Data Source Security Options

    By default, you define a single database user and password for a data source. You can store it in the data source descriptor or make use of the Oracle wallet. This is a very simple and efficient approach to security. All of the connections in the connection pool are owned by this user and there is no special processing when a connection is given out. That is, it's a homogeneous connection pool and any request can get any connection from a security perspective (there are other aspects like affinity). Regardless of the end user of the application, all connections in the pool use the same security credentials to access the DBMS. No additional information is needed when you get a connection because it's all available from the data source descriptor (or wallet).

        java.sql.Connection conn = mydatasource.getConnection();

    Note: You can enter the password as a name-value pair in the Properties field (this is not permitted for production environments) or you can enter it in the Password field of the data source descriptor. The value in the Password field overrides any password value defined in the Properties passed to the JDBC driver when creating physical database connections. It is recommended that you use the Password attribute in place of the password property in the properties string because the Password value is encrypted in the configuration file (stored as the password-encrypted attribute in the jdbc-driver-params tag in the module file) and is hidden in the administration console. The Properties and Password fields are located on the administration console Data Source creation wizard or Data Source Configuration tab.

    The JDBC API can also be used to programmatically specify a database user name and password, as in the following:

        java.sql.Connection conn = mydatasource.getConnection("user", "password");

    According to the JDBC specification, this is supposed to take a database user and associated password, but different vendors implement it differently. WLS, by default, treats this as an application server user and password. The pair is authenticated to see if it's a valid user, and that user is used for WLS security permission checks. By default, the user is then mapped to a database user and password using the data source credential mapper, so this API sort of follows the specification, but database credentials are one step removed from the application code. More details and the rationale are described later.

    While the default approach is simple, it does mean that only one database user is doing all of the work. You can't figure out who actually did the update and you can't restrict SQL operations by who is running the operation, at least at the database level. Any type of per-user logic will need to be in the application code instead of having the database do it. There are various WLS data source features that can be configured to provide some per-user information about the operations to the database.

    WebLogic Data Source Security Options

    This table describes the features available for WebLogic data sources to configure database security credentials, with a brief description of each. It also captures information about the compatibility of these features with one another.

        Feature: User authentication (default)
        Description: Default getConnection(user, password) behavior - validate the input and use the user/password in the descriptor.
        Can be used with: Set client identifier
        Can't be used with: Proxy session, Identity pooling, Use database credentials

        Feature: Use database credentials
        Description: Instead of using the credential mapper, use the supplied user and password directly.
        Can be used with: Set client identifier, Proxy session, Identity pooling
        Can't be used with: User authentication, Multi Data Source

        Feature: Set Client Identifier
        Description: Set a client identifier property associated with the connection (Oracle and DB2 only).
        Can be used with: Everything
        Can't be used with: -

        Feature: Proxy Session
        Description: Set a light-weight proxy user associated with the connection (Oracle-only).
        Can be used with: Set client identifier, Use database credentials
        Can't be used with: Identity pooling, User authentication

        Feature: Identity pooling
        Description: Heterogeneous pool of connections owned by specified users.
        Can be used with: Set client identifier, Use database credentials
        Can't be used with: Proxy session, User authentication, Labeling, Multi-datasource, Active GridLink

    Note that all of these features are available with both XA and non-XA drivers.

    Currently, the Proxy Session and Use Database Credentials options are on the Oracle tab of the Data Source Configuration tab of the administration console (even though the Use Database Credentials feature is not just for Oracle databases - oops). The rest of the features are on the Identity tab of the Data Source Configuration tab in the administration console (plan on seeing them all in one place in the future).

    The subsequent articles will describe these features in more detail. Keep referring back to this table to see the big picture.

    Read the article

  • R: Converting a list of data frames into one data frame

    - by JD Long
    I have code that at one place ends up with a list of data frames which I really want to convert to a single big data frame. I got some pointers from an earlier question which was trying to do something similar but more complex. Here's an example of what I am starting with (this is grossly simplified for illustration):

        listOfDataFrames <- NULL
        for (i in 1:100) {
            listOfDataFrames[[i]] <- data.frame(a=sample(letters, 500, rep=T),
                                                b=rnorm(500),
                                                c=rnorm(500))
        }

    I am currently using this:

        df <- do.call("rbind", listOfDataFrames)

    *EDIT* Whoops. In my haste to implement what I had "learned" in a previous question I totally screwed up. Yes, the unlist() is just plain wrong. I'm editing that out of the question above.

    Read the article

  • How to reduce the number of points in (x,y) data

    - by Gowtham
    I have a set of data points:

        (x1, y1)
        (x2, y2)
        (x3, y3)
        ...
        (xn, yn)

    The number of sample points can be in the thousands. I want to represent the same curve as accurately as possible with a minimal set of points (let's suppose 30). I want to capture as many inflection points as possible; however, I have a hard limit on the number of points allowed to represent the data. What is the best algorithm to achieve this? Is there any free software library that can help? PS: I have tried to implement point elimination based on relative slope differences, but this does not always result in the best possible representation of the data. Thanks for your time. -Gowtham
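    One approach that honours a fixed point budget is Visvalingam-Whyatt style simplification: repeatedly drop the interior point whose removal changes the curve the least (smallest triangle area with its neighbours) until only the allowed number of points remains. A rough sketch in Python (a simple O(n^2) loop; a priority queue makes it faster for large inputs):

        def triangle_area2(p, q, r):
            # twice the area of triangle (p, q, r): how much q deviates from the p-r segment
            return abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1]))

        def simplify(points, keep=30):
            pts = list(points)
            while len(pts) > keep:
                # find the interior point contributing the least to the shape and drop it
                i = min(range(1, len(pts) - 1),
                        key=lambda k: triangle_area2(pts[k - 1], pts[k], pts[k + 1]))
                del pts[i]
            return pts

        # usage: reduce a parabola sampled at 1000 points down to 30
        curve = [(x / 100.0, (x / 100.0) ** 2) for x in range(1000)]
        print(len(simplify(curve, keep=30)))   # 30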

    Read the article

  • Performing multiple fetches in Core Data within the same view

    - by yesimarobot
    I have my Core Data store set up and everything is working. Once my initial fetch is performed, I need to perform several more fetches based on calculations using the data from the first fetch. The examples provided by Apple are great and helped me get everything going, but I'm struggling with executing successive fetches. Any suggestions or tutorial links are appreciated.

        1. The table view loads data from the Core Data store.
        2. When a user taps a row, it pushes a detail view.
        3. The detail view loads its details from Core Data.
        [THE ABOVE STEPS ARE ALL WORKING]
        4. I perform several calculations on the data fetched in the detail view.
        5. I then need to perform several other fetches based on the results of my calculations.

    Read the article

  • Database Partitioning and Multiple Data Source Considerations

    - by Jeffrey McDaniel
    With the release of P6 Reporting Database 3.0, partitioning was added as a feature to help with performance and data management. Careful investigation of requirements should be conducted prior to installation to help improve overall performance throughout the lifecycle of the data warehouse and to prevent future maintenance that would result in data loss. Before installation, try to determine how many data sources and partitions will be required, along with the ranges. In P6 Reporting Database 3.0, any adjustments outside of the defaults must be made in the scripts, and changes will require new ETL runs for each data source.

    Considerations:

    1. Standard Edition or Enterprise Edition of Oracle Database. If you aren't using Oracle Enterprise Edition Database, the partitioning feature is not available. Multiple data sources are only supported on Enterprise Edition of Oracle Database.

    2. Number of data source IDs for partitioning during configuration. This setting specifies how many partitions will be allocated for tables containing data source information. It requires some evaluation prior to installation, as there are repercussions if you don't estimate correctly. For example, suppose you configured the software for only 2 data sources and the partition setting was set to 2, but along came a 3rd data source. The necessary steps to accommodate this change are as follows:

       a) By default, 3 partitions are configured in the Reporting Database scripts. Edit the create_star_tables_part.sql script located in <installation directory>\star\scripts and search for "partition". You'll see P1, P2, P3. Add additional partitions and sub-partitions for P4 and so on. These will appear in several areas. (See the P6 Reporting Database 3.0 Installation and Configuration guide for more information on this and on how to adjust partition ranges.)

       b) Run starETL -r. This will recreate each table with the new partition key. The effect of this step is that all table data will be lost except for history-related tables.

       c) Run starETL for each of the 3 data sources (with the data source #, e.g. starETL.bat "-s2", as defined in the P6 Reporting Database 3.0 Installation and Configuration guide).

       The best strategy for this setting is to overestimate based on possible growth. If during implementation it is deemed that there are at least 2 data sources with the possibility for growth, it is a better idea to set this setting to 4 or 5, allowing room for the future and preventing a "start over" scenario.

    3. The Number of Partitions and the Number of Months per Partition are not specific to multi-data source. These settings apply to sub-partitioning of larger tables with regard to time-related data, and they are dataset specific for optimization. The number of months per partition is self-explanatory: optimally, the smaller the partition, the better the query performance, so if the dataset has an extremely large number of spread/history records, a lower number of months is optimal. Working together with this setting is the number of partitions, which determines how many "buckets" will be created for the number-of-months setting.

    For example, if you kept the default of 3 for the number of partitions and selected 2 months for each partition, you would end up with:

       - 1st partition: 2 months
       - 2nd partition: 2 months
       - 3rd partition: all the remaining records

    Therefore, with regard to these settings, it is important to analyze your source database spread ranges and history settings when determining the proper number of months per partition and number of partitions to optimize performance. Also be aware that the DBA will need to monitor when these partition ranges will fill up and when additional partitions will need to be added. If you get to the final range partition and there are no additional range partitions, all data will be included in the last partition.

    Read the article

  • Data frame linear fit in R

    - by user1247384
    This is perhaps a simple question, but I am a n00b. Say I have a data frame with a bunch of columns. I need to call the lm function over columns 1 and 2, 1 and 3, and so on. So basically I need to loop over all the columns and store the results of the fit as I build the model. The problem I am running into is that lm(df[1] ~ df[2], data = df) doesn't work. In this case df is the data frame object and df[1] is the first column. What is a good way to do this in a loop, i.e., to access the columns of df in an iterative fashion? Thanks.

    Read the article
