Search Results

Search found 59036 results on 2362 pages for 'fake data'.


  • Failed to save data at the server from memcached program

    - by zahir hussain
    Hi, I want to know why I can't store a multi-dimensional array (array size is more than 1000).

        $memcache = new Memcache;
        $memcache->connect('localhost', 11211) or die ("Could not connect");

    The above works correctly. The line below gives an error:

        $memcache->set('key', $sear, false, 60) or die ("Failed to save data at the server");

    If $sear is a string or an object array there is no problem storing the data at the server, but when I store a multi-dimensional array in memcached I get the error "Failed to save data at the server". Thanks in advance.
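
    A plausible cause, offered as a guess rather than a confirmed diagnosis: memcached rejects items larger than 1 MB by default, and a multi-dimensional array with 1000+ elements can easily serialize past that limit even when a plain string or small object fits. A minimal sketch of the size check in Python; build_search_results() is a hypothetical stand-in for the poster's $sear:

        import pickle

        # Hypothetical stand-in for the $sear array in the question.
        def build_search_results():
            return [[i, "result %d" % i, list(range(50))] for i in range(1000)]

        sear = build_search_results()
        payload = pickle.dumps(sear)  # serialization step, analogous to PHP's serialize()

        # memcached's default maximum item size is 1 MB.
        MAX_ITEM_SIZE = 1024 * 1024
        if len(payload) > MAX_ITEM_SIZE:
            print("Item is %d bytes, over memcached's default 1 MB limit" % len(payload))

    If this is the cause, the usual remedies are compressing the value (the Memcache extension accepts MEMCACHE_COMPRESSED as the third argument to set), splitting the array across several keys, or starting memcached with a larger -I item size.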

    Read the article

  • Data Access from single table in sql server 2005 is too slow

    - by Muhammad Kashif Nadeem
    Following is the script of the table. Accessing data from this table is too slow.

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[Emails](
            [id] [int] IDENTITY(1,1) NOT NULL,
            [datecreated] [datetime] NULL CONSTRAINT [DF_Emails_datecreated] DEFAULT (getdate()),
            [UID] [nvarchar](250) COLLATE Latin1_General_CI_AS NULL,
            [From] [nvarchar](100) COLLATE Latin1_General_CI_AS NULL,
            [To] [nvarchar](100) COLLATE Latin1_General_CI_AS NULL,
            [Subject] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
            [Body] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
            [HTML] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL,
            [AttachmentCount] [int] NULL,
            [Dated] [datetime] NULL
        ) ON [PRIMARY]

    The following query takes 50 seconds to fetch the data:

        select id, datecreated, UID, [From], [To], Subject, AttachmentCount, Dated from emails

    If I include Body and HTML in the select, the time is even worse. The indexes are: id (unique clustered), From (non-unique, non-clustered), To (non-unique, non-clustered). The table currently has 180,000+ records, and around 100,000 more may arrive each month, so it will only get slower as time passes. Would splitting the data into two tables solve the problem? What other indexes should there be?
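
    One plausible explanation, offered as a guess rather than a diagnosis: the wide nvarchar(max) columns share data pages with the narrow ones, so even a query that skips Body and HTML still reads every page of the table. A covering non-clustered index over just the selected columns lets SQL Server answer the query from the index alone. A sketch via pyodbc; the connection string and index name are made up:

        import pyodbc

        # Hypothetical connection details.
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
            "DATABASE=MailDb;Trusted_Connection=yes"
        )
        cursor = conn.cursor()

        # Covering index: every column the listing query selects is in the index,
        # so the scan never touches the pages holding the large Body/HTML values.
        cursor.execute("""
            CREATE NONCLUSTERED INDEX IX_Emails_Listing
            ON dbo.Emails (id)
            INCLUDE (datecreated, [UID], [From], [To], Subject, AttachmentCount, Dated)
        """)
        conn.commit()

    Moving Body and HTML into a separate table keyed on id, as the poster suggests, achieves the same page separation more permanently.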

    Read the article

  • How to scale an image (in data URI format) in JavaScript (real scaling, not using styling)

    - by 103067513055141045393
    We are capturing a visible tab in a Chrome browser (by using the extensions API chrome.tabs.captureVisibleTab) and receiving a snapshot in the data URI scheme (a Base64-encoded string). Is there a JavaScript library that can be used to scale an image down to a certain size? Currently we are styling it via CSS, but we pay performance penalties, as the pictures are mostly 100 times bigger than required. An additional concern is the load on the localStorage we use to save our snapshots. Does anyone know of a way to process these data-URI-formatted pictures and reduce their size by scaling them down? References: Data URI scheme on http://en.wikipedia.org/wiki/Data_URI_scheme Chrome Extensions API on http://code.google.com/chrome/extensions/tabs.html The "Recently Closed Tabs" Chrome Extension on http://code.google.com/p/recently-closed-tabs
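
    Inside the browser the usual technique is to draw the snapshot onto a <canvas> at the target size and read it back with toDataURL(), which needs no library. For illustration, the same decode-scale-re-encode pipeline outside the browser, sketched in Python with Pillow:

        import base64
        import io
        from PIL import Image

        def scale_data_uri(data_uri, max_px=200):
            # Split "data:image/png;base64,PAYLOAD" into header and payload.
            header, b64 = data_uri.split(",", 1)
            img = Image.open(io.BytesIO(base64.b64decode(b64)))
            img.thumbnail((max_px, max_px))  # scale down in place, keeping aspect ratio
            buf = io.BytesIO()
            img.save(buf, format="PNG")
            return "data:image/png;base64," + base64.b64encode(buf.getvalue()).decode("ascii")

    Scaling before the snapshot ever reaches localStorage also addresses the storage-load concern, since the stored string shrinks along with the image.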

    Read the article

  • Visual Studio 2010 and Silverlight - Adding Data Sources

    - by Villager
    Hello, I am interested in building a Silverlight application that uses RIA Services. I am using this video (http://live.visitmix.com/MIX10/Sessions/CL08) as an example. In that video, the presenter uses the "data sources" tab to populate the view. However, I cannot figure out how to add a data source from within Visual Studio 2010. I have a database on my local machine. This database is the sample AdventureWorks database. When I select my Silverlight application, there is a UserRegistrationContext in the data sources window. However, I cannot figure out how to add a new one that connects to my AdventureWorks database. Can somebody tell me how to do this? Thank you!

    Read the article

  • Operate on pairs of rows of a data frame

    - by lorin
    I've got a data frame in R, and I'd like to perform a calculation on all pairs of rows. Is there a simpler way to do this than using a nested for loop? To make this concrete, consider a data frame with ten rows, where I want to calculate the difference of scores between all (45) possible pairs.

        > data.frame(ID=1:10, Score=4*10:1)
           ID Score
        1   1    40
        2   2    36
        3   3    32
        4   4    28
        5   5    24
        6   6    20
        7   7    16
        8   8    12
        9   9     8
        10 10     4

    I know I could do this calculation with a nested for loop, but is there a better (more R-ish) way to do it?
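
    In R the idiomatic tools for this are combn(nrow(df), 2) or outer() on the Score column. For illustration, the same all-pairs idea expressed in Python:

        from itertools import combinations

        rows = [(i, 4 * (11 - i)) for i in range(1, 11)]  # (ID, Score), mirroring the example

        # All 45 unordered pairs of rows, with the score difference for each pair.
        pairs = [
            (a_id, b_id, a_score - b_score)
            for (a_id, a_score), (b_id, b_score) in combinations(rows, 2)
        ]
        print(len(pairs))  # 45
        print(pairs[0])    # (1, 2, 4): rows 1 and 2 differ by 40 - 36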

    Read the article

  • How to import and export only the data of a whole database in Access 2007

    - by DiegoMaK
    Hi, I have two identical databases with the same structure: database A (a.accdb) on computer A and database B (b.accdb) on computer B. Their data differ, so in database A I have, for example, IDs 1, 2, 3 and in database B I have IDs 4, 5, 6. I need to merge the data of these databases into one database (A or B, it doesn't matter) so the final database contains IDs 1, 2, 3, 4, 5, 6. I am looking for an easy way to do this, because I have many tables and doing it with union queries is tedious. I looked for a backup option that covers only the data, without the schema, as in PostgreSQL and many other RDBMSs, but I don't see such an option in Access 2007. PS: only a few tables could hold duplicate values (I guess the PK won't allow a duplicate value to be copied, and all other values will be copied fine); if I'm wrong, please correct me. Thanks for your help.
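
    One approach worth sketching: Access SQL can read a table straight out of another .accdb file via its IN clause, so the merge can be scripted per table. Shown here with pyodbc; the table names and file paths are made up, and rows that collide on the primary key will make the INSERT fail, so duplicates need handling first:

        import pyodbc

        conn = pyodbc.connect(
            r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\data\a.accdb"
        )
        cursor = conn.cursor()

        # Hypothetical table list; with many tables it could be built from cursor.tables().
        tables = ["Customers", "Orders", "OrderItems"]

        for t in tables:
            # The IN clause pulls the same-named table from the other database file.
            cursor.execute(
                "INSERT INTO [%s] SELECT * FROM [%s] IN 'C:\\data\\b.accdb'" % (t, t)
            )
        conn.commit()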

    Read the article

  • Converting a C# DataTable instance to xml that contains HTML or binary data

    - by Wardy
    Hmmm... although it works in most cases, one column has HTML data in it. It seems that doing this...

        StringBuilder xmltarget = new StringBuilder();
        XmlWriter xmlWriter = XmlWriter.Create(xmltarget);
        tableData.WriteXml(xmlWriter);

    ...doesn't identify where this HTML or binary data exists and wrap the data in CDATA tags as it should. Is there something I need to do to ensure the relevant checks are made and a working XML string is produced?

    Read the article

  • How to prune data set by frequency to conform to paper's description

    - by sakura90
    The MovieLens data set provides a table with columns: userid | movieid | tag | timestamp. I have trouble reproducing the way the MovieLens data set was pruned in "Tag Informed Collaborative Filtering" by Zhen, Li and Young. Section 4.1 (Data Set) of that paper says: "For the tagging information, we only keep those tags which are added on at least 3 distinct movies. As for the users, we only keep those users who used at least 3 distinct tags in their tagging history. For movies, we only keep those movies that are annotated by at least 3 distinct tags." I tried to query the database:

        select TMP.userid, count(*) as tagnum
        from (select distinct T.userid as userid, T.tag as tag from tags T) AS TMP
        group by TMP.userid
        having tagnum >= 3;

    I got a list of 1760 users who used at least 3 distinct tags. However, some of those tags are not added on at least 3 distinct movies. Any help is appreciated.
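
    The three conditions interact: dropping a tag can push a user below 3 distinct tags, which drops more rows, and so on. Pruning of this kind is therefore usually applied repeatedly until a fixed point. A sketch with pandas, assuming the tags table has been exported with userid/movieid/tag columns:

        import pandas as pd

        df = pd.read_csv("tags.csv")  # assumed columns: userid, movieid, tag, timestamp

        # Re-apply all three filters until the data stops shrinking,
        # because each filter can invalidate the other two.
        while True:
            before = len(df)
            tag_movies = df.groupby("tag")["movieid"].nunique()
            df = df[df["tag"].isin(tag_movies[tag_movies >= 3].index)]
            user_tags = df.groupby("userid")["tag"].nunique()
            df = df[df["userid"].isin(user_tags[user_tags >= 3].index)]
            movie_tags = df.groupby("movieid")["tag"].nunique()
            df = df[df["movieid"].isin(movie_tags[movie_tags >= 3].index)]
            if len(df) == before:
                break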

    Read the article

  • MySQL Insert Data Question

    - by Nano HE
    Hi, assume I have already created a table in MySQL as below:

        CREATE TABLE IF NOT EXISTS `sales` (
          `id` smallint(5) unsigned NOT NULL auto_increment,
          `client_id` smallint(5) unsigned NOT NULL,
          `order_time` timestamp NOT NULL default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP,
          `sub_total` decimal(8,2) NOT NULL,
          `shipping_cost` decimal(8,2) NOT NULL,
          `total_cost` decimal(8,2) NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=9;

        -- Dumping data for table `sales` --

    Now I add a new field must_fill to the table:

        `must_fill` tinyint(1) unsigned NOT NULL,

    By default a user can insert fewer than the full set of fields, as in the script below:

        INSERT INTO `sales` (`id`, `client_id`, `order_time`, `sub_total`, `shipping_cost`, `total_cost`)
        VALUES (8, 12312, '2007-12-19 01:30:45', 10.75, 3.00, 13.75);

    That's fine. But how can I configure the field must_fill as a MUST-INCLUDE field, so an insert is rejected unless it supplies a value? BTW, the code will be integrated into a PHP script.
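
    A NOT NULL column with no DEFAULT clause already behaves this way when MySQL runs in strict mode; outside strict mode, MySQL silently fills in 0 instead of rejecting the insert. A sketch with mysql-connector-python (connection details are hypothetical):

        import mysql.connector

        conn = mysql.connector.connect(user="app", password="secret", database="shop")
        cur = conn.cursor()

        # Strict mode turns "NOT NULL without DEFAULT" into a hard insert-time requirement.
        cur.execute("SET SESSION sql_mode = 'STRICT_ALL_TABLES'")
        cur.execute("ALTER TABLE sales ADD COLUMN must_fill TINYINT(1) UNSIGNED NOT NULL")

        # This insert omits must_fill, so strict mode rejects it.
        try:
            cur.execute(
                "INSERT INTO sales (client_id, sub_total, shipping_cost, total_cost) "
                "VALUES (12312, 10.75, 3.00, 13.75)"
            )
        except mysql.connector.Error as e:
            print("Rejected as expected:", e)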

    Read the article

  • Grabbing data from a URL

    - by Syom
    I have a task: I must grab some data from a URL. The link is http://cba.am. The data I want to take is in a table, and I have only one identifier to reach it: the word "USD", which appears in that (HTML) table. I've written the following script, and it works, but I have never seen how more experienced programmers do such things, so I want to hear your comments. Here is the script:

        <?php
        $str = file_get_contents("http://cba.am/");
        $key_usd = "USD";
        $sourse_usd_1 = explode($key_usd, $str);
        $usd1 = $sourse_usd_1[2];
        $sourse_usd_2 = explode(">", $usd1);
        $usd2 = $sourse_usd_2[4];
        $sourse_usd_3 = explode("<", $usd2);
        $usd = $sourse_usd_3[0];
        ?>

    Sorry for my poor English :)
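
    The usual advice here is to parse the HTML instead of splitting strings: an explode() chain breaks as soon as the page's markup shifts by one tag. For comparison, the same scrape sketched in Python with BeautifulSoup; the assumption that the rate sits in the cell right after the "USD" cell mirrors the indexes in the script above and may need adjusting to the real page:

        import urllib.request
        from bs4 import BeautifulSoup

        html = urllib.request.urlopen("http://cba.am/").read()
        soup = BeautifulSoup(html, "html.parser")

        # Locate the table cell containing "USD", then read the neighbouring cell.
        usd_cell = soup.find("td", string=lambda s: s and "USD" in s)
        if usd_cell is not None:
            rate = usd_cell.find_next("td").get_text(strip=True)
            print("USD rate:", rate)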

    Read the article

  • Data loss between conversions

    - by Alex Brooks
    Why is it that I lose data in the conversions below even though both types take up the same amount of space? If the conversion were done bitwise, it should be true that x == z unless data is being stripped during the conversion, right? Is there a way to do the two conversions without losing data (i.e. so that x == z)?

    main.cpp:

        #include <stdio.h>
        #include <stdint.h>

        int main()
        {
            double x = 5.5;
            uint64_t y = static_cast<uint64_t>(x);
            double z = static_cast<double>(y); // Desire: z = 5.5;
            printf("Size of double: %lu\nSize of uint64_t: %lu\n", sizeof(double), sizeof(uint64_t));
            printf("%f\n%lu\n%f\n", x, y, z);
        }

    Results:

        Size of double: 8
        Size of uint64_t: 8
        5.500000
        5
        5.000000
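
    static_cast converts the value, not the bits: 5.5 truncates to the integer 5, and the fraction is gone before the cast back. A bitwise reinterpretation (memcpy in C++, or std::bit_cast in C++20) keeps all 64 bits and round-trips exactly. The same distinction illustrated in Python with struct:

        import struct

        x = 5.5

        # Value conversion, which is what static_cast<uint64_t> does: truncates to 5.
        y_value = int(x)

        # Bitwise reinterpretation: the same 8 bytes viewed as an unsigned 64-bit int.
        (y_bits,) = struct.unpack("<Q", struct.pack("<d", x))
        print(hex(y_bits))  # 0x4016000000000000, the IEEE-754 bit pattern of 5.5

        # Reinterpreting the bits back recovers x exactly.
        (z,) = struct.unpack("<d", struct.pack("<Q", y_bits))
        print(z == x)  # True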

    Read the article

  • cron job for updating user profile data imported via facebook connect

    - by Abidoon Nadeem
    I want to write a cron job to update the user profile data on my website that I pull for users who register via Facebook Connect. The objective is to keep their profile data on my website in sync with their profile data on Facebook: if a user updates their profile picture on Facebook, I want to update their picture on my website as well, via a cron job that runs every 24 hours. I wanted to know, first, whether this is possible and, second, whether it violates Facebook's privacy policy. Based on my research it seems doable, but I wanted to know if anyone has already done something like this before. It would really help.

    Read the article

  • How do you store sets in Cassandra?

    - by Ben W
    I'd like to convert this JSON to a data model in Cassandra, where each of the arrays is a set with no duplicates:

        var data = {
            "data1": {
                "100": [1, 2, 3],
                "200": [3, 4]
            },
            "data2": {
                "k1": [1],
                "k2": [4, 5]
            }
        }

    I'd like to query like data["data1"]["100"] to retrieve the sets. Does anyone know how you might model this in Cassandra? (The only thing I came up with was columns whose name was a set value and whose value was an empty string, but that felt wrong.) It's not OK to serialize the sets as JSON or some other string, which would make this much easier. Also, I should note that it's OK to split data1 and data2 into separate ColumnFamilies; it's not necessary that they're keys in the same one.

    Read the article

  • Getting data from a ListView into an array

    - by Joe
    I have a ListView control on my form, set up in details mode. What I would like to do is get all the values of the data cells when the user presses the Delete booking button. So, using the above example, my array would be filled with this data:

        values(0) = "asd"
        values(1) = "BS1"
        values(2) = "asd"
        values(3) = "21/04/2010"
        values(4) = "2"

    This is my code so far:

        Private Sub Button3_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button3.Click
            Dim items As ListView.SelectedListViewItemCollection = _
                Me.ManageList.SelectedItems
            Dim item As ListViewItem
            Dim values(0 To 4) As String
            Dim i As Integer = 0
            For Each item In items
                values(i) = item.SubItems(i).Text
                i = i + 1
            Next
        End Sub

    But only values(0) gets filled with the first data cell of the selection (in this case "asd") and the rest of the array entries are blank. I have confirmed this with a breakpoint and checked the array entries myself. I'm REALLY lost on this, and have been trying for about an hour now. Any help would be appreciated, thanks :) Also please note that only one row can be selected at a time.

    Read the article

  • MySQL load data null values

    - by SP1
    Hello, I have a file that can contain 3 to 4 columns of numerical values separated by commas. Empty fields are written explicitly (as consecutive commas), except at the end of a row, where they are simply omitted:

        1,2,3,4,5
        1,2,3,,5
        1,2,3

    The following table was created in MySQL:

        +-------+--------+------+-----+---------+-------+
        | Field | Type   | Null | Key | Default | Extra |
        +-------+--------+------+-----+---------+-------+
        | one   | int(1) | YES  |     | NULL    |       |
        | two   | int(1) | YES  |     | NULL    |       |
        | three | int(1) | YES  |     | NULL    |       |
        | four  | int(1) | YES  |     | NULL    |       |
        | five  | int(1) | YES  |     | NULL    |       |
        +-------+--------+------+-----+---------+-------+

    I am trying to load the data using MySQL's LOAD command:

        load data infile '/tmp/testdata.txt' into table moo fields terminated by "," lines terminated by "\n";

    The resulting table:

        +------+------+-------+------+------+
        | one  | two  | three | four | five |
        +------+------+-------+------+------+
        |    1 |    2 |     3 |    4 |    5 |
        |    1 |    2 |     3 |    0 |    0 |
        |    1 |    2 |     3 | NULL | NULL |
        +------+------+-------+------+------+

    The problem is that when a field is present in the raw data but left empty, MySQL for some reason does not use the column's default value (which is NULL) and uses zero instead. NULL is used correctly when the field is missing altogether. Unfortunately, I have to be able to distinguish between NULL and 0 at this stage, so any help would be appreciated. Thanks, S.
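
    The standard fix, offered as a sketch: route the optional column through a user variable and convert the empty string to NULL with NULLIF in a SET clause. Shown here wrapped in Python with mysql-connector-python, though any MySQL client would do; the connection details are made up:

        import mysql.connector

        conn = mysql.connector.connect(user="loader", password="secret", database="test",
                                       allow_local_infile=True)
        cur = conn.cursor()

        # Read the 4th field into a variable so '' can be mapped to NULL
        # instead of being coerced to 0.
        cur.execute("""
            LOAD DATA LOCAL INFILE '/tmp/testdata.txt'
            INTO TABLE moo
            FIELDS TERMINATED BY ','
            LINES TERMINATED BY '\\n'
            (one, two, three, @v_four, five)
            SET four = NULLIF(@v_four, '')
        """)
        conn.commit()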

    Read the article

  • What data/service is where?

    - by MrTelly
    What management tools (open source or otherwise) are there to track the location of data, the services that deliver/use that data, and the services themselves? If you believe the snake oil, a combination of DB, ESB and SOA will deliver anything anywhere, but how do you know what's where? BTW, I'm not interested in the WSDL level; I'm thinking of a tool that the users/BA community would populate and use. A combination of SOA and database is now the bedrock of most applications, yet what used to be called Data Dictionaries, and would now be Service Catalogues or metadata repositories, still seems to live in a purely data-centric world.

    Read the article

  • ORM on iPhone. Simpler than Core Data.

    - by Alexander Babaev
    The question is rather simple. I know that there is SQLite, and there is Core Data too. But I need something in between: more object-oriented than the SQLite API and simpler than Core Data. The main points are: I need access to stored entities only by id, so no queries are required; I need to store items of a single type, which means I could use just one table if I chose SQLite; and I want automatic object-relational conversion (or object-storage conversion if the storage is not relational). I could use object archiving, but then I have to implement things (NSArchiver), whereas I want to write some kind of class and get persistence automatically, as can be done with Hibernate/ActiveRecord/Core Data/etc. Thanks.

    Read the article

  • Jquery .Ajax error when trying to POST data in ASP.NET MVC

    - by GB
    I am unable to access an action in my controller using $.ajax. The code works on my development machine, but as soon as I place it on the server it gives the error 401 Unauthorized. Here is a snippet of the code in the .aspx file:

        var encoded = $.toJSON(courseItem);
        $.ajax({
            url: '<%= Url.Action("ViewCourseByID", "Home") %>/',
            type: "POST",
            dataType: 'json',
            data: encoded,
            //contentType: "application/json; charset=utf-8",
            success: function(result) {

    Update: the only time this doesn't work is when I pass JSON data to the Ajax call; it works fine with HTML data.

    Read the article

  • C# Importing Large Volume of Data from CSV to Database

    - by guazz
    What's the most efficient method to load a large volume of data from CSV (3 million+ rows) into a database? The data needs to be formatted along the way (e.g. the name column needs to be split into first name and last name, etc.), and I need to do this as efficiently as possible, i.e. under time constraints. I am siding with the option of reading, transforming and loading the data row by row using a C# application. Is this ideal? If not, what are my options? Should I use multithreading?
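
    Row-at-a-time INSERTs are usually the bottleneck rather than the transform; the common pattern is to stream the file, transform each row in memory, and hand the database large batches (in C#, SqlBulkCopy is the standard tool for the final step). A language-neutral sketch of that pipeline shape, in Python with made-up table and column names:

        import csv
        import sqlite3  # stand-in for the real target database

        conn = sqlite3.connect("people.db")
        conn.execute("CREATE TABLE IF NOT EXISTS people (first_name TEXT, last_name TEXT)")

        def transformed_rows(path):
            # Stream the file so 3M rows are never held in memory at once.
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    first, _, last = row["name"].partition(" ")  # assumed "First Last" format
                    yield (first, last)

        BATCH = 10000
        batch = []
        for r in transformed_rows("people.csv"):
            batch.append(r)
            if len(batch) >= BATCH:
                conn.executemany("INSERT INTO people VALUES (?, ?)", batch)
                batch.clear()
        if batch:
            conn.executemany("INSERT INTO people VALUES (?, ?)", batch)
        conn.commit()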

    Read the article

  • Buy or build tool for Data Reporting ?

    - by Manoj
    We have been asked to provide a data reporting solution. The following are the requirements: (i) the client has a lot of data, generated every day as an outcome of the tests they run; the tests run at several sites, and the results are automatically backed up to a central server; (ii) they already have Perl scripts that post-process the results and generate Excel-based reports; (iii) they need a web-based interface for comparing those reports, and they need to mark and track issues that might be present in the data. I am unsure whether we should build our own tool for this or go with an existing tool (any suggestions?). Can you please provide supporting arguments for the decision you would suggest?

    Read the article

  • Returning user data for forms that have errors in when using ModelForms

    - by Sevenearths
    forms.py:

        from django.forms import ModelForm
        from client.models import ClientDetails, ClientAddress, ClientPhone
        from snippets.UKPhoneNumberForm import UKPhoneNumberField

        class ClientDetailsForm(ModelForm):
            class Meta:
                model = ClientDetails

        class ClientAddressForm(ModelForm):
            class Meta:
                model = ClientAddress

        class ClientPhoneForm(ModelForm):
            number = UKPhoneNumberField()
            class Meta:
                model = ClientPhone

    views.py:

        from django.shortcuts import render_to_response, redirect
        from django.template import RequestContext
        from client.forms import ClientDetailsForm, ClientAddressForm, ClientPhoneForm

        def new_client_view(request):
            formDetails = ClientDetailsForm(initial={'marital_status': 'u'})
            formAddress = ClientAddressForm()
            formHomePhone = ClientPhoneForm(initial={'phone_type': 'home'})
            formWorkPhone = ClientPhoneForm(initial={'phone_type': 'work'})
            formMobilePhone = ClientPhoneForm(initial={'phone_type': 'mobi'})
            return render_to_response('client/new_client.html',
                                      {'formDetails': formDetails,
                                       'formAddress': formAddress,
                                       'formHomePhone': formHomePhone,
                                       'formWorkPhone': formWorkPhone,
                                       'formMobilePhone': formMobilePhone},
                                      context_instance=RequestContext(request))

    (The new_client.html is nothing special.) How should I write views.py so that if the user's data raises an error, instead of showing them the form again with the errors in but none of their original data, it shows them the form again with the errors AND their original data?
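
    The standard Django answer is to bind request.POST to the same forms on submit: a bound form that fails is_valid() re-renders with both the errors and the user's submitted values, which is exactly the behaviour asked for. A sketch of the reworked view, keeping the poster's names and imports and showing only the details form for brevity; the redirect target is hypothetical:

        def new_client_view(request):
            if request.method == 'POST':
                # Bound form: carries the submitted data plus any validation errors.
                formDetails = ClientDetailsForm(request.POST)
                if formDetails.is_valid():
                    formDetails.save()
                    return redirect('client_list')  # hypothetical URL name
            else:
                formDetails = ClientDetailsForm(initial={'marital_status': 'u'})
            return render_to_response('client/new_client.html',
                                      {'formDetails': formDetails},
                                      context_instance=RequestContext(request))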

    Read the article

  • "Mail merge"-like functionality in Dreamweaver, or in any other web editing tool?

    - by Chris Farmer
    I have inherited several related, low-traffic web sites to manage and edit. These sites are implemented with static html, and they've accrued lots of stray tags and other cruft. I want to try to clean these up and migrate them to some common page template framework to simplify design and data changes and improve overall consistency. The pages will change on the timescale of weeks, and since the current web hosting plan does not support any dynamic server technologies, I was hoping to just use Dreamweaver or some other tool to merge my content data with some templating structure. I'd like to do content updates every several days and then run the content back through my templates, resulting in new static html that I can upload to the host. Do any tools support this kind of poor-man's data-driven web application? Are there better ways to approach this problem, aside from moving to a new hosting plan and using ASP.NET or PHP?

    Read the article

  • Best data store for billions of rows

    - by Jody Powlette
    I need to be able to store small bits of data (approximately 50-75 bytes) for billions of records (~3 billion/month for a year). The only requirements are fast inserts and fast lookups for all records with the same GUID, plus the ability to access the data store from .NET. I'm a SQL Server guy and I think SQL Server can do this, but with all the talk about BigTable, CouchDB, and other NoSQL solutions, it's sounding more and more like an alternative to a traditional RDBMS may be best, due to optimizations for distributed queries and scaling. I tried Cassandra, but the .NET libraries don't currently compile or are all subject to change (along with Cassandra itself). I've looked into many of the NoSQL data stores available, but can't find one that meets my needs as a robust production-ready platform. If you had to store 36 billion small, flat records so that they're accessible from .NET, what would you choose and why?

    Read the article

  • JQuery Validation using Remote posts empty data to webservice

    - by user319721
    I'm using the jQuery Validation plugin with the remote option to make a call to my web service, which checks whether a company name exists. The web service only accepts JSON data. I pass the data to the web service from the Company input field in my form as follows:

        data: "{'company': '" + $('#Company').val() + "'}"

    But this always sends a blank value for company, so the request body is {'company':''}, i.e. correct JSON but missing the Company input field's value. Can anyone shed some light on why I always get a blank value here? Thanks for the help, Ciaran

    Read the article

  • python fdb save huge data from database to file

    - by peter
    I have this script:

        SELECT = """
            select
                coalesce(p.ID, '') as id,
                coalesce(p.name, '') as name
            from TABLE as p
        """

        self.cur.execute(SELECT)
        for row in self.cur.itermap():
            xml += "  <item>\n"
            xml += "    <id>" + id + "</id>\n"
            xml += "    <name>" + name + "</name>\n"
            xml += "  </item>\n\n"
        # save xml to file here
        f = open...

    I need to save data from a huge database to a file. There are tens of thousands of items in my database (up to 40,000), and the script runs for a very long time (an hour or more) before it finishes. How can I take the data I need from the database and save it to a file "at once", as quickly as possible? (I don't need XML output, because I can process the data from the output on my server later; I just need it done as quickly as possible. Any ideas?) Many thanks!
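
    Growing one big string with += re-copies the whole string on every pass, which is quadratic in the number of rows; collecting the pieces in a list and joining once (or writing to the file as you go) is the usual fix. A sketch assuming the rows really do arrive via fdb's itermap() with id and name keys, with XML escaping added for safety:

        from xml.sax.saxutils import escape

        pieces = []
        self.cur.execute(SELECT)
        for row in self.cur.itermap():
            # Append small fragments; join them once at the end instead of
            # rebuilding an ever-growing string on each iteration.
            pieces.append(
                "  <item>\n"
                "    <id>%s</id>\n"
                "    <name>%s</name>\n"
                "  </item>\n\n" % (escape(str(row["id"])), escape(str(row["name"])))
            )

        with open("/tmp/export.xml", "w") as f:
            f.write("".join(pieces))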

    Read the article
