Search Results

Search found 58440 results on 2338 pages for 'data cleansing'.


  • Reading website contents in Java

    - by Sahil Manchanda
    I'm developing an Android application in which I programmatically submit data into a website's search box and retrieve the results in Java. I fetch the page with URLConnection (connect to the host, get the input stream, read the data), which gives me the page source, i.e. the HTML. Now, if the site shows content such as "sahil 3/5 patel chowk 965955", those details will be inside HTML tags, and I want to extract this information. Any ideas?
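
    A sketch of one common approach, under assumptions: parse the HTML with a real parser rather than string-matching the raw stream. The jsoup library and the div.result selector below are illustrative assumptions, not something given in the question.

        // Minimal sketch (assumed dependency: org.jsoup:jsoup).
        import org.jsoup.Jsoup;
        import org.jsoup.nodes.Document;
        import org.jsoup.nodes.Element;

        public class Scraper {
            public static void main(String[] args) throws Exception {
                // Fetch and parse in one step instead of reading URLConnection by hand.
                Document doc = Jsoup.connect("http://example.com/search?q=sahil").get();
                // Hypothetical selector: replace with whatever tags actually wrap the details.
                for (Element e : doc.select("div.result")) {
                    System.out.println(e.text()); // e.g. "sahil 3/5 patel chowk 965955"
                }
            }
        }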

    Read the article

  • Error in glmmadmb(.....) The function maximizer failed (couldn't find STD file)

    - by Joe King
    This works fine:

        fit.mc1 <- MCMCglmm(bull ~ 1, random = ~school, data = dt1, family = "categorical",
                            prior = list(R = list(V = 1, fix = 1), G = list(G1 = list(V = 1, nu = 0))),
                            slice = T)

    So does this:

        fit.glmer <- glmer(bull ~ (1|school), data = dt1, family = binomial)

    But now I am trying to work with the package glmmadmb, and this does not work:

        fit.mc12 <- glmmadmb(bull ~ 1 + (1|school), data = dt1, family = "binomial",
                             mcmc = TRUE, mcmc.opts = mcmcControl(mcmc = 50000))

    It generates the error:

        Error in glmmadmb(bull ~ 1 + (1 | school), data = dt1, family = "binomial", :
          The function maximizer failed (couldn't find STD file)
        In addition: Warning message:
        running command '<snip>\cmd.exe <snip>\glmmadmb.exe" -maxfn 500 -maxph 5 -noinit -shess -mcmc 5000 -mcsave 5 -mcmult 1' had status 1

    Read the article

  • What are people's opinions vis-a-vis my choice of authorization plugins?

    - by brad
    I'm slowly but surely putting together my first Rails app (first web app of any kind, in fact; I'm not really a programmer) and it's time to set up a user registration/login system. The nature of my app is such that each user will be completely separated from every other user (except for admin roles). When users log in they will have their own unique index page showing only their data, which they and no one else can ever see or edit. However, I may later want to add a role that lets one user view and edit several other users' data (e.g. a group of users may want to allow their secretary to access and edit their data, though the secretary would not need any data of their own). My plan is to use Authlogic to create the login system and declarative authorization to control permissions, but before I embark on this fairly major and crucial task I thought I would canvass a few opinions as to whether this combo is appropriate for the tasks I envisage, or whether there is a better/simpler/faster/cheaper/awesomer option.

    Read the article

  • Returning content directly from memcache - Django / HTTP Server

    - by RadiantHex
    Hi folks, I've built a web application with Django, and I'm using Memcached to cache data. A few views cache whole HttpResponse objects, so for those there may be a faster way of returning the Memcached data than going through Django at all. What faster alternatives are there for returning Memcached data on HTTP requests? I'm trying to make the operation as fast and lightweight as possible. Help would be much appreciated! :)
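
    One direction worth sketching, under stated assumptions: let the front-end web server answer from Memcached directly and only hit Django on a miss. This assumes nginx, that the application stores the rendered HTML itself under a key nginx can reconstruct (Django's cache middleware uses hashed keys that nginx cannot reproduce), and the addresses below are placeholders.

        # nginx sketch: serve straight from Memcached, fall back to Django on a miss.
        location / {
            set $memcached_key "$uri";        # must match the key your app writes
            memcached_pass 127.0.0.1:11211;   # assumed Memcached host:port
            default_type text/html;
            error_page 404 502 504 = @django; # cache miss, or Memcached unreachable
        }
        location @django {
            proxy_pass http://127.0.0.1:8000; # assumed Django app server
        }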

    Read the article

  • Initialization vector - DES/Triple DES algorithm

    - by user312373
    Hi, to generate encrypted data I would have expected a key alone to suffice. But in .NET, DESCryptoServiceProvider requires both a Key and an Initialization Vector to produce the encrypted data. In this regard, I would like to know the importance of, and the benefit gained by, defining this initialization vector field. Is it mandatory when encrypting with the DES algorithm? Please share your thoughts on the same. Regards, Balu
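
    For illustration, a minimal sketch of how the two values fit together in .NET (a demo under assumptions, not production code): DESCryptoServiceProvider defaults to CBC mode, where an IV is mandatory; the IV randomizes the first cipher block, so encrypting the same plaintext with the same key but a fresh IV yields different ciphertext. The IV is not secret and is typically stored alongside the ciphertext.

        using System;
        using System.Security.Cryptography;

        class IvDemo {
            static void Main() {
                // Note: DES itself is obsolete and weak; this is purely a demo of the IV's role.
                using (var des = new DESCryptoServiceProvider()) {
                    des.GenerateKey(); // the key must be kept secret
                    des.GenerateIV();  // the IV may be stored/sent with the ciphertext
                    byte[] plain = new byte[] { 1, 2, 3, 4, 5, 6, 7, 8 };
                    byte[] cipher = des.CreateEncryptor()
                                       .TransformFinalBlock(plain, 0, plain.Length);
                    // Same key + same plaintext, but a fresh IV: different bytes out.
                    des.GenerateIV();
                    byte[] cipher2 = des.CreateEncryptor()
                                        .TransformFinalBlock(plain, 0, plain.Length);
                    Console.WriteLine(BitConverter.ToString(cipher));
                    Console.WriteLine(BitConverter.ToString(cipher2));
                }
            }
        }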

    Read the article

  • Implement Exception Handling in ASP.NET C# Project

    - by Shrewd Demon
    hi, I have an application that has many tiers, as in I have...

        Presentation Layer (PL) - contains all the HTML
        My Codes Layer (CL) - has all my code
        Entity Layer (EL) - has all the container entities
        Business Logic Layer (BLL) - has the necessary business logic
        Data Logic Layer (DLL) - any logic against data
        Data Access Layer (DAL) - the one that accesses data from the database

    Now I want to provide error handling in my DLL, since it is responsible for executing statements like ExecuteScalar and so on, and I am confused about how to go about it. I mean, do I catch the error in the DLL and throw it back to the BLL, and from there throw it back to my code, or what? Can anyone please help me implement a clean and easy error-handling technique? Help would be really appreciated. Thank you.
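
    One common pattern, sketched with made-up names (DataAccessException, UserDal and GetUserCount are illustrative, not from the question): catch low-level exceptions only at the data-layer boundary, wrap them in a layer-specific exception with context, and let that wrapper bubble up to a single handler near the UI instead of re-catching at every tier.

        using System;
        using System.Data.SqlClient;

        // Hypothetical custom exception for the data layer.
        public class DataAccessException : Exception {
            public DataAccessException(string message, Exception inner)
                : base(message, inner) { }
        }

        public class UserDal {
            public int GetUserCount(string connectionString) {
                try {
                    using (var conn = new SqlConnection(connectionString))
                    using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Users", conn)) {
                        conn.Open();
                        return (int)cmd.ExecuteScalar();
                    }
                } catch (SqlException ex) {
                    // Wrap with context and rethrow; don't swallow it here.
                    throw new DataAccessException("Failed to count users", ex);
                }
            }
        }
        // BLL methods simply let DataAccessException propagate; the presentation
        // layer catches it once, logs it, and shows a friendly error page.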

    Read the article

  • Using the standard Java logging, is it possible to restart logs after a certain period?

    - by Fry
    I have some Java code that will be running as an importer of data for a much larger project. The initial logging code was done with the java.util.logging classes, so I'd like to keep it if possible, but it seems a little inadequate now given the amount of data passing through the importer. Often the importer will receive data that the main system has no information for, or that doesn't match the system's data, so it is ignored, but a message is written to the log about what information was dropped and why it wasn't imported. The problem is that the log tends to grow very quickly, so we'd like to be able to start a fresh log daily or weekly. Does anybody know if this can be done with the built-in logging classes, or would I have to switch to log4j or something custom? Thanks for any help!
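
    For what it's worth, java.util.logging's FileHandler rotates only by size (its limit/count constructor arguments), not by date. A sketch of faking daily rotation by swapping handlers on a schedule follows; the logger name and file pattern are placeholders.

        import java.util.logging.*;
        import java.time.LocalDate;
        import java.util.concurrent.*;

        public class DailyLogRotator {
            private static final Logger LOG = Logger.getLogger("importer"); // assumed name
            private static FileHandler current;

            static synchronized void rotate() throws Exception {
                if (current != null) {
                    LOG.removeHandler(current);
                    current.close();
                }
                // One file per day, e.g. importer-2010-05-01.log (append if restarted).
                current = new FileHandler("importer-" + LocalDate.now() + ".log", true);
                current.setFormatter(new SimpleFormatter());
                LOG.addHandler(current);
            }

            public static void main(String[] args) throws Exception {
                rotate();
                ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
                ses.scheduleAtFixedRate(() -> {
                    try { rotate(); } catch (Exception e) { e.printStackTrace(); }
                }, 1, 1, TimeUnit.DAYS); // coarse: rotates every 24h, not at midnight
                LOG.info("importer started");
            }
        }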

    Read the article

  • Communicate between content script and options page

    - by Gaurang Tandon
    I have seen many questions on this already, but all of them are about background page to content script. Summary: my extension has an options page and a content script. The content script handles the storage functionality (chrome.storage manipulation). Whenever a user changes a setting on the options page, I want to send a message to the content script to store the new data. My code:

    options.js

        var data = "abcd"; // example data
        chrome.tabs.query({ active: true, currentWindow: true }, function (tabs) {
            chrome.tabs.sendMessage(tabs[0].id, "storeData:" + data, function (response) {
                console.log(response); // gives undefined :(
            });
        });

    content script js

        chrome.runtime.onMessage.addListener(function (request, sender, sendResponse) {
            // not working
        });

    My questions: why isn't this approach working, and is there a better approach for this?
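
    Two things commonly trip this up, shown in a sketch under assumptions: the tabs.query in options.js matches the options tab itself (which has no content script), not the page the content script runs in; and a listener must call sendResponse, returning true when it will respond asynchronously, or the sender's callback gets undefined. Note also that chrome.storage is directly available to the options page, so often no message passing is needed at all.

        // content script sketch: acknowledge the message so the sender's callback fires.
        chrome.runtime.onMessage.addListener(function (request, sender, sendResponse) {
            if (typeof request === "string" && request.indexOf("storeData:") === 0) {
                var value = request.slice("storeData:".length);
                chrome.storage.local.set({ data: value }, function () {
                    sendResponse("stored"); // now console.log(response) prints "stored"
                });
                return true; // keep the channel open for the async sendResponse
            }
        });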

    Read the article

  • How to Import a CSV file containing multiple sections into R?

    - by PaulHurleyuk
    I want to import the contents of a CSV file into R. The file contains multiple sections of data stacked vertically, separated by blank lines and asterisks. For example:

        ********************************************************
        * SAMPLE DATA ******************************************
        ********************************************************
        Name, DOB, Sex
        Rod, 1/1/1970, M
        Jane, 5/7/1980, F
        Freddy, 9.12,1965, M
        *******************************************************
        * Income Data ****************************************
        *******************************************************
        Name, Income
        Rod, 10000
        Jane, 15000
        Freddy, 7500

    I would like to import this into R as two separate data frames. Currently I'm manually cutting the CSV file up into smaller files, but I think I could do it using read.csv and its skip and nrows arguments, if I could work out where the section breaks are. This gives me a logical TRUE for every blank line:

        ifelse(readLines("DATA.csv") == "", TRUE, FALSE)

    I'm hoping someone has already solved this problem.
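
    A sketch of one way to split it in a single pass, assuming (as in the example above) that every separator line is either blank or starts with an asterisk:

        lines <- readLines("DATA.csv")
        is_sep <- grepl("^\\*", lines) | lines == ""        # banner or blank lines
        # A new section starts at the first line of each separator run:
        grp <- cumsum(is_sep & !c(FALSE, head(is_sep, -1)))
        keep <- !is_sep
        sections <- split(lines[keep], grp[keep])           # one character vector per section
        dfs <- lapply(sections, function(s)
            read.csv(text = paste(s, collapse = "\n"), strip.white = TRUE))
        # dfs[[1]] holds the sample data, dfs[[2]] the income data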

    Read the article

  • MySQL Column Value Pivot

    - by manyxcxi
    I have a MySQL InnoDB table laid out like so:

        `MyDB`.`MyTable` (
            `id` bigint(20) NOT NULL,
            `run_id` int(11) NOT NULL,
            `element_name` varchar(255) NOT NULL,
            `value` text,
            `line_order` int(11) default NULL,
            `column_order` int(11) default NULL
        )

    It is used to store data generated by a Java program that used to output it in CSV format, hence the line_order and column_order. Say I have two rows:

        1,1,'ELEMENT 1','A',0,0
        2,1,'ELEMENT 2','B',0,1

    I want to pivot this data in a view for reporting, so that it looks more like the CSV would, with output like this:

        ---------------------
        |ELEMENT 1|ELEMENT 2|
        ---------------------
        |    A    |    B    |
        ---------------------

    The data coming in is extremely dynamic: it can arrive in any order, can be any of over 900 different elements, and the value could be anything. The run_id ties the rows together, and the line and column order basically tell me the order in which the user wants that data to come back.
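
    The classic pivot for this shape is conditional aggregation, sketched here for just the two example elements; with 900+ dynamic element names you would generate this SQL from the distinct element_name values, since MySQL has no native PIVOT:

        SELECT run_id,
               MAX(CASE WHEN element_name = 'ELEMENT 1' THEN value END) AS `ELEMENT 1`,
               MAX(CASE WHEN element_name = 'ELEMENT 2' THEN value END) AS `ELEMENT 2`
        FROM MyTable
        GROUP BY run_id;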

    Read the article

  • Recommendations for a Google Finance-like interactive chart control

    - by Chris Farmer
    I need some sort of interactive chart control for my .NET-based web app. I have some wide XY charts, and the user should be able to interactively scroll and zoom into a specific window on the x axis. Something that acts like the Google Finance control would be nice, but without the need for the date labels or the news-event annotations. Also, I'd prefer to avoid Flash, if that's even possible. Can someone please give some recommendations of something that might come close? EDIT: the "real" Google timeline visualization is for date-based data, and I just have numeric data. I tried to use that control for non-date data, but it seems to always want to show a date and demands that the first data column actually be a date.

    Read the article

  • Is READ UNCOMMITTED / NOLOCK safe in this situation?

    - by Ben Challenor
    I know that snapshot isolation would fix this problem, but I'm wondering whether NOLOCK is safe in this specific case, so that I can avoid the overhead. I have a table that looks something like this:

        drop table Data
        create table Data (
            Id BIGINT NOT NULL,
            Date BIGINT NOT NULL,
            Value BIGINT,
            constraint Cx primary key (Date, Id)
        )
        create nonclustered index Ix on Data (Id, Date)

    There are no updates to the table, ever. Deletes can occur, but they should never contend with the SELECT because they affect the other, older end of the table. Inserts are regular, and page splits to the (Id, Date) index are extremely common. I have a deadlock situation between a standard INSERT and a SELECT that looks like this:

        select top 1 Date, Value
        from Data
        where Id = @p0
        order by Date desc

    because the INSERT acquires a lock on Cx (Date, Id; Value) and then Ix (Id, Date), but the SELECT acquires a lock on Ix (Id, Date) and then Cx (Date, Id; Value). This is because the SELECT first seeks on Ix and then joins to a seek on Cx. Swapping the clustered and non-clustered indexes would break this cycle, but that is not an acceptable solution because it would introduce cycles with other (more complex) SELECTs. If I add NOLOCK to the SELECT, can it go wrong in this case? Can it return:

        1. More than one row, even though I asked for TOP 1?
        2. No rows, even though one exists and has been committed?
        3. Worst of all, a row that doesn't satisfy the WHERE clause?

    I've done a lot of reading about this online, but the only reproductions of over- or under-count anomalies I've seen (one, two) involve a scan; this involves only seeks. Jeff Atwood has a post about using NOLOCK that generated a good discussion. I was particularly interested in a comment by Rick Townsend:

        Secondly, if you read dirty data, the risk you run is of reading the entirely wrong row. For example, if your select reads an index to find your row, then the update changes the location of the rows (e.g.: due to a page split or an update to the clustered index), when your select goes to read the actual data row, it's either no longer there, or a different row altogether!

    Is this possible with inserts only, and no updates? If so, then I guess even my seeks on an insert-only table could be dangerous.

    Update: I'm trying to figure out how snapshot isolation works. It seems to be row-based: transactions read the table (with no shared lock!), find the row they are interested in, and then see if they need to get an old version of the row from the version store in tempdb. But in my case no row will ever have more than one version, so the version store seems rather pointless. And if the row was found with no shared lock, how is that different from just using NOLOCK?

    Read the article

  • VBScript - image to binary

    - by countnazgul
    Hi to all, I'm not a VB programmer, but I need a VBScript that converts an image file (from the local disk) to binary data, which is then passed to a web service. I've worked out how to pass the data to the web service, but I can't find how to convert the image file to binary data. I've spent a lot of time looking for some kind of solution, but with no luck. Can somebody help me? Thanks!
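
    A sketch of the usual approach with ADODB.Stream (the file path below is a placeholder):

        ' Read an image file from disk as raw bytes using ADODB.Stream.
        Const adTypeBinary = 1
        Dim stm, bytes
        Set stm = CreateObject("ADODB.Stream")
        stm.Type = adTypeBinary
        stm.Open
        stm.LoadFromFile "C:\images\photo.jpg"  ' placeholder path
        bytes = stm.Read                        ' byte array to hand to the web service
        stm.Close
        Set stm = Nothing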

    Read the article

  • How to perform a group by in LINQ and get an IQueryable or a custom class object?

    - by VJ
    Here is my query:

        var data = Goaldata.GroupBy(c => c.GoalId).ToList();

    This returns IGrouping objects, and I want an IQueryable object that I can query directly to get the data, whereas in this case I have to loop through with a foreach() and then get the data. Is there another way to group by in LINQ that directly returns an IQueryable or a List, similar to what happens for order by in LINQ?
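
    A sketch of the usual fix, assuming Goaldata is an IQueryable source (GoalGroup below is a made-up class name): project each IGrouping into a shape you can query directly, and stay in the expression rather than calling ToList() first.

        // Hypothetical holder class for one group.
        public class GoalGroup {
            public int GoalId { get; set; }
            public int Count { get; set; }
        }

        // Stays IQueryable<GoalGroup>, so you can keep composing queries against it:
        IQueryable<GoalGroup> grouped = Goaldata
            .GroupBy(c => c.GoalId)
            .Select(g => new GoalGroup { GoalId = g.Key, Count = g.Count() });

        var firstBigGroup = grouped.Where(g => g.Count > 10).FirstOrDefault();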

    Read the article

  • Store a byte[] stored in a SQL XML parameter to a varbinary(MAX) field in SQL Server 2005. Can it be done?

    - by Mikey John
    Store a byte[] stored in a SQL XML parameter to a varbinary(MAX) field in SQL Server 2005. Can it be done? Here's my stored procedure:

        set ANSI_NULLS ON
        set QUOTED_IDENTIFIER ON
        GO

        ALTER PROCEDURE [dbo].[AddPerson]
            @Data AS XML
        AS
        INSERT INTO Persons (name, image_binary)
        SELECT rowVals.value('./@Name', 'varchar(64)') AS [Name],
               rowVals.value('./@ImageBinary', 'varbinary(MAX)') AS [ImageBinary]
        FROM @Data.nodes('/Data/Names') AS b(rowVals)

        SELECT SCOPE_IDENTITY() AS Id

    In my schema, Name is of type String and ImageBinary is of type byte[].
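
    One detail worth noting, offered as an assumption about the failure mode: XML serializes byte[] attributes as base64 text, and value() cannot cast that text straight to varbinary. A sketch of the usual workaround is to go through xs:base64Binary:

        SELECT rowVals.value('xs:base64Binary(./@ImageBinary)', 'varbinary(MAX)') AS [ImageBinary]
        FROM @Data.nodes('/Data/Names') AS b(rowVals)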

    Read the article

  • Is marshalling/serialization in PHP as simple as serialize($var)?

    - by Ygam
    Here's a definition of marshalling from Wikipedia:

        In computer science, marshalling (similar to serialization) is the process of transforming the memory representation of an object to a data format suitable for storage or transmission. It is typically used when data must be moved between different parts of a computer program or from one program to another.

    I have always done data serialization in PHP via its serialize() function, usually on objects or arrays. But how does Wikipedia's definition of marshalling/serialization take place in this serialize() function?
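
    For concreteness, a minimal sketch of the round trip that definition describes: serialize() turns the in-memory value into a string suitable for storage or transmission, and unserialize() reconstructs it on the other side.

        <?php
        // In-memory representation -> string suitable for storage or transmission.
        $user = array('name' => 'Ygam', 'roles' => array('admin', 'editor'));
        $wire = serialize($user);
        // e.g. a:2:{s:4:"name";s:4:"Ygam";s:5:"roles";a:2:{...}}

        // ...store it in a file/DB or send it to another PHP program, then reverse it:
        $copy = unserialize($wire);
        var_dump($copy == $user); // bool(true)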

    Read the article

  • Recommendations for a rich, scalable internet application

    - by Wouter Roux
    I am planning to develop a web application that must perform the following actions:

        user authentication
        allow an authenticated user to download data from a USB device (roughly 5 MB of data)
        upload the data from the USB device to the processing server
        process the data and display the results to the user

    Further requirements/restrictions:

        the USB driver supports Windows (2000, XP, Vista, 7)
        the application must support IE, Firefox and Chrome
        the USB driver must be installed when pointing to the web application for the first time
        the USB driver exposes functionality through exported functions in a DLL
        scalability and eye candy are important

    Server details: Windows Server 2008 Enterprise x64, IIS 7.0, SQL Server 2008.

    With the limited details specified: what technology would you recommend (e.g. Silverlight / ASP.NET MVC / WCF)? What practices/patterns/third-party controls would you recommend?

    Read the article

  • WCF Callback Contract Fiddler Debugging

    - by DavyMac23
    I'm trying to debug a WCF self-hosted service that uses callbacks. Every 1/2 second I make a callback to an SL3 app to display the latest changes (yes, there are tons of changes every 1/2 second). There are three services: one has new data every 1/2 second, one has new data every second, and another has new data every hour. I've set all of my timeouts (Send, Receive, Open and Close) to 2 days, 23 hours and 23 minutes, but I still get timeout errors on the service side when I go to issue the callback. I'm using Fiddler, and I notice that my service that has new data every hour is still showing up every 10 seconds. Fiddler shows a body length of -1; then 10 seconds later it changes to 0 and the response is HTTP 200, but the overall elapsed time is 10 seconds. What's going on here?

    Read the article

  • Asynchronous sockets

    - by user158182
    I need to receive an acknowledgement from the server when it receives data, to confirm that the data has reached the server. Also, how do I use GetSocketOption() in an asynchronous method?

    Read the article

  • Handling Datetime with decimal '2010-02-14 20:18:58.313000000'

    - by AaronLS
    In SQL Server I have some textual data in varchar fields that I am trying to convert to datetimes. The funny thing is that this data was at some point in a datetime field, was exported to a flat file, and is now being reimported. The problem is that it is in the format 2010-02-14 20:18:58.313000000, and the conversion to datetime fails. I have no idea how it ended up like this when it was originally extracted from a datetime column. Basically, a table was exported to a flat file by someone else, the original table was lost, and I am reimporting from the flat file. I could just drop the decimal part, but that would be like throwing out some of the data; I'd like to maintain as much precision as possible. How can I import this data from the varchar column back into a datetime column and preserve as much accuracy as possible?
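
    A sketch of one workable conversion, assuming all the values share that fixed layout (the table and column names below are placeholders): datetime only keeps three fractional digits anyway, so truncating to 23 characters loses nothing datetime could store, and style 121 matches the yyyy-mm-dd hh:mi:ss.mmm layout.

        -- Placeholder names: source table dbo.ImportStaging, column raw_value varchar(30).
        SELECT CONVERT(datetime, LEFT(raw_value, 23), 121) AS converted_value
        FROM dbo.ImportStaging;
        -- LEFT(raw_value, 23) -> '2010-02-14 20:18:58.313'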

    Read the article

  • Redundancy algorithm for reading noisy bitstream

    - by Tedd Hansen
    I'm reading a lossy bit stream and I need a way to recover as much usable data as possible. There can be 1s in place of 0s and 0s in place of 1s, but accuracy is probably over 80%. A bonus would be if the algorithm could compensate for missing/extra bits as well. The source I'm reading from is analogue with noise (a microphone via FFT), and the read timing can vary depending on computer speed. I remember reading about algorithms used in CD-ROMs that do this in 3(?) layers, so I'm guessing using several layers is a good option. I don't remember the details, though, so if anyone can share some ideas that would be great! :)

    Edit: added sample data.

    Best case data:

        in:  0000010101000010110100101101100111000000100100101101100111000000100100001100000010000101110101001101100111000101110000001001111011001100110000001001100111011110110101011100111011000100110000001000010111
        out: 0010101000010110100101101100111000000100100101101100111000000100100001100000010000101110101001101100111000101110000001001111011001100110000001001100111011110110101011100111011000100110000001000010111011

    Bad case (timing is off, samples are missing):

        out: 00101010000101101001011011001110000001001001011011001110000001001000011000000100001011101010011011001
        in:  00111101001011111110010010111111011110000010010000111000011101001101111110000110111011110111111111101

    Edit 2: I am able to control the data being sent. I'm currently attempting to implement simple XOR checking (though it won't be enough).
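
    As a starting point, a sketch of the simplest redundancy layer (a repetition code with majority vote, written in Python purely for illustration): it corrects isolated bit flips, but not the missing-bit/timing problem, which needs framing or synchronization marks layered on top.

        def encode_repeat(bits, n=3):
            """Repeat every bit n times: '101' -> '111000111' for n=3."""
            return "".join(b * n for b in bits)

        def decode_majority(bits, n=3):
            """Majority-vote each n-bit group; assumes no bits were dropped."""
            out = []
            for i in range(0, len(bits) - n + 1, n):
                group = bits[i:i + n]
                out.append("1" if group.count("1") > n // 2 else "0")
            return "".join(out)

        sent = encode_repeat("0010101000", n=3)
        noisy = sent[:4] + ("0" if sent[4] == "1" else "1") + sent[5:]  # flip one bit
        assert decode_majority(noisy, n=3) == "0010101000"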

    Read the article

  • #include in .h or .c/.cpp?

    - by Louise
    Hi, when coding in either C or C++, where should I have the #includes?

    callback.h:

        #ifndef _CALLBACK_H_
        #define _CALLBACK_H_

        #include <sndfile.h>
        #include "main.h"

        void on_button_apply_clicked(GtkButton* button, struct user_data_s* data);
        void on_button_cancel_clicked(GtkButton* button, struct user_data_s* data);

        #endif

    callback.c:

        #include <stdlib.h>
        #include <math.h>
        #include "config.h"
        #include "callback.h"
        #include "play.h"

        void on_button_apply_clicked(GtkButton* button, struct user_data_s* data) {
            gint page;
            page = gtk_notebook_get_current_page(GTK_NOTEBOOK(data->notebook));
            ...

    Should all includes be in the .h, in the .c/.cpp, or in both, as I have done here?

    Read the article
