Search Results



  • Display result in status after the post

    - by nisardotnet
    My question is a continuation of what I have posted here. I want to know how to display a result status based on the return value of my web method. Here is my post code:

        beforeSubmit: function(data) {
            var myData = {
                "firstName": escape($('#txtFirstName').val()),
                "lastName": escape($('#txtLastName').val())
            };
            $.ajax({
                type: "POST",
                url: "VisitorWS.asmx/AddVisitors",
                data: JSON.stringify(myData),
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function(data) {
                    $("#status").fadeTo(500, 1, function() {
                        $(this).html("You are now registered!").fadeTo(5000, 0);
                    });
                },
                error: function(data) {
                    $("#status").fadeTo(5000, 1, function() {
                        $(this).html("Failed!").fadeTo(5000, 0);
                    });
                }
            });
        }

    and the WebMethod:

        [WebMethod]
        public bool AddVisitors(string firstName, string lastName)
        {
            if (firstName == "test")
            {
                return true;
            }
            return false;
        }

    So based on true or false I want to display a message on the client side.
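
    One way to branch on that boolean (a hedged sketch - with ASP.NET 3.5 the ASMX JSON serializer wraps the return value in a "d" property, so adjust the property name if your stack differs):

        success: function(response) {
            // response.d holds the boolean returned by AddVisitors
            var msg = response.d ? "You are now registered!" : "Registration failed.";
            $("#status").fadeTo(500, 1, function() {
                $(this).html(msg).fadeTo(5000, 0);
            });
        }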

    Read the article

  • What is best practice in converting XML to Java object?

    - by newbie
    I need to convert XML data to Java objects. What would be best practice for this? The idea is to fetch data via a web service (it doesn't use WSDL, just HTTP GET queries, so I cannot generate a client from a service description) and the answers come back as XML. What would be the best way to handle this situation?
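
    For reference, one common approach is JAXB, which binds XML to annotated classes and is bundled with Java 6+ (a minimal sketch; the Visitor class and XML shape are hypothetical stand-ins for the service's actual response):

        import java.io.StringReader;
        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.Unmarshaller;
        import javax.xml.bind.annotation.XmlRootElement;

        public class XmlToObject {

            @XmlRootElement(name = "visitor")
            public static class Visitor {
                public String firstName;
                public String lastName;
            }

            public static void main(String[] args) throws Exception {
                // In practice this string would come from the HTTP GET response.
                String xml = "<visitor><firstName>Ada</firstName>"
                           + "<lastName>Lovelace</lastName></visitor>";
                Unmarshaller u = JAXBContext.newInstance(Visitor.class).createUnmarshaller();
                Visitor v = (Visitor) u.unmarshal(new StringReader(xml));
                System.out.println(v.firstName + " " + v.lastName);
            }
        }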

    Read the article

  • PHP OOP class sensitive to 'counter' field name!

    - by Mac Taylor
    Hey guys, recently I managed to write a class for my stories, and everything is fine except the counter field that stores a page's hits. Here is my class:

        class nk_posts {
            var $data = array();

            public function _data() {
                global $db;
                $result = $db->sql_query("
                    SELECT bt_stories.*, bt_tags.*, bt_topics.*,
                           group_concat(DISTINCT bt_tags.tag) as mytags,
                           group_concat(DISTINCT bt_topics.topicname) as mytopics
                    FROM bt_stories, bt_tags, bt_topics
                    WHERE CONCAT(' ', bt_stories.tags, ' ') LIKE CONCAT('%', bt_tags.tid, '%')
                      AND CONCAT(' ', bt_stories.associated, ' ') LIKE CONCAT('%', bt_topics.topicid, '%')
                      AND time <= now()
                      AND section = 'news'
                      AND approved = '1'
                    GROUP BY bt_stories.sid
                    ORDER BY bt_stories.sid DESC
                ");
                while ($this->data = $db->sql_fetchrow($result)) {
                    $this->sid     = $this->data['sid'];
                    $this->title   = $this->data['title'];
                    $this->counter = $this->data['counter'];
                    // ------ rest of fields ------
                    $this->_output();
                }
            }

            public function _output() {
                themeindex(
                    $this->sid,
                    $this->title,
                    $this->counter
                    // ------ rest of fields ------
                );
            }
        }

    The problem: this class can't show the counter field's value, but if I change the counter field's name to something else, like hit, everything works fine. I'm sure it's okay if I write it the normal PHP/MySQL way, but I need this to be OOP. Any comment on why it's sensitive to the counter field name?
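
    A hedged guess at the cause (an assumption - the bt_tags/bt_topics schemas aren't shown): the query selects bt_stories.*, bt_tags.*, bt_topics.* into a single associative row, so if either of the other two tables also has a column named counter, its value silently overwrites bt_stories.counter; renaming the field to hit dodging the collision would fit that symptom. Aliasing the column sidesteps it:

        SELECT bt_stories.*, bt_tags.*, bt_topics.*,
               bt_stories.counter AS story_counter,  -- unambiguous name for the hits column
               ...

    and then read $this->data['story_counter'] in the loop.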

    Read the article

  • Ajax: how to read out a $_POST variable

    - by alex
    Hi, I am trying to filter/search a database with Ajax:

        $.ajax({
            type: "POST",
            url: "filterSearch.php",
            queryString: qry,
            success: function(data) {
                alert("Data Saved: " + data);
                $('#searchResult').html(data); // fill the search results box
            }
        });

    Now in filterSearch.php I have the following test code:

        if (isset($_POST['queryString'])) {
            echo "TEST";
        }
        if ($_POST['runquery'] == 1) {
            $sql = "SELECT * FROM fs_vacatures WHERE here-the-like-query?";
            $msg = $sql;
            echo $msg;
            die();
        }
        die();

    But neither TEST nor the $sql is returned in the alert??
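
    The likely culprit (a sketch of the fix): queryString is not a $.ajax setting, so jQuery ignores it, nothing is ever posted, and $_POST arrives empty. Request fields belong in the data setting, keyed by the names PHP should see:

        $.ajax({
            type: "POST",
            url: "filterSearch.php",
            data: { queryString: qry, runquery: 1 }, // becomes $_POST['queryString'], $_POST['runquery']
            success: function(data) {
                alert("Data Saved: " + data);
                $('#searchResult').html(data);
            }
        });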

    Read the article

  • Do partitions allow multiple bulk loads?

    - by ck
    I have a database that contains data for many "clients". Currently, we insert tens of thousands of rows into multiple tables every so often using .Net SqlBulkCopy which causes the entire tables to be locked and inaccessible for the duration of the transaction. As most of our business processes rely upon accessing data for only one client at a time, we would like to be able to load data for one client, while updating data for another client. To make things more fun, all PKs, FKs and clustered indexes are on GUID columns (I am looking at changing this). I'm looking at adding the ClientID into all tables, then partitioning on this. Would this give me the functionality I require?
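
    Partitioning by ClientID can give each client's data its own partition, and with partition-aligned indexes, lock escalation can stop at the partition rather than the table. A minimal sketch of the DDL shape (boundary values, table, and filegroup choices here are hypothetical):

        CREATE PARTITION FUNCTION pfClient (int)
            AS RANGE LEFT FOR VALUES (100, 200, 300);

        CREATE PARTITION SCHEME psClient
            AS PARTITION pfClient ALL TO ([PRIMARY]);

        CREATE TABLE dbo.ClientData (
            PK       uniqueidentifier NOT NULL,
            ClientID int              NOT NULL,
            ID1      uniqueidentifier NOT NULL,
            ID2      uniqueidentifier NOT NULL
        ) ON psClient (ClientID);

    Note that a bulk load with a TABLOCK hint will still take a table lock; whether the blocking actually disappears depends on the bulk-load options and on every index being aligned with the partition scheme.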

    Read the article

  • How to build a LINQ query from text at runtime?

    - by Danvil
    I have a class

        class A
        {
            public int X;
            public double Y;
            public string Z;
            // and more fields/properties ...
        }

    and a List<A> data, and can build a LINQ query, e.g.

        var q = from a in data where a.X > 20 select new { a.Y, a.Z };

    Then

        dataGridView1.DataSource = q.ToList();

    displays the selection in my DataGridView. Now the question: is it possible to build the query from text the user has entered at runtime? Like

        var q = QueryFromText("from a in data where a.X > 20 select new {a.Y, a.Z}");

    The point being that the user (having programming skills) can dynamically and freely select the displayed data.
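
    Not the full query syntax, but close (a sketch using the Dynamic LINQ sample library, System.Linq.Dynamic, which parses individual where/select fragments from strings; it ships as a C# sample with Visual Studio rather than as part of the framework):

        using System.Linq;
        using System.Linq.Dynamic; // DynamicQueryable extension methods

        // The filter string comes from the user at runtime;
        // the projection stays compiled here for simplicity.
        var q = data.AsQueryable()
                    .Where("X > 20")
                    .Select(a => new { a.Y, a.Z });

        dataGridView1.DataSource = q.ToList();

    The library also offers a string-based Select("new(Y, Z)"), but that returns untyped results which take more work to bind.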

    Read the article

  • WPF binding problem

    - by Lolo
    I've got a problem with binding in XAML/WPF. I created an Action class which extends FrameworkElement. Each Action has a list of ActionItem. The problem is that the Data/DataContext properties of ActionItem are never set, so they are always null.

    XAML:

        <my:Action DataContext="{Binding}">
            <my:Action.Items>
                <my:ActionItem DataContext="{Binding}" Data="{Binding}" />
            </my:Action.Items>
        </my:Action>

    C#:

        public class Action : FrameworkElement
        {
            public static readonly DependencyProperty ItemsProperty =
                DependencyProperty.Register("Items", typeof(IList), typeof(Action),
                    new PropertyMetadata(null, null), null);

            public Action()
            {
                this.Items = new ArrayList();
                this.DataContextChanged += (s, e) => MessageBox.Show("Action.DataContext");
            }

            public IList Items
            {
                get { return (IList)this.GetValue(ItemsProperty); }
                set { this.SetValue(ItemsProperty, value); }
            }
        }

        public class ActionItem : FrameworkElement
        {
            public static readonly DependencyProperty DataProperty =
                DependencyProperty.Register("Data", typeof(object), typeof(ActionItem),
                    new PropertyMetadata(null, null,
                        (d, v) =>
                        {
                            if (v != null) MessageBox.Show("ActionItem.Data is not null");
                            return v;
                        }),
                    null);

            public object Data
            {
                get { return this.GetValue(DataProperty); }
                set { this.SetValue(DataProperty, value); }
            }

            public ActionItem()
            {
                this.DataContextChanged += (s, e) => MessageBox.Show("ActionItem.DataContext");
            }
        }

    Any ideas?

    Read the article

  • Why is there no implicit conversion from pointer to reference-to-const-pointer?

    - by user316606
    I'll illustrate my question with code:

        #include <cstring>
        #include <iostream>

        void PrintInt(const unsigned char*& ptr)
        {
            int data = 0;
            ::memcpy(&data, ptr, sizeof(data));
            // advance the pointer reference
            ptr += sizeof(data);
            std::cout << std::hex << data << " " << std::endl;
        }

        int main(int, char**)
        {
            unsigned char buffer[] = {
                0x11, 0x11, 0x11, 0x11,
                0x22, 0x22, 0x22, 0x22,
            };
            /* const */ unsigned char* ptr = buffer;
            PrintInt(ptr); // error C2664: ...
            PrintInt(ptr); // error C2664: ...
            return 0;
        }

    When I compile this (in VS2008) I get: error C2664: 'PrintInt' : cannot convert parameter 1 from 'unsigned char *' to 'const unsigned char *&'. If I uncomment the "const" comment it works fine. However, shouldn't the pointer implicitly convert to a pointer-to-const, with the reference then binding to that? Am I wrong in expecting this to work? Thanks!
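
    For what it's worth, the standard illustration of why the language forbids this (my sketch, not from the original post): if unsigned char* could bind to const unsigned char*&, the callee could make the caller's non-const pointer point at a const object:

        const unsigned char cc = 0xFF;

        void Rebind(const unsigned char*& ref)
        {
            ref = &cc;            // fine: ref is a reference to pointer-to-const
        }

        int main()
        {
            unsigned char* p = 0;
            // Rebind(p);         // rejected; if it compiled, p would now point at cc
            // *p = 0;            // ...and this would modify a const object
            return 0;
        }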

    Read the article

  • Most efficient way to maintain a 'set' in SQL Server?

    - by SEVEN YEAR LIBERAL ARTS DEGREE
    I have ~2 million rows or so of data, each row with an artificial PK and two ID fields (so: PK, ID1, ID2). I have a unique constraint (and index) on ID1+ID2. I get two sorts of updates, both with a distinct ID1 per update:

    1) 100-1000 rows of all-new data (ID1 is new)
    2) 100-1000 rows of largely, but not necessarily completely, overlapping data (ID1 already exists, maybe with new ID1+ID2 pairs)

    What's the most efficient way to maintain this 'set'? Here are the options as I see them:

    1) Delete all the rows with ID1, insert all the new rows (yikes)
    2) Query all the existing rows from the set of new data ID1+ID2, only insert the new rows
    3) Insert all the new rows, ignore inserts that trigger unique constraint violations

    Any thoughts?
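
    For what it's worth, option 2 collapses into a single set-based statement, which avoids both the delete churn of option 1 and the per-row violations of option 3 (a sketch; dbo.TargetSet and dbo.Staging are hypothetical names for the table and the incoming batch):

        INSERT INTO dbo.TargetSet (ID1, ID2)
        SELECT s.ID1, s.ID2
        FROM dbo.Staging AS s
        WHERE NOT EXISTS (
            SELECT 1
            FROM dbo.TargetSet AS t
            WHERE t.ID1 = s.ID1
              AND t.ID2 = s.ID2
        );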

    Read the article

  • What rules govern the copying of variables in Javascript closures?

    - by int3
    I'd just like to check my understanding of variable copying in JavaScript. From what I gather, variables are passed/assigned by reference unless you explicitly tell them to create a copy with the new operator. But I'm a little uncertain when it comes to using closures. Say I have the following code:

        var myArray = [1, 5, 10, 15, 20];
        var fnlist = [];
        for (var i in myArray) {
            var data = myArray[i];
            fnlist.push(function() {
                var x = data;
                console.log(x);
            });
        }
        fnlist[2](); // returns 20

    I gather that this is because fnlist[2] only looks up the value of data at the point where it is invoked. So I tried an alternative tack:

        var myArray = [1, 5, 10, 15, 20];
        var fnlist = [];
        for (var i in myArray) {
            var data = myArray[i];
            fnlist.push(function() {
                var x = data;
                return function() {
                    console.log(x);
                };
            }());
        }
        fnlist[2](); // returns 10

    So now it returns the 'correct' value. Am I right to say that it works because a function resolves all variable references to their 'constant' values when it is invoked? Or is there a better way to explain it? Any explanations / links to explanations regarding this referencing / copying business would be appreciated as well. Thanks!

    Read the article

  • PHP shell_exec() - Run directly, or perform a cron (bash/php) and include MySQL layer?

    - by Jimbo
    Sorry if the title is vague - I wasn't quite sure how to word it!

    What I'm doing: I'm running a Linux command to output data into a variable, parsing the data, and outputting it as an array. Array values are displayed on a page using PHP, and this PHP page's output is requested via AJAX every 10 seconds, so, in effect, the data is retrieved and displayed/updated every 10 seconds. There could be as many as 10,000 characters being parsed on every request, although this is usually much lower.

    Alternative idea: I want to know if there is a better* alternative method of retrieving this data every 10 seconds, as multiple users (<10) will be having this command executed automatically for them. A cron job running on the server could execute either bash or PHP (which is faster?) to grab the data and store it in a MySQL database. Then, any AJAX calls to the PHP output would return values from the MySQL database rather than making a direct call to execute server code every 10 seconds.

    Why? I know there are security concerns with running execs directly from PHP, and (I hope this isn't micro-optimisation) I'm worried about CPU usage on the server. The server is running a Sempron processor. Yes, they do still exist. Having this only execute when the user is on the page (idea #1) means that the server isn't running code that doesn't need to be run. However, is this slow and insecure? Just in case the type of Linux command may be of assistance in determining its efficiency:

        shell_exec("transmission-remote $host:$port --auth $username:$password -l");

    I'm hoping that there are differences in efficiency and level of security between the two methods I have outlined above, and that this isn't just micro-micro-optimisation. If there are alternative methods that are better*, I'd love to learn about these! :)

    Read the article

  • Question about how AppFabric's cache feature can be used.

    - by Kevin Buchan
    I apologize for asking a question that I should be able to answer from the documentation, but I have read and read and searched and cannot answer this question, which leads me to believe that I have a fundamentally flawed understanding of what AppFabric's caching capabilities are intended for. I work for a geographically dispersed company. We have a particular application that was originally written as a client/server application. It's so massive and business-critical that we want to baby-step converting it to a better-architected solution. One of the ideas we had was to convert the app to read its data using WCF calls to a co-located web server that would cache communication with the database in the United States. The nature of the application is such that everyone will tend to be viewing the same 2000 records or so, with only occasional updates, and those updates will be made by a limited set of users. I was hoping that AppFabric's cache mechanism would allow me to set up one global cache: when a user in Asia, for example, requested data that was not in the cache or was stale, the web server would read from the database in the USA, provide the data to the user, then update the cache, which would propagate that data to the other web servers so that they would know not to go back to the database themselves. Can AppFabric work this way, or should I just have the servers retrieve their own data from the database?

    Read the article

  • Database web application

    - by Watergaite
    How would I go about creating a PHP application for my web page that can extract data from my database? (I currently get the data in a CSV file.) I'd also like the user to be able to filter the data by certain parameters. Can you help?
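
    At its simplest this is a parameterised query rendered as an HTML table (a minimal sketch using PDO; the credentials, table, and column names are hypothetical):

        <?php
        $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        // User-supplied filter, bound as a parameter to avoid SQL injection.
        $category = isset($_GET['category']) ? $_GET['category'] : '%';
        $stmt = $pdo->prepare('SELECT name, price FROM products WHERE category LIKE ?');
        $stmt->execute(array($category));

        echo '<table>';
        foreach ($stmt as $row) {
            echo '<tr><td>' . htmlspecialchars($row['name']) . '</td><td>'
               . htmlspecialchars($row['price']) . '</td></tr>';
        }
        echo '</table>';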

    Read the article

  • Sleep Function Error In C

    - by Arman
    I have a file of data dump in which different timestamped data is available. I get the time from the timestamp and sleep my C thread for that duration. But the problem is that while the actual time difference is 10 seconds, the data I receive at the receiving end is delayed by almost 14-15 seconds. I am using Windows. Kindly guide me. Sorry for my weak English.
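
    One common source of this kind of drift (a hedged guess, since the sleeping code isn't shown): Win32 Sleep() takes milliseconds, guarantees only a minimum delay, and is quantised to the scheduler's timer resolution (~15 ms by default), so sleeping a fixed interval per record lets the error accumulate. Sleeping until an absolute target time resists the drift:

        #include <windows.h>

        /* Sleep until an absolute tick count rather than for a fixed interval,
           so per-call scheduling latency does not accumulate. */
        static void sleep_until(DWORD target_ticks)
        {
            DWORD now = GetTickCount();
            if (target_ticks > now)
                Sleep(target_ticks - now);
        }

        int main(void)
        {
            DWORD next = GetTickCount();
            int i;
            for (i = 0; i < 5; ++i) {
                next += 10000;               /* next 10-second boundary */
                sleep_until(next);
                /* process the next timestamped record here */
            }
            return 0;
        }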

    Read the article

  • Is there a way to create a WCF DataContract on a third party type?

    - by Michael Hedgpeth
    I am migrating to WCF and trying to figure out how I'm going to declare my data contracts properly. Some of the types I need to expose come from a third party, and I am unable to change them. Are attributes the only way to explicitly declare data contracts in WCF? I know about the automatic data contract functionality in 3.5, but the books I'm reading discourage it. And besides, that approach assumes all state is publicly available, which is often not the case.
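
    When you can't put attributes on a type, the usual workaround is a small data-transfer wrapper that you can attribute (a sketch; ThirdPartyThing stands in for the vendor type):

        using System.Runtime.Serialization;

        // Stand-in for the third-party type we cannot annotate.
        public class ThirdPartyThing
        {
            public string Name { get; set; }
            public int Count { get; set; }
        }

        [DataContract]
        public class ThirdPartyThingDto
        {
            [DataMember] public string Name { get; set; }
            [DataMember] public int Count { get; set; }

            public static ThirdPartyThingDto From(ThirdPartyThing t)
            {
                return new ThirdPartyThingDto { Name = t.Name, Count = t.Count };
            }
        }

    WCF also has IDataContractSurrogate for substituting types during serialization, which avoids the hand copying at the cost of extra plumbing.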

    Read the article

  • What's the best way to store custom objects in relational database?

    - by user342610
    I have objects with properties. Objects can change their structure: properties may be added/removed/changed. Objects can be dropped entirely. So an object's metadata (description, class, call it what you like :) ) can change. The database should store both the object schemas and the instances of those objects. What's the best way to organise a relational database structure to store the data described above? Currently I see only two ways:

    1) Store object schemas in a few tables: schema general data, schema properties, possible property types. Store instances in their own tables: instance general data, plus one table per type from the possible-property-types table to hold instance property data. And so on.
    2) Store object schemas as in option 1, but store instances as XML in one table: one table for general instance info and one table with the instance XML.

    Please don't ask why/for what I need this. I just need to store custom objects, and the DB should work fast :)
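
    Option 1 is essentially the entity-attribute-value (EAV) pattern; a minimal sketch of its table shape (names and the set of typed value columns are illustrative):

        CREATE TABLE ObjectSchema (
            SchemaId   int PRIMARY KEY,
            Name       varchar(100) NOT NULL
        );

        CREATE TABLE SchemaProperty (
            PropertyId int PRIMARY KEY,
            SchemaId   int NOT NULL REFERENCES ObjectSchema(SchemaId),
            Name       varchar(100) NOT NULL,
            TypeName   varchar(20)  NOT NULL   -- 'int', 'string', 'date', ...
        );

        CREATE TABLE ObjectInstance (
            InstanceId int PRIMARY KEY,
            SchemaId   int NOT NULL REFERENCES ObjectSchema(SchemaId)
        );

        CREATE TABLE InstanceValue (
            InstanceId  int NOT NULL REFERENCES ObjectInstance(InstanceId),
            PropertyId  int NOT NULL REFERENCES SchemaProperty(PropertyId),
            IntValue    int NULL,
            StringValue varchar(4000) NULL,    -- one nullable column per supported type
            PRIMARY KEY (InstanceId, PropertyId)
        );

    The trade-off against option 2 is the usual one: EAV keeps values queryable and indexable, while the XML column is simpler to write but pushes parsing onto every reader.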

    Read the article

  • Is there an autorelease pool in class methods?

    - by mystify
    I have a class method which creates a UIImage, like this:

        + (UIImage*)imageWithFileName:(NSString*)imgFile {
            UIImage *img = nil;
            NSBundle *appBundle = [NSBundle mainBundle];
            NSString *resourcePath = [appBundle pathForResource:imgFile ofType:nil];
            if (resourcePath != nil) {
                NSURL *imageURL = [NSURL fileURLWithPath:resourcePath];
                NSData *data = [[NSData alloc] initWithContentsOfURL:imageURL];
                img = [UIImage imageWithData:data]; // should be autoreleased!!
                [data release];
            }
            return img;
        }

    However, when I use this, the image data is NEVER freed. There is definitely a memory bug here, although I didn't break any memory management rule I am aware of. My guess is that because this is a class method called from instance methods, there is no active autorelease pool in place, or there is one that only gets drained when I quit the app. Could that be right?
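
    Class methods use the same per-thread autorelease pool stack as instance methods, so a pool is there; what matters is when it drains (at the end of the current run-loop iteration on the main thread). If the caller loops over many images without returning to the run loop, the autoreleased data piles up until then. A local pool bounds the lifetime (a sketch; MyClass and fileNames are hypothetical):

        for (NSUInteger i = 0; i < [fileNames count]; i++) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            UIImage *img = [MyClass imageWithFileName:[fileNames objectAtIndex:i]];
            // ... use or copy what you need from img here ...
            [pool drain]; // img and its underlying data are released here
        }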

    Read the article

  • How to know if the website being scraped has changed?

    - by Lost_in_code
    I'm using PHP to scrape a website and collect some data. It's all done without regex; I'm using PHP's explode() function to find particular HTML tags instead. It is possible that if the structure of the website changes (CSS, HTML), the scraper will collect wrong data. So the question is: how do I know if the HTML structure has changed? How can I identify this before storing any data in my database, to avoid storing wrong data?
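
    One pragmatic defence (a sketch of the idea, not a complete solution): fingerprint only the structural landmarks the scraper depends on - hashing the whole page would change on every content update - and refuse to store anything when the fingerprint moves:

        <?php
        // Markers this hypothetical scraper relies on.
        function structure_fingerprint($html) {
            $landmarks = array();
            foreach (array('<table id="prices"', '<div class="product-name"') as $marker) {
                $landmarks[] = $marker . ':' . substr_count($html, $marker);
            }
            return md5(implode('|', $landmarks));
        }

        $html = file_get_contents('http://example.com/page');
        $known = 'recorded-fingerprint-here'; // saved when the scraper last worked
        if (structure_fingerprint($html) !== $known) {
            die('Page structure changed - not storing data.');
        }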

    Read the article

  • Multiple calls to realloc() seem to cause heap corruption

    - by Windindeed
    What's the problem with this code? It crashes every time. One time it's a failed assertion, "_ASSERTE(_CrtIsValidHeapPointer(pUserData));"; other times it's just a "heap corruption" error. Changing the buffer size affects the issue in strange ways - sometimes it crashes on the realloc, other times on the free. I have debugged this code many times, and there is nothing abnormal about the pointers.

        char buf[2000];
        char *data = (char*)malloc(sizeof(buf));
        unsigned int size = sizeof(buf);
        for (unsigned int i = 0; i < 5; ++i)
        {
            char *ptr = data + size;
            size += sizeof(buf);
            char *tmp = (char*)realloc(data, size);
            if (!tmp) {
                std::cout << "Oh no..";
                break;
            }
            data = tmp;
            memcpy(ptr, buf, sizeof(buf));
        }
        free(data);

    Thanks!
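
    For the record, the bug (an observation on the posted code, with a corrected sketch): ptr is computed from data before realloc(). When realloc moves the block, ptr still points into the old, freed allocation, so the memcpy scribbles on freed heap memory - exactly what the CRT assertions then trip over. Taking the destination from the new block fixes it:

        char buf[2000];
        unsigned int size = sizeof(buf);
        char *data = (char*)malloc(size);

        for (unsigned int i = 0; i < 5; ++i)
        {
            char *tmp = (char*)realloc(data, size + sizeof(buf));
            if (!tmp) {
                std::cout << "Oh no..";
                break;
            }
            data = tmp;
            memcpy(data + size, buf, sizeof(buf)); // offset into the *new* block
            size += sizeof(buf);
        }
        free(data);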

    Read the article

  • Updating only the date part of a datetime in SQL Server 2000

    - by user294146
    Hi experts, I have data in a table like the following:

        col1                col2                col3
        ------------------  ------------------  ------------------
        6/5/2010 18:05:00   6/2/2010 10:05:00   NULL
        6/8/2010 15:05:00   6/3/2010 10:45:00   6/5/2010 11:05:00
        6/3/2010 15:05:00   NULL                6/7/2010 12:05:00
        6/1/2010 15:05:00   6/3/2010 10:45:00   6/1/2010 14:05:00

    My requirement is to update the date part of these three columns to a single date without disturbing the time. For example, I want to update the table data with 6/1/2010 wherever the field is not null. Please let me know the query for updating the table data. Thanks & regards, murali
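
    A sketch that works on SQL Server 2000 (the table name myTable is hypothetical; '20100601' is the example's 6/1/2010): DATEDIFF counts the whole days between each value and the target date, and DATEADD shifts by that many days, which moves the date part while leaving the time of day untouched. NULLs propagate through both functions, so NULL fields simply stay NULL:

        UPDATE myTable
        SET col1 = DATEADD(day, DATEDIFF(day, col1, '20100601'), col1),
            col2 = DATEADD(day, DATEDIFF(day, col2, '20100601'), col2),
            col3 = DATEADD(day, DATEDIFF(day, col3, '20100601'), col3);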

    Read the article

  • jQuery is getting old values from the database

    - by sansknwoledge
    Hi, in my JSP page I have a jQuery handler which passes values to a servlet that returns a drop-down list as output. The JSP then does some updating, so certain values that were in the drop-down list should no longer be there when it is repopulated, but that is not happening. My jQuery code is:

        $("#cbocode").change(function() {
            var cdid = $("#cbocode option:selected");
            $.get("trnDC?caseNo=20&cdid=" + cdid.text(), function(data) {
                $("#divinstrument").html(data);
            });
        });

    and the servlet code is:

        case 20: { // jQuery call
            String cdid = (String) request.getParameter("cdid");
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery(
                "select instrumentid from mstinstrument where codeid='" + cdid
                + "' and rec_Status='A' and statusid='U' and Agentid='METLAB'");
            if (!rs.wasNull()) {
                //List data=new ArrayList();
                String v = "<select id=cboinstr>";
                while (rs.next()) {
                    // data.add(rs.getString("vend_code"));
                    v += "<option>" + rs.getString("instrumentid").toString() + "</option>";
                }
                v += "</select>";
                response.setContentType("text/html");
                PrintWriter out = response.getWriter();
                out.print(v);
            } else {
                response.setContentType("text/html");
                PrintWriter out = response.getWriter();
                out.print("no data found");
            }
        }

    Where am I going wrong?
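
    One likely cause (an assumption, given the symptom of stale results): browsers - Internet Explorer in particular - are happy to serve repeated identical GET URLs from cache, so the servlet may never be hit after the first call. Disabling caching for the request forces a round trip:

        $("#cbocode").change(function() {
            var cdid = $("#cbocode option:selected");
            $.ajax({
                type: "GET",
                url: "trnDC",
                data: { caseNo: 20, cdid: cdid.text() },
                cache: false, // jQuery appends a timestamp parameter to defeat the cache
                success: function(data) {
                    $("#divinstrument").html(data);
                }
            });
        });

    Separately, ResultSet.wasNull() only reports on the last column read; testing whether rs.next() returned any row is the usual way to detect an empty result.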

    Read the article

  • Combine MD5 hashes of multiple files

    - by user685869
    I have 7 files that I'm generating MD5 hashes for. The hashes are used to ensure that a remote copy of the data store is identical to the local copy. Unfortunately, the link between the two copies of the data is mind-numbingly slow. Changes to the data are very rare, but I have a requirement that the data be synchronized at all times (or as soon as possible). Rather than passing 7 different MD5 hashes across my (extremely slow) communications link, I'd like to generate the hash for each file and then combine these hashes into a single hash which I can transfer and then re-calculate/use for comparison on the remote side. If the "combined hash" differs, then I'd start sending the 7 individual hashes to determine exactly which file(s) have changed. For example, here are the MD5 hashes for the 7 files as of last week:

        0709d609d69385255c496436eb50402c
        709465a74411bd596595c7b9b158ae6a
        4ab657320ef33e3d5eb498e4c13d41b7
        3b49c6ab199994fd776bb63761414e72
        0fc28c5a010fc3c06c0c930c88e31a15
        c4ecd214662cac5aae0e53f6f252bf0e
        8b086431e43148a2c2d943ba30d31cc6

    I'd like to combine these hashes such that I get a single unique value (perhaps another MD5 hash?) that I can send to the remote system. On the remote system, I'd perform the same calculation to determine whether the data as a whole has changed. If it has, I'd start sending the individual hashes, etc. The most important factor is that my "combined hash" be short enough to use less bandwidth than sending all 7 hashes in the first place. I thought of writing the 7 MD5 hashes to a file and then hashing that file, but is there a better way?
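
    Hashing the concatenation of the per-file hashes does exactly this without the temporary file (a sketch in Python, assuming both sides agree on a fixed file order - reordering would change the combined value):

        import hashlib

        # Per-file MD5 digests, in the same agreed order on both ends.
        hashes = [
            "0709d609d69385255c496436eb50402c",
            "709465a74411bd596595c7b9b158ae6a",
            "4ab657320ef33e3d5eb498e4c13d41b7",
            "3b49c6ab199994fd776bb63761414e72",
            "0fc28c5a010fc3c06c0c930c88e31a15",
            "c4ecd214662cac5aae0e53f6f252bf0e",
            "8b086431e43148a2c2d943ba30d31cc6",
        ]

        combined = hashlib.md5("".join(hashes).encode("ascii")).hexdigest()
        print(combined)  # one 32-character value to send across the link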

    Read the article

  • Should I create a new extension for an xml file?

    - by macleojw
    I'm working with a data model stored in XML files. I want to create some metadata for the model and store it alongside, but would like to be able to distinguish between the two: the data model is imported into some software from time to time, and we don't want it to try to import the metadata files. To get around this, I've been thinking of creating a new extension for the metadata XML files (say .mdml). Is this good practice?

    Read the article

  • How much RAM used by Python dict or list?

    - by Who8MyLunch
    My problem: I am writing a simple Python tool to help me visualize my data as a function of many parameters. Each change in parameters involves a non-trivial amount of time, so I would like to cache each step's resulting imagery and supporting data in a dictionary. But then I worry that this dictionary could grow too large over time. Most of my data is in the form of NumPy arrays. My question: how would one go about computing the total number of bytes used by a Python dictionary, when the dictionary itself may contain lists and other dictionaries, each of which contains data stored in NumPy arrays? Ideas?
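
    A rough way to total it up (a sketch; sys.getsizeof reports only a container's own overhead, so you have to recurse, and ndarray.nbytes supplies the array buffer size that getsizeof misses):

        import sys
        import numpy as np

        def total_bytes(obj, seen=None):
            """Approximate deep size in bytes of nested dicts/lists/ndarrays."""
            if seen is None:
                seen = set()
            if id(obj) in seen:               # don't double-count shared objects
                return 0
            seen.add(id(obj))
            if isinstance(obj, np.ndarray):
                return obj.nbytes             # size of the array's data buffer
            size = sys.getsizeof(obj)
            if isinstance(obj, dict):
                size += sum(total_bytes(k, seen) + total_bytes(v, seen)
                            for k, v in obj.items())
            elif isinstance(obj, (list, tuple, set)):
                size += sum(total_bytes(x, seen) for x in obj)
            return size

        cache = {"run1": {"image": np.zeros((480, 640, 3)), "params": [1.0, 2.0]}}
        print(total_bytes(cache))             # ~7.4 MB of array data plus container overhead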

    Read the article
