Search Results

Search found 58440 results on 2338 pages for 'data cleansing'.

  • Client-side or server-side processing?

    - by Nick
    So I'm new to dynamic web design (my sites have been mostly static with some PHP), and I'm trying to learn the latest technologies in web development (which seems to be AJAX). My question: if you're transferring a lot of data, is it better to construct the page on the server and "push" it to the user, or to "pull" just the data needed and build the HTML around it on the client side using JavaScript? More specifically: I'm using CodeIgniter as my PHP framework and jQuery for JavaScript. If I wanted to display a table of data to the user (dynamically), would it be better to format the HTML using CodeIgniter (create the tables, add CSS classes to elements, etc.), or to serve the raw data as JSON and build it into a table with jQuery? My intuition says to do it client-side, since that would save bandwidth and the page would probably load quicker with the new JavaScript optimizations all these browsers have now. However, the site would then break for anyone not running JavaScript... Thanks for the help.
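
    For reference, a minimal sketch of the client-side approach, assuming a hypothetical data.php endpoint that returns a JSON array and a #report container (both names invented for illustration):

        // Fetch rows as JSON and build the table on the client (jQuery).
        $.getJSON('data.php', function (rows) {
            var $table = $('<table class="report"></table>');
            $.each(rows, function (i, row) {
                $('<tr></tr>')
                    .append($('<td></td>').text(row.name))   // .text() escapes HTML
                    .append($('<td></td>').text(row.total))
                    .appendTo($table);
            });
            $('#report').append($table);
        });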

  • CRC for cross platform applications

    - by Kangkan
    I wish to use a common CRC routine in a VB.NET or C# application as well as in a C/Linux application. I have one C/Linux application that interacts with a web service (C#) and also a web application (VB.NET). For some data, I want to attach a CRC to the data itself (say, a file) on the .NET side and check the integrity of the data (verify the CRC) on the client, and vice versa. Can somebody please guide me?
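
    One workable route, sketched here in C# under the assumption that any agreed-upon CRC-32 will do: a plain table-driven CRC-32 with the standard IEEE polynomial (0xEDB88320), which is straightforward to mirror line for line in C so both sides produce identical checksums.

        static class Crc32
        {
            static readonly uint[] Table = BuildTable();

            static uint[] BuildTable()
            {
                var table = new uint[256];
                for (uint i = 0; i < 256; i++)
                {
                    uint c = i;
                    for (int k = 0; k < 8; k++)
                        c = (c & 1) != 0 ? 0xEDB88320u ^ (c >> 1) : c >> 1;
                    table[i] = c;
                }
                return table;
            }

            // Standard CRC-32: init 0xFFFFFFFF, reflected table, final XOR.
            public static uint Compute(byte[] data)
            {
                uint crc = 0xFFFFFFFFu;
                foreach (byte b in data)
                    crc = Table[(crc ^ b) & 0xFF] ^ (crc >> 8);
                return crc ^ 0xFFFFFFFFu;
            }
        }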

  • How to join results from two different sets in LINQ?

    - by Inez
    Hi, I get some data about customers in my database with this method:

        public List<KlientViewModel> GetListOfKlientViewModel()
        {
            List<KlientViewModel> list = _klientRepository.List().Select(k => new KlientViewModel
            {
                Id = k.Id,
                Imie = k.Imie,
                Nazwisko = k.Nazwisko,
                Nazwa = k.Nazwa,
                SposobPlatnosci = k.SposobPlatnosci,
            }).ToList();
            return list;
        }

    I also have another method, which computes the value of an extra field on KlientViewModel, called 'Naleznosci', based on customer ids. It looks like this:

        public Dictionary<int, decimal> GetNaleznosc(List<int> klientIds)
        {
            return klientIds.ToDictionary(klientId => klientId,
                klientId => (from z in _zdarzenieRepository.List()
                             from c in z.Klient.Cennik
                             where z.TypZdarzenia == (int) TypyZdarzen.Sprzedaz
                                   && z.IdTowar == c.IdTowar
                                   && z.Sprzedaz.Data >= c.Od
                                   && (z.Sprzedaz.Data < c.Do || c.Do == null)
                                   && z.Klient.Id == klientId
                             select z.Ilosc * (z.Kwota > 0 ? z.Kwota : c.Cena)).Sum() ?? 0);
        }

    So what I want to do is join the data from GetNaleznosc with the data generated by GetListOfKlientViewModel. I call GetNaleznosc like this:

        GetNaleznosc(list.Select(k => k.Id).ToList())

    but I don't know what to do next.
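
    One straightforward way to finish, as a hedged sketch (it assumes KlientViewModel.Naleznosci is a settable decimal property):

        var naleznosci = GetNaleznosc(list.Select(k => k.Id).ToList());
        foreach (var klient in list)
        {
            // Look up each customer's computed value by id; fall back to 0
            // when the dictionary has no entry for that id.
            decimal value;
            klient.Naleznosci = naleznosci.TryGetValue(klient.Id, out value) ? value : 0;
        }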

  • C# method generic params parameter bug?

    - by Mike M
    Hey, it appears to me as though there is a bug/inconsistency in the C# compiler. This works fine (the first method gets called):

        public void SomeMethod(string message, object data);
        public void SomeMethod(string message, params object[] data);
        // ....
        SomeMethod("woohoo", item);

    Yet this causes a "The call is ambiguous between the following methods" error:

        public void SomeMethod(string message, T data);
        public void SomeMethod(string message, params T[] data);
        // ....
        SomeMethod("woohoo", (T)item);

    I could just dump the first method entirely, but since this is a very performance-sensitive library and the first method will be used about 75% of the time, I would rather not always wrap things in an array and instantiate an iterator to go over a foreach when there is only one item. Splitting them into differently named methods would be messy at best, IMO. Thoughts?

  • Caching roles in a cookie

    - by AspOnMyNet
    1) a) I assume roles are cached (by setting Roles.CacheRolesInCookie) only for the current user?
    b) Besides Roles.GetRolesForUser(_currentUser), are there any other methods that read role information for the current user from the role cookie, and thus won't have to connect to the database?
    2) a) I assume that normally the role cookie gets updated only when the current user is added to or removed from a role, or when the cookie expires?
    b) The documentation says: "Because role names can be cached apart from the data source, it is possible that changes to role management at the data source would not be reflected in the cached values. In this case, the user must close and re-open their browser to clear the cached cookie value." How exactly could the role cookie get cached apart from the data source?
    3) If the number of roles cached in the role cookie equals the value of Roles.MaxCachedResults, then I assume that when Roles.GetRolesForUser(_currentUser) gets called, it will need to check the DB to see whether the user is also a member of any additional roles?
    Thanks
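
    For context, role caching is switched on via the roleManager element in web.config, roughly like this (the attribute values here are illustrative; ".ASPXROLES" is the default cookie name):

        <roleManager enabled="true"
                     cacheRolesInCookie="true"
                     cookieName=".ASPXROLES"
                     cookieTimeout="30"
                     maxCachedResults="25" />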

  • Fast, easy, and secure method to perform DB actions with GET

    - by rob - not a robber
    Hey all, this is a methods/best-practices question that I am sure has been addressed before, yet I can't find a solution based on the vague search terms I enter. I know starting off the question with "fast and easy" will probably draw out a few sighs, so my apologies. Here is the deal. I have a logged-in area where an ADMIN can do a whole host of POST operations to input data relating to their profile. The way I have the data structured is pretty distinct and well segmented in most tables, keyed to the ID of the admin. Now, I have one table into which one type of data from all ADMINs gets dumped, with each record differentiated by the ADMIN's unique ID. I was planning on letting the ADMIN remove these records by clicking on a link with a query string, obviously using GET. But since the query structure is right there in the link, any logged-in admin could manipulate the URL and delete a competitor's records. Is the only way to do this safely through POST, or should I pass through the session info, including the password, and validate it against the ADMIN ID that is requesting the delete? That is obviously much more work for me. As they said in the auto-repair business I used to work in, there are three ways to do a job: fast, good, and cheap, and you can only have two at a time. Fast and cheap will not be good. Good and cheap will not have fast turnaround. Fast and good will NOT be cheap. haha. I guess that applies here... you can never have fast, easy, and secure all at once ;) Thanks in advance...
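
    Whichever verb you settle on, the check that actually matters happens server-side: scope the delete to the admin ID stored in the session, never to anything from the URL. A minimal PHP sketch (the table and column names are invented, and $pdo is assumed to be an existing PDO connection):

        session_start();
        $adminId  = (int) $_SESSION['admin_id'];  // set once at login
        $recordId = (int) $_GET['record_id'];     // untrusted input

        // The WHERE clause ties the delete to the owner, so a tampered
        // record_id belonging to someone else simply matches zero rows.
        $stmt = $pdo->prepare(
            'DELETE FROM records WHERE id = :id AND admin_id = :admin');
        $stmt->execute(array(':id' => $recordId, ':admin' => $adminId));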

  • Understanding WebRequest

    - by Nai
    I found this snippet of code here that allows you to log into a website and get the response from the logged-in page. However, I'm having trouble understanding all the parts of the code. I've tried my best to fill in whatever I understand so far in the comments. Hope you guys can fill in the blanks for me. Thanks.

        string nick = "mrbean";
        string password = "12345";

        // this is the query data that gets posted to the website.
        // the query parameters 'nick' and 'password' must match the
        // input names of the form you're trying to log into. you can find
        // the input names by using Firebug and inspecting the text fields
        string postData = "nick=" + nick + "&password=" + password;

        // this puts the postData in a byte array with a specific encoding
        // Why must the data be in a byte array?
        byte[] data = Encoding.ASCII.GetBytes(postData);

        // this basically creates a request to the login page of the site you want to log into
        WebRequest request = WebRequest.Create("http://www.mrbeanandme.com/login/");

        // I'm guessing these parameters need to be set, but I don't know why?
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";
        request.ContentLength = data.Length;

        // this opens a stream for writing the post variables.
        // I'm not sure what a stream class does. need to do some reading into this.
        Stream stream = request.GetRequestStream();

        // you write the postData to the website and then close the connection?
        stream.Write(data, 0, data.Length);
        stream.Close();

        // this receives the response after the log in
        WebResponse response = request.GetResponse();
        stream = response.GetResponseStream();

        // I guess you need a stream reader to read a stream?
        StreamReader sr = new StreamReader(stream);

        // this outputs the response to the console and terminates the program
        Console.WriteLine(sr.ReadToEnd());
        Console.ReadLine();

  • How to split a very large database on SQL Server

    - by ken jackson
    I have a 90 GB SQL Server database that I want to make more manageable. It stores stock data for 50+ different stocks from 2009 and 2010, and each stock is a separate table. Some tables have hundreds of millions of rows, others have just a few million. What I want to do is somehow split the database so that I don't have a single database file that is 90 GB. I want to be able to somehow magically split the tables so that I can back up the 2009 data once and not have to keep including it every time I back up the entire database; however, I would like the 2009 data to be included whenever I run a query. Is partitioning the database the way to go? Will it do the above for me, or will I need some other solution? I researched partitioning, but I wasn't sure whether it would solve all my problems. I wasn't able to find anything that would tell me whether it would migrate preexisting data, or whether it only works for newly inserted data. Any help or pointers would be much appreciated. Thanks in advance, Ken
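
    For what it's worth, a hedged T-SQL sketch of partitioning by year (typically an Enterprise-edition feature; every object name below is invented). Rebuilding a table's clustered index on the partition scheme does redistribute the rows that already exist, and putting each year on its own filegroup is what makes "back up 2009 once" possible via filegroup backups:

        -- Split rows at Jan 1 2010: earlier rows go to fg2009, later to fg2010.
        CREATE PARTITION FUNCTION pfByYear (datetime)
            AS RANGE RIGHT FOR VALUES ('2010-01-01');

        CREATE PARTITION SCHEME psByYear
            AS PARTITION pfByYear TO (fg2009, fg2010);

        -- Assuming a clustered index of this name already exists, rebuilding
        -- it on the scheme moves the existing data; queries still see one
        -- logical table spanning both partitions.
        CREATE CLUSTERED INDEX cix_StockTicks
            ON dbo.StockTicks (trade_date)
            WITH (DROP_EXISTING = ON)
            ON psByYear (trade_date);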

  • ASP.Net Form Databound problems

    - by Jason Bitman
    I know there is a similar problem on this forum, but the solutions did not really work for me. I am populating form controls with fields from a few different data sources, and the data shows up great. I have an ImageButton control which has an OnClick event set to grab all of the data from the form. Unfortunately, when I click the button, it seems as though the page is reloading first, and THEN it executes the OnClick call. The data that was hand-entered or hard-coded can be read fine from the controls it was entered in, but anything that was pulled from a data source is not able to be read. Any ideas? This is the last hurdle in a project that I have been working on for 6 months. I want to go home... Thank you so much in advance. Jason
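
    A hedged guess at the usual culprit in cases like this: if the page rebinds the controls on every request, the postback wipes the databound values before the click handler runs, so binding only on the first request is the standard guard (the helper name here is invented):

        ' Bind the data sources once; on postback, let the posted values survive.
        Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
            If Not IsPostBack Then
                BindFormControls()  ' hypothetical helper that fills the controls
            End If
        End Sub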

  • How to parse phpDoc style comment block with php?

    - by Reveller
    Please consider the following code, with which I'm trying to parse only the first phpDoc-style comment (not using any other libraries) in a file (file contents put in the $data variable for testing purposes):

        $data = "
        /**
         * @file A lot of info about this file
         * Could even continue on the next line
         * @author [email protected]
         * @version 2010-05-01
         * @todo do stuff...
         */

        /**
         * Comment bij functie bar()
         * @param Array met dingen
         */
        function bar($baz) {
            echo $baz;
        }
        ";

        $data = trim(preg_replace('/\r?\n *\* */', ' ', $data));
        preg_match_all('/@([a-z]+)\s+(.*?)\s*(?=$|@[a-z]+\s)/s', $data, $matches);
        $info = array_combine($matches[1], $matches[2]);
        print_r($info);

    This almost works, except that everything after @todo (including the bar() comment block and code) is considered the value of @todo:

        Array
        (
            [file] => A lot of info about this file Could even continue on the next line
            [author] => [email protected]
            [version] => 2010-05-01
            [todo] => do stuff... / /** Comment bij functie bar()
            [param] => Array met dingen / function bar() { echo ; }
        )

    How does my code need to be altered so that only the first comment block is parsed? In other words, parsing should stop after the first "*/" encountered.
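
    One hedged way to get there: cut $data down to the first doc block before running the tag regex, so nothing after the first "*/" is ever seen by it:

        // Keep only the contents of the first /** ... */ block.
        if (preg_match('~/\*\*(.*?)\*/~s', $data, $m)) {
            $data = $m[1];
        }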

  • How to create a select SQL statement that would produce a "merged" dataset from two tables (Oracle DBMS)?

    - by Roman Kagan
    Here's my original question: merging two data sets. Unfortunately I omitted some intricacies that I'd like to elaborate on here. I have two tables, events_source_1 and events_source_2. I have to produce a data set from those tables into a resultant dataset (that I'd be able to insert into a third table, but that's irrelevant). events_source_1 contains historic event data, and I have to get the most recent event; for that I'm doing the following:

        select event_type, b, c, max(event_date), null next_event_date
        from events_source_1
        group by event_type, b, c, event_date, null

    events_source_2 contains future event data, and I have to do the following:

        select event_type, b, c, null event_date, next_event_date
        from events_source_2
        where b > sysdate;

    How do I put in an outer join statement to fill the void, i.e. when the same event_type, b, c is found in events_source_2, next_event_date will be filled with the first date found? Greatly appreciate your help in advance.
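
    A hedged sketch of the shape such a query could take (Oracle): turn each select into an inline view and full outer join them on the shared key, so a row survives even when the key exists in only one source. Taking min(next_event_date) is my guess at "the first date found":

        select nvl(h.event_type, f.event_type) as event_type,
               nvl(h.b, f.b)                   as b,
               nvl(h.c, f.c)                   as c,
               h.event_date,
               f.next_event_date
        from   (select event_type, b, c, max(event_date) as event_date
                from   events_source_1
                group  by event_type, b, c) h
        full outer join
               (select event_type, b, c, min(next_event_date) as next_event_date
                from   events_source_2
                where  b > sysdate
                group  by event_type, b, c) f
        on     h.event_type = f.event_type
           and h.b = f.b
           and h.c = f.c;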

  • Starting with ASP.NET MVC

    - by Josemalive
    Hello, at the moment I'm building a home page that has only one action, called Index(), which returns the view Index.ascx. This index page will be composed of the latest news and the latest registered users; I think creating two partial views is the best idea (this way I could reuse them in other views). On the other hand, I have a data access class that calls the database to get stuff (get the latest news, get the latest users, etc...). My question is simple: should I call this data access class in the Index() action of my HomeController, and add the data obtained to the ViewData? I think this Index() action shouldn't be responsible for passing this data to the partial views, right? Could you give me a hand? Am I messing things up too much? ;-) Thanks in advance. Best Regards. Jose
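
    For what it's worth, a hedged sketch of the conventional arrangement: the controller action gathers everything the view needs and hands it over, and the partials just render what they are given (the repository and key names are invented):

        public class HomeController : Controller
        {
            public ActionResult Index()
            {
                // The controller is the one place that talks to the data
                // access layer; the view and its partials stay passive.
                ViewData["LatestNews"]  = _repository.GetLatestNews(5);
                ViewData["LatestUsers"] = _repository.GetLatestUsers(5);
                return View();
            }
        }

    Index.ascx can then hand each list to its partial with Html.RenderPartial("LatestNews", ViewData["LatestNews"]), and the partials remain reusable from other views.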

  • Using OpenGL vertex buffers in C++

    - by Ren
    I've loaded a Wavefront .obj file and drawn it in immediate mode, and it works fine. I'm now trying to draw the same model with a vertex buffer, but I have a question. My model data is organized in the following structures:

        struct Vert { double x; double y; double z; };
        struct Norm { double x; double y; double z; };
        struct Texcoord { double u; double v; double w; };

        struct Face
        {
            unsigned int v[3];
            unsigned int n[3];
            unsigned int t[3];
        };

        struct Model
        {
            unsigned int vertNumber;
            unsigned int normNumber;
            unsigned int texcoordNumber;
            unsigned int faceNumber;
            Vert * vertArray;
            Norm * normArray;
            Texcoord * texcoordArray;
            Face * faceArray;
        };

    As it is now, I don't think there is any redundant data, since multiple Face structures can point to the same vertex, normal, or texture coordinate. When I make VBOs for the vertex positions, normals, and texture coordinates, and assign data to them with glBufferData, do I have to make arrays with redundant data so that they will all have the same number of elements in the same order? I'd like to know if there is a simpler way to fill the buffers with the way I already have the model's data organized.
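
    Since OpenGL draws from the attribute arrays in lockstep (one entry per vertex across all buffers), the usual answer is yes: the indexed .obj data gets expanded so the arrays line up. A hedged C++ sketch of that expansion, assuming the stored indices are already zero-based:

        #include <vector>

        // Flatten per-face indices into parallel, equally sized arrays:
        // three corners per face, one position/normal/texcoord per corner.
        void ExpandModel(const Model& m,
                         std::vector<float>& positions,
                         std::vector<float>& normals,
                         std::vector<float>& texcoords)
        {
            for (unsigned int f = 0; f < m.faceNumber; ++f) {
                for (int corner = 0; corner < 3; ++corner) {
                    const Vert& v = m.vertArray[m.faceArray[f].v[corner]];
                    const Norm& n = m.normArray[m.faceArray[f].n[corner]];
                    const Texcoord& t = m.texcoordArray[m.faceArray[f].t[corner]];
                    positions.push_back((float)v.x);
                    positions.push_back((float)v.y);
                    positions.push_back((float)v.z);
                    normals.push_back((float)n.x);
                    normals.push_back((float)n.y);
                    normals.push_back((float)n.z);
                    texcoords.push_back((float)t.u);
                    texcoords.push_back((float)t.v);
                }
            }
        }

    The alternative that avoids duplication, a single index buffer with glDrawElements, needs one index per unique (position, normal, texcoord) combination, so some expansion is generally unavoidable with .obj-style data.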

  • Search in static pages

    - by Shyju
    I have an ASP web application which has pages with static content as well as dynamic content (data from a database). I want to implement a search feature in the site. I can do this for the dynamic data easily by framing the select query based on the search keys and pulling data from the tables, but I would like to know how I can implement search over the static pages.
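
    One hedged, low-tech option for the static side: scan the page files on disk for the search term (a minimal C# sketch; the folder path and searchTerm variable are invented for illustration):

        using System;
        using System.Collections.Generic;
        using System.IO;

        // Naive full-text scan over the static pages folder.
        var results = new List<string>();
        foreach (string path in Directory.GetFiles(@"C:\site\static", "*.htm*"))
        {
            string text = File.ReadAllText(path);
            if (text.IndexOf(searchTerm, StringComparison.OrdinalIgnoreCase) >= 0)
                results.Add(path);  // this page contains the term
        }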

  • Stack Overflow problem transforming with a custom XSLT

    - by Flynn1179
    I've got a system that allows the user the option of providing their own XSLT to apply to some data that's been retrieved, as a means of specifying how that data should be presented. However, if the user includes code in the XSLT equivalent to:

        <xsl:template match="/">
          <xsl:element name="data">
            <xsl:apply-templates select="." />
          </xsl:element>
        </xsl:template>

    this causes .NET to recurse infinitely while trying to process it, and produces a stack overflow error. I need to be able to trap this before it crashes the app, as the retrieved data is occasionally quite time-consuming to obtain, and it is lost when this happens. Is there any way of doing this? I know it's theoretically possible to identify any occurrences of xsl:apply-templates with "." in the select attribute, but this isn't the only way an infinite recursion could happen; I need a way of trapping it generically.
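
    One hedged mitigation, given that a StackOverflowException cannot be caught in-process since .NET 2.0: run the untrusted transform in a separate worker process and treat a crash or timeout as a rejected stylesheet, so the retrieved data in the main app survives. A sketch (the runner executable and its arguments are invented):

        using System.Diagnostics;

        var psi = new ProcessStartInfo("XsltRunner.exe",
            "input.xml user.xslt output.xml") { UseShellExecute = false };

        using (var worker = Process.Start(psi))
        {
            if (!worker.WaitForExit(30000))
            {
                worker.Kill();  // runaway transform: reject the stylesheet
            }
            else if (worker.ExitCode != 0)
            {
                // worker died (e.g. stack overflow): reject, data survives
            }
        }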

  • AngularJS: download pdf file from the server

    - by Bartosz Bialecki
    I want to download a PDF file from the web server using $http. I use the code below, which mostly works, except that the file is saved as an HTML file; when I open it, it is opened as a PDF, but in the browser. I tested this on Chrome 36, Firefox 31 and Opera 23. This is my AngularJS code (based on this code):

        UserService.downloadInvoice(hash).success(function (data, status, headers) {
            var filename,
                octetStreamMime = "application/octet-stream",
                contentType;

            // Get the headers
            headers = headers();
            if (!filename) {
                filename = headers["x-filename"] || 'invoice.pdf';
            }

            // Determine the content type from the header or default to "application/octet-stream"
            contentType = headers["content-type"] || octetStreamMime;

            if (navigator.msSaveBlob) {
                var blob = new Blob([data], { type: contentType });
                navigator.msSaveBlob(blob, filename);
            } else {
                var urlCreator = window.URL || window.webkitURL || window.mozURL || window.msURL;
                if (urlCreator) {
                    // Try to use a download link
                    var link = document.createElement("a");
                    if ("download" in link) {
                        // Prepare a blob URL
                        var blob = new Blob([data], { type: contentType });
                        var url = urlCreator.createObjectURL(blob);
                        $window.saveAs(blob, filename);
                        return;
                        link.setAttribute("href", url);
                        link.setAttribute("download", filename);

                        // Simulate clicking the download link
                        var event = document.createEvent('MouseEvents');
                        event.initMouseEvent('click', true, true, window, 1, 0, 0, 0, 0, false, false, false, false, 0, null);
                        link.dispatchEvent(event);
                    } else {
                        // Prepare a blob URL
                        // Use application/octet-stream when using window.location to force download
                        var blob = new Blob([data], { type: octetStreamMime });
                        var url = urlCreator.createObjectURL(blob);
                        $window.location = url;
                    }
                }
            }
        }).error(function (response) {
            $log.debug(response);
        });

    On my server I use Laravel, and this is my response:

        $headers = array(
            'Content-Type' => $contentType,
            'Content-Length' => strlen($data),
            'Content-Disposition' => $contentDisposition
        );
        return Response::make($data, 200, $headers);

    where $contentType is application/pdf and $contentDisposition is 'attachment; filename="' . basename($fileName) . '"' (with $fileName being e.g. 59005-57123123.PDF). My response headers:

        Cache-Control:no-cache
        Connection:Keep-Alive
        Content-Disposition:attachment; filename="159005-57123123.PDF"
        Content-Length:249403
        Content-Type:application/pdf
        Date:Mon, 25 Aug 2014 15:56:43 GMT
        Keep-Alive:timeout=3, max=1

    What am I doing wrong?
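
    A hedged guess at the root cause: unless the request asks for binary data explicitly, Angular delivers the body as a JavaScript string, and wrapping that string in a Blob corrupts the PDF bytes. If UserService.downloadInvoice uses $http internally, setting the response type there may be all that's missing (the URL is invented):

        // Inside UserService (hypothetical): request the body as raw bytes.
        return $http.get('/invoices/' + hash, { responseType: 'arraybuffer' });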

  • Commercial web application--scalable database design

    - by Rob Campbell
    I'm designing a set of web apps to track scientific laboratory data. Each laboratory has several members, each of whom will access both their own data and that of their laboratory as a whole. Many typical queries will thus be expected to return records of multiple members (e.g. my mouse, Joe's mouse and Sally's mouse). I think I have the database fairly well normalized. I'm now wondering how to ensure that users can efficiently access both their own data and their lab's data set when it is mixed in among (hopefully) a whole ton of records from other labs. What I've come up with so far is that most tables will end with two fields: user_id and labgroup_id. The WHERE clause of any SELECT statement will include the appropriate reference to one of the id fields ("...WHERE labgroup_id=n..." or "...WHERE user_id=n..."). My questions are: Is this an approach that will scale to 10^6 or more records? If so, what's the best way to use these fields in a query so that it most efficiently searches the relevant subset of the database? e.g. Should the first step in querying be to create a temporary table containing just the labgroup's data? Or will indexing on some combination of the id, user_id, and labgroup_id fields be sufficient at that scale? I thank any responders very much in advance.
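
    For scale, a hedged data point: a few million rows is comfortable territory for ordinary B-tree indexes, and a temporary table per query should not be necessary; indexing each filter column lets the engine touch only the matching subset (the table name below is invented):

        -- One index per access path; the optimizer picks whichever matches
        -- the WHERE clause, so a query scans only that lab's or user's rows.
        CREATE INDEX idx_specimen_labgroup ON specimen (labgroup_id);
        CREATE INDEX idx_specimen_user     ON specimen (user_id);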

  • Advice/suggestions for my first project PHP Classes

    - by Philip
    Hi guys, any advice is welcome! I have a very limited understanding of PHP classes, but below is my starting point for the route I would like to take. The code is a reflection of what I see in my head and how I would like to go about business. Does my code even look OK, or am I way off base? What are your thoughts? How would you go about achieving such a task as form, validate, insert query, send mail, return messages and errors? Please try and keep your answers simple enough for me to digest, as for me it's about understanding what's going on and not just a copy/paste job. Kindest regards, Phil. Note: this is a base structure only, no complete code added.

        <?php
        //=======================================
        //class.logging.php
        //=======================================
        class logging
        {
            public $data = array();
            public $errors = array();

            function __construct()
            {
                array_pop($_POST);
                $this->data = ($this->_logging) ? is_isset(filterStr($_POST) : '';
                foreach ($this->data as $key => $value) {
                    $this->data[$key] = $value;
                }
                //print_r($this->data); de-bugging
            }

            public function is_isset($str)
            {
                if (isset($str)) ? true : false;
            }

            public function filterStr($str)
            {
                return preg_match(do somthing, $str);
            }

            public function validate_post()
            {
                try {
                    if (!is_numeric($data['cardID'])) ? throw new Exception('CardID must be numeric!') : continue;
                } catch (Exception $e) {
                    return $errors = $e->getCode();
                }
            }

            public function showErrors()
            {
                foreach ($errors as $error => $err) {
                    print('<div class="notok"></div><br />');
                }
            }

            public function insertQ()
            {
                $query = "";
            }
        }

        //=======================================
        //Usercp.php
        //=======================================
        if (isset($_GET['mode'])) {
            $mode = $_GET['mode'];
        } else {
            $mode = 'usercp';
        }

        switch ($mode) {
            case 'usercp':
                echo 'Welcome to the User Control Panel';
                break;

            case 'logging':
                require_once 'class.logging.php';
                $logger = new logging();
                if (isset($_POST['submit']) {
                    if ($logger->validate_post === true) {
                        $logger->insertQ();
                        require_once '/scripts/PHPMailer/class.phpmailer.php';
                        $mailer = new PHPMailer();
                        $mailer->PHPMailer();
                    } else {
                        echo ''.$logger->showErrors.'';
                    }
                } else {
                    echo '
                    <form action="'.$_SERVER['PHP_SELF'].'?mode=logging" method="post">
                    </form>
                    ';
                }
                break;

            case 'user_logout':
                // do somthing
                break;

            case 'user_settings':
                // do somthing
                break;
        ?>

  • Can someone explain/annotate this Ruby snippet with comments?

    - by Ronnie
    Could someone please add comments to this code? Or, alternatively, what would be the pseudocode equivalent of this Ruby code? It seems simple enough, but I just don't know enough Ruby to convert this to PHP.

        data = Hash.new({})
        mysql_results.each { |r| data[r['year']][r['week']] = r['count'] }

        (year_low..year_high).each do |year|
          (1..52).each do |week|
            puts "#{year} #{week} #{data[year][week]}"
          end
        end

    Any help whatsoever would be really appreciated. Thanks!
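
    A hedged PHP reading of the snippet: build a year => week => count map from the result rows, then print a line for every (year, week) pair, blank where no count exists. (Strictly, Hash.new({}) makes every missing year share one default hash, a classic Ruby gotcha, so the PHP below follows the apparent intent of per-year counts rather than reproducing that quirk.)

        <?php
        // year => (week => count), filled from the query results
        $data = array();
        foreach ($mysql_results as $r) {
            $data[$r['year']][$r['week']] = $r['count'];
        }

        // print "year week count" for every week of every year in range
        for ($year = $year_low; $year <= $year_high; $year++) {
            for ($week = 1; $week <= 52; $week++) {
                $count = isset($data[$year][$week]) ? $data[$year][$week] : '';
                echo "$year $week $count\n";
            }
        }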

  • How do you set the "global delimiter" in Excel using VBA (or unicorns)?

    - by DanM
    I've noticed that if I use the text-to-columns feature with a comma as the delimiter, any comma-delimited data I paste into Excel after that is automatically split into columns. This makes me think Excel must have some kind of global delimiter setting. If this is true, how would I set this global delimiter using Excel VBA? Is it possible to do this directly, or do I need to "trick" Excel by doing a text-to-columns conversion on some junk data and then deleting the data? My ultimate goal is to be able to paste in a bunch of data from different files using a macro, and have Excel automatically split it into columns according to the delimiter I set.
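
    For what it's worth, the "trick" variant is easy to script. A hedged VBA sketch that runs TextToColumns once on throwaway data with the wanted delimiter, which also primes the delimiter Excel reuses for later pastes in the same session:

        Sub PrimeCommaDelimiter()
            With Worksheets(1).Range("A1")
                .Value = "x,y"                      ' throwaway comma-delimited text
                .TextToColumns Destination:=.Cells(1), _
                    DataType:=xlDelimited, Comma:=True
                .Resize(1, 2).ClearContents         ' remove the junk afterwards
            End With
        End Sub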

  • PIC C - Sending 200 values over USB, but it only sends 25 of them...

    - by Adam
    I have a PIC18F4455 microcontroller which I am trying to use to send 200 values over USB. Basically I am using a for loop and a printf statement to print the values to the USB output stream. However, when the code executes, I see in my serial port monitor that it only sends the first 25 values and then stops. My PIC C code is below. It will send out the 25th value (and the comma), but nothing after that, and no newline character. I'm trying to get it to send all the values, then a newline character at the end. I am sending them all as characters because I can convert them on the PC side.

        //print #3
        for (i = 0; i <= 199; i++) {
            if (data[i] == '\0' || data[i] == '\n') {
                data[i]++;
            }
        }
        for (i = 0; i < 199; i++) {
            printf(usb_cdc_putc, "%c,", data[i]);
        }
        printf(usb_cdc_putc, "%c\n", data[199]);

  • Object reference not set to an instance of an object

    - by MBTHQ
    Can anyone help with the following code? I'm trying to get data from a database column into the DataGridView... I'm getting the error on this line:

        Dim sql_1 As String = "SELECT * FROM item where item_id = '" + DataGridView_stockout.CurrentCell.Value.ToString() + "'"

    Here is the full handler:

        Private Sub DataGridView_stockout_CellMouseClick(ByVal sender As Object, ByVal e As System.Windows.Forms.DataGridViewCellMouseEventArgs) Handles DataGridView_stockout.CellMouseClick
            Dim i As Integer = Stock_checkDataSet1.Tables(0).Rows.Count > 0
            Dim thiscur_stok As New System.Data.SqlClient.SqlConnection("Data Source=MBTHQ\SQLEXPRESS;Initial Catalog=stock_check;Integrated Security=True")

            ' Sql Query
            Dim sql_1 As String = "SELECT * FROM item where item_id = '" + DataGridView_stockout.CurrentCell.Value.ToString() + "'"

            ' Create Data Adapter
            Dim da_1 As New SqlDataAdapter(sql_1, thiscur_stok)

            ' Fill Dataset and Get Data Table
            da_1.Fill(Stock_checkDataSet1, "item")
            Dim dt_1 As DataTable = Stock_checkDataSet1.Tables("item")

            If i >= DataGridView_stockout.Rows.Count Then
                'MessageBox.Show("Sorry, DataGridView_stockout doesn't any row at index " & i.ToString())
                Exit Sub
            End If
            If 1 >= Stock_checkDataSet1.Tables.Count Then
                'MessageBox.Show("Sorry, Stock_checkDataSet1 doesn't any table at index 1")
                Exit Sub
            End If
            If i >= Stock_checkDataSet1.Tables(1).Rows.Count Then
                'MessageBox.Show("Sorry, Stock_checkDataSet1.Tables(1) doesn't any row at index " & i.ToString())
                Exit Sub
            End If
            If Not Stock_checkDataSet1.Tables(1).Columns.Contains("os") Then
                'MessageBox.Show("Sorry, Stock_checkDataSet1.Tables(1) doesn't any column named 'os'")
                Exit Sub
            End If

            'DataGridView_stockout.Item("cs_stockout", i).Value = Stock_checkDataSet1.Tables(0).Rows(i).Item("os")
            Dim ab As String = Stock_checkDataSet1.Tables(0).Rows(i)(0).ToString()
        End Sub

    I keep getting the error saying "Object reference not set to an instance of an object". I don't know where I'm going wrong. Help really appreciated!!
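
    A hedged guess at the NullReferenceException: in a CellMouseClick handler, CurrentCell (or its Value) can legitimately be Nothing, for example when the click lands on a column header or an empty cell, so it may help to guard before building the SQL:

        ' Bail out early when there is no usable cell under the click.
        If DataGridView_stockout.CurrentCell Is Nothing _
           OrElse DataGridView_stockout.CurrentCell.Value Is Nothing Then
            Exit Sub
        End If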

  • Entity Framework 4 and SQL Compact 4: How to generate database?

    - by David Veeneman
    I am developing an app with Entity Framework 4 and SQL Compact 4, using a model-first approach. I have created my EDM, and now I want to generate a SQL Compact 4.0 database to act as a data store for the model. I bring up the Generate Database Wizard and click the New Connection button to create a connection for the generated file. The Choose Data Source dialog appears, but SQL Compact 4.0 is not listed in the list of available data sources. I am running VS 2010 SP1 (beta) and I have installed the VS 2010 Tools for SQL Compact 4.0. I can create a SQL Compact 4.0 data connection from the Server Explorer; it is only in the Generate Database Wizard that the 4.0 option doesn't appear. BTW, my SQL Compact 4.0 installation does include System.Data.SqlServerCe.Entity.dll, so I should have the SQL Compact components I need. Am I doing something incorrectly, or is this a bug? Does anyone have a fix or a workaround? Thanks for your help.

  • PayPal custom confirmation page

    - by krike
    I have a deposit page where people can deposit money into their accounts, but I would like to extend this with other payment methods, and I also don't want certain information in the form to be visible to people (those who know how to manipulate form data using Firebug). So I want to submit the form to a confirmation page where the data is processed, where new data such as the PayPal e-mail and the IPN link it needs to be sent to is added, and only then redirect to PayPal. Is there PHP code I can use to submit the form immediately once the data is processed, or should this be done with a meta tag or something like that?
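
    One hedged pattern for that last hop: after processing, output a hidden form containing only the server-assembled PayPal Standard fields and submit it automatically, so nothing sensitive ever sits in the user-editable form (the PHP variables here are invented):

        <form id="pp" action="https://www.paypal.com/cgi-bin/webscr" method="post">
          <input type="hidden" name="cmd" value="_xclick">
          <input type="hidden" name="business" value="<?php echo $paypal_email; ?>">
          <input type="hidden" name="item_name" value="Account deposit">
          <input type="hidden" name="amount" value="<?php echo $amount; ?>">
          <input type="hidden" name="notify_url" value="<?php echo $ipn_url; ?>">
        </form>
        <script type="text/javascript">document.getElementById("pp").submit();</script>

    A <noscript> fallback with a visible submit button covers users without JavaScript.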
