Search Results

Search found 1402 results on 57 pages for 'dataset'.

  • XPath expression exception

    - by fftoolbar
    Hi, I am not really sure if I am doing this right with an XPath expression. I am trying to locate a piece of text in the DOM, and that text is assigned to a variable. The text is stored in SQLite; I have retrieved it and I am trying to locate it on the web page which actually contains it. So I have the following code: var searchText = dataset[x]['selectedText']; alert(dataset[x]['selectedText']); var res = googbar_frames[0].contentDocument.evaluate("//*[.=searchText]",googbar_frames[0].contentDocument.body,null,XPathResult.ANY_TYPE,null); alert(res.snapshotLength); And I get the following error (twice): Error: Permission denied for <http://en.wikipedia.org> to call method XPathException.toString on <>. Have I got the expression correct? I am trying to look for the text in the DOM. Or am I going wrong somewhere? Cheers

  • error while reading Excel sheet

    - by Lalit
    Hi, I have this code to read an Excel sheet in C#: DataTable dtChildrenData = new DataTable(); OdbcConnection oConn = null; try { if (File.Exists(strSheetPath)) { oConn = new OdbcConnection(); oConn.ConnectionString = @"DSN=Excel Files;DBQ=" + strSheetPath + @";DriverId=1046;FIL=excel 12.0;MaxBufferSize=2048;PageTimeout=5;"; OdbcCommand oComm = new OdbcCommand(); oComm.Connection = oConn; oComm.CommandText = "Select * From [Sheet1$]"; DataSet ds = new DataSet(); OdbcDataAdapter oAdapter = new OdbcDataAdapter(oComm); oConn.Open(); oAdapter.Fill(ds); dtChildrenData = ds.Tables[0]; } } finally { oConn.Close(); } return dtChildrenData; It runs fine locally, but I get this error when I deploy the web application to IIS: ERROR [IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified How can I solve this? Please let me know if any more information (about configuration) is needed to answer this question.
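    The "IM002" error above usually means that the ODBC DSN named in the connection string ("Excel Files") does not exist on the IIS machine, or exists only for a different bitness (32/64-bit) than the application pool. One DSN-free alternative, shown here only as a rough sketch, is the ACE OLE DB provider; it assumes the Microsoft ACE 12.0 provider is actually installed on the server and that the worksheet really is named Sheet1:

        // Rough sketch, not the original code: read Sheet1 into a DataTable without a machine DSN.
        // Assumes the Microsoft ACE OLE DB 12.0 provider is installed on the server and that the
        // application pool bitness matches the installed provider.
        using System.Data;
        using System.Data.OleDb;

        public static class ExcelReader
        {
            public static DataTable ReadSheet(string strSheetPath)
            {
                string connStr = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" + strSheetPath +
                                 ";Extended Properties=\"Excel 12.0 Xml;HDR=YES\"";
                using (OleDbConnection conn = new OleDbConnection(connStr))
                using (OleDbDataAdapter adapter = new OleDbDataAdapter("SELECT * FROM [Sheet1$]", conn))
                {
                    DataTable table = new DataTable();
                    adapter.Fill(table);   // Fill opens and closes the connection itself
                    return table;
                }
            }
        }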

  • Sort command not working as expected

    - by user964689
    If anybody can help me write a loop to iterate over files in a folder, it would save me a huge amount of time. I think it must be quite a simple solution, but I currently don't know how to nest a loop within a loop. So far I have this script: cd /folderlocation/ for i in `</textfile_containing_lines_to_iterate_through` do #size=`echo $i | perl -nE '/:([\d-]+)/ && say abs(eval $1)'` #echo "$size" zcat dataset | head -n 18 > temp"$i".vcf tabix dataset $i >> temp"$i".vcf vcftools --window-pi 1000000 --vcf temp10individuals"$i".vcf >> run_summary.txt cat out.windowed.pi >> outputfile_2 #rm temp* done grep -v "PI" outputfile_2 > outputfile rm outputfile_2 I need to expand this so that the script will run multiple times, through all of the 'textfiles_containing_lines_to_iterate_through'. Currently I change the name of the textfile manually each time and re-run the script. So I need a loop that does this for every file in the folder, and that also uses the name of the file as part of the output file name, so that I can match an output file to an input file. Any help would be really useful and greatly appreciated! Many thanks in advance.

  • Finding the Column Index for a Specific Value

    - by Btibert3
    Hi All, I am having a brain cramp. Below is a toy dataset:

      df <- data.frame( id = 1:6, v1 = c("a", "a", "c", NA, "g", "h"), v2 = c("z", "y", "a", NA, "a", "g"), stringsAsFactors=F)

    I have a specific value that I want to find across a set of defined columns and I want to identify the position it is located in. The fields I am searching are characters and the trick is that the value I am looking for might not exist. In addition, null strings are also present in the dataset. Assuming I knew how to do this, the variable position indicates the values I would like returned:

      > df
        id   v1   v2 position
      1  1    a    z        1
      2  2    a    y        1
      3  3    c    a        2
      4  4 <NA> <NA>       99
      5  5    g    a        2
      6  6    h    g       99

    The general rule is that I want to find the position of value "a", and if it is not located or if v1 is missing, then I want 99 returned. In this instance, I am searching across v1 and v2, but in reality, I have 10 different variables. It is also worth noting that the value I am searching for can only exist once across the 10 variables. What is the best way to generate this recode? Many thanks in advance.

  • SQL putting two single quotes around datetime fields and fails to insert record

    - by user82613
    I am trying to INSERT into an SQL database table, but it doesn't work. So I used the SQL Server Profiler to see how it was building the query; what it shows is the following: declare @p1 int set @p1=0 declare @p2 int set @p2=0 declare @p3 int set @p3=1 exec InsertProcedureName @ConsumerMovingDetailID=@p1 output, @UniqueID=@p2 output, @ServiceID=@p3 output, @ProjectID=N'0', @IPAddress=N'66.229.112.168', @FirstName=N'Mike', @LastName=N'P', @Email=N'[email protected]', @PhoneNumber=N'(254)637-1256', @MobilePhone=NULL, @CurrentAddress=N'', @FromZip=N'10005', @MoveInAddress=N'', @ToZip=N'33067', @MovingSize=N'1', @MovingDate=''2009-04-30 00:00:00:000'', /* Problem here ^^^ */ @IsMovingVehicle=0, @IsPackingRequired=0, @IncludeInSaveologyPlanner=1 select @p1, @p2, @p3 As you can see, it puts two pairs of single quotes around the datetime field, which produces a syntax error in SQL. Is there something I must configure somewhere? Any help would be appreciated. Here are the environment details: Visual Studio 2008, .NET 3.5, MS SQL Server 2005. Here is the .NET code I'm using: //call procedure for results strStoredProcedureName = "usp_SMMoverSearchResult_SELECT"; Database database = DatabaseFactory.CreateDatabase(); DbCommand dbCommand = database.GetStoredProcCommand(strStoredProcedureName); dbCommand.CommandTimeout = DataHelper.CONNECTION_TIMEOUT; database.AddInParameter(dbCommand, "@MovingDetailID", DbType.String, objPropConsumer.ConsumerMovingDetailID); database.AddInParameter(dbCommand, "@FromZip", DbType.String, objPropConsumer.FromZipCode); database.AddInParameter(dbCommand, "@ToZip", DbType.String, objPropConsumer.ToZipCode); database.AddInParameter(dbCommand, "@MovingDate", DbType.DateTime, objPropConsumer.MoveDate); database.AddInParameter(dbCommand, "@PLServiceID", DbType.Int32, objPropConsumer.ServiceID); database.AddInParameter(dbCommand, "@FromAreaCode", DbType.String, pFromAreaCode); database.AddInParameter(dbCommand, "@FromState", DbType.String, pFromState); database.AddInParameter(dbCommand, "@ToAreaCode", DbType.String, pToAreaCode); database.AddInParameter(dbCommand, "@ToState", DbType.String, pToState); DataSet dstSearchResult = new DataSet("MoverSearchResult"); database.LoadDataSet(dbCommand, dstSearchResult, new string[] { "MoverSearchResult" });
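    One thing worth double-checking, offered only as a hedged guess, is that the value handed to the DbType.DateTime parameter is really a DateTime and not a string; the names below (objPropConsumer, MoveDate) simply come from the question's code:

        // Hedged sketch: make sure the moving date reaches the parameter as a DateTime,
        // parsing it first if the source property happens to be a string.
        object movingDate = objPropConsumer.MoveDate;
        DateTime parsedDate;

        if (movingDate is string && DateTime.TryParse((string)movingDate, out parsedDate))
        {
            database.AddInParameter(dbCommand, "@MovingDate", DbType.DateTime, parsedDate);
        }
        else
        {
            database.AddInParameter(dbCommand, "@MovingDate", DbType.DateTime, movingDate);
        }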

  • xsl:for-each not supported in this context

    - by alexbf
    Hi! I have this XSLT document : <xsl:stylesheet version="1.0" xmlns:mstns="http://www.w3.org/2001/XMLSchema" xmlns="http://www.w3.org/2001/XMLSchema" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes"/> <xsl:template match="/MyDocRootElement"> <xs:schema id="DataSet" targetNamespace="http://www.w3.org/2001/XMLSchema" attributeFormDefault="qualified" elementFormDefault="qualified" > <xs:element name="DataSet" msdata:IsDataSet="true"> <xs:complexType> <xs:choice maxOccurs="unbounded"> <xs:element name="Somename"> </xs:element> <xs:element name="OtherName"> </xs:element> <!-- FOR EACH NOT SUPPORTED? --> <xsl:for-each select="OtherElements/SubElement"> <xs:element name="OtherName"> </xs:element> </xsl:for-each> </xs:choice> </xs:complexType> </xs:element> </xs:schema> </xsl:template> </xsl:stylesheet> I have a validation error saying that the "for-each element is not supported in this context" I am guessing it has something to do with the xs namespace validation. Any ideas on how can I make this work? (Exclude validation?) Thanks Alex

  • read contents of an xml file into a data grid view

    - by syedsaleemss
    I'm using C# .NET, Windows Forms application. I have an XML file which contains two columns and some rows of data. Now I have to fill this data into a DataGridView. I'm using a button; when I click on the button an open dialog box will appear, I have to select the XML file name, and when I click Open the contents of that XML file should appear in the DataGridView. I have tried the following code: { XmlDataDocument xmlDatadoc = new XmlDataDocument(); xmlDatadoc.DataSet.ReadXml(filename); ds = xmlDatadoc.DataSet; dataGridView1.DataSource = ds.DefaultViewManager; dataGridView1.DataMember = "language"; } My XML file is: <languages> <language> <key>key1</key> <value>value1</value> </language> <language> <key>key2</key> <value>value2</value> </language> </languages> It's working fine, but only for "language". I need it to work for other XML files also.
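    One way to make the binding independent of the element name, sketched below assuming a form with a DataGridView named dataGridView1 and an OpenFileDialog named openFileDialog1 (both names are just placeholders), is to let DataSet.ReadXml infer the tables and bind to the first one it finds:

        // Sketch only (lives inside the form class): load whatever tables DataSet.ReadXml
        // infers from the chosen file and bind the first one, instead of hard-coding "language".
        private void btnLoad_Click(object sender, EventArgs e)
        {
            if (openFileDialog1.ShowDialog() != DialogResult.OK)
                return;

            DataSet ds = new DataSet();
            ds.ReadXml(openFileDialog1.FileName);   // one DataTable per repeating element

            if (ds.Tables.Count > 0)
            {
                dataGridView1.DataSource = ds.Tables[0];   // bind the first inferred table
            }
        }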

  • Handling large (object) datasets with PHP

    - by Aron Rotteveel
    I am currently working on a project that relies extensively on the EAV model. Both entities and their attributes are individually represented by a model, sometimes extending other models (or at least, base models). This has worked quite well so far since most areas of the application only rely on filtered sets of entities, and not the entire dataset. Now, however, I need to parse the entire dataset (i.e., all entities and all their attributes) in order to provide a sorting/filtering algorithm based on the attributes. The application currently consists of approximately 2200 entities, each with approximately 100 attributes. Every entity is represented by a single model (for example Client_Model_Entity) and has a protected property called $_attributes, which is an array of Attribute objects. Each entity object is about 500KB, which results in an incredible load on the server. With 2000 entities, this means a single task would take 1GB of RAM (and a lot of CPU time) in order to work, which is unacceptable. Are there any patterns or common approaches to iterating over such large datasets? Paging is not really an option, since everything has to be taken into account in order to provide the sorting algorithm.

  • Is Mapping SIMPLE data to winform control really as hard as it seems?

    - by HotOil
    Hi: I'm making the leap from MFC to WinForms. It has all gone smoothly so far; the windows/GUI parts of WinForms app development are making good sense to me. But now all I want to do is display simple data types in the controls on the form, and retrieve them from the controls when the user clicks. This is a very simple operation in MFC (DDX data exchange), but it seems to be much more complicated in .NET. Binding? DataObject? DataSet? No, I don't want a dataset or records or columns or any of that. I just want to map an int or a bool to a checkbox or a radio button. I have looked but have not found any good examples of doing this in C++. Is it really this hard? Really? What am I missing? Thanks!
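    For what it's worth, the closest WinForms analogue to DDX is Control.DataBindings. A minimal sketch (written in C#, but the same API is callable from C++/CLI), assuming a plain settings object with a bool property you want tied to a checkbox:

        // Minimal sketch: two-way bind a bool property to a CheckBox via DataBindings.
        using System.ComponentModel;
        using System.Windows.Forms;

        public class Settings : INotifyPropertyChanged
        {
            private bool _enabled;
            public bool Enabled
            {
                get { return _enabled; }
                set { _enabled = value; OnPropertyChanged("Enabled"); }
            }

            public event PropertyChangedEventHandler PropertyChanged;
            private void OnPropertyChanged(string name)
            {
                PropertyChangedEventHandler handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }

        // In the form's constructor or Load handler:
        // checkBox1.DataBindings.Add("Checked", settings, "Enabled", false,
        //                            DataSourceUpdateMode.OnPropertyChanged);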

  • Which is faster: Appropriate data input or appropriate data structure?

    - by Anon
    I have a dataset whose columns look like this:

      Consumer ID | Product ID | Time Period | Product Score
      1           | 1          | 1           | 2
      2           | 1          | 2           | 3

    and so on. As part of a program (written in C) I need to process the product scores given by all consumers for a particular product and time period combination, for all possible combinations. Suppose that there are 3 products and 2 time periods. Then I need to process the product scores for all possible combinations as shown below:

      Product ID | Time Period
      1          | 1
      1          | 2
      2          | 1
      2          | 2
      3          | 1
      3          | 2

    I will need to process the data along the above lines lots of times (10k+) and the dataset is fairly large (e.g., 48k consumers, 100 products, 24 time periods). So speed is an issue. I came up with two ways to process the data and am wondering which is the faster approach, or perhaps it does not matter much (speed matters, but not at the cost of undue maintenance/readability): (1) sort the data on product ID and time period and then loop through the data to extract the data for all possible combinations, or (2) store the consumer IDs of all consumers who provided product scores for a particular combination of product ID and time period and process the data accordingly. Any thoughts? Any other way to speed up the processing? Thanks

  • Why does this code leak? (simple codesnippet)

    - by Ela782
    Visual Studio shows me several leaks (a few hundred lines), in total more than a few MB. I traced it down to the following "helloWorld example". The leak disappears if I comment out the H5::DataSet.getSpace() line. #include "stdafx.h" #include <iostream> #include "cpp/H5Cpp.h" int main(int argc, char *argv[]) { _CrtSetDbgFlag ( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF ); // dump leaks at return H5::H5File myfile; try { myfile = H5::H5File("C:\\Users\\yyy\\myfile.h5", H5F_ACC_RDONLY); } catch (H5::Exception& e) { std::string msg( std::string( "Could not open HDF5 file.\n" ) + e.getCDetailMsg() ); throw msg; } H5::Group myGroup = myfile.openGroup("/so/me/group"); H5::DataSet myDS = myGroup.openDataSet("./myfloatvec"); hsize_t dims[1]; //myDS.getSpace().getSimpleExtentDims(dims, NULL); // <-- here's the leak H5::DataSpace dsp = myDS.getSpace(); // The H5::DataSpace seems to leak dsp.getSimpleExtentDims(dims, NULL); //dsp.close(); // <-- doesn't help either std::cout << "Dims: " << dims[0] << std::endl; // <-- Works as expected return 0; } Any help would be appreciated. I've been on this for hours, I hate unclean code...

  • Faster or more memory-efficient solution in Python for this Codejam problem.

    - by jeroen.vangoey
    I tried my hand at this Google Codejam Africa problem (the contest is already finished, I just did it to improve my programming skills). The Problem: You are hosting a party with G guests and notice that there is an odd number of guests! When planning the party you deliberately invited only couples and gave each couple a unique number C on their invitation. You would like to single out whoever came alone by asking all of the guests for their invitation numbers. The Input: The first line of input gives the number of cases, N. N test cases follow. For each test case there will be: One line containing the value G, the number of guests. One line containing a space-separated list of G integers. Each integer C indicates the invitation code of a guest. Output: For each test case, output one line containing "Case #x: " followed by the number C of the guest who is alone. The Limits: 1 <= N <= 50; 0 < C <= 2147483647. Small dataset: 3 <= G < 100. Large dataset: 3 <= G < 1000. Sample Input: 3 3 1 2147483647 2147483647 5 3 4 7 4 3 5 2 10 2 10 5 Sample Output: Case #1: 1 Case #2: 7 Case #3: 5 This is the solution that I came up with: with open('A-large-practice.in') as f: lines = f.readlines() with open('A-large-practice.out', 'w') as output: N = int(lines[0]) for testcase, i in enumerate(range(1,2*N,2)): G = int(lines[i]) for guest in range(G): codes = map(int, lines[i+1].split(' ')) alone = (c for c in codes if codes.count(c)==1) output.write("Case #%d: %d\n" % (testcase+1, alone.next())) It runs in 12 seconds on my machine with the large input. Now, my question is, can this solution be improved in Python to run in a shorter time or use less memory? The analysis of the problem gives some pointers on how to do this in Java and C++ but I can't translate those solutions back to Python.

  • Difference dynamic/static 2d array c++

    - by snorlaks
    Hello, I'm using an open-source library called wxFreeChart to draw some XY charts. In the example there is code which uses a static array as a series: double data1[][2] = { { 10, 20, }, { 13, 16, }, { 7, 30, }, { 15, 34, }, { 25, 4, }, }; dataset->AddSerie((double *) data1, WXSIZEOF(data1)); WXSIZEOF is a macro defined like: sizeof(array)/sizeof(array[0]) In this case everything works great, but in my program I'm using dynamic arrays (sized according to the user's input). I made a test and wrote code like below: double **dynamicArray = NULL; dynamicArray = new double *[5]; for( int i = 0 ; i < 5 ; i++ ) dynamicArray[i] = new double[2]; dynamicArray[0][0] = 10; dynamicArray[0][1] = 20; dynamicArray[1][0] = 13; dynamicArray[1][1] = 16; dynamicArray[2][0] = 7; dynamicArray[2][1] = 30; dynamicArray[3][0] = 15; dynamicArray[3][1] = 34; dynamicArray[4][0] = 25; dynamicArray[4][1] = 4; dataset->AddSerie((double *) *dynamicArray, WXSIZEOF(dynamicArray)); But it doesn't work correctly; I mean the points aren't drawn. I wonder if there is any way I can "cheat" that method and give it a dynamic array in a form it understands, so it will read the data from the correct place. Thanks for help

  • Programming Practice

    - by deepti
    public DataTable UserUpdateTempSettings(int install_id, int install_map_id, string Setting_value,string LogFile) { SqlConnection oConnection = new SqlConnection(sConnectionString); DataSet oDataset = new DataSet(); DataTable oDatatable = new DataTable(); SqlDataAdapter MyDataAdapter = new SqlDataAdapter(); try { oConnection.Open(); cmd = new SqlCommand("SP_HOTDOC_PRINTTEMPLATE_PERMISSION", oConnection); cmd.Parameters.Add(new SqlParameter ("@INSTALL_ID", install_id)); cmd.Parameters.Add(new SqlParameter ("@INSTALL_MAP_ID", install_map_id)); cmd.Parameters.Add(new SqlParameter("@SETTING_VALUE", Setting_value)); if (LogFile != "") { cmd.Parameters.Add(new SqlParameter("@LOGFILE",LogFile)); } cmd.CommandType = CommandType.StoredProcedure; MyDataAdapter.SelectCommand = cmd; cmd.ExecuteNonQuery(); MyDataAdapter.Fill(oDataset); oDatatable = oDataset.Tables[0]; return oDatatable; } catch (Exception ex) { Utils.ShowError(ex.Message); return oDatatable; } finally { if ((oConnection.State != ConnectionState.Closed) || (oConnection.State != ConnectionState.Broken)) { oConnection.Close(); } oDataset = null; oDatatable = null; oConnection.Dispose(); oConnection = null; } } I have used ExecuteNonQuery; normally it's not used with a data adapter. If I don't use it, it gives me an error. Is it bad programming practice to use ExecuteNonQuery with a data adapter?
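    SqlDataAdapter.Fill executes the select command itself (and opens and closes the connection), so ExecuteNonQuery is normally redundant here. Below is a trimmed sketch of the adapter-only shape; the stored procedure and parameter names are simply the ones from the question, sConnectionString is assumed to be the same field used above, and whether dropping ExecuteNonQuery removes the original error depends on what that error actually was:

        // Hedged sketch: let the adapter run the stored procedure; no ExecuteNonQuery needed.
        public DataTable UserUpdateTempSettings(int installId, int installMapId,
                                                string settingValue, string logFile)
        {
            using (SqlConnection conn = new SqlConnection(sConnectionString))
            using (SqlCommand cmd = new SqlCommand("SP_HOTDOC_PRINTTEMPLATE_PERMISSION", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@INSTALL_ID", installId);
                cmd.Parameters.AddWithValue("@INSTALL_MAP_ID", installMapId);
                cmd.Parameters.AddWithValue("@SETTING_VALUE", settingValue);
                if (!string.IsNullOrEmpty(logFile))
                    cmd.Parameters.AddWithValue("@LOGFILE", logFile);

                DataTable table = new DataTable();
                using (SqlDataAdapter adapter = new SqlDataAdapter(cmd))
                {
                    adapter.Fill(table);   // opens the connection, runs the proc, closes it
                }
                return table;
            }
        }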

  • seriously elusive for loop (racking my brains!)

    - by user1693359
    I've got a loop issue in Python 2.7.2 that's really frustrating me. Basically the loop is not iterating past the first index (j), and I've tried all sorts of ways to fix it with no luck. def learn(dataSet): for i in dataSet.getNext(): recall = raw_input("Enter all members of %s you are able to recall >>> (separated by commas) " % (i.getName())) missed = i.getMembers() missedString = [] for a in missed: missedString.append(a.getName()) Here is the loop I can't get to iterate. The first for loop only goes through the first iteration of 'j' in the split string list, then removes it from 'missedString'. I would like all members of the split string 'recall' to be removed from 'missedString'. for j in string.split(recall, ','): if j in missedString: missedString.remove(j) continue for b in missed: if b.getName() not in missedString: missed.remove(b) print 'You missed %d. ' % (len(missed)) if (len(missed)) > 0: print 'Maybe a hint or two will help...' for miss in missed: remind(miss.getSecs(), i.getName(), missed) I really have no clue, help would be appreciated!

  • How to compare date from database using C#?

    - by user1490374
    I would like to compare the dates selected from the database (every entry in EndDate) with today's date. Is there any way to do this programmatically, like extracting the dates and comparing them individually? I need this because I need to update the status in the table. string username; username = HttpContext.Current.User.Identity.Name; string date = DateTime.Now.ToString("MM/dd/yyyy"); txtDate.Text = date; SqlConnection conn1 = new SqlConnection("Data Source=mydatasource\\sqlexpress;" + "Initial Catalog = Suite2; Integrated Security =SSPI"); SqlDataAdapter adapter; string end; end = "SELECT EndDate FROM Table_Message WHERE username = '" + username + "'"; adapter = new SqlDataAdapter(end, conn1); conn1.Open(); DataSet ds = new DataSet(); adapter.Fill(ds); //Execute the sql command GridView2.DataSource = ds; GridView2.DataBind(); conn1.Close();
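    To compare each returned EndDate with today's date, one rough sketch (assuming EndDate really is a datetime column rather than a string) is to walk the rows after Fill:

        // Hedged sketch: iterate the filled rows and compare each EndDate with today.
        foreach (DataRow row in ds.Tables[0].Rows)
        {
            if (row.IsNull("EndDate"))
                continue;

            DateTime endDate = Convert.ToDateTime(row["EndDate"]);

            if (endDate.Date < DateTime.Today)
            {
                // expired: update the status here, e.g. with a parameterized UPDATE
            }
        }

    As an aside, building the WHERE clause by concatenating the username invites SQL injection; a parameterized query (SqlParameter) would be safer.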

  • Why does the MSDN library constantly say "Information Not Found"?

    - by Zian Choy
    Environment: VS 2008 Pro SP1 MSDN Library for VS2008SP1 Sample Code: Dim userDataset = New DataSet Dim myDataAdapter = New SqlDataAdapter("SELECT UserName FROM tblwebUsers WHERE name = @person", connect) myDataAdapter.SelectCommand.Parameters.Add("@person", SqlDbType.NVarChar) When I put my cursor on the "d" in "Add" and press F1, I get an "Information Not Found" error from the MSDN Library. Does anyone have any suggestions for addressing the issue?

  • MongoDB and datasets that don't fit in RAM no matter how hard you shove

    - by sysadmin1138
    This is very system dependent, but chances are near certain we'll scale past some arbitrary cliff and get into Real Trouble. I'm curious what kind of rules-of-thumb exist for a good RAM to disk-space ratio. We're planning our next round of systems, and need to make some choices regarding RAM, SSDs, and how much of each the new nodes will get. But now for some performance details! During the normal workflow of a single project-run, MongoDB is hit with a very high percentage of writes (70-80%). Once the second stage of the processing pipeline hits, it's extremely high read as it needs to deduplicate records identified in the first half of processing. This is the workflow that "keep your working set in RAM" is made for, and we're designing around that assumption. The entire dataset is continually hit with random queries from end-user derived sources; though the frequency is irregular, the size is usually pretty small (groups of 10 documents). Since this is user-facing, the replies need to be under the "bored-now" threshold of 3 seconds. This access pattern is much less likely to be in cache, so it will be very likely to incur disk hits. A secondary processing workflow is high read of previous processing runs that may be days, weeks, or even months old, and is run infrequently but still needs to be zippy. Up to 100% of the documents in the previous processing run will be accessed. No amount of cache-warming can help with this, I suspect. Finished document sizes vary widely, but the median size is about 8K. The high-read portion of the normal project processing strongly suggests the use of replicas to help distribute the read traffic. I have read elsewhere that a 1:10 RAM-GB to HD-GB ratio is a good rule of thumb for slow disks. As we are seriously considering using much faster SSDs, I'd like to know if there is a similar rule of thumb for fast disks. I know we're using Mongo in a way where cache-everything really isn't going to fly, which is why I'm looking at ways to engineer a system that can survive such usage. The entire dataset will likely be most of a TB within half a year and keep growing.

  • Object oriented n-tier design. Am I abstracting too much? Or not enough?

    - by max
    Hi guys, I'm building my first enterprise grade solution (at least I'm attempting to make it enterprise grade). I'm trying to follow best practice design patterns but am starting to worry that I might be going too far with abstraction. I'm trying to build my asp.net webforms (in C#) app as an n-tier application. I've created a Data Access Layer using an XSD strongly-typed dataset that interfaces with a SQL server backend. I access the DAL through some Business Layer Objects that I've created on a 1:1 basis to the datatables in the dataset (eg, a UsersBLL class for the Users datatable in the dataset). I'm doing checks inside the BLL to make sure that data passed to DAL is following the business rules of the application. That's all well and good. Where I'm getting stuck though is the point at which I connect the BLL to the presentation layer. For example, my UsersBLL class deals mostly with whole datatables, as it's interfacing with the DAL. Should I now create a separate "User" (Singular) class that maps out the properties of a single user, rather than multiple users? This way I don't have to do any searching through datatables in the presentation layer, as I could use the properties created in the User class. Or would it be better to somehow try to handle this inside the UsersBLL? Sorry if this sounds a little complicated... Below is the code from the UsersBLL: using System; using System.Data; using PedChallenge.DAL.PedDataSetTableAdapters; [System.ComponentModel.DataObject] public class UsersBLL { private UsersTableAdapter _UsersAdapter = null; protected UsersTableAdapter Adapter { get { if (_UsersAdapter == null) _UsersAdapter = new UsersTableAdapter(); return _UsersAdapter; } } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Select, true)] public PedChallenge.DAL.PedDataSet.UsersDataTable GetUsers() { return Adapter.GetUsers(); } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Select, false)] public PedChallenge.DAL.PedDataSet.UsersDataTable GetUserByUserID(int userID) { return Adapter.GetUserByUserID(userID); } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Select, false)] public PedChallenge.DAL.PedDataSet.UsersDataTable GetUsersByTeamID(int teamID) { return Adapter.GetUsersByTeamID(teamID); } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Select, false)] public PedChallenge.DAL.PedDataSet.UsersDataTable GetUsersByEmail(string Email) { return Adapter.GetUserByEmail(Email); } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Insert, true)] public bool AddUser(int? 
teamID, string FirstName, string LastName, string Email, string Role, int LocationID) { // Create a new UsersRow instance PedChallenge.DAL.PedDataSet.UsersDataTable Users = new PedChallenge.DAL.PedDataSet.UsersDataTable(); PedChallenge.DAL.PedDataSet.UsersRow user = Users.NewUsersRow(); if (UserExists(Users, Email) == true) return false; if (teamID == null) user.SetTeamIDNull(); else user.TeamID = teamID.Value; user.FirstName = FirstName; user.LastName = LastName; user.Email = Email; user.Role = Role; user.LocationID = LocationID; // Add the new user Users.AddUsersRow(user); int rowsAffected = Adapter.Update(Users); // Return true if precisely one row was inserted, // otherwise false return rowsAffected == 1; } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Update, true)] public bool UpdateUser(int userID, int? teamID, string FirstName, string LastName, string Email, string Role, int LocationID) { PedChallenge.DAL.PedDataSet.UsersDataTable Users = Adapter.GetUserByUserID(userID); if (Users.Count == 0) // no matching record found, return false return false; PedChallenge.DAL.PedDataSet.UsersRow user = Users[0]; if (teamID == null) user.SetTeamIDNull(); else user.TeamID = teamID.Value; user.FirstName = FirstName; user.LastName = LastName; user.Email = Email; user.Role = Role; user.LocationID = LocationID; // Update the product record int rowsAffected = Adapter.Update(user); // Return true if precisely one row was updated, // otherwise false return rowsAffected == 1; } [System.ComponentModel.DataObjectMethodAttribute (System.ComponentModel.DataObjectMethodType.Delete, true)] public bool DeleteUser(int userID) { int rowsAffected = Adapter.Delete(userID); // Return true if precisely one row was deleted, // otherwise false return rowsAffected == 1; } private bool UserExists(PedChallenge.DAL.PedDataSet.UsersDataTable users, string email) { // Check if user email already exists foreach (PedChallenge.DAL.PedDataSet.UsersRow userRow in users) { if (userRow.Email == email) return true; } return false; } } Some guidance in the right direction would be greatly appreciated!! Thanks all! Max
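    One common middle ground, offered here only as a sketch rather than the answer for this app, is to keep UsersBLL as the table-level gateway and add a small User class plus a mapping helper, so the presentation layer works with properties instead of DataRows. The property names below simply mirror the columns used in the question; adjust them to the real schema:

        // Hedged sketch: a plain single-user object mapped from the typed UsersRow.
        public class User
        {
            public int UserID { get; set; }
            public int? TeamID { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string Email { get; set; }
            public string Role { get; set; }
            public int LocationID { get; set; }
        }

        // Inside UsersBLL, for example:
        public User GetUser(int userID)
        {
            PedChallenge.DAL.PedDataSet.UsersDataTable rows = Adapter.GetUserByUserID(userID);
            if (rows.Count == 0) return null;

            PedChallenge.DAL.PedDataSet.UsersRow row = rows[0];
            return new User
            {
                UserID = row.UserID,
                TeamID = row.IsTeamIDNull() ? (int?)null : row.TeamID,
                FirstName = row.FirstName,
                LastName = row.LastName,
                Email = row.Email,
                Role = row.Role,
                LocationID = row.LocationID
            };
        }

    Whether the extra mapping is worth it depends on how much the UI needs single-user views; for pure grid binding, returning the DataTable is often enough.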

  • Copying Columns from Grid to Clipboard in SQL Developer

    - by thatjeffsmith
    There are several ways to get data from a query or a table|view to the clipboard. You know the tried and true, copy and paste. But what if you only want one or more columns, not every column? There are several ways to do this, let’s see if we can’t identify all of them. Write your query to only include the data you want Obvious? Yes. Needed to be said? Definitely. The best tuning tip is to only ask for the data you need, only when you absolutely need it. But let’s look at a few more practical ways to do this. Hide the unwanted columns Mouse right click on an column header. In the context menu, select ‘Columns.’ Hide the columns you don’t want. Copy and paste. WYSIWYG Grids, Hide Columns and Filter Rows Mouse select the columns Obvious, but a bit painful. For a very large dataset, you’ll be holding down the Shift and PageDown buttons – but it works. Remember to use Ctrl+Shift+C to get the column headers with the data. Use the Export Wizard This used to be called ‘Unload’ – agreed, not a great name. So, we changed it. In a grid, right mouse click on the data, and on the context menu, select ‘Export…’ Select your format – I suggest ‘delimited’ or ‘fixed’ for copying data to the clipboard. You can export to the clipboard, yes you can! Click ‘Next.’ Click in the Columns dialog, and choose the columns you want copied. Trim the columns you don't want copied Click ‘Finish.’ Alt or Ctrl tab to your window or application of choice. And Paste! "FIRST_NAME" "LAST_NAME" "Donald" "OConnell" "Douglas" "Grant" "Jennifer" "Whalen" "Pat" "Fay" "Susan" "Mavris" "William" "Gietz" "Alexander" "Hunold" "Bruce" "Ernst" "David" "Austin" "Valli" "Pataballa" "Diana" "Lorentz" "Daniel" "Faviet" "John" "Chen" "Ismael" "Sciarra" "Jose Manuel" "Urman" "Luis" "Popp" "Alexander" "Khoo" "Shelli" "Baida" "Sigal" "Tobias" "Guy" "Himuro" "Karen" "Colmenares" "Matthew" "Weiss" "Adam" "Fripp" "Payam" "Kaufling" "Shanta" "Vollman" "Kevin" "Mourgos" "Julia" "Nayer" "Irene" "Mikkilineni" ... There’s probably at least 2 or 3 more ways, but… But, try these and let me know how we can improve things. I’ve already gotten a request to be able to include the SQL text used to populate the dataset on the the copy to clipboard, and it’s now on our to-do list

  • T-SQL Tuesday #006: Tiger/Line Spatial Data

    - by Mike C
    This month’s T-SQL Tuesday post is about LOB data http://sqlblog.com/blogs/michael_coles/archive/2010/05/03/t-sql-tuesday-006-what-about-blob.aspx . For this one I decided to post a sample Tiger/Line SQL database I use all the time in live demos. For those who aren't familiar with it, Tiger/Line data is a dataset published by the U.S. Census Bureau . Tiger/Line has a lot of nice detailed geospatial data down to a very detailed level. It actually goes from the U.S. state level all the way down to...(read more)

  • More on Visual Studio 11 from Scott Guthrie

    - by TATWORTH
    At http://weblogs.asp.net/scottgu/archive/2011/10/30/web-forms-model-binding-part-3-updating-and-validation-asp-net-4-5-series.aspx, Scott Guthrie talks about data binding in ASP.NET 4.5. There is a key statement: "Because our GetProducts() method is returning an IQueryable<Product>, users can easily page and sort through the data within our GridView. Only the 10 rows that are visible on any given page are returned from the database." Consider paging through a large dataset: this is going to give high performance with very little code, as the database-to-IIS-server traffic will be reduced. Can't code without: The best C# & VB.NET refactoring plugin for Visual Studio
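    A minimal sketch of the pattern being described (the ProductsContext and Product names are invented for illustration, and the GridView markup is abbreviated):

        // Hedged sketch of ASP.NET 4.5 model binding with an IQueryable select method.
        // Markup: <asp:GridView ID="productsGrid" runat="server" ItemType="Product"
        //           SelectMethod="GetProducts" AllowPaging="true" AllowSorting="true" />
        using System.Linq;

        public partial class Products : System.Web.UI.Page
        {
            public IQueryable<Product> GetProducts()
            {
                ProductsContext db = new ProductsContext();
                // Returning IQueryable lets the GridView apply paging and sorting in the
                // database, so only the visible page of rows crosses the wire.
                return db.Products;
            }
        }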

  • Efficient way to sort large set of numbers

    - by 7Aces
    I have to sort a set of 100000 integers as a part of a programming Q. The time limit is pretty restrictive, so I have to use the most time-efficient approach possible. My current code - #include<cstdio> #include<algorithm> using namespace std; int main() { int n,d[100000],i; for(i=0;i<n;++i) { scanf("%d",&d[i]); } sort(d,d+n); .... } Would this approach be more efficient? int main() { int n,d[100000],i; for(i=0;i<n;++i) { scanf("%d",&d[i]); sort(d,d+i+1); } .... } What is the most efficient way to sort a large dataset? Note - Not homework...

  • Shared Data Source name error: underscore characters added

    - by mick
    The name of our shared data source in RS (report server) is "AF1 Live Database" (no underscore characters, just spaces between words) and it is the same in Report Builder in VS. However, the following error pops up when the RDL of this report is uploaded onto our company site and run: The report server cannot process the report or shared dataset. The shared data source 'AF1_Live_Database' for the report server or SharePoint site is not valid. Browse to the server or site and select a shared data source. (rsInvalidDataSourceReference) We have no idea why the error reports the shared data source as 'AF1_Live_Database' with underscore characters. As this appears to be the problem that keeps the report from running, we are seeking your help. Thanks.
