Search Results

Search found 17593 results on 704 pages for 'wmi query'.


  • C++ CRTP (curiously recurring template pattern) question

    - by aaa
    The following piece of code does not compile; the problem is T::rank being inaccessible (I think) or uninitialized in the parent template. Can you tell me exactly what the problem is? Is passing rank explicitly the only way, or is there a way to query the tensor class directly? Thank you.

        #include <cstddef>  // for size_t
        #include <boost/utility/enable_if.hpp>

        template<class T,
                 // size_t N,
                 class enable = void>
        struct tensor_operator;

        // template<class T, size_t N>
        template<class T>
        struct tensor_operator<T, typename boost::enable_if_c<T::rank == 4>::type> {
            tensor_operator(T &tensor) : tensor_(tensor) {}
            T& operator()(int i, int j, int k, int l) {
                return tensor_.layout.element_at(i, j, k, l);
            }
            T &tensor_;
        };

        template<size_t N, typename T = double>
        // struct tensor : tensor_operator<tensor<N,T>, N> {
        struct tensor : tensor_operator<tensor<N,T> > {
            static const size_t rank = N;
        };

    I know the workaround; however, I am interested in the mechanics of template instantiation, for self-education.

    Read the article

  • Append data to same text file

    - by Manu
        SimpleDateFormat formatter = new SimpleDateFormat("ddMMyyyy_HHmmSS");
        String strCurrDate = formatter.format(new java.util.Date());
        String strfileNm = "Customer_" + strCurrDate + ".txt";
        String strFileGenLoc = strFileLocation + "/" + strfileNm;
        String Query1 = "select '0'||to_char(sysdate,'YYYYMMDD')||'123456789' class_code from dual";
        String Query2 = "select '0'||to_char(sysdate,'YYYYMMDD')||'123456789' class_code from dual";
        try {
            Statement stmt = null;
            ResultSet rs = null;
            Statement stmt1 = null;
            ResultSet rs1 = null;
            stmt = conn.createStatement();
            stmt1 = conn.createStatement();
            rs = stmt.executeQuery(Query1);
            rs1 = stmt1.executeQuery(Query2);
            File f = new File(strFileGenLoc);
            OutputStream os = (OutputStream) new FileOutputStream(f, true);
            String encoding = "UTF8";
            OutputStreamWriter osw = new OutputStreamWriter(os, encoding);
            BufferedWriter bw = new BufferedWriter(osw);
            while (rs.next()) {
                bw.write(rs.getString(1) == null ? "" : rs.getString(1));
                bw.write(" ");
            }
            bw.flush();
            bw.close();
        } catch (Exception e) {
            System.out.println("Exception occured while getting resultset by the query");
            e.printStackTrace();
        } finally {
            try {
                if (conn != null) {
                    System.out.println("Closing the connection" + conn);
                    conn.close();
                }
            } catch (SQLException e) {
                System.out.println("Exception occured while closing the connection");
                e.printStackTrace();
            }
        }
        return objArrayListValue;
        }

    The above code is working fine; it writes the content of the "rs" result set to the text file. Now what I want is to append the content of the second result set ("rs1") to the same text file, i.e. append the "rs1" content after the "rs" content in the same file.

    Read the article

  • Best way to run multiple queries per second on database, performance wise?

    - by Michael Joell
    I am currently using Java to insert and update data multiple times per second. Never having used databases with Java, I am not sure what is required and how to get the best performance. I currently have a method for each type of query I need to do (for example, update a row in a database). I also have a method to create the database connection. Below is my simplified code.

        public static void addOneForUserInChannel(String channel, String username) throws SQLException {
            Connection dbConnection = null;
            PreparedStatement ps = null;
            String updateSQL = "UPDATE " + channel + "_count SET messages = messages + 1 WHERE username = ?";
            try {
                dbConnection = getDBConnection();
                ps = dbConnection.prepareStatement(updateSQL);
                ps.setString(1, username);
                ps.executeUpdate();
            } catch(SQLException e) {
                System.out.println(e.getMessage());
            } finally {
                if(ps != null) {
                    ps.close();
                }
                if(dbConnection != null) {
                    dbConnection.close();
                }
            }
        }

    And my DB connection:

        private static Connection getDBConnection() {
            Connection dbConnection = null;
            try {
                Class.forName(DB_DRIVER);
            } catch (ClassNotFoundException e) {
                System.out.println(e.getMessage());
            }
            try {
                dbConnection = DriverManager.getConnection(DB_CONNECTION, DB_USER, DB_PASSWORD);
                return dbConnection;
            } catch (SQLException e) {
                System.out.println(e.getMessage());
            }
            return dbConnection;
        }

    This seems to be working fine for now, with about 1-2 queries per second, but I am worried that once I expand and it is running many more, I might have some issues. My questions: Is there a way to have a persistent database connection throughout the entire run time of the process? If so, should I do this? Are there any other optimizations that I should do to help with performance? Thanks

    Read the article

  • Use a foreign key mapping to get data from the other table using Python and SQLAlchemy.

    - by Az
    Hmm, the title was harder to formulate than I thought. Basically, I've got these simple classes mapped to tables using SQLAlchemy. I know they're missing a few items, but those aren't essential for highlighting the problem.

        class Customer(object):
            def __init__(self, uid, name, email):
                self.uid = uid
                self.name = name
                self.email = email

            def __repr__(self):
                return str(self)

            def __str__(self):
                return "Cust: %s, Name: %s (Email: %s)" % (self.uid, self.name, self.email)

    The above is basically a simple customer with an id, a name and an email address.

        class Order(object):
            def __init__(self, item_id, item_name, customer):
                self.item_id = item_id
                self.item_name = item_name
                self.customer = None

            def __repr__(self):
                return str(self)

            def __str__(self):
                return "Item ID %s: %s, has been ordered by customer no. %s" % (self.item_id, self.item_name, self.customer)

    This is the Order class that just holds the order information: an id, a name and a reference to a customer. The customer is initialised to None to indicate that this item doesn't have a customer yet; the code's job will be to assign the item a customer. The following code maps these classes to the respective database tables.

        # SQLAlchemy database transmutation
        engine = create_engine('sqlite:///:memory:', echo=False)
        metadata = MetaData()

        customers_table = Table('customers', metadata,
            Column('uid', Integer, primary_key=True),
            Column('name', String),
            Column('email', String)
        )

        orders_table = Table('orders', metadata,
            Column('item_id', Integer, primary_key=True),
            Column('item_name', String),
            Column('customer', Integer, ForeignKey('customers.uid'))
        )

        metadata.create_all(engine)

        mapper(Customer, customers_table)
        mapper(Order, orders_table)

    Now if I do something like:

        for order in session.query(Order):
            print order

    I can get a list of orders in this form:

        Item ID 1001: MX4000 Laser Mouse, has been ordered by customer no. 12

    What I want to do is find out customer 12's name and email address (which is why I used the ForeignKey into the Customer table). How would I go about it?

    Read the article

  • unexpected behaviour of object stored in web service Session

    - by draconis
    Hi. I'm using Session variables inside a web service to maintain state between successive method calls by an external application called QBWC. I set this up by decorating my web service methods with this attribute:

        [WebMethod(EnableSession = true)]

    I'm using the Session variable to store an instance of a custom object called QueueManager. The QueueManager has a property called ChangeQueue which looks like this:

        [Serializable]
        public class QueueManager
        {
            ...
            public Queue<QBChange> ChangeQueue { get; set; }
            ...

    where QBChange is a custom business object belonging to my web service. Now, every time I get a call to a method in my web service, I use this code to retrieve my QueueManager object and access my queue:

        QueueManager qm = (QueueManager)Session[ticket];

    Then I remove an object from the queue using qm.dequeue(), and then I save the modified queue manager object (modified because it contains one less object in the queue) back to the Session variable, like so:

        Session[ticket] = qm;

    ready for the next web service method call using the same ticket. Now here's the thing: if I comment out this last line

        // Session[ticket] = qm;

    then the web service behaves exactly the same way, reducing the size of the queue between method calls. Now why is that? The web service seems to be updating a class contained in serialized form in a Session variable without being asked to. Why would it do that? When I deserialize my QueueManager object, does the qm variable hold a reference to the serialized object inside the Session[ticket] variable? This seems very unlikely.
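
    One likely explanation, sketched below under the assumption that the service uses the default in-process session state: InProc session stores live object references rather than serialized copies, so the cast hands back the same instance the session already holds, and the write-back is redundant. A minimal stand-alone illustration (a Dictionary stands in for the session, and the queue holds strings instead of QBChange objects):

        using System;
        using System.Collections.Generic;

        class QueueManager
        {
            public Queue<string> ChangeQueue { get; set; }
        }

        class Program
        {
            static void Main()
            {
                // stand-in for InProc session state: it holds object references, not copies
                var session = new Dictionary<string, object>();
                session["ticket"] = new QueueManager {
                    ChangeQueue = new Queue<string>(new[] { "a", "b", "c" })
                };

                // the cast returns a reference to the same instance; nothing is deserialized
                QueueManager qm = (QueueManager)session["ticket"];
                qm.ChangeQueue.Dequeue();

                // the stored object already reflects the change, without any write-back
                Console.WriteLine(((QueueManager)session["ticket"]).ChangeQueue.Count);  // prints 2
            }
        }

    The [Serializable] attribute only comes into play for out-of-process session state (StateServer or SQL Server modes), where each request really would get a fresh deserialized copy.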

    Read the article

  • Determining Best Table Structure for MySQL Performance

    - by Joe Majewski
    I'm working on a browser-based RPG for one of my websites, and right now I'm trying to determine the best way to organize my SQL tables for performance and maintenance. Here's my question: does the number of columns in an SQL table affect the speed at which it can be queried?

    I am not a newbie when it comes to PHP or MySQL. I used to develop things with the common goal of getting them to work, but I've recently advanced to the stage where a functional program is not good enough unless it's fast and reliable.

    Anyway, right now I have a members table that has around 15 columns. It contains information such as the player's username, password, email, logins, page views, etcetera. It doesn't contain any information on the player's progress in the game, however. If I added columns for things such as army size, gold, turns, and whatnot, then it could easily rise to around 40 or 50 total columns. Oh, and my database structure IS normalized.

    Will a table with 50 columns that gets constantly queried be a bad idea? Should I split it into two tables: one for the user's general information and one for the user's game statistics? I know I could check the query time myself, but I haven't actually created the tables yet, and I think I'd be better off with some professional advice on this important decision for my game. Thank you for your time! :)

    Read the article

  • Linq-to-XML explicit casting in a generic method

    - by vlad
    I've looked for a similar question, but the only one that was close didn't help me in the end. I have an XML file that looks like this:

        <Fields>
          <Field name="abc" value="2011-01-01" />
          <Field name="xyz" value="" />
          <Field name="tuv" value="123.456" />
        </Fields>

    I'm trying to use Linq-to-XML to get the values from these fields. The values can be of type Decimal, DateTime, String and Int32. I was able to get the fields one by one using a relatively simple query. For example, I'm getting the 'value' from the field with the name 'abc' using the following:

        private DateTime GetValueFromAttribute(IEnumerable<XElement> fields, String attName)
        {
            return (from field in fields
                    where field.Attribute("name").Value == "abc"
                    select (DateTime)field.Attribute("value")).FirstOrDefault();
        }

    This is placed in a separate function that simply returns this value, and everything works fine (since I know that there is only one element with the name attribute set to 'abc'). However, since I have to do this for decimals and integers and dates, I was wondering if I can make a generic function that works in all cases. This is where I got stuck. Here's what I have so far:

        private T GetValueFromAttribute<T>(IEnumerable<XElement> fields, String attName)
        {
            return (from field in fields
                    where field.Attribute("name").Value == attName
                    select (T)field.Attribute("value").Value).FirstOrDefault();
        }

    This doesn't compile because it doesn't know how to convert from String to T. I tried boxing and unboxing, i.e.

        select (T)(Object)field.Attribute("value").Value

    but that throws a runtime "Specified cast is not valid" exception, as it's trying to convert the String to a DateTime, for instance. Is this possible in a generic function? Can I put a constraint on the generic function to make it work? Or do I have to have separate functions to take advantage of Linq-to-XML's explicit cast operators?
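
    One possible shape for the generic version, sketched under the assumption that an untyped read plus a runtime conversion is acceptable: Convert.ChangeType routes through IConvertible, so it can parse "2011-01-01" as a DateTime and "123.456" as a Decimal from the same code path. The empty value="" case is handled here by falling back to default(T); that policy is an assumption, not something from the article.

        using System;
        using System.Collections.Generic;
        using System.Globalization;
        using System.Linq;
        using System.Xml.Linq;

        static class FieldReader
        {
            public static T GetValueFromAttribute<T>(IEnumerable<XElement> fields, string attName)
            {
                string raw = (from field in fields
                              where (string)field.Attribute("name") == attName
                              select (string)field.Attribute("value")).FirstOrDefault();

                if (string.IsNullOrEmpty(raw))
                    return default(T);  // e.g. the <Field name="xyz" value="" /> case

                // goes through IConvertible: String -> DateTime/Decimal/Int32/String
                return (T)Convert.ChangeType(raw, typeof(T), CultureInfo.InvariantCulture);
            }
        }

    Usage would look like GetValueFromAttribute<DateTime>(doc.Descendants("Field"), "abc"). The trade-off versus Linq-to-XML's explicit cast operators is that a conversion failure surfaces at runtime rather than compile time.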

    Read the article

  • Is it a bad idea for the same object to produce different side effects after a method call?

    - by Nathan W
    Hi all, I'm having a bit of a design issue (again). Say I have this ButtonPad object: it is a wrapper over an object inside a COM object. At the moment it has a method on it called CreateInto(IComObject). To make a new button pad in the COM object, you do:

        ButtonPad pad = new ButtonPad();
        pad.Title = "Hello";
        // Set some more properties.
        pad.CreateInto(Cominstance);

    The CreateInto method will execute the right commands to build the button pad in the COM object. After it has been created, any calls against the pad are forwarded to the underlying object, so:

        pad.Title = "New title";

    will call the COM object to set the title and also set the internal title variable. Basically, any call before the CreateInto method only affects the .NET object; anything after has the side effect of also calling the COM object. I'm not very good at sequence diagrams, but here is my attempt to explain what's going on: [sequence diagram in the original post]

    This doesn't feel good to me; it feels like I'm lying to the user about what the button pad does. I was going to have an object called WrappedButtonPad, which is returned from CreateInto, and the user could make calls against that to make changes to the COM object. But I feel that having two objects that almost do the same thing and differ only by name might be even worse. Are these valid designs, or am I right to be worried? How else would you handle an object that can create and query a COM object?
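
    A hedged sketch of the two-object split being considered (every name here, including the IComButtonPad interface, is hypothetical): the definition object is honest plain data, and CreateInto returns a separate live handle whose every mutation forwards to COM, so each type has exactly one behaviour.

        // Hypothetical COM-facing interface, for illustration only.
        public interface IComButtonPad
        {
            void Create(string title);
            void SetTitle(string title);
        }

        // Plain data: mutating it never touches the COM object.
        public class ButtonPadDefinition
        {
            public string Title { get; set; }

            public LiveButtonPad CreateInto(IComButtonPad com)
            {
                com.Create(Title);
                return new LiveButtonPad(com, Title);
            }
        }

        // Live handle: every change is forwarded to the COM object.
        public class LiveButtonPad
        {
            private readonly IComButtonPad _com;
            private string _title;

            internal LiveButtonPad(IComButtonPad com, string title)
            {
                _com = com;
                _title = title;
            }

            public string Title
            {
                get { return _title; }
                set { _title = value; _com.SetTitle(value); }
            }
        }

    The names differ by more than a suffix here, which addresses the "two objects that differ only by name" worry: one is a definition you fill in, the other is a handle to something that exists.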

    Read the article

  • Average over a timeframe with missing data

    - by BHare
    Assuming a table such as:

        UID  Name    Datetime             Users
        4    Room 4  2012-08-03 14:00:00  3
        2    Room 2  2012-08-03 14:00:00  3
        3    Room 3  2012-08-03 14:00:00  1
        1    Room 1  2012-08-03 14:00:00  2
        3    Room 3  2012-08-03 14:15:00  1
        2    Room 2  2012-08-03 14:15:00  4
        1    Room 1  2012-08-03 14:15:00  3
        1    Room 1  2012-08-03 14:30:00  6
        1    Room 1  2012-08-03 14:45:00  3
        2    Room 2  2012-08-03 14:45:00  7
        3    Room 3  2012-08-03 14:45:00  8
        4    Room 4  2012-08-03 14:45:00  4

    I wanted to get the average user count of each room (1, 2, 3, 4) from 2PM to 3PM. The problem is that sometimes a room may not "check in" at the 15-minute interval, so the assumption has to be made that the last known user count is still valid. For example, room 4 never checked in at 2012-08-03 14:15:00, so it must be assumed that room 4 had 3 users at 2012-08-03 14:15:00, because that is what it had at 2012-08-03 14:00:00.

    This carries through, so the average user counts I am looking for are as follows (the carried-forward values are the assumed numbers based on the previous known check-in):

        Room 1: (2 + 3 + 6 + 3) / 4 = 3.5
        Room 2: (3 + 4 + 4 + 7) / 4 = 4.5
        Room 3: (1 + 1 + 1 + 8) / 4 = 2.75
        Room 4: (3 + 3 + 3 + 4) / 4 = 3.25

    I am wondering if it's possible to do this with SQL alone. If not, I am curious about an ingenious PHP solution that isn't just brute-force math, such as my quick, inaccurate pseudo code:

        foreach ($rooms_id_array as $room_id) {
            $SQL = "SELECT * FROM `table`
                    WHERE (`UID` == $room_id
                        && `Datetime` >= 2012-08-03 14:00:00
                        && `Datetime` <= 2012-08-03 15:00:00)";
            $result = query($SQL);
            if (count($result) < 4) {
                // go through each date, find what is missing,
                // and then go to the previous date and use that instead
            } else {
                foreach ($result) $sum += $result;
                $avg = $sum / 4;
            }
        }
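
    Whatever layer ends up doing it, the gap-filling itself is last-observation-carried-forward across the four quarter-hour slots. A sketch of just that logic, with the sample data hard-coded (C# chosen purely for illustration; the language of the final solution is the open question above):

        using System;
        using System.Collections.Generic;

        class Program
        {
            static void Main()
            {
                // (room, slot) -> users, from the sample table; slots 0..3 are 14:00..14:45
                var checkIns = new Dictionary<(int room, int slot), int>
                {
                    [(4, 0)] = 3, [(2, 0)] = 3, [(3, 0)] = 1, [(1, 0)] = 2,
                    [(3, 1)] = 1, [(2, 1)] = 4, [(1, 1)] = 3,
                    [(1, 2)] = 6,
                    [(1, 3)] = 3, [(2, 3)] = 7, [(3, 3)] = 8, [(4, 3)] = 4,
                };

                foreach (int room in new[] { 1, 2, 3, 4 })
                {
                    int last = 0, sum = 0;
                    for (int slot = 0; slot < 4; slot++)
                    {
                        // carry the previous count forward when a check-in is missing
                        if (checkIns.TryGetValue((room, slot), out int users))
                            last = users;
                        sum += last;
                    }
                    Console.WriteLine("Room {0}: {1}", room, sum / 4.0);  // 3.5, 4.5, 2.75, 3.25
                }
            }
        }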

    Read the article

  • What different terms mean the same thing (or don't, but people think they do)?

    - by Matthew Jones
    One of the pitfalls I run into on a daily basis is customers saying one thing while meaning another. Usually this is just due to a miscommunication somewhere, but occasionally they are, in fact, saying the same thing I am, just using a different term.

    For example, one of my customers the other day mentioned a feature he called "find as you type." Being a little confused, I asked him what he meant, and he described the feature in Google where, once you start typing a search query, Google suggests other popular queries that match the letters you have typed. Click! He meant AutoComplete! He was not wrong; it is just that I had never heard that term before.

    In the spirit of reducing confusion: what terms can you think of that are different but mean, essentially, the same thing? Also, what terms do people think mean the same thing, but don't? Please differentiate between the two, and please give only one set of terms per answer, so we can vote on the best ones.

    Read the article

  • Silverlight 4 webclient authentication - anyone have this working yet?

    - by Toran Billups
    So one of the best parts about the new Silverlight 4 beta is that they finally implemented the big missing feature of the networking stack: network credentials! In the code below I have a working request setup, but for some reason I get a "security error" when the request comes back. Is this because twitter.com rejected my API call, or is it something I'm missing in code? It might be good to point out that when I watch this code execute via Fiddler, it shows that the xml file for cross-domain access is pulled down successfully, but that is the last request shown by Fiddler...

        public void RequestTimelineFromTwitterAPI()
        {
            WebRequest.RegisterPrefix("https://", System.Net.Browser.WebRequestCreator.ClientHttp);
            WebClient myService = new WebClient();
            myService.AllowReadStreamBuffering = true;
            myService.UseDefaultCredentials = false;
            myService.Credentials = new NetworkCredential("username", "password");
            myService.UseDefaultCredentials = false;
            myService.OpenReadCompleted += new OpenReadCompletedEventHandler(TimelineRequestCompleted);
            myService.OpenReadAsync(new Uri("https://twitter.com/statuses/friends_timeline.xml"));
        }

        public void TimelineRequestCompleted(object sender, System.Net.OpenReadCompletedEventArgs e)
        {
            // anytime I query for e.Result I get a security error
        }

    Read the article

  • logic before dispatcher + controller?

    - by Spoonface
    I believe that for a typical MVC web application, the router/dispatcher routine decides which controller is loaded based primarily on the area requested in the URL by the user. However, in addition to checking the URL query string, I also like to use the dispatcher to check whether the user is currently logged in when deciding which controller to load. For example, if they are logged in and request the login page, the dispatcher loads their account instead. But is this a fairly non-standard design? Would it violate MVC in any way?

    I only ask because the examples I've read through this weekend had no major logic performed before the dispatcher routine; they commonly check whether the user is logged in per controller, and then redirect where necessary. But to me it seems odd to redirect a logged-in user from the login area to the account area if you could just load the account controller in the first place. I hope I've explained my consternation well enough, but could anyone offer some details on how they handle logged-in users and similar session data?
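
    For what it's worth, the dispatch-time swap the poster describes is small to express. A minimal front-controller sketch (all names hypothetical, C# used for illustration):

        using System;
        using System.Collections.Generic;

        interface IController { void Handle(); }

        class LoginController : IController
        {
            public void Handle() { Console.WriteLine("render login form"); }
        }

        class AccountController : IController
        {
            public void Handle() { Console.WriteLine("render account page"); }
        }

        class Dispatcher
        {
            private readonly Dictionary<string, Func<IController>> _routes =
                new Dictionary<string, Func<IController>>
                {
                    { "login", () => new LoginController() },
                    { "account", () => new AccountController() },
                };

            public void Dispatch(string area, bool loggedIn)
            {
                // the decision in question: logged-in users requesting the
                // login area are dispatched straight to the account controller
                if (area == "login" && loggedIn)
                    area = "account";

                _routes[area]().Handle();
            }
        }

        class Program
        {
            static void Main()
            {
                new Dispatcher().Dispatch("login", loggedIn: true);  // render account page
            }
        }

    Whether this belongs in the dispatcher or in a per-controller check is exactly the design judgment the question asks about; the sketch only shows that the dispatcher route stays small and explicit.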

    Read the article

  • Runtime Error 1004 using Select with several workbooks

    - by Johaen
    I have an Excel workbook which pulls data from two other workbooks. Since the data changes hourly, there is the possibility that this macro is used more than once a day for the same data. So I just want to select all data previous to this date period and delete it; later on, the data will be copied in anyway. But as soon as I use

        WBSH.Range(Cells(j, "A"), Cells(lastRow - 1, "M")).Select

    the code stops with error 1004, "Application-defined or object-defined error". Below is a snippet of the code with the relevant part. What is wrong here?

        'Set source workbook
        Dim currentWb As Workbook
        Set currentWb = ThisWorkbook
        Set WBSH = currentWb.Sheets("Tracking")

        'Query which data from the tracking files should get pulled out to the file
        CheckDate = Application.InputBox(("From which date do you want to get data?" & vbCrLf & "Format: yyyy/mm/dd "), "Tracking data", Format(Date - 1, "yyyy/mm/dd"))

        'States the last entry made; know where to start (currentWb file)
        With currentWb.Sheets("Tracking")
            lastRow = .Range("D" & .Rows.Count).End(xlUp).Row
            lastRow = lastRow + 1
        End With

        'Just the last 250 entries get checked, since not so many entries are made in one week
        j = lastRow - 250

        'Check if there is already data for the look-up date in the analyses sheet, and if so delete those records
        Do
            j = j + 1
            'Exit if there is no data to compare, to prevent overflow
            If WBSH.Cells(j + 1, "C").Value = "" Then
                Exit Do
            End If
        Loop While WBSH.Cells(j, "C").Value < CheckDate

        If j <> lastRow - 1 Then
            'WBSH.Range(Cells(j, "A"), Cells(lastRow - 1, "M")).Select
            'Selection.ClearContents
        End If

    Thank you!

    Read the article

  • Approach for caching data from data logger

    - by filip-fku
    Greetings, I've been working on a C#.NET app that interacts with a data logger. The user can query and obtain logs for a specified time period, and view plots of the data. Typically a new data log is created every minute and stores a measurement for a few parameters. To get meaningful information out of the logger, a reasonable number of logs needs to be acquired: data for at least a few days. The hardware interface is a UART-to-USB module on the device, which restricts transfers to a maximum of about 30 logs/second. This becomes quite slow when reading in the data acquired over a number of days or weeks.

    What I would like to do is improve the perceived performance for the user. I realize that with the hardware speed limitation, the user will have to wait for the full download cycle at least the first time they acquire a larger set of data. My goal is to cache all data seen by the app, so that it can be obtained faster if ever requested again. The approach I have been considering is to use a light database, like SqlServerCe, that can store the data logs as they are received. I am then hoping to first search the cache prior to querying the device for logs. The cache would be updated with any logs obtained by the request that were not already cached.

    Finally, my question: would you consider this to be a good approach? Are there any better alternatives you can think of? I've tried to search SO and Google for reinforcement of the idea, but I mostly run into discussions of web request/content caching. Thanks for any feedback!
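
    A minimal sketch of the cache-first read path being described, assuming one log per minute and using hypothetical ILogStore/IDataLogger abstractions (SQL Server CE would sit behind ILogStore):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class LogEntry
        {
            public DateTime Timestamp { get; set; }
            public double[] Values { get; set; }
        }

        // hypothetical stand-ins for the local cache and the UART-limited device
        public interface ILogStore
        {
            IList<LogEntry> GetRange(DateTime from, DateTime to);
            void Save(IEnumerable<LogEntry> entries);
        }

        public interface IDataLogger
        {
            IList<LogEntry> Download(DateTime from, DateTime to);  // ~30 logs/second
        }

        public class CachingLogReader
        {
            private readonly ILogStore _store;
            private readonly IDataLogger _device;

            public CachingLogReader(ILogStore store, IDataLogger device)
            {
                _store = store;
                _device = device;
            }

            public IList<LogEntry> GetLogs(DateTime from, DateTime to)
            {
                IList<LogEntry> cached = _store.GetRange(from, to);

                // one log per minute means a complete range has a known size
                int expected = (int)(to - from).TotalMinutes + 1;
                if (cached.Count >= expected)
                    return cached;  // served entirely from the cache

                // simple policy: download the range, cache only what was missing
                IList<LogEntry> fresh = _device.Download(from, to);
                var known = new HashSet<DateTime>(cached.Select(e => e.Timestamp));
                _store.Save(fresh.Where(e => !known.Contains(e.Timestamp)));
                return fresh;
            }
        }

    A smarter policy would download only the missing sub-ranges rather than the whole window; the count check above is just the simplest version that still answers "can this request be served locally?".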

    Read the article

  • reporting tool/viewer for large datasets

    - by FrustratedWithFormsDesigner
    I have a data processing system that generates very large reports on the data it processes. By "large" I mean that a "small" execution of this system produces about 30 MB of reporting data when dumped into a CSV file, and a large dataset is about 130-150 MB (I'm sure someone out there has a bigger idea of "large", but that's not the point... ;)

    Excel has the ideal interface for the report consumers in the form of its Data Lists: users can filter and segment the data on the fly to see the specific details they are interested in; they can also add notes and markup to the reports, create charts, graphs, etc. They know how to do all this, and it's much easier to let them do it if we just give them the data. Excel was great for the small test datasets, but it cannot handle these large ones. Does anyone know of a tool that can provide a similar interface to Excel data lists, but that can handle much larger files?

    The next tool I tried was MS Access. I found that the Access file bloats hugely (a 30 MB input file leads to about a 70 MB Access file, and when I open the file, run a report and close it, the file's at 120-150 MB!), and the import process is slow and very manual (currently, the CSV files are created by the same PL/SQL script that runs the main process, so there's next to no intervention on my part). I also tried an Access database with linked tables to the database tables that store the report data, and that was many times slower (for some reason, sqlplus could query and generate the report file in a minute or so, while Access would take anywhere from 2-5 minutes for the same data).

    (If it helps, the data processing system is written in PL/SQL and runs on Oracle 10g.)

    Read the article

  • (Rails) Creating multi-dimensional hashes/arrays from a data set...?

    - by humble_coder
    Hi All, I'm having a bit of an issue wrapping my head around something. I'm currently using a hacked version of Gruff in order to accommodate scatter plots. That said, the data is entered in the form of:

        g.data("Person1", [12, 32, 34, 55, 23], [323, 43, 23, 43, 22])

    ...where the first item is the entity, the second item is the X coordinates, and the third item is the Y coordinates. I currently have a recordset of items from a table with the columns POINT, VALUE, TIMESTAMP. Due to the "complex" calculations involved, I must grab everything using a single query or risk way too much DB activity. That said, I have a list of items for which I need to dynamically collect all data from the recordset into a hash (or array of arrays) for the creation of the data items. I was thinking something like the following:

        @h = {}
        e = Events.find_by_sql(my_query)
        e.each do |event|
          @h["#{event.Point}"][x] = event.timestamp
          @h["#{event.Point}"][y] = event.value
        end

    Obviously that's not the correct syntax, but that's where my brain is going. Could someone clean this up for me or suggest a more appropriate mechanism by which to accomplish this? Basically the main goal is to keep the data for each point name grouped (but remember that the recordset has them all). Much appreciated.

    EDIT 1

        g = Gruff::Scatter.new("600x350")
        g.title = self.name
        e = Event.find_by_sql(@sql)
        h = {}
        e.each do |event|
          h[event.Point.to_s] ||= {}
          h[event.Point.to_s].merge!({event.Timestamp.to_i => event.Value})
        end
        h.each do |p|
          logger.info p[1].values.inspect
          g.data(p[0], p[1].keys, p[1].values)
        end
        g.write(@chart_file)

    Read the article

  • Stored procedure for generic MERGE

    - by GilliVilla
    I have a set of 10 tables in a database (DB1). And there are 10 tables in another database (DB2) with exactly the same schema, on the same SQL Server 2008 R2 database server machine. The 10 tables in DB1 are frequently updated with data. I intend to write a stored procedure that would run once every day to synchronize the 10 tables in DB1 with DB2. The stored procedure would make use of the MERGE statement.

    Now, my aim is to make this as generic and parametrized as possible: that is, accommodate more tables down the line, and accommodate different source and target DB names. Definitely no hard coding is intended. This is my algorithm so far:

        1. Have the database names as parameters.
        2. Have the first query within the stored procedure result in the names of the 10 tables from a lookup table (this can be 10, 20 or whatever).
        3. Have a generic MERGE statement that does the sync for each of the above set of tables (based on the primary key?).

    This is where I need more input. What is the best way to achieve this stored procedure? SQL syntax would be helpful.
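
    Not the T-SQL itself, but a sketch of the generic shape: since the table list comes from a lookup, the per-table MERGE text can be generated from metadata and executed as dynamic SQL (e.g. via sp_executesql). The generation is illustrated here in C# with hypothetical names; the same string construction could live inside the stored procedure.

        using System;

        static class MergeBuilder
        {
            // builds one MERGE statement for a table whose schema matches in both databases
            public static string Build(string sourceDb, string targetDb, string table,
                                       string keyColumn, string[] dataColumns)
            {
                string setList = string.Join(", ",
                    Array.ConvertAll(dataColumns, c => "target.[" + c + "] = source.[" + c + "]"));
                string colList = string.Join(", ",
                    Array.ConvertAll(dataColumns, c => "[" + c + "]"));
                string srcList = string.Join(", ",
                    Array.ConvertAll(dataColumns, c => "source.[" + c + "]"));

                return "MERGE INTO [" + targetDb + "].dbo.[" + table + "] AS target\n" +
                       "USING [" + sourceDb + "].dbo.[" + table + "] AS source\n" +
                       "    ON target.[" + keyColumn + "] = source.[" + keyColumn + "]\n" +
                       "WHEN MATCHED THEN\n" +
                       "    UPDATE SET " + setList + "\n" +
                       "WHEN NOT MATCHED BY TARGET THEN\n" +
                       "    INSERT ([" + keyColumn + "], " + colList + ")\n" +
                       "    VALUES (source.[" + keyColumn + "], " + srcList + ")\n" +
                       "WHEN NOT MATCHED BY SOURCE THEN\n" +
                       "    DELETE;";
            }
        }

    For example, MergeBuilder.Build("DB1", "DB2", "Customers", "Id", new[] { "Name", "Email" }) yields one complete MERGE; looping it over the lookup table's rows covers all 10 (or 20) tables.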

    Read the article

  • Join with Three Tables

    - by John
    Hello. In MySQL, I am using the following three tables (their fields are listed after their titles):

        comment:    commentid, loginid, submissionid, comment, datecommented
        login:      loginid, username, password, email, actcode, disabled, activated, created, points
        submission: submissionid, loginid, title, url, displayurl, datesubmitted

    I would like to display "datecommented" and "comment" for a given "username", where "username" equals a variable called $profile. I would also like to make the "comment" a hyperlink to

        http://www...com/.../comments/index.php?submission='.$rowc["title"].'&submissionid='.$rowc["submissionid"].'&url='.$rowc["url"].'&countcomments='.$rowc["countComments"].'&submittor='.$rowc["username"].'&submissiondate='.$rowc["datesubmitted"].'&dispurl='.$rowc["displayurl"].'

    where countComments equals COUNT(c.commentid) and $rowc is part of the query listed below. I tried using the code below to do what I want, but it didn't work. How could I change it to make it do what I want? Thanks in advance, John

        $sqlStrc = "SELECT s.loginid, s.title, s.url, s.displayurl, s.datesubmitted,
                           l.username, l.loginid, s.title, s.submissionid,
                           c.comment, c.datecommented,
                           COUNT(c.commentid) countComments
                    FROM submission s
                    INNER JOIN login l ON s.loginid = l.loginid
                    LEFT OUTER JOIN comment c ON c.loginid = l.loginid
                    WHERE l.username = '$profile'
                    GROUP BY c.loginid
                    ORDER BY s.datecommented DESC
                    LIMIT 10";

        $resultc = mysql_query($sqlStrc);
        $arrc = array();
        echo "<table class=\"samplesrec1c\">";
        while ($rowc = mysql_fetch_array($resultc)) {
            $dtc = new DateTime($rowc["datecommented"], $tzFromc);
            $dtc->setTimezone($tzToc);
            echo '<tr>';
            echo '<td class="sitename3c">'.$dtc->format('F j, Y &\nb\sp &\nb\sp g:i a').'</a></td>';
            echo '<td class="sitename1c"><a href="http://www...com/.../comments/index.php?submission='.$rowc["title"].'&submissionid='.$rowc["submissionid"].'&url='.$rowc["url"].'&countcomments='.$rowc["countComments"].'&submittor='.$rowc["username"].'&submissiondate='.$rowc["datesubmitted"].'&dispurl='.$rowc["displayurl"].'">'.stripslashes($rowc["comment"]).'</a></td>';
            echo '</tr>';
        }
        echo "</table>";

    Read the article

  • Need help/suggestions for creating fantasy sports scoring databases and queries

    - by MGumbel
    I'm trying to create a website for my friends and me to keep track of fantasy sports scoring. So far I've been doing the calculations and storage in Excel, which is very tedious. I'm trying to make it more simplified and automated through a SQL database that I can then wrap a web app around to enter daily stat updates.

    It's premised on our participation in another commercial site where we trade virtual shares of athletes, and thus acquire an "ownership percentage" in each athlete. For instance, if there are 100 shares of AROD and I own 10 shares, then I own 10%. The site then applies this to traditional baseball rotisserie scoring. So, for instance, if AROD has 1 HR today, then his adjusted HR stat would be 1.10. If he also has 2 RBIs, then his adjusted RBI stat today would be 2.20, based on (2 x 1.10): the 1 to normalize the stat, and the .10 to represent the ownership percentage.

    All the stats for my team are then summed each day and added to my stat history to come to an aggregated total. After that, points are allocated based on the ranking of each participant in each category at the end of the day. E.g., if there are 10 participants and I have the highest total aggregate number of adjusted HRs, then I get 10 points. The points are then summed across the different stat categories to come up with a total point ranking for that day. An added difficulty is that ownership percentages can change on a daily basis.

    So far, in playing around with different schemas, I don't know that having a separate table for each athlete's stats and each player's ownership percentages is the wisest choice. It seems to me that simply having two tables, one that contains the daily stat information for each athlete and another that shows the ownership percentage of each player, would suffice. My friend suggested using a start and end date for each ownership percentage to represent the potential daily changes in this category. I'm admittedly new to database development, so any suggestions on query code would be appreciated.

    Read the article

  • NHibernate - Retrieving Lots of Data Becomes Exponentially Slow

    - by nfplee
    Hi, I have an issue where, when I retrieve lots of data in NHibernate (such as when producing a report), the page becomes exponentially slower the more data it has to retrieve. I found the following article: http://nhforge.org/blogs/nhibernate/archive/2008/10/30/bulk-data-operations-with-nhibernate-s-stateless-sessions.aspx It explains why doing bulk data operations in NHibernate is slow (the first-level cache grows too large) and says you should use the IStatelessSession instead.

    The trouble I have is that I don't wish to tie my application to NHibernate, so I've added a wrapper around ISession. I then use Linq as my query mechanism, but IStatelessSession does not support Linq (it may do in NHibernate 3, but the Linq provider is not stable as it stands at the moment). I then read that you could do a clear after so many iterations to flush out the first-level cache. The problem now is that you can't use lazy loading: the Linq provider doesn't allow you to override the mapping defined (or eagerly fetch the additional data), so whenever I grab data which is lazy-loaded after I have cleared the session, an exception is thrown.

    I'm completely lost on what to do now. I like the ease of producing reports with Linq, but the limitations of the inbuilt Linq provider in NHibernate seem to be holding me back. I'd really appreciate it if someone could show me an alternative approach. Thanks
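
    For reference, a sketch of the batch-and-clear approach mentioned above, written against the poster's own wrapper rather than NHibernate directly (the query is whatever IQueryable the wrapper exposes, and clearSession is assumed to delegate to ISession.Clear). The key constraint is that everything the report needs must be read before each clear:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Order
        {
            public int Id { get; set; }
            public string CustomerName { get; set; }
        }

        public static class ReportRunner
        {
            public static List<string> Run(IQueryable<Order> query, Action clearSession,
                                           int batchSize = 500)
            {
                var rows = new List<string>();
                for (int page = 0; ; page++)
                {
                    var batch = query.OrderBy(o => o.Id)  // stable order makes paging deterministic
                                     .Skip(page * batchSize)
                                     .Take(batchSize)
                                     .ToList();
                    if (batch.Count == 0)
                        break;

                    // read everything the report needs while the entities are still attached
                    foreach (var order in batch)
                        rows.Add(order.Id + ": " + order.CustomerName);

                    clearSession();  // evict the first-level cache between batches
                }
                return rows;
            }
        }

    This keeps the first-level cache bounded at one batch without giving up the Linq surface; it does not solve the lazy-loading-after-clear problem for associations that are only touched later.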

    Read the article

  • Why do I get null objects in a many-to-many bag?

    - by Jim Geurts
    I have a bag defined for a many-to-many list:

        <class name="Author" table="Authors">
          <id name="Id" column="AuthorId">
            <generator class="identity" />
          </id>
          <property name="Name" />
          <bag name="Books" table="Author_Book_Map" where="IsDeleted=0" fetch="join">
            <key column="AuthorId" />
            <many-to-many class="Book" column="BookId" where="IsDeleted=0" />
          </bag>
        </class>

    If I return all Author objects using something like the following, I will get what initially appear to be duplicate Author records:

        Session.Query<Author>().List<Author>()

    The extra Author objects are created when an author is mapped to Book objects that have both IsDeleted = 1 and IsDeleted = 0. Rather than creating one Author object with an enumerable that contains only the books with IsDeleted = 0, it will create two Author objects. The first Author object has a Books enumerable that contains the books with IsDeleted = 0. The second Author object contains an enumerable of null Book objects. Similarly, if an author has only one book map, and that map points to a book with IsDeleted = 1, then an Author object is returned whose Books collection holds one null object.

    I'm thinking part of the problem stems from the map table rows satisfying the where condition on the bag element while not meeting the many-to-many where condition. This is happening with NHibernate version 3.0.0.4980. Is this a configuration issue or something else?

    Read the article

  • Slow Response in checkbox using JQuery

    - by Dean
    Hi to all. I'm trying to optimize a website. The flow is: I query a certain table and show all of its data entries in a page (with toolbars), which works fine. The problem is when I edit the page: when I click a checkbox, I have to wait 2-5 seconds just for the click to register. I limited the view to 5 entries only, but the checkbox response doesn't change. The tables have 100 entries in them.

        function checkAccess(celDiv, id) {
            var celValue = $(celDiv).html();
            if (celValue == 1)
                $(celDiv).html("<input type='checkbox' value='" + $(celDiv).html() + "' checked disabled>");
            else
                $(celDiv).html("<input type='checkbox' value='" + $(celDiv).html() + "' disabled>");

            $(celDiv).click(function() {
                $('input', this).each(function() {
                    tr_idx = $('#detFlex1 tbody tr').index($(this).parent().parent().parent());
                    td_idx = $('#detFlex1 tbody tr:eq(' + tr_idx + ') td').index($(this).parent().parent());
                    td_last = 13;
                    for (var td = td_idx + 1; td <= td_last; td++) {
                        if ($(this).attr('checked') == true) {
                            df[0].rows[tr_idx].cell[td_idx] = 1; // index[1] = Full Access
                            if (td_idx == 3) {
                                df[0].rows[tr_idx].cell[td] = 1;
                            }
                            df[0].rows[tr_idx].cell[2] = 1;
                            if (td_idx > 3) {
                                df[0].rows[tr_idx].cell[2] = 1;
                            }
                        } else {
                            df[0].rows[tr_idx].cell[td_idx] = 0; // index[0] = With Access
                            if (td_idx == 2) {
                                df[0].rows[tr_idx].cell[td] = 0;
                            } else if (td_idx == 3) {
                                df[0].rows[tr_idx].cell[td] = 0;
                            }
                            if (td_idx > 3) {
                                df[0].rows[tr_idx].cell[3] = 0;
                            }
                        }
                    }
                    $('#detFlex1').flexAddData(df[0]);
                    $('.toolbar a[title=Edit Item]').trigger('click');
                });
            });
        }

    I suspect the problem is the above code. Could anyone help me simplify it?

    Read the article

  • Advice on Linq to SQL mapping object design

    - by fearofawhackplanet
    I hope the title and following text are clear; I'm not very familiar with the correct terms, so please correct me if I get anything wrong. I'm using the Linq ORM for the first time and am wondering how to address the following. Say I have two DB tables:

        User
        ----
        Id
        Name

        Phone
        -----
        Id
        UserId
        Model

    The Linq code generator produces a bunch of entity classes. I then write my own classes and interfaces which wrap these Linq classes:

        class DatabaseUser : IUser
        {
            public DatabaseUser(User user)
            {
                _user = user;
            }

            public Guid Id
            {
                get { return _user.Id; }
            }

            ... etc
        }

    So far so good. Now, it's easy enough to find a user's phones from

        Phones.Where(p => p.User == user)

    but surely consumers of the API shouldn't need to write their own Linq queries to get at data, so I should wrap this query in a function or property somewhere.

    So the question is: in this example, would you add a Phones property to IUser or not? In other words, should my interface specifically be modelling my database objects (in which case Phones doesn't belong in IUser), or is it actually simply providing a set of functions and properties which are conceptually associated with a user (in which case it does)? There seem to be drawbacks to both views, but I'm wondering if there is a standard approach to the problem. Or just any general words of wisdom you could share. My first thought was to use extension methods, but in fact that doesn't work in this case.
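
    If the second view wins (IUser as the conceptual user), here is one hedged sketch of the wrapper, assuming the Linq to SQL designer generated a User.Phones association from the foreign key; IPhone and DatabasePhone are hypothetical counterparts of IUser and DatabaseUser:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public interface IPhone
        {
            Guid Id { get; }
            string Model { get; }
        }

        public interface IUser
        {
            Guid Id { get; }
            string Name { get; }
            IEnumerable<IPhone> Phones { get; }  // conceptually "this user's phones"
        }

        class DatabasePhone : IPhone
        {
            private readonly Phone _phone;

            public DatabasePhone(Phone phone) { _phone = phone; }

            public Guid Id { get { return _phone.Id; } }
            public string Model { get { return _phone.Model; } }
        }

        class DatabaseUser : IUser
        {
            private readonly User _user;

            public DatabaseUser(User user) { _user = user; }

            public Guid Id { get { return _user.Id; } }
            public string Name { get { return _user.Name; } }

            public IEnumerable<IPhone> Phones
            {
                get
                {
                    // defers to the generated association; the database query
                    // only runs when a consumer enumerates the collection
                    return _user.Phones.Select(p => (IPhone)new DatabasePhone(p));
                }
            }
        }

    This keeps consumers away from hand-written queries while leaving the interface silent about how the phones are fetched, which is the usual argument for putting Phones on IUser.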

    Read the article

  • Having a problem inserting into database

    - by neo skosana
    I have a stored procedure:

        CREATE PROCEDURE busi_reg(IN instruc VARCHAR(10), IN tble VARCHAR(20), IN busName VARCHAR(50),
            IN busCateg VARCHAR(100), IN busType VARCHAR(50), IN phne VARCHAR(20), IN addrs VARCHAR(200),
            IN cty VARCHAR(50), IN prvnce VARCHAR(50), IN pstCde VARCHAR(10), IN nm VARCHAR(20),
            IN surname VARCHAR(20), IN eml VARCHAR(50), IN pswd VARCHAR(20), IN srce VARCHAR(50),
            IN refr VARCHAR(50), IN sess VARCHAR(50))
        BEGIN
            INSERT INTO b_register
            SET business_name = busName,
                business_category = busCateg,
                business_type = busType,
                phone = phne,
                address = addrs,
                city = cty,
                province = prvnce,
                postal_code = pstCde,
                firstname = nm,
                lastname = surname,
                email = eml,
                password = pswd,
                source = srce,
                ref_no = refr;
        END;

    This is my PHP script:

        $busName = $_POST['bus_name'];
        $busCateg = $_POST['bus_cat'];
        $busType = $_POST['bus_type'];
        $phne = $_POST['phone'];
        $addrs = $_POST['address'];
        $cty = $_POST['city'];
        $prvnce = $_POST['province'];
        $pstCde = $_POST['postCode'];
        $nm = $_POST['name'];
        $surname = $_POST['lname'];
        $eml = $_POST['email'];
        $srce = $_POST['source'];
        $ref = $_POST['ref_no'];

        $result2 = $db->query("CALL busi_reg('$instruc', '$tble', '$busName', '$busCateg', '$busType', '$phne', '$addrs', '$cty', '$prvnce', '$pstCde', '$nm', '$surname', '$eml', '$pswd', '$srce', '$refr', '')");

        if ($result) {
            echo "Data has been saved";
        } else {
            printf("Error: %s\n", $db->error);
        }

    Now the error that I get:

        Commands out of sync; you can't run this command now

    Read the article

  • Indexing/Performance strategies for vast amounts of the same value

    - by DrColossos
    Base information: This is in context to the indexing process of OpenStreetMap data. To simplify the question: the core information is divided into 3 main types with value "W", "R", "N" (VARCHAR(1)). The table has somewhere around ~75M rows, all columns with "W" make up ~42M rows. Existing indexes are not relevant to this question. Now the question itself: The indexing of the data is done via an procedure. Inside this procedure, there are some loops that do the following: [...] SELECT * FROM table WHERE the_key = "W"; [...] The results get looped again and the above query itself is also in a loop. This takes a lot of time and slows down the process massivly. An indexon the_key is obviously useless since all the values that the index might use are the same ("W"). The script itself is running with a speed that is OK, only the SELECTing takes very long. Do I need to create a "special" kind of index that takes this into account and makes the SELECT quicker? If so, which one? need to tune some of the server parameters (they are already tuned and the result that they deliver seem to be good. If needed, I can post them)? have to live with the speed and simply get more hardware to gain more power (Tim Taylor grunt grunt)? Any alternatives to the above points (except rewriting it or not using it)?

    Read the article
