Search Results

Search found 6399 results on 256 pages for 'record'.


  • Selecting random top 3 listings per shop for a range of active advertising shops

    - by GraGra33
    I’m trying to display a list of shops, each with 3 random items from their shop if they have 3 or more listings and are actively advertising. I have 3 tables: one for the shops – “Shops”, one for the listings – “Listings”, and one that tracks active advertisers – “AdShops”. Using the statement below, the listings returned are random, but I’m not getting exactly 3 listings (rows) per shop.

        SELECT AdShops.ID, Shops.url, Shops.image_url, Shops.user_name AS shop_name, Shops.title,
               L.listing_id AS listing_id, L.title AS listing_title, L.price AS price,
               L.image_url AS listing_image_url, L.url AS listing_url
        FROM AdShops
        INNER JOIN Shops ON AdShops.user_id = Shops.user_id
        INNER JOIN Listings AS L ON Shops.user_id = L.user_id
        WHERE (Shops.is_vacation = 0 AND Shops.listing_count > 2
               AND L.listing_id IN (SELECT TOP 3 L2.listing_id
                                    FROM Listings AS L2
                                    WHERE L2.listing_id IN (SELECT TOP 100 PERCENT L3.listing_id
                                                            FROM Listings AS L3
                                                            WHERE (L3.user_id = L.user_id))
                                    ORDER BY NEWID()))
        ORDER BY Shops.shop_name

    I’m stumped. Anyone have any ideas on how to fix it? The ideal solution would be one record per store with the 3 listings (and associated data) in columns rather than rows – is this possible?
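
    One way to get exactly three random rows per shop on SQL Server 2005 or later is ROW_NUMBER() partitioned by shop and ordered by NEWID(); this is a sketch using the table and column names from the question, not a drop-in replacement:

        SELECT shop_name, url, listing_id, listing_title, price
        FROM (
            SELECT S.user_name AS shop_name, S.url, L.listing_id,
                   L.title AS listing_title, L.price,
                   ROW_NUMBER() OVER (PARTITION BY L.user_id ORDER BY NEWID()) AS rn
            FROM AdShops A
            INNER JOIN Shops S ON A.user_id = S.user_id
            INNER JOIN Listings L ON S.user_id = L.user_id
            WHERE S.is_vacation = 0 AND S.listing_count > 2
        ) ranked
        WHERE rn <= 3
        ORDER BY shop_name, rn;

    For the one-row-per-shop layout, the rn value can then be pivoted into columns with conditional aggregation, e.g. MAX(CASE WHEN rn = 1 THEN listing_title END) AS listing1_title, and so on for rn = 2 and 3.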

    Read the article

  • ASP.NET MVC Unit Testing Controllers - Repositories

    - by Brian McCord
    This is more of an opinion-seeking question, so there may not be a "right" answer, but I would welcome arguments as to why your answer is the "right" one. Given an MVC application that is using Entity Framework for the persistence engine, a repository layer, a service layer that basically defers to the repository, and a delete method on a controller that looks like this:

        public ActionResult Delete(State model)
        {
            try
            {
                if( model == null )
                {
                    return View( model );
                }
                _stateService.Delete( model );
                return RedirectToAction("Index");
            }
            catch
            {
                return View( model );
            }
        }

    I am looking for the proper way to unit test this. Currently, I have a fake repository that gets used in the service, and my unit test looks like this:

        [TestMethod]
        public void Delete_Post_Passes_With_State_4()
        {
            //Arrange
            var stateService = GetService();
            var stateController = new StateController( stateService );
            ViewResult result = stateController.Delete( 4 ) as ViewResult;
            var model = (State)result.ViewData.Model;

            //Act
            RedirectToRouteResult redirectResult = stateController.Delete( model ) as RedirectToRouteResult;
            stateController = new StateController( stateService );
            var newresult = stateController.Delete( 4 ) as ViewResult;
            var newmodel = (State)newresult.ViewData.Model;

            //Assert
            Assert.AreEqual( redirectResult.RouteValues["action"], "Index" );
            Assert.IsNull( newmodel );
        }

    Is this overkill? Do I need to check to see if the record actually got deleted (as I already have service and repository tests that verify this)? Should I even use a fake repository here, or would it make more sense just to mock the whole thing? The examples I'm looking at used this model of doing things, and I just copied it, but I'm really open to doing things in a "best practices" way. Thanks.
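
    Since the service and repository already have their own tests, one option is to test only the controller's interaction with the service. A minimal sketch, assuming the service is behind an IStateService interface and using the Moq library (both assumptions, not confirmed by the question):

        [TestMethod]
        public void Delete_Post_Redirects_To_Index_And_Delegates_To_Service()
        {
            // Arrange: mock the service instead of wiring up the fake repository
            var service = new Mock<IStateService>();
            var controller = new StateController(service.Object);
            var model = new State();   // any State instance will do here

            // Act
            var result = controller.Delete(model) as RedirectToRouteResult;

            // Assert: the controller delegated the delete and redirected
            service.Verify(s => s.Delete(model), Times.Once());
            Assert.AreEqual("Index", result.RouteValues["action"]);
        }

    This keeps the controller test about routing and delegation, and leaves "did the row really disappear" to the repository tests.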

    Read the article

  • Where does java.util.logging.Logger store its log?

    - by Harry Pham
    This might be a stupid question, but I am a bit lost with the Java Logger:

        private static Logger logger = Logger.getLogger("order.web.OrderManager");
        logger.info("Removed order " + id + ".");

    Where do I see the log? Also, this quote is from the java.util.logging.Logger documentation: "On each logging call the Logger initially performs a cheap check of the request level (e.g. SEVERE or FINE) against the effective log level of the logger. If the request level is lower than the log level, the logging call returns immediately. After passing this initial (cheap) test, the Logger will allocate a LogRecord to describe the logging message. It will then call a Filter (if present) to do a more detailed check on whether the record should be published. If that passes it will then publish the LogRecord to its output Handlers." Does this mean that if I have three log calls at different request levels:

        logger.log(Level.FINE, "Something");
        logger.log(Level.WARNING, "Something");
        logger.log(Level.SEVERE, "Something");

    and my log level is SEVERE, I can see all three logs, while if my log level is WARNING I can't see the SEVERE log – is that correct? And how do I set the log level?
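
    For what it's worth, the ordering works the other way around: SEVERE is the highest level, so a logger set to SEVERE shows only SEVERE, and one set to WARNING shows WARNING and SEVERE. By default the output goes through the handlers configured in the JRE's logging.properties, which usually means a ConsoleHandler at level INFO, so FINE messages are dropped unless both the logger and a handler are lowered. A small sketch using the standard java.util.logging API:

        import java.util.logging.ConsoleHandler;
        import java.util.logging.Level;
        import java.util.logging.Logger;

        public class LogDemo {
            public static void main(String[] args) {
                Logger logger = Logger.getLogger("order.web.OrderManager");
                logger.setLevel(Level.FINE);                  // logger threshold
                ConsoleHandler handler = new ConsoleHandler();
                handler.setLevel(Level.FINE);                 // handler must allow FINE as well
                logger.addHandler(handler);
                logger.setUseParentHandlers(false);           // optional: avoid duplicate output via the root handler

                logger.log(Level.FINE, "fine message");       // visible only when both thresholds allow it
                logger.log(Level.WARNING, "warning message");
                logger.log(Level.SEVERE, "severe message");
            }
        }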

    Read the article

  • Is there an efficient way in LINQ to use a contains match if and only if there is no exact match?

    - by Peter
    I have an application where I am taking a large number of 'product names' input by a user and retrieving some information about each product. The problem is, the user may input a partial name or even a wrong name, so I want to return the closest matches for further selection. Essentially, if product name A exactly matches a record, return that; otherwise return any contains matches; otherwise return null. I have done this with three separate statements, and I was wondering if there is a more efficient way to do it. I am using LINQ to EF, but I materialize the products to a list first for performance reasons. productNames is a List of product names (input by the user); products is a List of product 'records'.

        var directMatches = (from s in productNames
                             join p in products on s.ToLower() equals p.name.ToLower() into result
                             from r in result.DefaultIfEmpty()
                             select new { Key = s, Product = r });

        var containsMatches = (from d in directMatches
                               from p in products
                               where d.Product == null && p.name.ToLower().Contains(d.Key)
                               select new { d.Key, Product = p });

        var matches = from d in directMatches
                      join c in containsMatches on d.Key equals c.Key into result
                      from r in result.DefaultIfEmpty()
                      select new { d.Key, Product = d.Product ?? (r != null ? r.Product : null) };
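
    Since both lists are already in memory, one way to collapse the three queries is to do the exact-then-contains fallback per name in a single projection. A sketch, keeping the lowercase name property used above:

        var matches = productNames
            .Select(s =>
            {
                var key = s.ToLower();
                var exact = products.FirstOrDefault(p => p.name.ToLower() == key);
                return new
                {
                    Key = s,
                    // fall back to Contains only when there is no exact match
                    Matches = exact != null
                        ? new[] { exact }.ToList()
                        : products.Where(p => p.name.ToLower().Contains(key)).ToList()
                };
            })
            .ToList();

    An empty Matches list then plays the role of the null result for names with no match at all.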

    Read the article

  • How to optimize an SQL query to make it faster

    - by user502083
    Hello everyone. I have a very simple, small database; two of its tables are:

        Node (Node_ID, Node_name, Node_Date)   : Node_ID is the primary key
        Citation (Origin_Id, Target_Id)        : PRIMARY KEY (Origin_Id, Target_Id), each a FK into Node

    Now I write a query that first finds all citations whose Origin_Id has a specific date, and then I want to know the target dates of those records. I'm using SQLite from Python; the Node table has 3,000 records and Citation has 9,000, and my query looks like this inside a function:

        def cited_years_list(self, date):
            c = self.cur
            try:
                c.execute("""select n.Node_Date, count(*)
                             from Node n
                             INNER JOIN (select c.Origin_Id AS Origin_Id, c.Target_Id AS Target_Id, n.Node_Date AS Date
                                         from CITATION c
                                         INNER JOIN NODE n ON c.Origin_Id = n.Node_Id
                                         where CAST(n.Node_Date as INT) = {0}) VW
                             ON VW.Target_Id = n.Node_Id
                             GROUP BY n.Node_Date;""".format(date))
                cited_years = c.fetchall()
                self.conn.commit()
                print('Cited Years are : \n ', str(cited_years))
            except Exception as e:
                print('Cited Years retrieval failed ', e)
            return cited_years

    Then I call this function for some specific years, but it's painfully slow (around 1 minute per year). Although the query returns the right results, it is slow. Would you please give me a suggestion to make it faster? I'd appreciate any idea about optimizing this query. I should also mention that I have indices on Origin_Id and Target_Id, so the inner join should be pretty fast, but it isn't!
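
    The derived table with the nested IN can usually be flattened into two plain joins against Node (once as the citing side, once as the cited side), and binding the date as a parameter lets SQLite reuse the prepared statement. A sketch under those assumptions:

        def cited_years_list(self, date):
            cur = self.cur
            cur.execute(
                """SELECT target.Node_Date, COUNT(*)
                   FROM Citation cit
                   JOIN Node origin ON cit.Origin_Id = origin.Node_ID
                   JOIN Node target ON cit.Target_Id = target.Node_ID
                   WHERE CAST(origin.Node_Date AS INT) = ?
                   GROUP BY target.Node_Date""",
                (int(date),),
            )
            return cur.fetchall()

    If Node_Date could be stored (or compared) as an integer without the CAST, an index on Node_Date would also become usable for the WHERE clause.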

    Read the article

  • design pattern for related inputs

    - by curiousMo
    My question is a design question. Let's say I have a data entry web page with 4 drop-down lists, each depending on the previous one, and a bunch of text boxes:

        ddlCountry (DropDownList)
        ddlState (DropDownList)
        ddlCity (DropDownList)
        ddlBoro (DropDownList)
        txtAddress (TextBox)
        txtZipcode (TextBox)

    and an object that represents a data row with a value for each: countrySeqid, stateSeqid, citySeqid, boroSeqid, address, zipCode. Naturally the country, state, city and boro values will be values of primary keys of some lookup tables. When the user chooses to edit that record, I would load it from the database and load it into the page. The issue that I have is how to streamline loading the DropDownLists. I have some code that grabs the object, looks through its values and moves them to their corresponding input controls in one shot, but in this case I will have to load ddlCountry with its possible values, then assign the selected value, then do the same thing for the rest of the ddls. I guess I am looking for an elegant solution. I am using ASP.NET, but I think that is irrelevant to the question; I am looking more for a design pattern.
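
    One common shape for this is a small binding helper that is called once per lookup-backed control, so the "load the list, then select the stored key" dance lives in one place. A sketch, assuming ASP.NET WebForms DropDownList controls plus a hypothetical lookups service and row object:

        // Hypothetical helper: binds a lookup list and preselects the stored key.
        void BindLookup(DropDownList ddl, IEnumerable<KeyValuePair<int, string>> items, int selectedId)
        {
            ddl.DataSource = items;
            ddl.DataValueField = "Key";
            ddl.DataTextField = "Value";
            ddl.DataBind();
            ddl.SelectedValue = selectedId.ToString();
        }

        // Usage when loading the record for editing (lookups and row are assumed names):
        BindLookup(ddlCountry, lookups.Countries(),                row.CountrySeqid);
        BindLookup(ddlState,   lookups.States(row.CountrySeqid),   row.StateSeqid);
        BindLookup(ddlCity,    lookups.Cities(row.StateSeqid),     row.CitySeqid);
        BindLookup(ddlBoro,    lookups.Boros(row.CitySeqid),       row.BoroSeqid);

    The text boxes can still be filled by the existing one-shot property-to-control code; only the lookup-backed controls go through the helper.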

    Read the article

  • overwrite parameters passed by querystring

    - by opensas
    I have the following problem. I have a web framework built with classic ASP that saves the page state in hidden textboxes and then issues a submit to itself. Before submitting, we have a javascript function that saves the action in a hidden "action" input and then performs the submit. The page loads the state from those hidden texts, reads the action issued, reads extra parameters like the id of the record to edit, and then builds the page accordingly. I'd like to make a URL link that automatically starts the page with the "edit" action on id "x". So I was thinking about building the following URL, for example: http://myapp/user?action=edit&id=23. The problem is that when the page auto-submits, the url string keeps the parameters. I'd like to achieve the following: when the user clicks on http://myapp/user?action=edit&id=23, my page should receive the posted values action=edit and id=23, but the URL should be just http://myapp/user, and both parameters should be kept in the hidden texts... (I wonder if I make myself clear...) Thanks a lot. Saludos, sas. PS: I have a couple of ideas about how to solve it, but I'll post them as answers...
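
    One possible approach (a sketch in classic ASP, not confirmed as the poster's own idea) is a small relay step: when the page is hit with a querystring, copy the values into the existing hidden inputs and immediately re-POST to the same page without a querystring, so the framework only ever sees posted values and the address bar stays at /user:

        <%
        ' Relay: turn /user?action=edit&id=23 into a clean POST to /user
        %>
        <form id="relay" method="post" action="user">
          <input type="hidden" name="action" value="<%= Server.HTMLEncode(Request.QueryString("action")) %>">
          <input type="hidden" name="id"     value="<%= Server.HTMLEncode(Request.QueryString("id")) %>">
        </form>
        <script type="text/javascript">
          document.getElementById("relay").submit();
        </script>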

    Read the article

  • PHP and MySQL echoing out a Table

    - by user1631702
    Okay, so I've done this before, and it worked. I am trying to echo out specific rows of my database in a table. Here is my code:

        <?php
        $connect = mysql_connect("localhost", "xxx", "xxx") or die ("Hey loser, check your server connection.");
        mysql_select_db("xxx");
        $quey1 = "select * from `Ad Requests`";
        $result = mysql_query($quey1) or die(mysql_error());
        ?>
        <table border=1 style="background-color:#F0F8FF;">
        <caption><EM>Student Record</EM></caption>
        <tr>
        <th>Student ID</th>
        <th>Student Name</th>
        <th>Class</th>
        </tr>
        <?php
        while($row=mysql_fetch_array($result)){
            echo "</td><td>";
            echo $row['id'];
            echo "</td><td>";
            echo $row['twitter'];
            echo "</td><td>";
            echo $row['why'];
            echo "</td></tr>";
        }
        echo "</table>";
        ?>

    It gives me no errors, but it just shows a blank table with none of the rows. My question: why won't this show any rows in the table, and what am I doing wrong?
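
    Two things worth checking (observations on the code above, not a confirmed diagnosis): whether the query returns any rows at all, via mysql_num_rows, and the loop itself, which never opens a row and starts with a stray closing tag. A corrected sketch of the loop:

        <?php
        if (mysql_num_rows($result) == 0) {
            echo "<tr><td colspan='3'>No rows found in Ad Requests</td></tr>";
        }
        while ($row = mysql_fetch_array($result)) {
            echo "<tr><td>";                       // open the row and first cell
            echo htmlspecialchars($row['id']);
            echo "</td><td>";
            echo htmlspecialchars($row['twitter']);
            echo "</td><td>";
            echo htmlspecialchars($row['why']);
            echo "</td></tr>";
        }
        echo "</table>";
        ?>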

    Read the article

  • Google App-Engine Java Batch Update

    - by Manjoor
    I need to upload a .csv file and save the records in the bigtable. My application successfully parses 200 records from the csv file and saves them to the table. Here is my code to save the data:

        for (int i = 0; i < lines.length - 1; i++)   // lines holds all records in the csv file
        {
            String line = lines[i];
            // each record has 3 columns: integer, integer, Text
            if (line.length() > 15)
            {
                int n = line.indexOf(",");
                if (n > 0)
                {
                    int ID = Integer.parseInt(line.substring(0, n));
                    int n1 = line.indexOf(",", n + 2);
                    if (n1 > n)
                    {
                        int Col1 = Integer.parseInt(line.substring(n + 1, n1));
                        String Col2 = line.substring(n1 + 1);
                        myTable uu = new myTable();
                        uu.setId(ID);
                        uu.setCol1(Col1);
                        Text t = new Text(Col2);
                        uu.setCol2(t);
                        PersistenceManager pm = PMF.get().getPersistenceManager();
                        pm.makePersistent(uu);
                        pm.close();
                    }
                }
            }
        }

    But when the number of records grows it gives a timeout error. The csv file may have up to 800 records. Is it possible to do this in App Engine (something like a batch update)?
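
    JDO's PersistenceManager can persist a whole collection in one call, and opening it once per file instead of once per row removes most of the per-record overhead. A sketch reusing the entities from the question:

        List<myTable> batch = new ArrayList<myTable>();   // java.util.List / ArrayList
        for (int i = 0; i < lines.length - 1; i++) {
            // ... parse ID, Col1, Col2 exactly as above ...
            myTable uu = new myTable();
            uu.setId(ID);
            uu.setCol1(Col1);
            uu.setCol2(new Text(Col2));
            batch.add(uu);
        }
        PersistenceManager pm = PMF.get().getPersistenceManager();
        try {
            pm.makePersistentAll(batch);   // one batched write instead of ~800 round trips
        } finally {
            pm.close();
        }

    If files ever grow well beyond 800 rows, moving the work into a task queue job is the usual way to stay under the request deadline.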

    Read the article

  • Magento products will not show in category

    - by Aaron
    I've recently been tasked with the build and deployment of a large e-commerce site. In the past we've had to use the client's legacy X-Cart installation for redevelopment (too far integrated with their existing workflow). We'd heard good things about Magento, so I've set up a test install to get to grips with it. After a couple of initial issues, there is a live development site which displays categories on the default theme. The problem we've hit now is that products don't display..! After a lot more in-depth research into this, all I've been able to discover is that quite a number of developers endorse using other solutions entirely, with the other 50% saying that after the steep learning curve the platform is as wonderful as we'd initially been led to believe. Now, my test category is showing, so I know it is configured properly. I've set up three test products and associated them with it (all done following the Magento user guide), checked, double-checked and thrice-checked that the products are enabled and visible individually, yet still the front end says the category has no products in it. I've cleared the cache repeatedly and reset everything possible many times in Index Management, and no products show up. I have to make a call tomorrow morning on whether we're going ahead with Magento. If I can't even get it to show products, I'm going to have to go with something with a more established track record and more community support available. Can anybody advise what could possibly be wrong here?

    Read the article

  • php - create columns from mysql list

    - by user271619
    I have a long list generated from a simple MySQL select query. Currently (as shown in the code below) I am simply creating a list of table rows, one per record, so nothing complicated. However, I want to divide it into more than one column, depending on the number of returned results. I've been wracking my brain over how to count this in the PHP, and I'm not getting the results I need.

        <table>
        <? $query = mysql_query("SELECT * FROM `sometable`");
           while($rows = mysql_fetch_array($query)){ ?>
            <tr>
                <td><?php echo $rows['someRecord']; ?></td>
            </tr>
        <? } ?>
        </table>

    Obviously there's only one column generated. So if the records returned reach 10, I want to create a new column. In other words, if 12 results are returned, I have 2 columns; if I have 22 results, I'll have 3 columns, and so on.
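
    One way is to fetch everything into an array first and let array_chunk do the column math; a sketch assuming 10 records per column as described, with nested tables used purely for layout:

        <?php
        $values = array();
        while ($row = mysql_fetch_array($query)) {
            $values[] = $row['someRecord'];
        }

        $perColumn = 10;                               // start a new column every 10 records
        $columns = array_chunk($values, $perColumn);

        echo "<table><tr>";
        foreach ($columns as $column) {
            echo "<td><table>";
            foreach ($column as $value) {
                echo "<tr><td>" . htmlspecialchars($value) . "</td></tr>";
            }
            echo "</table></td>";
        }
        echo "</tr></table>";
        ?>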

    Read the article

  • Importing Excel spreadsheet data into existing Access DB

    - by Keeb13r
    I've designed an Access 2003 DB with 3 tables: APPLICATIONS, SERVERS, and INSTALLATIONS. Records in the APPLICATIONS and SERVERS tables are uniquely identified by a synthetic primary key (in Access, an "AutoNumber"). The INSTALLATIONS table is essentially a mapping table between APPLICATIONS and SERVERS: it's a list of records of which applications are installed on which servers. A record in the INSTALLATIONS table is also identified by a synthetic primary key, and it consists of an APPLICATION_ID and SERVER_ID for the records in their respective tables. I have an Excel 2003 spreadsheet I would like to import into this database, but it's proving difficult. The spreadsheet is made up of several tabs/worksheets, each one representing a server with its own listing of installed applications. I'm not sure how to proceed with an import: the "Get External Data -- Import" feature in Access has an import "In an Existing Table" option, but it's greyed out. I'm also unsure how I build the relationships between applications and servers for importing records into the INSTALLATIONS table. I had previously fooled around with adding some security to the Access DB file; I think I removed everything, but perhaps I didn't and that's causing the problem? Some sample data from the Excel spreadsheet:

        SERVER101: Adobe Reader 9, BMC Remedy User 7.0, HostExplorer 2008, Microsoft Office 2003, Microsoft Office 2007, Notepad++
        SERVER102: Adobe Reader 9, DameWare Mini Remote Control, Microsoft Office 2003, Microsoft .NET Framework 3.5 SP1, Oracle 9.2
        SERVER103: AWDView, EXTRA! Personal Client 32-bit, Microsoft Office 2003, Microsoft .NET Framework 3.5 SP1, Snagit 9.1, WinZip 12.1

    The Access DB design is very simple:

        APPLICATION:  APPLICATION_ID (AutoNumber), APPLICATION_NAME (varchar)
        SERVER:       SERVER_ID (AutoNumber), SERVER_NAME (varchar)
        INSTALLATION: INSTALLATION_ID (AutoNumber), APPLICATION_ID (number), SERVER_ID (number)
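
    One way to build the INSTALLATION rows, once the spreadsheet data has been imported into a flat staging table (the name RAW_IMPORT and its SERVER_NAME/APPLICATION_NAME columns are my own assumption, one row per installed application), is a single append query that looks up both synthetic keys:

        INSERT INTO INSTALLATION (APPLICATION_ID, SERVER_ID)
        SELECT a.APPLICATION_ID, s.SERVER_ID
        FROM (RAW_IMPORT AS r
              INNER JOIN APPLICATION AS a ON r.APPLICATION_NAME = a.APPLICATION_NAME)
              INNER JOIN [SERVER] AS s ON r.SERVER_NAME = s.SERVER_NAME;

    The APPLICATION and SERVER tables themselves can be filled first with SELECT DISTINCT append queries against the same staging table.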

    Read the article

  • Implementing a bitfield using java enums

    - by soappatrol
    Hello, I maintain a large document archive and I often use bit fields to record the status of my documents during processing or when validating them. My legacy code simply uses static int constants such as:

        static int DOCUMENT_STATUS_NO_STATE = 0
        static int DOCUMENT_STATUS_OK = 1
        static int DOCUMENT_STATUS_NO_TIF_FILE = 2
        static int DOCUMENT_STATUS_NO_PDF_FILE = 4

    This makes it pretty easy to indicate the state a document is in by setting the appropriate flags. For example:

        status = DOCUMENT_STATUS_NO_TIF_FILE | DOCUMENT_STATUS_NO_PDF_FILE;

    Since the approach of using static constants is bad practice, and because I would like to improve the code, I was looking to use enums to achieve the same. There are a few requirements, one of them being the need to save the status into a database as a numeric type, so there is a need to transform the enumeration constants to a numeric value. Below is my first approach and I wonder if this is the correct way to go about it?

        class DocumentStatus{

            public enum StatusFlag {
                DOCUMENT_STATUS_NOT_DEFINED(1<<0),
                DOCUMENT_STATUS_OK(1<<1),
                DOCUMENT_STATUS_MISSING_TID_DIR(1<<2),
                DOCUMENT_STATUS_MISSING_TIF_FILE(1<<3),
                DOCUMENT_STATUS_MISSING_PDF_FILE(1<<4),
                DOCUMENT_STATUS_MISSING_OCR_FILE(1<<5),
                DOCUMENT_STATUS_PAGE_COUNT_TIF(1<<6),
                DOCUMENT_STATUS_PAGE_COUNT_PDF(1<<7),
                DOCUMENT_STATUS_UNAVAILABLE(1<<8);

                private final long statusFlagValue;

                StatusFlag(long statusFlagValue) {
                    this.statusFlagValue = statusFlagValue
                }

                public long getStatusFlagValue(){
                    return statusFlagValue
                }
            }

            /**
             * Translates a numeric status code into a Set of StatusFlag enums
             * @param numeric statusValue
             * @return EnumSet representing a document's status
             */
            public EnumSet<StatusFlag> getStatusFlags(long statusValue) {
                EnumSet statusFlags = EnumSet.noneOf(StatusFlag.class)
                StatusFlag.each { statusFlag ->
                    long flagValue = statusFlag.statusFlagValue
                    if ( (flagValue & statusValue) == flagValue ) {
                        statusFlags.add(statusFlag)
                    }
                }
                return statusFlags
            }

            /**
             * Translates a set of StatusFlag enums into a numeric status code
             * @param Set of statusFlags
             * @return numeric representation of the document status
             */
            public long getStatusValue(Set<StatusFlag> flags) {
                long value = 0
                flags.each { statusFlag ->
                    value |= statusFlag.getStatusFlagValue()
                }
                return value
            }

            public static void main(String[] args) {
                DocumentStatus ds = new DocumentStatus();

                Set statusFlags = EnumSet.of(
                    StatusFlag.DOCUMENT_STATUS_OK,
                    StatusFlag.DOCUMENT_STATUS_UNAVAILABLE)
                assert ds.getStatusValue( statusFlags ) == 258   // 256 | 2

                long numericStatusCode = 56
                statusFlags = ds.getStatusFlags(numericStatusCode)
                assert !statusFlags.contains(StatusFlag.DOCUMENT_STATUS_OK)
                assert statusFlags.contains(StatusFlag.DOCUMENT_STATUS_MISSING_TIF_FILE)
                assert statusFlags.contains(StatusFlag.DOCUMENT_STATUS_MISSING_PDF_FILE)
                assert statusFlags.contains(StatusFlag.DOCUMENT_STATUS_MISSING_OCR_FILE)
            }
        }
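
    As an aside, the .each { } blocks in the listing are Groovy closures rather than plain Java. For reference, a plain-Java version of the two conversions over the same StatusFlag enum might look like this (a sketch, using java.util.EnumSet and java.util.Set):

        public static EnumSet<StatusFlag> toFlags(long bits) {
            EnumSet<StatusFlag> flags = EnumSet.noneOf(StatusFlag.class);
            for (StatusFlag f : StatusFlag.values()) {
                if ((bits & f.getStatusFlagValue()) != 0) {   // each flag is a single bit
                    flags.add(f);
                }
            }
            return flags;
        }

        public static long toBits(Set<StatusFlag> flags) {
            long bits = 0;
            for (StatusFlag f : flags) {
                bits |= f.getStatusFlagValue();
            }
            return bits;
        }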

    Read the article

  • What should I do with an over-bloated select-box/drop-down

    - by Tristan Havelick
    All web developers run into this problem when the amount of data in their project grows, and I have yet to see a definitive, intuitive best practice for solving it. When you start a project, you often create forms with select tags to help pick related objects for one-to-many relationships. For instance, I might have a system with Neighbors where each Neighbor belongs to a Neighborhood. In version 1 of the application I create an edit form with a drop-down that simply lists the 5 possible neighborhoods in my geographically limited application. In the beginning this works great: so long as I have maybe 100 records or fewer, my select box loads quickly and is fairly easy to use. However, let's say my application takes off and goes national. Instead of 5 neighborhoods I have 10,000. Suddenly my little drop-down takes forever to load, and once it loads, it's hard to find your neighborhood in the massive alphabetically sorted list. Now, in this particular situation, having hierarchical data and letting users drill down using several dynamically generated drop-downs would probably work okay. However, what is the best solution when the objects/records being selected are not hierarchical in nature? In the past, I've done this with a popup with a search box and a list, but this seems clunky and dated. In today's web 2.0 world, what is a good way to find one object amongst many for one's forms? I've considered using an Ajaxified search box, but this seems to work best for free text, and falls apart a little when the data to be saved is just a reference to another object or record. Feel free to cite specific libraries with generic solutions to this problem, or simply share what you have done in your projects in a more general way.

    Read the article

  • setMessage for Zend_Validate_EmailAddress doesn't work

    - by iSenne
    Hello everybody. I have a form and I want to set my custom errors in it. I am using Zend, and I have the following code...

        //Create validators
        $formMustBeEmail = new Zend_Validate_EmailAddress();
        $formMustBeEmail->setMessage(array(
            Zend_Validate_EmailAddress::INVALID            => "1. Invalid type given, value should be a string",
            Zend_Validate_EmailAddress::INVALID_FORMAT     => "2. '%value%' is no valid email address in the basic format local-part@hostname",
            Zend_Validate_EmailAddress::INVALID_HOSTNAME   => "3. '%hostname%' is no valid hostname for email address '%value%'",
            Zend_Validate_EmailAddress::INVALID_MX_RECORD  => "4. '%hostname%' does not appear to have a valid MX record for the email address '%value%'",
            Zend_Validate_EmailAddress::INVALID_SEGMENT    => "5. '%hostname%' is not in a routable network segment. The email address '%value%' should not be resolved from public network.",
            Zend_Validate_EmailAddress::DOT_ATOM           => "6. '%localPart%' can not be matched against dot-atom format",
            Zend_Validate_EmailAddress::QUOTED_STRING      => "7. '%localPart%' can not be matched against quoted-string format",
            Zend_Validate_EmailAddress::INVALID_LOCAL_PART => "8. '%localPart%' is no valid local part for email address '%value%'",
            Zend_Validate_EmailAddress::LENGTH_EXCEEDED    => "9. '%value%' exceeds the allowed length",

    Then I make the form...

        $this->addElement('text', 'email');
        $emailElement = $this->getElement('email');
        $emailElement
            ->setLabel('Emailadres')
            ->setOrder(1)
            ->setRequired(true)
            ->addValidator($formMustBeTest)
            ->addValidator($formMustBeEmail)
            ->addFilter(new Zend_Filter_StripTags());

    But it doesn't work: I still get the normal errors generated by Zend. Can anyone see what I am doing wrong? Thanks in advance...
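
    One likely culprit (based on the Zend Framework 1 Zend_Validate_Abstract API, not on anything confirmed in the thread): setMessage() expects a single message template plus an optional key, while an array of key => template pairs is what setMessages(), plural, accepts. A sketch of the adjusted call:

        $formMustBeEmail = new Zend_Validate_EmailAddress();
        $formMustBeEmail->setMessages(array(
            Zend_Validate_EmailAddress::INVALID        => "1. Invalid type given, value should be a string",
            Zend_Validate_EmailAddress::INVALID_FORMAT => "2. '%value%' is no valid email address in the basic format local-part@hostname",
            // ... remaining templates as above ...
        ));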

    Read the article

  • Maven + SSDM Build and Runtime Environment Automation

    - by Randy
    Preface: My Company, like most, has several run-time environments and several release versions which themselves are composed of different versions of various jars. For example, let us consider release versions 1.1, 1.2, and 1.3 of Software X, which may be deployed to a developer computer, testing, or production. Software-x-1.1 is itself composed of jarA-0.9.1 and jarB-0.7.5, but software-x-1.3 is composed of jarA-1.7.31 and jarB-0.8.1. Currently we use Spring's PropertyPlaceholderConfigurer to configure run-time variables (such as database credentials), however, properties also change with release versions. We also use Maven 2 POM version 4 to specify which versions of our code need to be used. We place the version numbers of our jars as properties within profiles (dev,test,prod) inside of the parent pom and then reference those version numbers in all project poms. As of right now, we have no way to specify which project versions pertain to a given release other than the most current one. Moreover, we deploy our run-time configurations to the SSDM pickup which then configures and creates the services defined by the built versions of our software. -- Questions: Is there any procedure/tool we can use to build our product by merely providing the run-time environment and version number? IE "build 1.1 dev"? Is there anyway we can store the required jar versions for each release build? We are currently versioning all files, including the parent pom, but merely versioning the parent pom does not record which release version is pertinent to that parent pom. What else can we do to further automate the process of builds? For example, if we could manage run-time configurations within the parent pom that would be a step in the right direction, but that seems like a violation of scope. Any tool outside of our framework is inconceivable at this point, but not in the far future. Summary: How can we automate our build process to the fullest extent without being error prone?

    Read the article

  • rails 4 -- working with js format from ajax

    - by user101289
    I'm still working on learning Rails, and I have a page with team information that gets updated based on a click on a team's icon, which fires an ajax call to the controller to populate some tabs. I've read some good info about how to use format.js in the controller to render a partial from a js.coffee or js.erb file. The problem I'm running into is in the coffeescript, I think. Right now, I'm getting some data called @schedules from the controller and passing it to a schedule.js.coffee file that should render a partial for each record returned and attach it to a table:

        # schedule.js.coffee
        $.each @schedules, (schedule) ->
          ($ '#schedule_data').append("<%= j render(partial: 'schedules/schedule', locals: { s: schedule }) %>")

    This throws an error:

        undefined local variable or method `schedule' for #<#<Class:0x007fe535cd2900>:0x007fe535d32a30>

    I tried simplifying the coffeescript to just log the output:

        $.each @schedules, (schedule) ->
          console.log(schedule)

    but this prints nothing. Am I missing something? I am very inexperienced with coffeescript, but it seems like I should be getting some data: I verified that the schedule items do exist for this team.
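
    One thing worth noting: ERB tags and instance variables like @schedules are only available in per-request view templates, not in asset-pipeline files, so a schedule.js.coffee under app/assets never sees them. A common arrangement (a sketch, assuming the controller responds with format.js) is to put the loop in a view template such as app/views/teams/show.js.erb (adjust to the actual controller and action) and iterate in ERB:

        <% @schedules.each do |schedule| %>
          $('#schedule_data').append('<%= j render(partial: "schedules/schedule", locals: { s: schedule }) %>');
        <% end %>

    The rendered partial is escaped with j (escape_javascript), so each record becomes one append call in the JavaScript the ajax request receives back.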

    Read the article

  • Incrementing value by one over a lot of rows

    - by Andy Gee
    Edit: I think the answer to my question lies in the ability to set user-defined variables in MySQL through PHP; the answer by Multifarious has pointed me in this direction.

    Currently I have a script to cycle over 10M records; it's very slow and it goes like this. I first get a block of 1000 results in an array similar to this:

        $matches[] = array('quality_rank' => 46732, 'db_id' => 5532);
        $matches[] = array('quality_rank' => 12324, 'db_id' => 1234);
        $matches[] = array('quality_rank' => 45235, 'db_id' => 8345);
        $matches[] = array('quality_rank' => 75543, 'db_id' => 2562);

    I then cycle through them one by one and update each record:

        $mult = count($matches) * 2;
        foreach ($matches as $m) {
            $rank++;
            $score = (($m['quality_rank'] + $rank) / ($mult)) * 100;
            $s = "UPDATE `packages_sorted` SET `price_rank` = '" . $rank . "', `deal_score` = '" . $score . "' WHERE `db_id` = '" . $m['db_id'] . "' LIMIT 1";
        }

    This seems like a very slow way of doing it, but I can't find another way to increment the price_rank field by one each time. Can anyone suggest a better method? Note: although I wouldn't usually store this kind of value in a database, I really do need it on this occasion for comparison search queries later on in the project. Any help would be kindly appreciated :)
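
    Building on the user-defined-variable idea, the whole pass can be done server-side in one UPDATE, assuming the rows can be ordered in SQL the same way the $matches array is ordered (quality_rank is used as that ordering here, which is an assumption) and with @mult computed over the whole table rather than per 1000-row block:

        SET @rank := 0;
        SET @mult := (SELECT COUNT(*) * 2 FROM packages_sorted);

        UPDATE packages_sorted
        SET price_rank = (@rank := @rank + 1),
            deal_score = ((quality_rank + @rank) / @mult) * 100
        ORDER BY quality_rank;

    From PHP, the SET statements and the UPDATE would need to be issued as separate mysql_query calls on the same connection, since user variables are per-connection and mysql_query runs one statement at a time.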

    Read the article

  • what is the best way to optimize my json on an asp.net-mvc site

    - by ooo
    I am currently using jqGrid on an ASP.NET MVC site, and we have a pretty slow network (internal application), so the grid seems to take a long time to load (the issue is partly network, partly parsing and rendering). I am trying to minimize what I send over to the client to make it as fast as possible. Here is a simplified view of my controller action that loads data into the grid:

        [AcceptVerbs(HttpVerbs.Get)]
        public ActionResult GridData1(GridData args)
        {
            var paginatedData = applications.GridPaginate(args.page ?? 1, args.rows ?? 10, i => new
            {
                i.Id,
                Name = "<div class='showDescription' id= '" + i.id + "'>" + i.Name + "</div>",
                MyValue = GetImageUrl(_map, i.value, "star"),
                ExternalId = string.Format("<a href=\"{0}\" target=\"_blank\">{1}</a>", Url.Action("Link", "Order", new { id = i.id }), i.Id),
                i.Target,
                i.Owner,
                EndDate = i.EndDate,
                Updated = "<div class='showView' aitId= '" + i.AitId + "'>" + GetImage(i.EndDateColumn, "star") + "</div>",
            });
            return Json(paginatedData);
        }

    So I am building up JSON data (I have about 200 records of the above) and sending it back to the GUI to put in the jqGrid. The one thing I can think of is repeated data: in some of the JSON fields I am wrapping the raw value in HTML, and it is the same HTML on every record. It seems like it would be more efficient if I could just send the data and "append" the HTML around it on the client side. Is this possible? Then I would just be sending the actual data over the wire and have the client side add the surrounding HTML tags (the divs, etc). Also, if there are any other suggestions on how I can minimize the size of my messages, that would be great. I guess at some point these solutions will increase the client-side load, but it may be worth it to cut down on network traffic.
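
    jqGrid does support this through custom formatters in colModel, so the action can return only the raw values (Id, Name, and so on) and the wrapping markup can be rebuilt client-side. A sketch of one column, with the field names assumed from the action above and the exact row-object shape depending on the jsonReader configuration:

        // colModel entry: the server sends plain Name and Id, the div is added client-side
        {
          name: 'Name',
          formatter: function (cellvalue, options, rowObject) {
            return "<div class='showDescription' id='" + rowObject.Id + "'>" + cellvalue + "</div>";
          }
        }

    Enabling gzip compression on the responses and trimming unused fields from the anonymous object are the other low-effort wins for payload size.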

    Read the article

  • What's the best way to "shuffle" a table of database records?

    - by Darth
    Say that I have a table with a bunch of records which I want to present to users in random order. I also want users to be able to paginate back and forth, so I have to preserve some sort of order, at least for a while. The application is basically AJAX-only and it caches already visited pages, so even if I always served random results, when the user tries to go back he will get the previous page, because it will load from the local cache. The problem is that if I return only random results, there might be some duplicates. Each page contains 6 results, so to prevent this I'd have to do something like WHERE id NOT IN (1,2,3,4 ...), where I'd put all the previously loaded IDs. The huge downside of that solution is that it won't be possible to cache anything on the server side, as every user will request different data. An alternative solution might be to create another column for ordering the records and shuffle it every {insert time unit here}. The problem there is that I'd need to set a random number out of a sequence on every record in the table, which would take as many queries as there are records. I'm using Rails and MySQL, if that's of any relevance.
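
    One low-effort option in MySQL is a seeded ORDER BY RAND(N): the same seed reproduces the same ordering, so a per-user or per-session seed gives each visitor a stable shuffle that paginates with LIMIT/OFFSET, has no duplicates across pages, and can still be cached per seed. A sketch with a hypothetical table name and seed:

        -- page 3 of the shuffle for seed 12345, 6 results per page
        SELECT *
        FROM records
        ORDER BY RAND(12345)
        LIMIT 6 OFFSET 12;

    The trade-off is that every query still sorts the whole table, so for very large tables the precomputed shuffle column, reshuffled on a schedule with a single UPDATE, scales better.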

    Read the article

  • MySQL to SQL Server ODBC Connector?

    - by Scott C.
    My boss wants to have data in the MySQL DBs used for our website "linked and synced" with a financial server that has its DB in SQL Server. Sooooo... even though I have no idea how to accomplish this, it sounds like an absolute nightmare, especially since the MySQL DB is most likely going to be hosted in the cloud and not on a machine next to the financial server. Any ideas how to accomplish this (within reason)? Also, his big thing is that he wants to pull up the data from any record a user enters and, using data pulled from that, do all sorts of calculations using ANOTHER program that stores its data (apparently) in SQL Server. Thinking of all the data I might have to convert makes me very uneasy. Please tell me ODBC eliminates complicated junk like this. :/ I'm trying to talk him into just having MySQL do a nightly dump into a CSV file or something and using that (rather than a connector) to update the SQL Server DBs. I guess I'm just not that comfortable with a server and/or programming I have no say over being connected DIRECTLY to my MySQL DB for the website. If there's no good answer for this, can anyone offer a suggestion as to what I can say to talk him out of it? (I'm a low-level IT guy with a decent grasp of programming... but I'm no expert - should I try to push this off to a seasoned IT pro?) Thanks in advance.

    Read the article

  • Designing a chain of states

    - by devoured elysium
    I want to model a kind of FSM (finite state machine). I have a sequence of states (let's say, from StateA to StateZ). This sequence is called a Chain and is implemented internally as a List. I will add states in the order I want them to run. My purpose is to be able to perform a sequence of actions on my computer (for example, mouse clicks); I know this has been done a zillion times. So a state is defined by:

        boolean Precondition()  <- Checks whether some condition is true for this state. For example, if I want to click the Record button of a program, this method would check if the program's process is running. If it is, go to the next state in the chain list; otherwise, go to what was defined as the fail state (generally the first state of them all).
        IState GetNextState()   <- Returns the next state to evaluate. If Precondition() was successful, it should yield the next state in the chain; otherwise it should yield the fail state.
        Run()                   <- Simply checks the Precondition() and sets the internal data so GetNextState() works as expected.

    So, a naive approach would be something like this:

        Chain chain = new Chain();
        //chain.AddState(new State(Precondition, FailState, NextState)  <- method structure
        chain.AddState(new State(new WinampIsOpenCondition(), null, new <problem here, I want to refer to a state that still wasn't defined!>));

    The big problem is that I want to make a reference to a state that at this point still wasn't defined. I could circumvent the problem by using strings to refer to states and keeping an internal hashtable, but isn't there a clearer alternative? I could also pass only the precondition and failure states in the constructor and have the chain, just before execution, put the correct next state into a public property on each state, but that seems kind of awkward.
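
    One other way around the forward reference is to defer next-state resolution with delegates, so a state can mention another state that is constructed later and the lookup only happens when the chain runs. A sketch with a hypothetical State constructor that takes Func<IState> factories instead of IState instances:

        // Hypothetical constructor shape, for illustration only
        State winampOpen = null, clickRecord = null;

        winampOpen = new State(
            new WinampIsOpenCondition(),
            onFail:    () => winampOpen,      // resolved lazily, at run time
            onSuccess: () => clickRecord);    // forward reference is fine: only read later

        clickRecord = new State(
            new RecordButtonVisibleCondition(),
            onFail:    () => winampOpen,
            onSuccess: () => null);           // end of the chain

        chain.AddState(winampOpen);
        chain.AddState(clickRecord);

    GetNextState() then just invokes the appropriate delegate, by which time every state object exists.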

    Read the article

  • AWK: compare apache dates without using regular expression

    - by smallmeans
    I'm writing a log-analysis application and want to grab apache log records between two given dates. Assume that a date is formatted as: 22/Dec/2009:00:19 (day/month/year:hour:minute). Currently, I'm using a regular expression to replace the month name with its numeric value and remove the separators, so the above date is converted to 221220090019, making a date comparison trivial... but running a regex on each record of large files, say one containing a quarter of a million records, is extremely costly. Is there any other method not involving regex substitution? Thanks in advance.

    Edit: here's the function doing the conversion/comparison:

        function dateInRange(t, from, to) {
            sub(/[[]/, "", t);
            split(t, a, "[/:]");
            match("JanFebMarAprMayJunJulAugSepOctNovDec", a[2]);
            a[2] = sprintf("%02d", (RSTART + 2) / 3);
            s = a[3] a[2] a[1] a[4] a[5];
            return s >= from && s <= to;
        }

    "from" and "to" are the interval bounds in the aforementioned format, and "t" is the raw apache log date/time field (e.g. [22/Dec/2009:00:19:36).
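
    Since the field is fixed-width, one regex-free option is substr() plus a month lookup table built once in BEGIN; a sketch that assumes the field always looks like [22/Dec/2009:00:19:36 and builds the same year-month-day-hour-minute key as the function above:

        BEGIN {
            split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", names, " ")
            for (i = 1; i <= 12; i++) month[names[i]] = sprintf("%02d", i)
        }

        # t is the raw field, e.g. "[22/Dec/2009:00:19:36"
        function dateKey(t) {
            #      year            month                 day             hour            minute
            return substr(t, 9, 4) month[substr(t, 5, 3)] substr(t, 2, 2) substr(t, 14, 2) substr(t, 17, 2)
        }

        function dateInRange(t, from, to,    s) {
            s = dateKey(t)
            return s >= from && s <= to
        }

    The per-record cost is then a handful of substr calls and one array lookup, with no sub(), split() or match() on each line.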

    Read the article

  • Nhibernate Fluent domain Object with Id(x => x.id).GeneratedBy.Assigned not saveable

    - by urpcor
    Hi there. I am using, for a legacy db, the corresponding domain classes with mappings. The IDs of the entities are calculated by a stored procedure in the DB which returns the id for the new row (it's legacy, I can't change this). Now I create the new entity, set the id and call Save, but nothing happens. No exception, and even NHibernate Profiler does not say a thing; it's as if the Save call does nothing. I expect that NH thinks the record is already in the db because it already has an id, but I am using Id(x => x.id).GeneratedBy.Assigned() and intentionally the Session.Save(object) method. I am confused. I saw so many samples where this worked. Does anybody have any ideas about it?

        public class Appendix
        {
            public virtual int id { get; set; }
            public virtual AppendixHierarchy AppendixHierachy { get; set; }
            public virtual byte[] appendix { get; set; }
        }

        public class AppendixMap : ClassMap<Appendix>
        {
            public AppendixMap()
            {
                WithTable("appendix");
                Id(x => x.id).GeneratedBy.Assigned();
                References(x => x.AppendixHierachy).ColumnName("appendixHierarchyId");
                Map(x => x.appendix);
            }
        }
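
    One thing to rule out (a guess at the missing piece, not a confirmed diagnosis): Save() only schedules the insert in the session; with an assigned ID generator nothing is sent to the database until the session is flushed. Wrapping the call in a transaction, using the standard NHibernate ISession API, forces the flush and should make the INSERT show up in the profiler. The id and byte[] values below are placeholders:

        using (var tx = session.BeginTransaction())
        {
            var appendix = new Appendix { id = idFromStoredProcedure, appendix = bytes };
            session.Save(appendix);   // only queued at this point
            tx.Commit();              // flush happens here and the INSERT is issued
        }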

    Read the article

  • Check the code and point out the mistake

    - by Vibha
    Here is the code:

        Ext.onReady(function(){
            alert("inside onReady");
            Ext.QuickTips.init();
            var employee = Ext.data.Record.create([
                {name:'firstname'},
                {name:'lastname'}]);
            var myReader = new Ext.data.JsonReader({
                root:"EmpInfo",
            }, employee);
            var store = new Ext.data.JsonStore({
                id:'ID'
                ,root:'EmpInfo'
                ,totalProperty:'totalCount'
                ,url:'test.php'
                ,autoLoad:true
                ,fields:[
                    {name:'firstname', type:'string'}
                    ,{name:'lastname', type:'string'}
                ]
            });
            var myPanel = new Ext.grid.GridPanel({
                store: store
                ,columns:[{
                    dataIndex:'firstname'
                    ,header:'First Name'
                    ,width:139
                },{
                    dataIndex:'lastname'
                    ,header:'Middle Name'
                    ,width:139
                }]
            });
            var myWindow = new Ext.Window({
                width:300,
                height:300,
                layout:'fit',
                closable:false,
                resizable:false,
                items:[myPanel]
            });
            myWindow.show();
        });

    And the PHP code is (the opening of the script was lost when the question was posted; only the tail survives):

        ... true, "data" => array(
                "firstname" => "ABC",
                "lastname"  => "MNO"));
        $_SESSION["err"] = isset($_SESSION["err"]) ? !$_SESSION["err"] : true;
        header("Content-Type: application/json");
        echo json_encode($o);
        ?>

    I want to print the values ABC and MNO in the grid panel. I'm using ExtJS 2.3. Please help me out. Thanks.
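
    One mismatch worth noting (my reading of the surviving code, not an answer confirmed in the thread): the JsonStore is configured with root 'EmpInfo' and totalProperty 'totalCount', while the PHP appears to emit a key named "data" holding a single object rather than an array of row objects. A sketch of a response shaped the way that store expects:

        <?php
        $o = array(
            "totalCount" => 1,
            "EmpInfo"    => array(
                array("firstname" => "ABC", "lastname" => "MNO")
            )
        );
        header("Content-Type: application/json");
        echo json_encode($o);
        ?>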

    Read the article
