Search Results

Search found 12439 results on 498 pages for 'wondering'.


  • Executing logic before save or validation with EF Code-First Models

    - by Ryan Norbauer
    I'm still getting accustomed to EF Code First, having spent years working with the Ruby ORM, ActiveRecord. ActiveRecord had all sorts of callbacks like before_validation and before_save, where it was possible to modify the object before it was sent off to the data layer. I am wondering if there is an equivalent technique in EF Code First object modeling. I know how to set object members at the time of instantiation, of course (to set default values and so forth), but sometimes you need to intervene at different moments in the object lifecycle.

    To use a slightly contrived example, say I have a join table linking Authors and Plays, represented with a corresponding Authoring object:

        public class Authoring
        {
            public int ID { get; set; }

            [Required]
            public int Position { get; set; }

            [Required]
            public virtual Play Play { get; set; }

            [Required]
            public virtual Author Author { get; set; }
        }

    where Position represents a zero-indexed ordering of the Authors associated with a given Play. (You might have a single "South Pacific" Play with two authors: a "Rodgers" author with Position 0 and a "Hammerstein" author with Position 1.)

    Let's say I wanted to create a method that, before saving an Authoring record, checked whether there were any existing authors for the Play to which it was associated. If not, it would set the Position to 0. If so, it would find the highest Position value associated with that Play and increment it by one. Where would I implement such logic within an EF Code First model layer? And, in other cases, what if I wanted to massage data in code before it is checked for validation errors? Basically, I'm looking for an equivalent to the Rails lifecycle hooks mentioned above, or at least some way to fake it. :)

    Read the article

  • Selecting first instance of class but not nested instances via jQuery

    - by DA
    Given the following hypothetical markup:

        <ul class="monkey">
          <li>
            <p class="horse"></p>
            <p class="cow"></p>
          </li>
        </ul>
        <dl class="monkey">
          <dt class="horse"></dt>
          <dd class="cow">
            <dl>
              <dt></dt>
              <dd></dd>
            </dl>
            <dl class="monkey">
              <dt class="horse"></dt>
              <dd class="cow"></dd>
            </dl>
          </dd>
        </dl>

    I want to be able to grab the 'first level' of horse and cow classes within each monkey class, but I don't want the NESTED horse and cow classes.

    I started with .children, but that won't work with the UL example as they aren't direct children of .monkey. I can use find:

        $('.monkey').find('.horse, .cow')

    but that returns all instances, including the nested ones. I can filter the find:

        $('.monkey').find('.horse, .cow').not('.cow .horse, .cow .cow')

    but that prevents me from selecting nested instances on a second function call.

    So... I guess what I'm looking for is 'find first "level" of this descendant'. I could likely do this with some looping logic, but was wondering if there is a selector and/or some combo of selectors that would achieve that logic.

    Read the article

  • WMD editor: why does it keep showing HTML instead of just going straight to markup?

    - by Ke
    Hi, I'm wondering how WMD is supposed to work. When I type in the textarea the text doesn't have HTML, but once the text is stored in the DB it turns into HTML, and WMD also shows all this HTML when reloading the content. Is it supposed to work like this? Do I have to sanitize the text before it's put into the DB? If so, how? I thought WMD doesn't deal with HTML, except in code blocks. Also, p tags are being added.

    Using the HTML below, it gets added directly. I guess this could cause XSS attacks?

        (1) <a onmouseover="alert(1)" href="#">read this!</a>
        (2) <p <script>alert(1)</script>hello
        (3) </td <script>alert(1)</script>hello

    I wonder how WMD is supposed to work. I thought it was supposed to render everything in its own markup, store its own markup and retrieve it, etc., instead of storing plain HTML.

    Cheers,
    Ke

    Read the article

  • MVC3 custom validation attribute for an "at least one is required" situation

    - by Pricey
    Hi, I have already found this answer: MVC3 Validation - Require One From Group. It is fairly specific to checking group names and uses reflection. My example is probably a bit simpler, and I was wondering if there is a simpler way to do it. I have the below:

        public class TimeInMinutesViewModel
        {
            private const short MINUTES_OR_SECONDS_MULTIPLIER = 60;

            //public string Label { get; set; }

            [Range(0, 24, ErrorMessage = "Hours should be from 0 to 24")]
            public short Hours { get; set; }

            [Range(0, 59, ErrorMessage = "Minutes should be from 0 to 59")]
            public short Minutes { get; set; }

            public short TimeInMinutes()
            {
                // total minutes should not be negative
                if (Hours <= 0 && Minutes <= 0)
                {
                    return 0;
                }
                // the multiplication operator treats the right-hand side as an int, not a short,
                // so the result is cast to a short even though both properties are already short
                return (short)((Hours * MINUTES_OR_SECONDS_MULTIPLIER) + (Minutes * MINUTES_OR_SECONDS_MULTIPLIER));
            }
        }

    I want to add a validation attribute either to the Hours and Minutes properties or to the class itself. The idea is just to make sure that at least one of these properties (Hours OR Minutes) has a value, with server and client side validation using a custom validation attribute. Does anyone have an example of this, please? Thanks

    Read the article

  • Transaction to find an entity - locks all entities of that type?

    - by user246114
    Hi,

    Reading the docs for transactions: http://code.google.com/appengine/docs/java/datastore/transactions.html

    An example provided shows one way to load or create an instance of an object:

        try {
            tx.begin();
            Key k = KeyFactory.createKey("SalesAccount", id);
            try {
                account = pm.getObjectById(Employee.class, k);
            } catch (JDOObjectNotFoundException e) {
                account = new SalesAccount();
                account.setId(id);
            }
            ...

    When the above transaction gets executed, will it block all other write attempts on Account objects? I'm wondering because I'd like to have a user signup which checks for a username or email already in use:

        tx.begin();
        "select from User where mUsername == str1 LIMIT 1";
        if (count > 0) {
            throw new Exception("username already in use!");
        }
        "select from User where mEmail == str1 LIMIT 1";
        if (count > 0) {
            throw new Exception("email already in use!");
        }
        pm.makePersistent(user(username, email)); // ok.
        tx.commit();

    But the above would be even more time consuming, I think, making an even worse bottleneck. Am I understanding what will happen correctly? Thanks
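    (A hedged sketch of one common way to do this on App Engine, not taken from the question: use the low-level datastore API and make the username itself the key of a small reservation entity, so the duplicate check is a keyed get inside a transaction rather than a query, and the transaction only contends on that single entity group instead of every User. The "UserName" kind and the helper names are assumptions for illustration.)

        import com.google.appengine.api.datastore.DatastoreService;
        import com.google.appengine.api.datastore.DatastoreServiceFactory;
        import com.google.appengine.api.datastore.Entity;
        import com.google.appengine.api.datastore.EntityNotFoundException;
        import com.google.appengine.api.datastore.Key;
        import com.google.appengine.api.datastore.KeyFactory;
        import com.google.appengine.api.datastore.Transaction;

        public class UsernameReserver {
            // Returns true if the username was free and has now been claimed.
            public static boolean reserveUsername(String username, String email) {
                DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
                Transaction tx = ds.beginTransaction();
                try {
                    Key key = KeyFactory.createKey("UserName", username);
                    try {
                        ds.get(tx, key);          // entity exists: the name is taken
                        tx.rollback();
                        return false;
                    } catch (EntityNotFoundException e) {
                        Entity reservation = new Entity(key);
                        reservation.setProperty("email", email);
                        ds.put(tx, reservation);  // claim the name atomically
                        tx.commit();
                        return true;
                    }
                } finally {
                    if (tx.isActive()) {
                        tx.rollback();
                    }
                }
            }
        }

    A second reservation kind keyed by the email address would cover the email check the same way, without ever scanning or locking the full set of User entities.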

    Read the article

  • Recursion in prepared statements

    - by Rob
    I've been using PDO and preparing all my statements, primarily for security reasons. However, I have a part of my code that executes the same statement many times with different parameters, and I thought this would be where prepared statements really shine. But they actually break the code... The basic logic of the code is this:

        function someFunction($something) {
            global $pdo;
            $array = array();
            static $handle = null;
            if (!$handle) {
                $handle = $pdo->prepare("A STATEMENT WITH :a_param");
            }
            $handle->bindValue(":a_param", $something);
            if ($handle->execute()) {
                while ($row = $handle->fetch()) {
                    $array[] = someFunction($row['blah']);
                }
            }
            return $array;
        }

    It looked fine to me, but it was missing a lot of rows. Eventually I realised that the statement handle was being changed (executed with a different param), which means the call to fetch in the while loop will only ever work once; then the function calls itself again, and the result set is changed.

    So I am wondering what's the best way of using PDO prepared statements in a recursive way. One option could be fetchAll(), but the manual says it has a substantial overhead, and the whole point of this is to make it more efficient. The other thing I could do is not reuse a static handle, and instead make a new one every time. I believe that since the query string is the same, internally the MySQL driver will be using a prepared statement anyway, so there is just the small overhead of creating a new handle on each recursive call. Personally I think that defeats the point. Or is there some way of rewriting this?

    Read the article

  • XAttribute Generating strange namespaces

    - by Adam Driscoll
    I'm constructing an XElement with a couple of attributes that have different namespaces. The code looks like this:

        var element = new XElement("SynchronousCommand",
            new XAttribute("{wcm}action", "add"),
            new XAttribute("{ns}id", Guid.NewGuid()),
            new XElement...
        );

    The XML that is generated looks like this:

        <unattend xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="urn:schemas-microsoft-com:unattend">
          <SynchronousCommand d5p1:action="add" d5p2:id="c0f5fc6d-d407-4d3d-8a05-d84236cca2fb" xmlns:d5p2="ns" xmlns:d5p1="wcm">
            ...
          </SynchronousCommand>
        </unattend>

    I'm just wondering if the auto-generated d5p2 is valid and why it is doing this. According to the XML standards here it seems like it would be valid. But why is it not:

        <unattend xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="urn:schemas-microsoft-com:unattend">
          <SynchronousCommand wcm:action="add" ns:id="c0f5fc6d-d407-4d3d-8a05-d84236cca2fb" >

    To generate the XML I'm doing this:

        public class unattend
        {
            public List<XElement> Any { get; }
        }

        var unattend = new unattend();
        unattend.Add(element);
        serializer.Serialize(xmlWriter, unattend);

    Read the article

  • Rails: Getting rid of generic "X is invalid" validation errors

    - by DJTripleThreat
    I have a sign-up form that has nested associations/attributes, whatever you want to call them. My hierarchy is this:

        class User < ActiveRecord::Base
          acts_as_authentic
          belongs_to :user_role, :polymorphic => true
        end

        class Customer < ActiveRecord::Base
          has_one :user, :as => :user_role, :dependent => :destroy
          accepts_nested_attributes_for :user, :allow_destroy => true
          validates_associated :user
        end

        class Employee < ActiveRecord::Base
          has_one :user, :as => :user_role, :dependent => :destroy
          accepts_nested_attributes_for :user, :allow_destroy => true
          validates_associated :user
        end

    I have some validation stuff in these classes as well. My problem is that if I try to create a Customer (or Employee, etc.) with a blank form, I get all of the validation errors I should get plus some generic ones like "User is invalid" and "Customer is invalid". If I iterate through the errors I get something like:

        user.login can't be blank
        User is invalid
        customer.whatever is blah blah blah...etc
        customer.some_other_error etc etc

    Since there is at least one invalid field in the nested User model, an extra "X is invalid" message is added to the list of errors. This gets confusing for my client, so I'm wondering if there is a quick way to get rid of these generic messages instead of having to filter through the errors myself.

    Read the article

  • Application design question, best approach?

    - by Jamie Keeling
    Hello, I am in the process of designing an application that will allow you to find pictures (screen shots) made by certain programs. I will provide the locations for a few of the programs in the application itself to get the user started.

    I was wondering how I should go about adding new locations as time goes on. My first thought was simply hard-coding them into the application, but this would mean the user has to reinstall it for the changes to take effect. My second idea was to use an XML file to contain all the locations as well as other data, such as the name of the application. This also means the user can add their own locations if they wish, as well as sharing them over the internet.

    The second option seemed the best approach, but then I had to think about how it would be managed on the user's computer. Ideally I'd like just a single .exe without the reliance on any external files such as the XML, but this would bring me back to point one. Would it be best to simply use ClickOnce deployment to create an entry in the start menu and create a folder containing the .exe and the file names?

    Thanks for the feedback; I don't want to start implementing the application until the design is nailed down.

    Read the article

  • Clarification of atomic memory access for different OSs

    - by murrekatt
    I'm currently porting a Windows C++ library to MacOS as a hobby project and a learning experience. I stumbled across some code using the Win Interlocked* functions, and thus I've been trying to read up on the subject in general.

    Reading related questions here on SO, I understand there are different ways to do these operations depending on the OS: Interlocked* on Windows, OSAtomic* on MacOS, and I also found that compilers have builtin (intrinsic) operations for this. After reading about gcc builtin atomic memory access, I'm left wondering: what is the difference between the intrinsics and the OSAtomic* or Interlocked* ones? I mean, can I not choose between OSAtomic* and the gcc builtins if I'm on MacOS using gcc? The same if I were on Windows using gcc.

    I also read that on Windows the Interlocked* functions come as both inline and intrinsic versions. What should I consider when choosing between intrinsic and inline?

    In general, are there multiple options on each OS for what to use? Or is this again "it depends"? If so, what does it depend on? Thanks!

    Read the article

  • multi-shop orders table and sequential order numbers based on shop

    - by imanc
    Hey, I am looking at building a shop solution that needs to be scalable. Currently it handles 1-2000 orders on average per day across multiple country-based shops (e.g. UK, US, DE, DK, ES, etc.), but this volume could be 10x as much in two years. I am looking at either using separate country-shop databases to store the orders tables, or combining everything into one orders table.

    If all orders exist in one table with a global ID (auto number) and a country ID (e.g. UK, DE, DK, etc.), each country's orders would also need to have sequential ordering. So in essence, we'd have to have a global ID and a country order ID, with the country order ID being sequential per country only, e.g.

        global ID = 1000, country = UK, country order ID = 1000
        global ID = 1001, country = DE, country order ID = 1000
        global ID = 1002, country = DE, country order ID = 1001
        global ID = 1003, country = DE, country order ID = 1002
        global ID = 1004, country = UK, country order ID = 1001

    The global ID would be DB generated and not something I would need to worry about. But I am thinking that I'd have to do a query to get the current country order ID + 1 to find the next sequential number. Two things concern me about this: 1) query times when the table has potentially millions of rows of data and I'm doing a read before a write, and 2) the potential for ID number clashes due to simultaneous writes/reads. With a MyISAM table the entire table could be locked whilst the last country order + 1 is retrieved, to prevent ID number clashes.

    I am wondering if anyone knows of a more elegant solution?

    Cheers,
    imanc
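    (A hedged sketch of one alternative, not from the question: keep a small per-country counter row in an InnoDB table and bump it with SELECT ... FOR UPDATE inside the same transaction that inserts the order, so only that one counter row is locked rather than the whole orders table. The table and column names, and the use of JDBC to show it, are assumptions for illustration.)

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class OrderNumberAllocator {
            // Inserts an order for the given country and returns its per-country order ID.
            public static long createOrder(Connection conn, String country) throws SQLException {
                conn.setAutoCommit(false);
                try (PreparedStatement lock = conn.prepareStatement(
                         "SELECT next_order_id FROM order_counters WHERE country = ? FOR UPDATE");
                     PreparedStatement bump = conn.prepareStatement(
                         "UPDATE order_counters SET next_order_id = next_order_id + 1 WHERE country = ?");
                     PreparedStatement insert = conn.prepareStatement(
                         "INSERT INTO orders (country, country_order_id) VALUES (?, ?)")) {

                    lock.setString(1, country);
                    long countryOrderId;
                    try (ResultSet rs = lock.executeQuery()) {
                        if (!rs.next()) {
                            throw new SQLException("No counter row for country " + country);
                        }
                        countryOrderId = rs.getLong(1); // counter row stays locked until commit
                    }

                    bump.setString(1, country);
                    bump.executeUpdate();

                    insert.setString(1, country);
                    insert.setLong(2, countryOrderId);
                    insert.executeUpdate();

                    conn.commit();
                    return countryOrderId;
                } catch (SQLException e) {
                    conn.rollback();
                    throw e;
                }
            }
        }

    Because only the one counter row is locked, concurrent orders for different countries don't block each other, and clashes within a country are prevented by the row lock rather than a table lock.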

    Read the article

  • How to encrypt/decrypt a file in Java?

    - by Petike
    Hello, I am writing a Java application which can "encrypt" and consequently "decrypt" any binary file. I am just a beginner in the cryptography area, so I would like to write a very simple application to start with.

    For reading the original file, I would probably use the java.io.FileInputStream class to get the array of bytes byte originalBytes[] of the file. Then I would probably use some very simple cipher, for example "shift up every byte by 1", and then I would get the "encrypted" bytes byte encryptedBytes[]. Let's say that I would also set a "password" for it, for example "123456789".

    Next, when somebody wants to "decrypt" that file, he has to enter the password ("123456789") first, and after that the file can be decrypted (thus "shift down every byte by 1") and consequently saved to the output file via java.io.FileOutputStream.

    I am just wondering how to "store" the password information in the encrypted file so that the decrypting application knows whether the entered password and the "real" password are equal. Probably it would be silly to add the password (for example the ASCII ordinal numbers of the password letters) to the beginning of the file (before the encrypted data). So my main question is: how do I store the password information in the encrypted file?
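    (A minimal sketch of one common answer, not from the question: don't store the password itself in the file at all; store a random salt plus a hash of salt + password in a small header, and on decryption hash the entered password with the stored salt and compare. The header layout below is an assumption for illustration.)

        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;
        import java.security.SecureRandom;
        import java.util.Arrays;

        public class PasswordHeader {
            // Header layout: 16-byte random salt followed by 32-byte SHA-256(salt + password).
            public static byte[] buildHeader(String password) throws IOException, NoSuchAlgorithmException {
                byte[] salt = new byte[16];
                new SecureRandom().nextBytes(salt);
                ByteArrayOutputStream header = new ByteArrayOutputStream();
                header.write(salt);
                header.write(hash(salt, password));
                return header.toByteArray();
            }

            // Check an entered password against a header read back from the start of the file.
            public static boolean passwordMatches(byte[] header, String entered) throws NoSuchAlgorithmException {
                byte[] salt = Arrays.copyOfRange(header, 0, 16);
                byte[] stored = Arrays.copyOfRange(header, 16, 48);
                return MessageDigest.isEqual(stored, hash(salt, entered));
            }

            private static byte[] hash(byte[] salt, String password) throws NoSuchAlgorithmException {
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                md.update(salt);
                md.update(password.getBytes(StandardCharsets.UTF_8));
                return md.digest();
            }
        }

    The writer would prepend buildHeader(password) to the output file before the shifted bytes; the reader pulls the first 48 bytes back off, calls passwordMatches, and only then decrypts the rest.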

    Read the article

  • Using maven to distribute a swing application that can have each dependency individually tracked

    - by tms
    I'm moving my project to Maven and eventually OSGi. I currently distribute the project as a large zip file with all the dependencies. Although my project's code is only 20% of the total package, I have to redistribute all the dependencies; with smaller independent modules this share may be even less.

    Looking here on Stack Overflow, it seems that to keep my current plan the maven-assembly-plugin should do the trick. I was considering having a base installer that would look at an XML manifest and then collect all the libraries that need to be updated. This would mean that libraries that change occasionally would be downloaded less often. This also makes sense for something like OSGi plugins (which could have independent release schedules). In essence I want my software to look at and manage individual libraries, and download on demand (based on the manifest).

    I was wondering if there is a "Maven way" of generating this manifest and publishing all the libraries to a website? I believe the deploy life-cycle would do the second step. As an alternative, is there an open-source Java library that does this type of deployment? I don't want to embed Maven or something larger with the distributed code. The application is not for coders; the simpler the better, and the smaller the installer the better.
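    (A hedged sketch of the manifest half of this, not a built-in Maven feature: a small standalone Java program that walks the assembled lib directory and emits one entry per jar with its size and SHA-1 checksum, which is what a base installer would need to decide which libraries to re-download. The directory layout and XML shape are assumptions for illustration.)

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        public class UpdateManifestGenerator {
            public static void main(String[] args) throws IOException, NoSuchAlgorithmException {
                // Assumes the assembly step has already copied the dependencies into this directory.
                File libDir = new File(args.length > 0 ? args[0] : "target/lib");
                StringBuilder xml = new StringBuilder("<manifest>\n");
                for (File jar : libDir.listFiles((dir, name) -> name.endsWith(".jar"))) {
                    xml.append("  <library name=\"").append(jar.getName())
                       .append("\" size=\"").append(jar.length())
                       .append("\" sha1=\"").append(sha1(jar)).append("\"/>\n");
                }
                xml.append("</manifest>\n");
                System.out.print(xml);
            }

            private static String sha1(File file) throws IOException, NoSuchAlgorithmException {
                MessageDigest md = MessageDigest.getInstance("SHA-1");
                try (InputStream in = new FileInputStream(file)) {
                    byte[] buf = new byte[8192];
                    int read;
                    while ((read = in.read(buf)) != -1) {
                        md.update(buf, 0, read);
                    }
                }
                StringBuilder hex = new StringBuilder();
                for (byte b : md.digest()) {
                    hex.append(String.format("%02x", b));
                }
                return hex.toString();
            }
        }

    Something like this could be run during the package phase (for example via exec-maven-plugin) and the resulting manifest uploaded alongside the jars during deploy, while the installer only compares checksums and fetches what changed.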

    Read the article

  • jQuery Accordion

    - by Fuego DeBassi
    Just wondering if anyone can provide some basic advice on an accordion I'm trying to simplify. I've got a working version, but it seems way overly complex. Here is my new JS:

        $(document).ready(function() {
            $("#themes li ul").hide();
            $("#themes li").hover(function() {
                $("ul").show();
            }, function() {
                $("li ul").hide();
            });
        });

    The markup looks like this:

        <ul>
          <li>Tier 1
            <ul>
              <li>Tier 2</li>
              <li>Tier 2</li>
            </ul>
          </li>
          <li>Tier 1
            <ul>
              <li>Tier 2</li>
              <li>Tier 2</li>
            </ul>
          </li>
        </ul>

    My script works all right, but it shows all of the child ul's when any parent li is hovered, and it hides all the child ul's when unhovered. I'm just not sure how I can get it to A) only show the li's ul when that specific li is hovered, and B) hide the shown li ul only when another one is hovered (not itself).

    An example + explanation would be especially helpful! Thanks!!

    Read the article

  • Function pointers in Objective-C

    - by Stefan Klumpp
    I have the following scenario:

        Class_A
          - method_U
          - method_V
          - method_X
          - method_Y

        Class_B
          - method_M
          - method_N

        HttpClass
          - startRequest
          - didReceiveResponse // is a callback

    Now I want to realize these three flows (actually there are many more, but these are enough to demonstrate my question):

        Class_A :: method_X
          -> HttpClass :: startRequest:params
          -> ... wait, wait, wait ...
          -> HttpClass :: didReceiveResponse
          -> Class_A :: method_Y:result

    and:

        Class_A :: method_U
          -> HttpClass :: startRequest:params
          -> ... wait, wait, wait ...
          -> HttpClass :: didReceiveResponse
          -> Class_A :: method_V:result

    and the last one:

        Class_B :: method_M
          -> HttpClass :: startRequest:params
          -> ... wait, wait, wait ...
          -> HttpClass :: didReceiveResponse
          -> Class_B :: method_N:result

    Please note that the methods in Class_A and Class_B have different names and functionality; they just make use of the same HttpClass. My solution now would be to pass a C function pointer to startRequest, store it in the HttpClass, and when didReceiveResponse gets called, invoke the function pointer and pass the result (which will always be a JSON dictionary). Now I'm wondering if there can be any problems using plain C, or if there are better solutions that do it in a more Objective-C way. Any ideas?

    Read the article

  • How to output multicolumn html without "widows"?

    - by user314850
    I need to output to HTML a list of categorized links in exactly three columns of text. They must be displayed like columns in a newspaper or magazine. So, for example, if there are 20 lines total, the first and second columns would contain 7 lines and the last column would contain 6. The list is dynamic; it will be changed regularly.

    The tricky part is that the links are categorized with a title, and this title cannot be a "widow". If you have a page layout background you'll know that this means the titles cannot be displayed at the bottom of a column -- they must have at least one link underneath them, otherwise they should bump to the next column. (I know, technically it should be two lines if I were actually doing page layout, but in this case one is acceptable.)

    I'm having a difficult time figuring out how to get this done. Here's an example of what I mean:

        Shopping     Link 3     Link1
        Link 1       Link 4     Link2
        Link 2       Link 3     Link 3
        Cars         Link 1
        Music        Games
        Link 2       Link 1
        Link 1       Link 2
                     News

    As you can see, the "News" title is at the bottom of the middle column, and so is a "widow". This is unacceptable. I could bump it to the next column, but that would create an unnecessarily large amount of white space at the bottom of the second column. What needs to happen instead is that the entire list needs to be re-balanced.

    I'm wondering if anyone has any tips for how to accomplish this, or perhaps source code or a plug-in. Python is preferable, but any language is fine. I'm just trying to get the general concept down.
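    (A rough sketch of the general concept in Java, since any language is acceptable; it is not a ready-made plug-in. The idea: cut the flat list of lines into three columns of roughly equal height, and whenever a cut would leave a category title as the last line of a column, move the cut up one line so the title starts the next column instead.)

        import java.util.ArrayList;
        import java.util.List;

        public class ColumnBalancer {
            public static class Line {
                public final String text;
                public final boolean isTitle;
                public Line(String text, boolean isTitle) { this.text = text; this.isTitle = isTitle; }
            }

            public static List<List<Line>> balance(List<Line> lines, int columns) {
                List<List<Line>> result = new ArrayList<>();
                int remaining = lines.size();
                int start = 0;
                for (int col = 0; col < columns; col++) {
                    // Aim for an even split of whatever is left over the remaining columns.
                    int take = (int) Math.ceil(remaining / (double) (columns - col));
                    int end = Math.min(start + take, lines.size());
                    // Don't let a title be the last line of a column (a "widow"):
                    // push it down into the next column instead.
                    while (end > start + 1 && end < lines.size() && lines.get(end - 1).isTitle) {
                        end--;
                    }
                    result.add(new ArrayList<>(lines.subList(start, end)));
                    remaining -= (end - start);
                    start = end;
                }
                return result;
            }
        }

    This only slides widows forward, so later columns can come out a line or two longer; a fuller version would try a couple of candidate column heights and keep the most even result.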

    Read the article

  • Do variable references (aliases) incur runtime costs in C++?

    - by cheshirekow
    Maybe this is a compiler-specific thing. If so, how about for gcc (g++)? If you use a variable reference/alias like this:

        int x = 5;
        int& y = x;
        y += 10;

    does it actually require more cycles than if we didn't use the reference?

        int x = 5;
        x += 10;

    In other words, does the machine code change, or does the "alias" exist only at the compiler level? This may seem like a dumb question, but I am curious, especially in the case where it would be convenient to temporarily rename some member variables just so that the math code is a little easier to read. Sure, we're not exactly talking about a bottleneck here... but it's something that I'm doing, and so I'm just wondering if there is any 'actual' difference... or if it's only cosmetic.

    Read the article

  • Terminate function on System.in .. possible?

    - by Ronald
    I am currently working on a project where I have to write an agent that interacts with a server. Every 50 ms, the server will receive the last thing I outputted to System.out and send me a new set of lines as a 'state' through System.in to analyze, and I then send my next message to System.out. Also, if the server receives multiple outputs from me, it only regards the most recent one.

    As for my question: my program originally constructed a tree and then analyzed each leaf node to see which would be optimal, then waited around for the next input. But I can recursively do a deeper tree search that would make my output 'better' (and again and again, to keep returning a better result). Using this, and the fact that if the server receives multiple outputs it only takes the most recent one, I could do each level, print my result, and start the next level.

    But here comes my problem... I can't be stuck in some complex algorithm while I am supposed to be receiving the next input, as I will then miss it. So I was wondering if there is a way to cancel anything else I am doing when I receive something via System.in and then go back to the beginning of the function and start the search again with the new set of input (and rinse and repeat...).

    I hope this all makes sense. Thank ye all
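    (A minimal sketch of one way to structure this, not the poster's program: a dedicated thread reads System.in and hands each new state to the worker, which checks a cancellation flag between deepening iterations and abandons the current search as soon as fresher input has arrived. The search itself is a placeholder.)

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.atomic.AtomicBoolean;

        public class IterativeDeepeningAgent {
            private static final BlockingQueue<String> states = new ArrayBlockingQueue<>(1);
            private static final AtomicBoolean newInput = new AtomicBoolean(false);

            public static void main(String[] args) throws Exception {
                Thread reader = new Thread(() -> {
                    try (BufferedReader in = new BufferedReader(new InputStreamReader(System.in))) {
                        String line;
                        while ((line = in.readLine()) != null) {
                            states.clear();          // keep only the most recent state
                            states.offer(line);
                            newInput.set(true);      // tell the worker to stop its current search
                        }
                    } catch (Exception ignored) { }
                });
                reader.setDaemon(true);
                reader.start();

                while (true) {
                    String state = states.take();    // block until a state arrives
                    newInput.set(false);
                    for (int depth = 1; depth <= 20 && !newInput.get(); depth++) {
                        String move = searchToDepth(state, depth);
                        if (!newInput.get()) {
                            System.out.println(move); // the server keeps only the latest output anyway
                        }
                    }
                }
            }

            // Placeholder for the real search; a real version would also poll newInput internally.
            private static String searchToDepth(String state, int depth) {
                return "move-at-depth-" + depth + "-for-" + state.hashCode();
            }
        }

    Cooperative cancellation like this avoids killing threads mid-computation; the cost is that the search loop has to check the flag often enough that a stale result is thrown away within a few milliseconds.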

    Read the article

  • C++ design pattern for CoW, inherited classes, and variable shared data?

    - by krunk
    I've designed a copy-on-write base class. The class holds the default set of data needed by all children in a shared-data/CoW model. The derived classes also have data that only pertains to them, but it should be CoW between other instances of that derived class. I'm looking for a clean way to implement this.

    Say I have a base class FooInterface with shared data FooDataPrivate and a derived object FooDerived. I could create a FooDerivedDataPrivate. The underlying data structure would not affect the exposed getters/setters API, so this is not about how a user interfaces with the objects. I'm just wondering if this is a typical MO for such cases, or if there's a better/cleaner way.

    What piques my interest is that I see the potential for inheritance between the private data classes, e.g. FooDerivedDataPrivate : public FooDataPrivate, but I'm not seeing a way to take advantage of that polymorphism in my derived classes.

        class FooDataPrivate
        {
        public:
            Ref ref; // atomic reference counting object
            int a;
            int b;
            int c;
        };

        class FooInterface
        {
        public:
            // constructors and such
            // ....

            // methods are implemented to be copy on write.
            void setA(int val);
            void setB(int val);
            void setC(int val);

            // copy constructors, destructors, etc. all CoW friendly

        private:
            FooDataPrivate *data;
        };

        class FooDerived : public FooInterface
        {
        public:
            FooDerived() : FooInterface() {}

        private:
            // need more shared data for FooDerived
            // this is the ???, how is this best done cleanly?
        };

    Read the article

  • Rate Limit Calls To Api Using Cache

    - by namtax
    Hi, I am using ColdFusion to call the last.fm API, using a cfc bundle sourced from here. I am concerned about going over the request limit, which is 5 requests per originating IP address per second, averaged over a 5 minute period.

    The cfc bundle has a central component which calls all the other components, which are split up into sections like "artist", "track", etc. This central component, "lastFmApi.cfc", is initiated in my application and persisted for the lifespan of the application:

        <!--- Application.cfc example --->
        <cffunction name="onApplicationStart">
            <cfset var apiKey = '[your api key here]' />
            <cfset var apiSecret = '[your api secret here]' />
            <cfset application.lastFm = CreateObject('component', 'org.FrankFusion.lastFm.lastFmApi').init(apiKey, apiSecret) />
        </cffunction>

    Now if I want to call the API through a handler/controller, for example my artist handler, I can do this:

        <cffunction name="artistPage" cache="5 mins">
            <cfset qAlbums = application.lastFm.user.getArtist(url.artistName) />
        </cffunction>

    I am a bit confused about caching. I am caching each call to the API in this handler for 5 minutes, but does this make any difference? Each time someone hits a new artist page, won't this still count as a fresh hit against the API?

    Wondering how best to tackle this. Thanks

    Read the article

  • Confused about UIView frame property

    - by slowfungus
    I'm building a prototype iPad app that draws diagrams. I have the following view hierarchy:

        UIView
            UIScrollView
                DiagramView : UIView
            TabBar
            NavigationBar

    and a UIViewController subclass holding all that together. Before drawing the diagram the first time, I calculate the dimensions of the diagram and set the DiagramView frame to that size, and the content size of the scroll view as well:

        - (void)recalculateBounds
        {
            [renderer diagram:diagram shouldDraw:NO];
            SQXRect diagramRect = SQXMakeRect(0.0, 0.0, [diagram bounds].size.width, [diagram bounds].size.height);
            self.frame = diagramRect;
            [(UIScrollView*)[self superview] setContentSize:diagramRect.size];
        }

    I should disclose that the frame is being set to about 1500 x 3500, which I know is ridiculous. I just want to focus on some other parts of the app before I get into optimizing the render code.

    This works beautifully, except that the rect being passed to drawRect: is not the size that I set, and my drawing is getting clipped at the bottom. It's close to the size I set, but bigger in width and shorter in height. Also of note is the fact that if I force the frame to be much bigger than what I know the diagram needs, then the drawRect: rect is big enough and no clipping occurs.

    Of course this has me wondering if the frame size needs to take into account some other screen real estate like the toolbars, but my reading of the docs tells me the frame is in superview coordinates, which would be the scroll view, so I reckon I shouldn't need to worry about such things. Any idea what is causing this discrepancy?

    Read the article

  • File descriptor limits and default stack sizes

    - by Charles
    Where I work we build and distribute a library and a couple of complex programs built on that library. All code is written in C and is available on most 'standard' systems like Windows, Linux, AIX, Solaris, Darwin. I started in the QA department, and while running tests recently I have been reminded several times that I need to remember to set the file descriptor limits and default stack sizes higher, or bad things will happen. This is particularly the case with Solaris and now Darwin.

    Now this is very strange to me, because I am a believer in zero required environment fiddling to make a product work. So I am wondering if there are times where this sort of requirement is a necessary evil, or if we are doing something wrong.

    Edit: Great comments that describe the problem and a little background. However, I do not believe I worded the question well enough. Currently we require customers, and hence us the testers, to set these limits before running our code. We do not do this programmatically. And this is not a situation where they MIGHT run out; under normal load our programs WILL run out and seg fault. So, rewording the question: is requiring the customer to change these ulimit values to run our software to be expected on some platforms (i.e. Solaris, AIX), or are we as a company making it too difficult for these users to get going?

    Bounty: I added a bounty to hopefully get a little more information on what other companies are doing to manage these limits. Can you set these programmatically? Should we? Should our programs even be hitting these limits, or could this be a sign that things might be a bit messy under the covers? That is really what I want to know; as a perfectionist, a seemingly dirty program really bugs me.

    Read the article

  • ASP .NET page runs slow in production

    - by Brandi
    I have created an ASP.NET page that works flawlessly and quickly from Visual Studio. It does a very large read from a database on our network to load a GridView inside an UpdatePanel, and it displays progress in an Ajax ModalPopupExtender. Of course I don't expect it to be instant, what with the large DB reads, but it takes on the order of seconds, not on the order of minutes.

    This is all working great until I put it up on the server - it is very, VERY slow when I access it via the internet; it takes several minutes to load the database information into the GridView. I'm baffled why it would not perform exactly the same as it does from Visual Studio. (It is in release mode and I have taken off the debug flag.) I have since been trying things like eliminating unneeded update panels and throwing out the Ajax tools. Nothing has made it any faster in production. It is not the database as far as I know, since it has been consistently fast from my computer (from Visual Studio) and consistently slow from the server.

    I am wondering, where do I look next? Has anyone else had this problem before? Could it be caused by update panels or Ajax ModalPopupExtenders in different parts of the application? Why would the live behaviour differ so much from the localhost behaviour? Both the server with the ASP.NET page and the server with the database are servers on our network. I'm using Visual Studio 2008.

    Thank you in advance for any insight or advice.

    Read the article

  • Simple way of getting the Last.fm artist image for recently listened songs?

    - by animuson
    On the Last.fm website, your recently listened tracks include the 34x34 (or whatever size) image at the left of each song. However, in the RSS feed that they give you, no image URLs are provided for the songs. I was wondering if there is a good way of figuring out the ID of the image that should be used for that artist and displaying it based on the data we're given. I know it is possible to load the artist page from their website and then grab the image values via JavaScript, but that seems overly complicated and would probably take quite some time to do.

    What we're given:

        <item>
          <title>Owl City – Rainbow Veins</title>
          <link>http://www.last.fm/music/Owl+City/_/Rainbow+Veins</link>
          <pubDate>Thu, 20 May 2010 18:15:29 +0000</pubDate>
          <guid>http://www.last.fm/user/animuson#1274379329</guid>
          <description>http://www.last.fm/music/Owl+City</description>
        </item>

    and the 34x34 image for this song would be here (ID# 37056785). Does anything like this exist? I've considered storing the ID number in a cache of some sort once it has been checked once, but what if the image changes?

    Read the article

  • List of Big-O for PHP functions?

    - by Kendall Hopkins
    After using PHP for a while now, I've noticed that not all of PHP's built-in functions are as fast as expected. Consider these two possible implementations of a function that finds whether a number is prime, using a cached array of primes.

        // very slow for large $prime_array
        $prime_array = array( 2, 3, 5, 7, 11, 13, .... 104729, ... );
        $result_array = array();
        foreach ( $array_of_number as $number ) {
            $result_array[$number] = in_array( $number, $prime_array );
        }

        // still decent performance for large $prime_array
        $prime_array = array( 2 => NULL, 3 => NULL, 5 => NULL, 7 => NULL,
                              11 => NULL, 13 => NULL, .... 104729 => NULL, ... );
        foreach ( $array_of_number as $number ) {
            $result_array[$number] = array_key_exists( $number, $prime_array );
        }

    This is because in_array is implemented with a linear search, O(n), which slows down linearly as $prime_array grows, whereas the array_key_exists function is implemented with a hash lookup, O(1), which does not slow down unless the hash table gets extremely populated (in which case it's only O(log n)).

    So far I've had to discover the big-Os via trial and error, and occasionally by looking at the source code. Now for the question... is there a list of the theoretical (or practical) big-O times for all* the PHP built-in functions?

    *or at least the interesting ones

    For example, I find it very hard to predict the big-O of the functions listed below, because the possible implementations depend on unknown core data structures of PHP: array_merge, array_merge_recursive, array_reverse, array_intersect, array_combine, str_replace (with array inputs), etc.

    Read the article
