Search Results

Search found 3618 results on 145 pages for 'huge'.

Page 88/145

  • Need a good website URL to test against

    - by Zombies
    I need a URL just to test basic HTTP connectivity. It needs to be consistent: it should always be up; it should never change drastically based on IP or user agent (i.e. no 301 Location redirects or huge differences in content; minor variation would be tolerable); and its content-length should be consistent (i.e. it should never vary by more than about 2 KB). A few examples, though none match all three criteria: one that is always up is www.google.com, yet it 301-redirects based on IP location. Another good one is http://www.google.com/webhp?hl=en, but the problem there is that on a given holiday the content-length can really vary.
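
    Below is a minimal probe sketch (assuming Python 3 and network access) for vetting a candidate URL against those criteria; the candidate URL is illustrative, not a recommendation, and the stability check is simply running it repeatedly and watching the reported size:

        import urllib.request

        CANDIDATE = "http://www.google.com/webhp?hl=en"   # illustrative candidate only

        def probe(url):
            """Fetch the URL and report the final URL, status code, and body size."""
            req = urllib.request.Request(url, headers={"User-Agent": "connectivity-probe"})
            with urllib.request.urlopen(req, timeout=10) as resp:
                body = resp.read()
                return resp.geturl(), resp.status, len(body)

        final_url, status, size = probe(CANDIDATE)
        # A final_url different from CANDIDATE means a redirect happened; run this
        # repeatedly over time to see how stable `size` stays.
        print(final_url, status, size)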

    Read the article

  • MySQL SELECT from multiple tables problem

    - by moustafa
    This is the query: SELECT members.memberID, members.salutation, members.firstName, members.middleName, members.lastName, members.suffix, members.company, addresses.address1, addresses.address2, addresses.city, addresses.state, addresses.postalCode, addresses.country, addresses.memberID, email.email, email.memberID, phonenumbers.phoneNumber, phonenumbers.memberId, subscriptions.year, subscriptions.memberID FROM members, addresses, email, phonenumbers, subscriptions WHERE subscriptions.year = '%s' AND subscriptions.memberID = members.memberID AND subscriptions.memberID = addresses.memberID AND subscriptions.memberID = email.memberID AND subscriptions.memberID = phonenumbers.memberID ORDER BY members.lastName, members.firstName, members.company LIMIT 0, 10. My problem is that it's a huge query, so I'm trying to limit it to so many rows at a time; it's supposed to have over 5000 results. The only limit that works is LIMIT 0, 10: if I try anything else, like 5, 10 or 0, 50, it doesn't work; only 0, 10 works. And when I do 0, 10, the query returns blank firstName, middleName, lastName and a few others, and when I do a print_r() on the $result it shows them blank as well. There is most definitely data in the database, and there are no typos there either.

    Read the article

  • Better understanding of my SQL transactions

    - by Slew Poke
    I just realized that my application was needlessly making 50+ database calls per user request due to some hidden coding -- hidden in the sense that between LINQ, persistence frameworks and events it just so turned out that a huge number of calls were being made without me being aware. Is there a recommended way to analyze individual transactions going to my SQL 2008 database, preferably with some integration to my Visual Studio 2010 environment? I want to be able to 'spy' on individual transactions being made, but only for certain pieces of my code, and without making serious changes to either the code or database.

    Read the article

  • What are the best practices for importing large datasets into MongoDB?

    - by snl
    We are just giving MongoDB a test run and have set up a Rails 3 app with Mongoid. What are the best practices for inserting large datasets into MongoDB? To flesh out a scenario: Say, I have a book model and want to import several million records from a CSV file. I suppose this needs to be done in the console, so this may possibly not be a Ruby-specific question. Edited to add: I assume it makes a huge difference whether the imported data includes associations or is supposed to go into one model only. Any comments on either scenario welcome.

    Read the article

  • multiprocessing.Pool.apply_async() eating up memory

    - by Austin
    I want to use multiprocessing to make my script faster... I'm still new to this. The Python docs assume you already understand threading and what-not. So... I have code that looks like this:
    from itertools import izip
    from multiprocessing import Pool

    p = Pool()
    for i, j in izip(hugeseta, hugesetb):
        p.apply_async(number_crunching, (i, j))
    which gives me great speed! However, hugeseta and hugesetb are really huge. Pool keeps all of the i's and j's in memory after they've finished their job (basically, printing output to stdout). Is there any way to del i and j after they complete?
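
    A minimal sketch of one way to avoid holding everything, assuming Python 3 and that number_crunching only prints (its return value isn't needed): stream the pairs through Pool.imap_unordered and discard each result as it arrives. The toy inputs below merely stand in for the question's huge sets.

        from multiprocessing import Pool

        def number_crunching(i, j):
            # stand-in for the question's CPU-heavy function
            print(i * j)

        def worker(pair):
            i, j = pair
            number_crunching(i, j)      # return nothing, so the parent retains nothing

        if __name__ == "__main__":
            hugeseta = range(1000000)   # stand-ins for the question's huge inputs
            hugesetb = range(1000000)
            with Pool() as pool:
                # imap_unordered pulls pairs lazily and yields results as they finish;
                # the loop discards each one, so memory stays roughly flat.
                for _ in pool.imap_unordered(worker, zip(hugeseta, hugesetb), chunksize=1000):
                    pass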

    Read the article

  • Improving the performance of an NHibernate Data Access Layer

    - by Amitabh
    I am working on improving the performance of the data access layer of an existing ASP.NET web application. The scenario: it's a web-based application in ASP.NET; the data access layer is built using NHibernate 1.2 and exposed as a WCF service; the entity classes are marked with DataContract. Lazy loading is not used, and because of the eager fetching of relations a huge number of database objects is loaded into memory. The number of hits to the database is also high: for example, I profiled the application using NHProfiler and there were about 50+ SQL calls to load one entity object by its primary key. I also can't change the code much, as it's an existing live application with no NUnit test cases at all. Can I get some suggestions here?

    Read the article

  • Vector of objects

    - by Paul
    I've got an abstract class:
    class A { public: virtual void somefunction() = 0; };
    and some different classes that inherit this class:
    class Ab : public A { public: void somefunction(); };
    etc. I want to make a vector containing some objects of these classes (how many depends on input parameters) so I can access them easily later. However, I'm a bit lost on how to do this. My best idea is:
    vector<A> *objectsVector;
    Ab AbObject;
    objectsVector.push_back(AbObject);
    However, this gives me a huge amount of errors from various .h files in /usr/include/c++. How should I solve this?
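
    One possible direction, sketched under the assumption that the vector should own the objects: an abstract class can't be stored in a vector by value (and derived objects would be sliced anyway), so a common approach is a vector of pointers to the base class. std::unique_ptr (C++11) is just one option; raw A* with manual delete would also work.

        #include <iostream>
        #include <memory>
        #include <vector>

        class A {
        public:
            virtual ~A() {}                      // virtual destructor so deleting via A* is safe
            virtual void somefunction() = 0;     // pure virtual, as in the question
        };

        class Ab : public A {
        public:
            void somefunction() { std::cout << "Ab::somefunction\n"; }
        };

        int main() {
            std::vector<std::unique_ptr<A>> objectsVector;          // pointers, not values
            objectsVector.push_back(std::unique_ptr<A>(new Ab()));  // store a derived object
            for (std::size_t i = 0; i < objectsVector.size(); ++i)
                objectsVector[i]->somefunction();                   // virtual dispatch to Ab
            return 0;
        }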

    Read the article

  • What next-generation low-level language is the best bet for migrating the code base?

    - by e-satis
    Let's say you have a company running a lot of C/C++, and you want to start planning a migration to newer technologies so you don't end up like the COBOL companies of 15 years ago. For now, C/C++ runs more than fine and there are plenty of devs on the market for it. But you want to start thinking about it now, because given the huge running code base and the sensitivity of the data, you feel it could take 5-10 years to move to the next step without overloading the budget and the dev teams. You have heard about D, which is starting to be quite mature, and Go, which promises to be quite popular. What would be your choice, and why?

    Read the article

  • Shorter GUIDs than hashing a user id?

    - by Alex Mcp
    I'm wondering how Instapaper (a bookmarklet that saves text) might generate URLs for their bookmarklet. Mine has a script src of something similar to www.instapaper.com/j/AnJHrfoDTRia. The requirements for these URLs are that they must never collide and must not be easily guessable (so other people can't save to your account). I know a simple approach might be to MD5 the user's email address (presumed to have been checked for uniqueness on signup), but then I'd end up with a super long string. This isn't a huge issue, but I'm wondering what techniques there are for shorter GUIDs that won't collide too often (this is obviously the tradeoff, but the 12 characters above are pretty short in my opinion).
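
    A sketch of one common technique, not necessarily what Instapaper does: draw a short random token from a URL-safe alphabet and retry on the (rare) collision against stored tokens. Python 3's secrets module is used purely for illustration, and the existing_tokens set stands in for a uniqueness check against the database.

        import secrets
        import string

        ALPHABET = string.ascii_letters + string.digits   # 62 symbols; 12 chars is roughly 71 bits of entropy

        def new_token(existing_tokens, length=12):
            """Generate a short, hard-to-guess token, retrying on the unlikely collision."""
            while True:
                token = "".join(secrets.choice(ALPHABET) for _ in range(length))
                if token not in existing_tokens:
                    return token

        used = set()                    # stands in for a DB uniqueness check
        print(new_token(used))          # e.g. an 'AnJHrfoDTRia'-style 12-character token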

    Read the article

  • Fastest way for inserting very large number of records into a Table in SQL

    - by Irchi
    The problem is, we have a huge number of records (more than a million) to be inserted into a single table from a Java application. The records are created by the Java code; it's not a move from another table, so INSERT/SELECT won't help. Currently, my bottleneck is the INSERT statements. I'm using PreparedStatement to speed up the process, but I can't get more than 50 records per second on a normal server. The table is not complicated at all, and there are no indexes defined on it. The process takes too long, and the time it takes will cause problems. What can I do to get the maximum speed (INSERTs per second) possible? Database: MS SQL 2008. Application: Java-based, using the Microsoft JDBC driver.
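
    A minimal sketch of JDBC batching with autocommit turned off, which usually raises the insert rate considerably; the connection string, credentials, and the big_table/col1/col2 names are placeholders, and for the very highest rates the SQL Server bulk-copy path (e.g. the bcp tool, or SQLServerBulkCopy in later driver versions) is worth a look.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class BatchInsert {
            public static void main(String[] args) throws Exception {
                // Placeholder connection string, credentials, table and column names.
                Connection con = DriverManager.getConnection(
                        "jdbc:sqlserver://localhost:1433;databaseName=mydb", "user", "password");
                con.setAutoCommit(false);               // commit per batch, not per row

                PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO big_table (col1, col2) VALUES (?, ?)");

                int batchSize = 1000;
                for (int i = 1; i <= 1000000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row " + i);
                    ps.addBatch();                      // queue the row instead of executing it
                    if (i % batchSize == 0) {
                        ps.executeBatch();              // one round trip per 1000 rows
                        con.commit();
                    }
                }
                ps.executeBatch();                      // flush the final partial batch
                con.commit();
                ps.close();
                con.close();
            }
        }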

    Read the article

  • WPF Datatemplate + ItemsControl each item uses > 1 MB Memory?

    - by Matt H.
    Does that sound right to anyone? I have an ItemsControl that displays data from a custom object that implements INotifyPropertyChanged. The DataTemplate consists of: a Border, 3 buttons, 5 textboxes, an ellipse, a bindable RichTextBox (a custom class that inherits from RichTextBox so I could make Document a dependency property, to support binding), and several grids and stackpanels for layout. It uses styles (stored in a resource dictionary higher up the tree); the styles affect colors, thicknesses, and text properties, which are data-bound to a "settings" class that implements INotifyPropertyChanged so the user can change display settings. That's it! So what gives? I've also noticed that when I empty and remove the ItemsControl, memory isn't freed: over 5000 instances of CommandBindingCollection and WeakReference are CREATED (using ANTS profiler), and a huge number of EffectiveValueEntry objects are created too. So really, what gives? :-) Thanks for your insight! Management needs this project soon, but in its current state it's unreleasable.

    Read the article

  • Change all instances of object to iframe for IE using Jquery

    - by geckomist
    I was wondering if it would be possible to use jQuery to automatically change all object tags on a site to iframes for IE 8 and below. I would like this so that the site can be XHTML 1.1 valid without having to double-code all the time or cater to non-standard browsers. The data attribute would have to be changed to src, I would like frameborder="0" to be inserted, and all styles set on the object tag should also be set on the iframe tag. I don't want this to turn into a debate on iframes vs. objects; I just think this would be a huge time saver and would encourage proper strict XHTML coding. Thanks for any input!
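
    A rough sketch of what that replacement could look like, untested against real IE and assuming the script only runs for IE 8 and below (e.g. loaded inside a conditional comment):

        // Swap every <object> for an <iframe>, copying over the relevant attributes.
        $(function () {
            $('object').each(function () {
                var $obj = $(this);
                var $frame = $('<iframe frameborder="0"></iframe>')
                    .attr('src', $obj.attr('data'))        // the data attribute becomes src
                    .attr('width', $obj.attr('width'))
                    .attr('height', $obj.attr('height'));
                $frame.get(0).style.cssText = this.style.cssText;  // carry inline styles across
                $obj.replaceWith($frame);
            });
        });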

    Read the article

  • How do I make use of multiple cores in Large SQL Server Queries?

    - by Jonathan Beerhalter
    I have two SQL Servers, one for production and one as an archive. Every night, a SQL job runs and copies the day's production data over to the archive. As we've grown, this process takes longer and longer. When I watch the utilization on the archive server while the archival process is running, I see that it only ever makes use of a single core, and since this box has eight cores, that is a huge waste of resources. The job runs at 3 AM, so it's free to take any and all resources it can find. So what I need to do is figure out how to structure SQL Server jobs so they can take advantage of multiple cores, but I can't find any literature on tackling this problem. We're running SQL Server 2005, but I could certainly push for an upgrade if 2008 takes care of this problem.

    Read the article

  • g++: Use ZIP files as input

    - by Notinlist
    We have the Boost library on our side. It consists of a huge number of files which never change, and only a tiny portion of it is used. We swap the whole Boost directory when we change versions. Currently we have the Boost sources in our SVN, file by file, which makes checkout operations very slow, especially on Windows. It would be nice if there were a notation / plugin to address C++ files inside ZIP files, something like: // @ZIPFS ASSIGN 'boost' 'boost.zip/boost' #include <boost/smart_ptr/shared_ptr.hpp> Is there any support for compiler hooks in g++? Is there any effort regarding ZIP support? Other ideas?

    Read the article

  • Best .NET Solution for Frequently Changed Database

    - by sestocker
    I am currently architecting a small CRUD application. Their database is a huge mess and will be changing frequently over the course of the next 6 months to a year. What would you recommend for my data layer: 1) an ORM (if so, which one?), 2) Linq2Sql, 3) stored procedures, or 4) parameterized queries? I really need a solution that will be dynamic enough (both fast and easy) that I can replace tables and add/delete columns frequently. Note: I do not have much experience with ORMs (only a little SubSonic) and generally tend to use stored procedures, so maybe that would be the way to go. I would love to learn Linq2Sql or NHibernate if either would allow for the situation I've described above.

    Read the article

  • Best keyboards for emacs?

    - by catphive
    For the Emacs users out there, what are your recommended keyboards? Bonus points for keyboards that have no Caps Lock key, with a Control key in that position instead; that have Alt keys closer to the center, which are easier to use for Meta key combos (I find Alt keys that sit too far to the left a bit awkward to hit with my thumb in some combos); or that help ergonomically with Emacs in other ways. I'm not a huge fan of Model M-style high, clacky keys; I instead prefer laptop-style flat keys. However, I'm not disqualifying either category.

    Read the article

  • Ideal way/architecture to deliver large data over Web Services

    - by zengr
    We are trying to design 6 web services which will serve another client component. The client component requires data from the web services we are implementing. Now, the problem is, there isn't just one WS: there is one WS which the client component hits, and this initiates a series of 5 more WSs which gather data from their respective data stores and finally provide it back to the original WS, which then delivers the data back to the client component. So if the requested data becomes huge, this will be a serious problem for our internal communication channel. So, what do you suggest? What can be done to avoid overloading the communication channel between the internal WSs while still delivering the data to the client component?

    Read the article

  • How do I make my horizontally scrolling page work in IE?

    - by Jessica
    I am using a horizontal scroll for a page on my website; it works great in FF but not in IE. How can I get it to work in IE? Here is the JS that I have in the head of my HTML: <script src="js/jquery-1.2.6.min.js" type="text/javascript" charset="utf-8"></script> <script type="text/javascript" charset="utf-8"> $(function(){ $("#page-wrap").wrapInner("<table><tr>"); $(".post").wrap("<td>"); });</script> There is also, of course, the other script file, but it's huge so I'm not going to post it here. Is there a hack or something I can use to get it to scroll correctly in IE? Thanks in advance!

    Read the article

  • PHP displaying error for already used Username and empty field

    - by Pixel Reaper
    I want PHP to make sure the username is not already used and also to check whether the field is empty. Sorry, I am a huge noob when it comes to PHP. Here is my code:
    // Check for an Username:
    $dup = mysql_query("SELECT user_username FROM users WHERE user_username='".$_POST['user_username']."'");
    if(mysql_num_rows($dup) > 0){
        $errors[] = 'Username already used.';
    } else{
        $un = mysqli_real_escape_string($dbc, trim($_POST['user_username']));
        echo '<b>Congrats, You are now Registered.</b>';
    } else {
        $errors[] = 'You forgot to enter your Username.';
    }
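
    A possible restructuring, sketched with the question's variable names ($dbc, $errors) and the mysqli API throughout: the second else above is a syntax error, and checking for an empty field before querying avoids it.

        <?php
        // Check the field first, then check for a duplicate username.
        $username = trim($_POST['user_username']);

        if ($username === '') {
            $errors[] = 'You forgot to enter your Username.';
        } else {
            $un  = mysqli_real_escape_string($dbc, $username);
            $dup = mysqli_query($dbc, "SELECT user_username FROM users WHERE user_username='$un'");

            if (mysqli_num_rows($dup) > 0) {
                $errors[] = 'Username already used.';
            } else {
                echo '<b>Congrats, You are now Registered.</b>';
            }
        }
        ?>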

    Read the article

  • Large strings: Text files or SQL DB?

    - by Tommo
    I am coding a forum system using PHP. I am currently storing a thread's ID, title, author, views and other attributes in an SQL database, and then storing the thread body (the HTML and BBCode) in a text file inside a folder named after the thread ID. In practice it's really simple to grab the database values and then just grab the thread body from the text file, but I was wondering if this is the 'proper' way? I personally have no problem doing this, but if it turns out to be massively inefficient and I should instead store both the thread body HTML and BBCode in the database, then I will change. However, to me it seems wrong to store such a (very possibly) huge string of multi-line text with lots of different characters in a database; I was taught that databases are more for short field 'values' than for website content. I would just like a definitive answer to this, because it's been bugging me for ages whether I've been doing it properly. Does anyone know how popular forum systems store threads?

    Read the article

  • Java combine parents of two large inheritance chains

    - by Soylent Green
    I have two parent classes in a huge project, let's say ClassA and ClassB. Each class has many subclasses, which in turn have many subclasses, which in turn have many subclasses, etc. My task is to "marry" these two "families" so that both inherit from a SINGLE parent. I need to essentially make ClassA and ClassB one class (parent) to both of their combined subclasses (children). ClassA and ClassB both currently implement Serializable. I am currently trying to make both inheritance chains inherit from ClassA, and then copy all functions and data members from ClassB into ClassA. This is tedious, and I think a terrible solution. What would be the CORRECT way to solve this problem?
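
    One direction this is sometimes taken, sketched here with hypothetical members: pull the shared state and behaviour up into a new abstract parent (or an interface, if the two hierarchies share behaviour but not fields) and have ClassA and ClassB both extend it, rather than folding ClassB into ClassA.

        import java.io.Serializable;

        // Hypothetical common parent extracted from whatever ClassA and ClassB share.
        abstract class CommonParent implements Serializable {
            private static final long serialVersionUID = 1L;

            protected String name;               // example of a field both families need

            public String getName() { return name; }

            public abstract void doWork();       // behaviour every subclass must provide
        }

        class ClassA extends CommonParent {
            @Override
            public void doWork() { /* ClassA-specific behaviour, as before */ }
        }

        class ClassB extends CommonParent {
            @Override
            public void doWork() { /* ClassB-specific behaviour, as before */ }
        }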

    Read the article

  • Powershell - remote folder availability while counting files

    - by ziklop
    I'm trying to make a PowerShell script that reports whether there's a file older than x minutes in a remote folder. I came up with this:
    $strfolder = 'folder1 ..................'
    $pocet = (Get-ChildItem \\server1\edi1\folder1\*.* ) | where-object {($_.LastWriteTime -lt (Get-Date).AddDays(-0).AddHours(-0).AddMinutes(-20))} | Measure-Object
    if($pocet.count -eq 0){Write-Host $strfolder "OK" -foreground Green} else {Write-Host $strfolder "ERROR" -foreground Red}
    But there's one huge problem. The folder is often unavailable to me because of the high load, and I found out that when there is no connection it doesn't report an error but just continues with zero in $pocet.count. That means it reports that everything is OK when the folder is unavailable. I was thinking about using if(Test-Path ..), but what if the folder becomes unavailable just after passing Test-Path? Does anyone have a solution, please? Thank you in advance.
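
    A sketch of one way to treat an unreachable share as an error rather than as zero files, reusing the path and variables from the question: forcing Get-ChildItem to throw with -ErrorAction Stop and catching the exception removes the Test-Path race, because the failure is detected on the actual read.

        $strfolder = 'folder1 ..................'
        try {
            # -ErrorAction Stop turns "path unavailable" into a terminating error we can catch
            $files = Get-ChildItem \\server1\edi1\folder1\*.* -ErrorAction Stop |
                Where-Object { $_.LastWriteTime -lt (Get-Date).AddMinutes(-20) }
            $pocet = $files | Measure-Object

            if ($pocet.Count -eq 0) { Write-Host $strfolder "OK" -ForegroundColor Green }
            else                    { Write-Host $strfolder "ERROR - old file found" -ForegroundColor Red }
        }
        catch {
            # Reached when the share cannot be read at all
            Write-Host $strfolder "ERROR - folder unavailable:" $_.Exception.Message -ForegroundColor Red
        }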

    Read the article

  • How does SqlDataAdapter work internally?

    - by tigrou
    I wonder how SqlDataAdapter works internally, especially when using UpdateCommand to update a huge DataTable (since it's usually a lot faster than just sending SQL statements from a loop). Here are some ideas I have in mind: (1) It creates a prepared SQL statement (using SqlCommand.Prepare()) with CommandText filled in and SQL parameters initialized with the correct SQL types; it then loops over the datarows that need updating and, for each record, updates the parameter values and calls SqlCommand.ExecuteNonQuery(). (2) It creates a bunch of SqlCommand objects with everything filled in (CommandText and SQL parameters), and several SqlCommands at once are then batched to the server (depending on UpdateBatchSize). (3) It uses some special, low-level or undocumented SQL driver instructions that allow it to update several rows efficiently (the rows to update would be provided in a special data format, and the same SQL query (the UpdateCommand here) would be executed against each of them).

    Read the article

  • Creating immutable objects from javabean

    - by redzedi
    Hi all, I am involved in a project where we are building on a good bit of legacy code. I have a particular situation involving one big JavaBean object which has to be transferred over the wire, so my first thought was to make it immutable and serializable to do the trick. At this point I am faced with a few difficult choices: (1) Ideally I want some way to automatically generate an immutable, serializable version of this class. I don't have the scope to refactor or alter this class in any way, and I would really, really hate to have to copy-paste the class under a different name. (2) Assuming I gave up on 1, i.e. I actually chose to duplicate the code of the HUGE JavaBean class, I would still be in the unsavoury situation of having to write a constructor with some 20-25 parameters to make the class immutable. What is a better way to make a class immutable other than constructor injection? Thanks and regards.
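
    One commonly used alternative to a 20-25 parameter constructor, sketched with a hypothetical two-field bean: a builder that accumulates the values and hands them to a single private constructor, so the resulting object is still immutable but callers never face the full parameter list.

        import java.io.Serializable;

        // Hypothetical immutable counterpart of a big JavaBean, assembled via a builder
        // instead of a constructor with dozens of positional parameters.
        public final class ImmutableMember implements Serializable {
            private static final long serialVersionUID = 1L;

            private final String name;      // stand-ins for the bean's many properties
            private final int age;

            private ImmutableMember(Builder b) {
                this.name = b.name;
                this.age = b.age;
            }

            public String getName() { return name; }
            public int getAge()     { return age; }

            public static final class Builder {
                private String name;
                private int age;

                public Builder name(String name) { this.name = name; return this; }
                public Builder age(int age)      { this.age = age;   return this; }

                public ImmutableMember build()   { return new ImmutableMember(this); }
            }
        }

        // Usage: new ImmutableMember.Builder().name("Ann").age(42).build();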

    Read the article

  • insert into table where if not in list

    - by jim smith
    Can anybody help me with the syntax? insert into history (company, partnumber, price) values ('blah','IFS0090','0.00') if company NOT IN ('blah','blah2','blah3','blah4','blah4') and partnumber='IFS0090'; Background: I have a history table which stores daily companies, products and prices. But sometimes a company will remove itself for a few days. Complicating the issue, because I'm only saving daily CHANGES to prices rather than snapshotting the entire day's list (the data would be huge), when I display the data the company will still come up with the previous day's price. So I need to do something like the above, where a 0.00 price means they're no longer there.

    Read the article
